Sukalpa Biswas
☆
India,
2019-08-06 09:15
(edited by Sukalpa Biswas on 2019-08-07 09:07)
Posting: # 20475
Views: 631
This is regarding the statistical approach for an NTI drug bioequivalence study.
As per regulatory,
Observations during study:
All of the above criteria were met except the upper limit of the 90% confidence interval for the test-to-reference ratio of the within-subject variability (≤ 2.5), which failed for all PK variables (Cmax, AUCt and AUCinf).
Exercises, Observations and Analysis:
Kindly respond.
Edit: Please follow the Forum’s Policy. Category changed; see also this post #1. [Helmut]
Helmut
★★★
Vienna, Austria,
2019-08-07 11:09
@ Sukalpa Biswas
Posting: # 20482
Views: 520
Hi Sukalpa,
»
3. The within-subject standard deviation of test and reference products will be compared, and the upper limit of the 90% confidence interval for the test-to-reference ratio of the within-subject variability should be ≤ 2.5.
» […] 90% confidence interval for the test-to-reference ratio of the within-subject variability ≤ 2.5 did not meet the criterion for all PK variables (Cmax, AUCt and AUCinf).
Failed to demonstrate BE due to the higher within-subject variability of the test product. Full stop.
» Exercises, Observations and Analysis:
What do you mean by „Exercises”? Since the study failed are you asking for a recipe to cherry-pick?
»
1. We have taken subjects who have completed at least 2R or 2T in Reference Scaled Average Bio equivalence calculation (existing study).
That’s my interpretation as well. Although only the calculation of s_WR is given in Step 1 of the guidance, by analogy the same procedure should be applicable for s_WT.
»
2. We have done the exercise with subjects who completed all four treatments and did the statistical calculation – still failing the same criteria marginally.
Leaving cherry-picking aside: By doing so, you drop available information. One should always use all. The more data you have, the more accurate/precise an estimate will be. Have a look at the formula to calculate the 100(1–α) CI of σ_WT/σ_WR: $$\left(\frac{s_{WT}/s_{WR}}{\sqrt{F_{\alpha/2,\nu_1,\nu_2}}},\frac{s_{WT}/s_{WR}}{\sqrt{F_{1-\alpha/2,\nu_1,\nu_2}}}\right)$$ We have two different degrees of freedom (ν₁, ν₂), the first associated with s_WT and the second with s_WR.
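For illustration, this CI can be computed directly from the formula. A minimal Python sketch (the s values and degrees of freedom are hypothetical, not the study's data; scipy assumed available):

```python
# Sketch: 100(1-alpha)% CI for sigma_WT / sigma_WR.
# The s values and degrees of freedom below are hypothetical illustrations.
from scipy.stats import f

def ratio_ci(s_wt, s_wr, nu1, nu2, alpha=0.10):
    ratio = s_wt / s_wr
    # F_{alpha/2, nu1, nu2} is the upper alpha/2 quantile, i.e. the
    # lower-tail 1 - alpha/2 quantile returned by f.ppf.
    lower = ratio / f.ppf(1 - alpha / 2, nu1, nu2) ** 0.5
    upper = ratio / f.ppf(alpha / 2, nu1, nu2) ** 0.5
    return lower, upper

lo, hi = ratio_ci(s_wt=0.12, s_wr=0.10, nu1=20, nu2=20)
# The NTID criterion requires the upper limit hi <= 2.5.
```

Note how the width of the interval shrinks as ν₁ and ν₂ grow, which is exactly why dropping subjects (and hence degrees of freedom) hurts.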
»
3. It was observed that if the “SWT” value is close to the SWR value or lower, then the 90% CI for the test-to-reference ratio of the within-subject variability will meet the ≤ 2.5 criterion.
Of course.
»
1. Which Reference Scaled Average Bioequivalence approach is acceptable in regulatory?
Yes.
»
Approach 2: Subjects who completed all four periods will be considered for SWR & SWT calculation.
No.
»
or both.
Which one will you pick at the end if one
passes and the other one fails? The passing one, right? The FDA will love that. Be aware that the FDA recalculates every study.
BTW, how would you describe that in the SAP?
»
2. which are the factors adding variability to SWT?
That’s product-related. The idea behind the FDA’s reference-scaling for NTIDs is not only to narrow the limits but also to prevent products with higher variability than the reference’s from entering the market.
»
3. Whether same formulation can be taken for the repeat bio-study with some clinical restrictions? If yes then what are the clinical factor to be considered?
As I wrote above, the failure to show BE was product-related. If you introduce clinical restrictions* in order to reduce within-subject variability – due to randomization – both products will be affected in the same way and s_WT/s_WR will be essentially the same as in the failed study.
Reformulate.
PS: I changed the category of your post yesterday and you changed it back today. Wrong. Don’t test my patience – your problems are definitely study-specific (see the Policy for a description of categories).
—
Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Sukalpa Biswas
☆
India,
2019-08-09 06:11
@ Helmut
Posting: # 20488
Views: 439
» Failed to demonstrate BE due to the higher within-subject variability of the test product. Full stop.
Accepted.
» » Exercises, Observations and Analysis:
» What do you mean by „Exercises”?
Since the study has failed, we wanted to dig out the probable reasons for the failure. In that process certain statistical exercises have been done officially.
» Although only the calculation of s_WR is given in Step 1 of the guidance, by analogy the same procedure should be applicable for s_WT.
Accepted. Thanks.
» […] you drop available information. One should always use all. […]
Suggestion well accepted. Thanks.
» »
1. Which Reference Scaled Average Bioequivalence approach is acceptable in regulatory?
» Yes.
» »
Approach 2: Subjects who completed all four periods will be considered for SWR & SWT calculation.
» No.
» »
or both.
» Which one will you pick at the end if one
passes and the other one fails? The passing one, right? The FDA will love that. Be aware that the FDA recalculates every study.
Thanks for your suggestion.
» »
2. which are the factors adding variability to SWT?
» That’s product-related. The idea behind the FDA’s reference-scaling for NTIDs is not only to narrow the limits but also to prevent products with higher variability than the reference’s from entering the market.
Agreed
» As I wrote above, the failure to show BE was product-related. If you introduce clinical restrictions* in order to reduce within-subject variability – due to randomization – both products will be affected in the same way and s_WT/s_WR will be essentially the same as in the failed study.
» Reformulate.
OK. I would like to mention one thing: the failed study was the fed one; the fasting study passed quite comfortably (both ABE and SABE). Is there any possibility that the test formulation is more variable in the fed condition?
» PS: I changed the category of your post […].
Sorry. This is the first time I am posting something in this forum. I am a bit confused regarding the rules and regulations of this forum.
»
Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
Helmut
★★★
Vienna, Austria,
2019-08-09 12:36
@ Sukalpa Biswas
Posting: # 20489
Views: 403
Hi Sukalpa,
» » Reformulate.
»
» OK. I would like to mention one thing: the failed study was the fed one; the fasting study passed quite comfortably (both ABE and SABE). Is there any possibility that the test formulation is more variable in the fed condition?
That’s quite possible. An extreme example of the past: The first PPIs were monolithic gastric-resistant formulations. Crazy variability, both fasting and fed. Current formulations are capsules with gastric-resistant pellets. Variability still high but way better than the monolithic forms. Of course, when the capsules were introduced, BE studies were performed. All PK metrics passed but by inspecting the profiles you could clearly see the lower variability of the capsules. OK, these are easy drugs (now many are already OTCs). Imagine that they would be NTIDs and the formulation change the other way ‘round (capsule → monolithic). No way ever to pass the
s_wT/s_wR criterion.
In your case this means again to reformulate. Don’t ask me
how (I’m not a formulation chemist). Maybe dissolution testing in the various stinking FeSSIF “biorelevant” media helps.
» This is the first time I am posting something in this forum. Bit confused regarding the rules and regulation of this forum.
Some hints in the Policy.
—
Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
---
7th SSC CGL Tier II level Question Set, topic Trigonometry 1
This is the 7th question set of 10 practice problem exercise for SSC CGL Tier II level exam and 1st on topic Trigonometry.
We repeat the method of taking the test, because it is important to follow result-bearing methods even in a practice test environment.
Method of taking the test for getting the best results from the test:

- Before you start, go through the Tutorial on Basic and rich concepts in Trigonometry and its applications and the Tutorial on Basic and rich concepts in Trigonometry Part 2, Compound angle functions, or any short but good material to refresh your concepts if you so require.
- Answer the questions in an undisturbed environment with no interruption, full concentration, and an alarm set at 15 minutes.
- When the time limit of 15 minutes is over, mark up to which question you have answered, but go on to complete the set.
- At the end, refer to the answers given at the end to mark your score at 15 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever the scoring pattern in the coming test is). Write your score on top of the answer sheet with date and time.
- Identify and analyze the problems that you couldn't do, to learn how to solve those problems.
- Identify and analyze the problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of a shortcoming in topic knowledge, improve it by referring to only that part of the concept from the best source you can get hold of. You might google it. If it is because of your method of answering, analyze and improve those aspects specifically.
- Identify and analyze the problems that posed difficulties for you and delayed you. Analyze and learn how to solve those problems using basic concepts and relevant problem-solving strategies and techniques.
- Give a gap before you take a 10-problem practice test again.
Important: both mock tests and practice tests must be timed, analyzed, improving actions taken, and then repeated. With this intelligent method, it is possible to reach the highest excellence level in performance.

Resources that should be useful for you: before taking the test, you may refer to 7 steps for sure success in SSC CGL Tier 1 and Tier 2 competitive tests, or the section on SSC CGL, to access all the valuable student resources that we have created specifically for SSC CGL, and generally for any hard MCQ test.
If you like, you may subscribe to get the latest content on competitive exams published in your mail as soon as we publish it.
Now set the stopwatch alarm and start taking this test. It is not difficult.
7th question set- 10 problems for SSC CGL Tier II exam: 1st on Trigonometry - testing time 15 mins Problem 1.
If $x=r\sin\alpha\cos\beta$, $y=r\sin\alpha\sin\beta$ and $z=r\cos\alpha$, then,
$x^2-y^2+z^2=r^2$ $x^2+y^2+z^2=r^2$ $x^2+y^2-z^2=r^2$ $y^2+z^2-x^2=r^2$ Problem 2.
With $\angle \theta$ acute, the value of the expression, $\left(\displaystyle\frac{5\cos \theta - 4}{3-5\sin \theta} - \displaystyle\frac{3+5\sin \theta}{4+5\cos \theta}\right)$ is,
$1$ $0$ $\displaystyle\frac{1}{2}$ $\displaystyle\frac{1}{4}$ Problem 3.
If $4+ 3\tan \alpha=0$, where $\displaystyle\frac{\pi}{2} \lt \alpha \lt \pi$, the value of $2\cot \alpha - 5\cos \alpha + \sin \alpha$ is,
$\displaystyle\frac{23}{10}$ $-\displaystyle\frac{53}{10}$ $\displaystyle\frac{37}{10}$ $\displaystyle\frac{7}{10}$ Problem 4.
If $\sin \theta + \sin^2 \theta=1$, then which of the following is true?
$\cos \theta +\cos^2 \theta=1$ $\cos^2 \theta +\cos^3 \theta=1$ $\cos^2 \theta +\cos^4 \theta=1$ $\cos \theta -\cos^2 \theta=1$ Problem 5.
If $a=\sec \theta+\tan \theta$, then $\displaystyle\frac{a^2-1}{a^2+1}$ is,
$\sec \theta$ $\cos \theta$ $\tan \theta$ $\sin \theta$ Problem 6.
The value of $\displaystyle\frac{cot \theta + cosec \theta - 1}{cot \theta -cosec \theta +1}$ is,
$cosec \theta - cot \theta$ $cosec \theta + cot \theta$ $sec \theta + cot \theta$ $cosec \theta + tan \theta$ Problem 7.
If $\displaystyle\frac{sin \theta + cos \theta}{sin \theta - cos \theta}=3$, then the value of $sin^4 \theta -cos^4 \theta$ is,
$\displaystyle\frac{2}{5}$ $\displaystyle\frac{1}{5}$ $\displaystyle\frac{4}{5}$ $\displaystyle\frac{3}{5}$ Problem 8.
If $a\sec \theta+b\tan \theta +c=0$, and $p\sec \theta +q\tan \theta +r=0$, then the value of $(br-qc)^2-(pc-ar)^2$ is,
$(aq+bp)^3$ $(aq-bp)^3$ $(aq+bp)^2$ $(aq-bp)^2$ Problem 9.
If $\alpha + \beta + \gamma=\pi$, then the value of $(sin^2 \alpha + sin^2 \beta - sin^2 \gamma)$ is,
$2sin \alpha{sin \beta}cos \gamma$ $2sin \alpha$ $2sin \alpha{cos \beta}sin \gamma$ $2sin \alpha{sin \beta}sin \gamma$ Problem 10.
If $sin\alpha sin\beta-cos\alpha cos\beta + 1=0$, then the value of $cot\alpha tan\beta$ is,
$-1$ $1$ $0$ None of these
The answers are given below, but you will find the detailed conceptual solutions to these questions in SSC CGL Tier II level Solution Set 7 on Trigonometry 1.
If you wish, you may watch the two-part video solutions below.
Part 1: Q1 to Q5 Part 2: Q6 to Q10
Answers to the problems
Problem 1. Answer: b: $x^2+y^2+z^2=r^2$. Problem 2. Answer: b: $0$. Problem 3. Answer: a: $\displaystyle\frac{23}{10}$. Problem 4. Answer: c: $\cos^2\theta + \cos^4\theta=1$. Problem 5. Answer: d: $\sin \theta$. Problem 6. Answer: b: $cosec \theta + \cot \theta$. Problem 7. Answer: d: $\displaystyle\frac{3}{5}$. Problem 8. Answer: d: $(aq-bp)^2$. Problem 9. Answer: a: $2\sin\alpha \sin\beta \cos \gamma$. Problem 10. Answer: a: $-1$.
Resources on Trigonometry and related topics
You may refer to our useful resources on Trigonometry and other related topics especially algebra.
Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry SSC CGL Tier II level Question Set 7 on Trigonometry 1
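As a quick numerical sanity check of Problem 7 (an illustration, not part of the original set): the given condition $\frac{\sin \theta + \cos \theta}{\sin \theta - \cos \theta}=3$ forces $\tan \theta = 2$, and the target expression then evaluates to $3/5$, matching answer d.

```python
# Numeric check of Problem 7: tan t = 2 satisfies the given condition,
# and sin^4 t - cos^4 t then equals 3/5 (answer d).
import math

t = math.atan(2)  # an acute angle with tan t = 2
lhs = (math.sin(t) + math.cos(t)) / (math.sin(t) - math.cos(t))
value = math.sin(t) ** 4 - math.cos(t) ** 4
```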
---
I have a nonlinear inequality constrained optimization problem of the form $$\begin{array}{cc} \min & f(x) \\ \textrm{s.t.} & g(x) \ge 0 \end{array}$$ where $x \in \mathbb{R}^n$, $f : \mathbb{R}^n \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}^m$. I am currently solving this system with pseudotransient continuation (3) applied to a semismooth Newton's method (1,2). The semismooth Newton's method encodes the KKT conditions for a local solution to the constrained optimization into a single semismooth equation. Pseudotransient continuation is an unnecessarily fancy name for adding a diagonal term to the gradient of this equation (which includes the Hessian of the energy $f$) and running Newton's method.
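For concreteness, a minimal sketch of that idea on a toy smooth residual: Newton's method with a diagonal $(1/\Delta t)\,I$ term added to the Jacobian and the pseudo-timestep grown each iteration. The residual, Jacobian, and timestep-growth policy below are made-up illustrations, not the actual semismooth KKT system.

```python
# Sketch of pseudotransient continuation: damped Newton on F(x) = 0
# where the Jacobian is regularized by (1/dt) * I and dt is grown.
import numpy as np

def ptc_newton(F, J, x0, dt=1.0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Damped Newton step: (J + (1/dt) I) dx = -F
        A = J(x) + (1.0 / dt) * np.eye(len(x))
        x = x + np.linalg.solve(A, -r)
        dt *= 2.0  # grow the pseudo-timestep toward plain Newton
    return x

# Toy residual: F(x) = x^3 - 1 componentwise (root at x = 1).
F = lambda x: x ** 3 - 1.0
J = lambda x: np.diag(3.0 * x ** 2)
root = ptc_newton(F, J, np.array([3.0, 0.5]))
```

As $\Delta t \to \infty$ the damping vanishes and the iteration reduces to plain Newton, which is why small $\Delta t$ (large diagonal adjustment) is what buys global convergence.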
Unfortunately, pseudotransient continuation is globally convergent only for sufficiently large diagonal adjustments, and my current problem converges only down to a nasty period-two oscillation between two states. Without the constraints, global convergence could be enforced by a line search on the original energy function $f$. However, the KKT conditions depend only on the gradient of $f$, not its value, and the original energy is somewhat obscured in the passage to the semismooth equation.
Question: Are there natural ways to perform line search in the context of constrained optimization? Note that it is not sufficient to apply line search to the residual norm, as described by Jed Brown here.
References:
---
GR8677 #20
Problem
Special Relativity}Rest Energy
If one remembers the formulae from special relativity, arithmetic would be the hardest part of the problem.
The problem is to solve $\gamma m_K c^2 = m_p c^2$ for the speed, where $\gamma = 1/\sqrt{1-\beta^2}$ and $\beta = v/c$.
The rest-masses of both the kaon and the proton are given in the problem. Thus, the equation reduces to $\gamma = m_p/m_K = 938/494$.
Now, because the range of velocities in the answer choices varies significantly, one can't directly approximate that ratio as 2. Boo. So, long-division by hand yields approximately $\gamma \approx 1.9$.
The author of this site prefers to use fractions instead of decimals. Thus $\gamma \approx 19/10$. Express $\beta$ in terms of $\gamma$ to get $\beta^2 = 1 - 1/\gamma^2 = 1 - 100/361 = 261/361$.
$261/361 \approx 0.72$, while $0.8^2 = 0.64$, so the velocity has to be greater than $0.8c$. The only choice left is (E).
If one has the time, one might want to memorize the following:
$\beta = 0.6 \Rightarrow \gamma = 5/4$, $\beta = 0.8 \Rightarrow \gamma = 5/3$, $\beta = \sqrt{3}/2 \approx 0.866 \Rightarrow \gamma = 2$, or perhaps a more elaborate list of correlations.
If one knew that beforehand, then one would immediately arrive at choice (E).
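The arithmetic can be checked numerically; a small sketch (masses in MeV/c² as given in the problem):

```python
# Check: gamma = m_p / m_K, then beta = sqrt(1 - 1/gamma^2).
import math

m_p, m_K = 938.0, 494.0   # proton and kaon rest masses, MeV/c^2
gamma = m_p / m_K
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
# gamma comes out near 1.9 and beta near 0.85, i.e. choice (E)
```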
Alternate Solutions
There are no Alternate Solutions for this problem. Be the first to post one!
Comments
JoshWasHere 2014-08-20 08:12:31 In working this out, I approximated $938/494 \approx 1.9$ which makes it easier to work with. Then in working out the algebra, I approximated = which made taking the square root extremely simple. The result is nearly $0.85c$, so without too many assumptions or generous approximations, I got nearly the right answer in a short amount of time. dak213 2014-06-30 17:58:42 I think they forgot to add the 1 in the equation. I got then solving for I got exactly 0.85c E
dak213 2014-06-30 18:00:13 Ah, I see this issue has been addressed. I should have read the comments at the bottom. deniskrasnov 2013-01-19 00:08:42 i have a suspicion that ETS only wants you to remember Pythagorean triple 3:4:5
which translates into:
beta = 0.8 => gamma = 5/3 = 1.66...
beta = 0.6 => gamma = 5/4 = 1.25
and thus, dealing with similar problem one should first check if one of these is enough to identify the correct answer quickly
the first one is sufficient in this case
gamma = 938/494 > 850/500 > 1.7 > 1.66.. => beta > 0.8 oliTUTilo 2012-11-01 16:43:56 If we algebraically solve for v (or ) instead of , as Yosun does, it seems easier to approximate the answer.
is approximately , so is approximately , which is 1.7 something divided by 2, or approximately 0.85. v is , so badabingbadaboom.
In general, I'd algebraically solve for the variable in question first, gaining the greatest transparency, but redmomatt's method is even a bit faster here.
redmomatt 2011-10-14 08:59:20 Why can't we do this....?
(E) checkyoself 2011-10-05 10:46:45 Approximating is fine. You end up with which you know must be greater than 0.75c so the answer must be E. archard 2010-05-25 18:33:08 Instead of doing the long division by hand, you could just say that gamma is less than 2. Solving the inequality you get that v > . E is the only answer that satisfies this.
Crand0r 2010-11-12 08:49:13 Actually, you should get that . All of the answers satisfy this, but you also know that , which leaves only E. lamejiaa 2009-08-30 09:09:49 The result is good but (19/10)(19/10) = 361/100
walczyk 2011-04-06 17:37:34 ^^this is true. you can check this quickly by . After a couple steps you will end up with . Do a little math and you'll get Phew, all these relativistic problems really have you scratching away with your pencil, but the most time consuming part of this problem will be the long division, good luckk. jmason86 2009-08-16 16:22:01 If you DO use then you get roughly . Not bad, I say. If the listed velocities were closer together, we might have a problem but they are pretty well spaced. Since it takes about 20 seconds to solve through with , I suggest using this, and if the answer came out too far from those listed, to go back and solve for 1.x. If you did have to do this, at least you already solved for v and you could just plug back in without deriving it again.
wittensdog 2009-09-25 17:26:02 I strongly agree - whenever you see spaced out answers, that's always your clue to approximate.
One thing I always remember is that when gamma is 2, the speed is sqrt(3)/2 , which is something like 0.866 or something like that. A value of 2 for gamma comes up in a lot of places, such as when rest energy and kinetic energy are equal, so it's a good thing to remember. Seeing that the ratio is almost two, that the answers are well spaced, and that there's something in the ball park of 86% of c, you should immediately go for it.
Seriously, NEVER do math if you see spaced out answers. One big example I can think of is the drift velocity problem, where each of the answers were spaced out by like 7 orders of magnitude. There was a factor of 1.6(pi) or something like that, but that's certainly not enough to swing 7 OM! Of course it's not quite as dramatic here, but still, no one wants to do long division on the physics GRE. nz_gre 2007-09-24 18:12:08 why are we neglecting the fact that
And then finding the momentum velocity?
Surely, if the total kaon energy is equal to the proton rest mass then
938 = (pc) + (494) and we go from there?
bkardon 2007-10-05 13:31:04 The expression for is in fact equivalent to the expresssion
as follows (here is relativistic mass, is rest mass)
Here I will use
QED
FutureDrSteve 2011-10-29 13:44:19 Another consideration with your approach is that you still need to calculate because once you get down to
in order to solve for v, you need to use:
$p = \gamma m v$
Since you need the other approach to solve for , you might as well stick with that method to save time and effort
FutureDrSteve 2011-10-29 13:56:04 Correction: You don't NEED to solve for using the other method. You can use the definition of to solve for v, but using the simpler approach will reduce the risk of making a careless mistake, plus you will save lots of time, which, as far as I'm concerned, is the cardinal rule of the PGRE.
ryanjsfx 2014-10-20 11:32:20 I thought using wasn't so bad. Observe (with c set to 1): is about 1.7 and 1.7 divided by 2 is about 0.84
Done! leftynm 2006-10-30 17:05:27 This is much simpler than that.
Once you have gamma*m_k = m_p, plug in gamma = [1-(v/c)^2)]^-1, and solve for v/c. You get v/c = sqrt(1 - (m_k/m_p)^2). If you allow m_k/m_p = 1/2, which it just about is, then you get v/c = sqrt(3)/2. From seeing this so much in trig, I know sqrt(3)/2 = 0.866. Then v/c is about 0.866c. So choice E is very close.
blah22 2008-03-22 13:10:35 How is that not, basically, what she did? Just in more detail. radicaltyro 2006-10-21 12:59:15 1.9^2 = 3.61 = 361/100, not 262/100
achca 2012-06-26 03:55:13 absolutely right!
---
This is our first contributed gemstone! Submitted by user Anon1.
In the following, $p$ denotes a prime. We wish to prove that, for all positive integers $n$, there is a finite field of order $p^{n}$.
Step 1. Restating the problem. Claim: It suffices to show that, for some power of $p$ (call it $q$), there exists a finite field of order $q^{n}$. Proof. Suppose there is a field $F$ such that $|F| = q^{n}$. The claim is that the solution set to $x^{p^{n}} = x$ in $F$ is a subfield of order $p^{n}$.
Since $q$ is a power of $p$, we have $$p^{n}-1 | q^{n}-1.$$ Since $q^{n}-1$ is the order of the cyclic group $F^{ \times }$, we know that $F^{ \times }$ has a unique cyclic subgroup of order $p^{n}-1$. This contributes $p^{n}-1$ solutions to $x^{p^{n}} = x$ in $F$. But $0$ is another solution, since it is not an element of $F^{ \times }$. This gives $p^{n}$ solutions, so this gives all solutions.
It remains to show that these solutions form a subfield of $F$. Since they form a finite set, it suffices to show that they are closed under addition and, since they include 0, it suffices to show they are closed under multiplication.
Closure under multiplication is immediate: $x^{p^{n}} = x$ and $y^{p^{n}} = y$ implies $$(xy)^{p^{n}} = x^{p^{n}}y^{p^{n}} = xy.$$
Closure under addition is similarly easy: Since $|F| = q$ and $q$ is a power of $p$, the field $F$ has characteristic $p$. This means that, for all $x,y \in F$, $$(x+y)^{p} = x^{p} + y^{p}.$$ Applying this $n$ times yields $(x+y)^{p^{n}} = x^{p^{n}} + y^{p^{n}}$. If $x^{p^{n}} = x$ and $y^{p^{n}} = y$, then it follows that $(x+y)^{p^{n}} = x + y$. We are done with Step 1. $\square$
Step 2. Reducing to an asymptotic problem. Claim: There are arbitrarily large $k$ such that there is a finite field of order $q = p^{k}$. Proof. If $f \in \mathbb{F}_{p}[x]$ is irreducible and of degree $k$, then $\mathbb{F}_{p}[x]/(f)$ is a field of order $p^{k}$. So we need to prove that the degrees of irreducible polynomials in $\mathbb{F}_{p}[x]$ get arbitrarily high. Since $\mathbb{F}_{p}[x]$ has only $(p-1)p^{k}$ polynomials of degree $k$, it suffices to prove that $\mathbb{F}_{p}[x]$ has infinitely many irreducible polynomials.
At this point we use Euclid’s argument: if $f_{1}, \ldots , f_{N}$ are all the irreducibles in $\mathbb{F}_{p}[x]$, then $1 + f_{1} \cdot \ldots \cdot f_{N}$ cannot be factored into irreducibles (and isn’t constant), contradicting the fact that $\mathbb{F}_{p}[x]$ is a unique factorization domain. Therefore $\mathbb{F}_{p}[x]$ has infinitely many irreducibles and we are done. $\square$
Step 3. Counting the irreducible polynomials of degree $d$ over various $\mathbb{F}_q$. Claim: If $q$ is a power of $p$ for which there is a finite field of order $q$ (here denoted $\mathbb{F}_{q}$), then the number of irreducible monic polynomials of degree $d$ over $\mathbb{F}_{q}$ is a polynomial (call it $I_{d}(q)$) depending only on $d$, evaluated at $q$. Proof. This proceeds by strong induction on $d$. The base case is $d = 1$: any linear polynomial is irreducible, and there are $q$ monic linear polynomials over $\mathbb{F}_{q}$.
Now suppose $d > 1$. There are $q^{d}$ monic degree $d$ polynomials over $\mathbb{F}_{q}$. To count how many are irreducible, we remove the ones which are reducible: If a monic polynomial is reducible, it can be written uniquely as a product of monic irreducibles of lower degree. These degrees add up to $d$, so they give a partition $\lambda$ of $d$ which can be any partition except $(d)$ itself.
Specifically, for each such partition $\lambda$ suppose $a_{i}$ is the number of parts of size $i$. Then
$$I_{d}(q) = q^{d} - \sum_{ \lambda \vdash d, \lambda \neq (d)}{ \prod_{i=1}^{d} \binom{I_{i}(q) + a_{i} - 1}{a_{i}} },$$ which is a polynomial in $q$ by the strong induction hypothesis. $\square$
Now, it suffices to show that for any given $n$, $I_n(q)>0$ for sufficiently large $q$. For, by step 2, this will give us a power of $p$ called $q$ for which $\mathbb{F}_q$ exists and such that there is a monic irreducible polynomial over $\mathbb{F}_q$ of degree $n$. By step 1 we are then done.
Since $I_{n}$ is a polynomial, this amounts to showing that $I_{n}$ has a positive leading coefficient for all $n$. This we do now:
Step 4. Showing that the counting polynomial $I_n$ has positive leading coefficient. Claim: For all $n$, $I_{n}$ has degree $n$ and leading coefficient $1/n$. Proof. Again we proceed by strong induction on $n$. When $n = 1$, we have $I_{n}(q) = q$ and we are done. So suppose $n>1$. Then as in step 3, we have $$I_{n}(q) = q^{n} - \sum_{ \lambda \vdash n, \lambda \neq (n) }{ \Pi_{i=1}^{n} \binom{I_{i}(q) + a_{i} - 1}{a_{i}} }.$$ Since $a_{n} = 0$, we can restrict the products to only $i \lt n$. Then the degree of each of these products is $\sum_{i=1}^{n} i\cdot a_i=n$ by the induction hypothesis, and so $I_n(q)$ has degree at most $n$. Moreover, assuming by the induction hypothesis that the coefficient of $q^i$ in $I_i(q)$ is $1/i$, we have that the coefficient of $q^{i\cdot a_i}$ in $\binom{I_i(q)+a_i-1}{a_i}$ is $$\frac{1}{a_i!\cdot i^{a_i}}.$$
Finally, it follows that the coefficient of $q^n$ in $I_n(q)$ is $$1-\sum_{ \lambda \vdash n, \lambda \neq (n) }{ \Pi_{i=1}^{n}{ \frac{1}{a_{i}!i^{a_{i}}}}}.$$ Now, notice that for a given $\lambda$, the product $$z_\lambda=a_1!\cdot 1^{a_1}\cdot a_2!\cdot 2^{a_2}\cdot\cdots\cdot a_n!\cdot n^{a_n}$$ is the well-known denominator of the formula $$n!/z_\lambda$$ for the size of the conjugacy class of elements of shape $\lambda$ in the symmetric group $S_n$. Thus we have $$\sum_{\lambda\vdash n} \frac{n!}{z_\lambda}=n!,$$ and so $$\sum_{\lambda\vdash n}\frac{1}{z_\lambda}=1.$$ It follows from this and the formula for our coefficient above that the coefficient is simply $$\sum_{\lambda=(n)}\frac{1}{z_\lambda}=1/n,$$ and the claim is proven. $\square$
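As a numeric cross-check, Gauss's standard Möbius-sum formula $I_n(q)=\frac{1}{n}\sum_{d\mid n}\mu(d)\,q^{n/d}$ (not derived in this post, but counting the same objects) has leading term $q^n/n$, matching the leading coefficient $1/n$ proven in Step 4, and reproduces the familiar small counts over $\mathbb{F}_2$:

```python
# Cross-check via Gauss's Moebius-sum formula (a standard result, not
# the recurrence above): I_n(q) = (1/n) * sum_{d | n} mu(d) * q^(n/d).

def mobius(n):
    # Moebius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor => mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def count_irreducible(n, q):
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

# Over F_2 the monic irreducibles of degree 1..4 number 2, 1, 2, 3
# (e.g. degree 1: x and x+1; degree 2: x^2+x+1).
```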
---
GR8677 #22
Problem
Electromagnetism}Lorentz Transformation
When an electric field is Lorentz-transformed, afterwards, there might be both a magnetic and an electric field (in transverse components). Or, more rigorously, one has, for motion in the $x$ direction,
$$E'_x=E_x \qquad E'_y=\gamma(E_y-vB_z) \qquad E'_z=\gamma(E_z+vB_y)$$ $$B'_x=B_x \qquad B'_y=\gamma\left(B_y+\frac{v}{c^2}E_z\right) \qquad B'_z=\gamma\left(B_z-\frac{v}{c^2}E_y\right)$$
(A) Obviously not. Suppose initially one has just $E_x$; afterwards, there's still just $E'_x = E_x$ and no magnetic field.
(B) True, as can be seen from the equations above.
(C) Not true in general. Suppose $\mathbf{B} = 0$. Afterwards, the transverse components of $\mathbf{E}$ are off by a factor of $\gamma$ even if there's no B field to start with.
(D) Nope.
(E) Mmmm... no need for gauge transformations.
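A tiny numeric sketch of the standard boost field transformation (the one in Griffiths §12.3, cited in the comments below; units with $c=1$, field values arbitrary illustrations):

```python
# Boost along x with speed v: a pure transverse E field acquires a B
# component, illustrating choice (B). Field values are arbitrary.
import math

def boost_fields(E, B, v, c=1.0):
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    E2 = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    B2 = (Bx, g * (By + v * Ez / c ** 2), g * (Bz - v * Ey / c ** 2))
    return E2, B2

# Pure transverse E field, no B field, boosted at v = 0.6:
E2, B2 = boost_fields((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), v=0.6)
```

The boosted frame sees both a (larger) transverse electric field and a nonzero $B_z$, which is the content of choice (B).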
Comments
djh101 2014-08-27 16:38:09 You can eliminate almost all of the answers without really knowing anything about relativity.
A. Assume observer 1 = observer 2. According to this, an identity transformation will transform an electric field into a magnetic field.
C. From basic E&M, you should know that there's a little bit of symmetry going on between electricity and magnetism. With only that in mind, you can eliminate both C and D. They can't both be true, so by symmetry, neither is true.
D. See above. wittensdog 2009-09-25 17:53:58 In general, what happens is that the field tensor transforms with two factors of whatever matrix corresponds to the given Lorentz transformation. The conditions on the matrices that are in the Lorentz group are not terribly strict, and certainly we know from the simple case of a boost along the x-axis that the components can mix into each other (think about the usual space and time four-vector, and how it transforms).
In general, for a Lorentz transformation, whatever four-vector or tensor you have can have its components mix together pretty generally.
Also, at least in the classical realm, fields NEVER depend on gauge in any way. The issue of gauge fixing only ever comes up when you start working with potentials, which are not measurable. So immediately throw out anything involving gauge dependence when asked about fields. FortranMan 2008-10-17 23:15:56 For more on this topic, look around for Relativistic Electrodynamics, specifically in Griffiths section 12.3. Imagine the case of a large parallel-plate capacitor moving perpendicular to their surface areas, and imagine what the Lorentz contraction does to the charge distributions on the plates. This ends up increasing the electric field that's perpendicular to the direction of motion. Note for a parallel-plate capacitor situated such their motion is parallel to their surfaces, this contraction does not occur and the field remains unchanged. grae313 2007-10-07 15:50:57 On choice (E), does the Lorentz transformation already assume the Lorentz guage?
rorytheherb 2008-10-06 10:22:16 yeah I'm looking at griffiths EM and it's not really illuminating this particular question. i knew of course that E and B mix in S.T.R., but when I saw choice E, I recalled the Lorentz gauge and got duped. Can anyone clarify why how we can say E and B mix _without_ first specifying a gauge?
dean 2008-10-09 21:00:55 I'm not sure there's any relevance at all. Gauge transformations talk about potentials, not fields. A (the vector potential) may change depending on what gauge you're using, but B does not, and can not, else we'd have a physically different situation. Lorentz transformations talk about fields directly in different inertial frames.
|
News
The recent result from PandaX-II was published in Physical Review Letters (Phys. Rev. Lett. 119, 181302, "Editor's Suggestion") on October 30, 2017, back-to-back with the first result from the XENON1T experiment. The papers are highlighted by a Physics "Viewpoint" commentary by Dan Hooper from FNAL and the Univ. of Chicago, commenting that “...
The PandaX-II collaboration released the official WIMP search results using 54 ton-day exposure on Aug. 23, 2017. No excess events were found above the expected background. The most stringent upper limit on spin-independent WIMP-nucleon cross section was set for a WIMP mass greater than $100 GeV/c^{2}$, with the lowest exclusion at $8.6\times10^{-47} cm^{2}$ at $40 GeV/c^{2}$. The result reported here is more conservative than the preliminary result shown during the TeVPA2017 conference, due to the adoption of updated photon/electron detection efficiencies.
Prof. Xiangdong Ji of Shanghai Jiao Tong University and University of Maryland, spokesperson of the PandaX Collaboration, announced new results on the dark matter (DM) search from the PandaX-II experiment on Monday Aug. 7, during the TeV Particle Astrophysics 2017 Conference at Columbus, Ohio, the United States. No DM candidate was identified within the data from an exposure of 54 ton-day, the largest reported DM direct detection data set to date.
The PandaX observatory uses xenon as target and detector to search for WIMP particles as well as neutrinoless double beta decay ($0\nu\beta\beta$) in ${}^{136}Xe$. At present, the PandaX-II experiment is in operation in CJPL-I. The future PandaX program will pursue the following three main directions:
Events: Tuesday, June 25, 2019 - 08:00, YuGang Bao Library
Following the activity on Exotic Hadrons initiated last year at T. D. Lee Institute in Shanghai, we are glad to announce the Workshop
Exotic Hadrons: Theory and Experiment at Lepton and Hadron Colliders
to be held at the T.D. Lee Institute, located in Shanghai Jiao Tong University, June 25-27, 2019.
Sunday, April 28, 2019 - 18:00, T. D. Lee Library
GEANT4 is a powerful toolkit for detector simulations which is widely used in many applications of high energy physics, space and radiation, medical and so on.
Wednesday, January 9, 2019 - 09:00, T. D. Lee Library
|
It is true that K-means clustering and PCA appear to have very different goals and at first sight do not seem to be related. However, as explained in the Ding & He 2004 paper K-means Clustering via Principal Component Analysis, there is a deep connection between them.
The intuition is that PCA seeks to represent all $n$ data vectors as linear combinations of a small number of eigenvectors, and does it to minimize the mean-squared reconstruction error. In contrast, K-means seeks to represent all $n$ data vectors via a small number of cluster centroids, i.e. to represent them as linear combinations of a small number of cluster centroid vectors where the linear combination weights are all zero except for a single $1$. This is also done to minimize the mean-squared reconstruction error.
So K-means can be seen as a super-sparse PCA.
What the Ding & He paper does is to make this connection more precise.
Unfortunately, the Ding & He paper contains some sloppy formulations (at best) and can easily be misunderstood. E.g. it might seem that Ding & He claim to have proved that cluster centroids of K-means clustering solution lie in the $(K-1)$-dimensional PCA subspace:
Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...].
For $K=2$ this would imply that projections on PC1 axis will necessarily be negative for one cluster and positive for another cluster, i.e. PC2 axis will separate clusters perfectly.
This is either a mistake or some sloppy writing; in any case, taken literally, this particular claim is false.
Let's start with looking at some toy examples in 2D for $K=2$. I generated some samples from the two normal distributions with the same covariance matrix but varying means. I then ran both K-means and PCA. The following figure shows the scatter plot of the data above, and the same data colored according to the K-means solution below. I also show the first principal direction as a black line and class centroids found by K-means with black crosses. PC2 axis is shown with the dashed black line. K-means was repeated $100$ times with random seeds to ensure convergence to the global optimum.
One can clearly see that even though the class centroids tend to be pretty close to the first PC direction, they do not fall on it exactly. Moreover, even though the PC2 axis separates clusters perfectly in subplots 1 and 4, there are a couple of points on the wrong side of it in subplots 2 and 3.
So the agreement between K-means and PCA is quite good, but it is not exact.
So what did Ding & He prove? For simplicity, I will consider only the $K=2$ case. Let the number of points assigned to each cluster be $n_1$ and $n_2$ and the total number of points $n=n_1+n_2$. Following Ding & He, let's define the cluster indicator vector $\mathbf q\in\mathbb R^n$ as follows: $q_i = \sqrt{n_2/nn_1}$ if the $i$-th point belongs to cluster 1 and $q_i = -\sqrt{n_1/nn_2}$ if it belongs to cluster 2. The cluster indicator vector has unit length $\|\mathbf q\| = 1$ and is "centered", i.e. its elements sum to zero: $\sum q_i = 0$.
Ding & He show that K-means loss function $\sum_k \sum_i (\mathbf x_i - \boldsymbol \mu_k)^2$ (that K-means algorithm minimizes) can be equivalently rewritten as $-\mathbf q^\top \mathbf G \mathbf q$, where $\mathbf G$ is the $n\times n$ Gram matrix of scalar products between all points: $\mathbf G = \mathbf X_c \mathbf X_c^\top$, where $\mathbf X$ is the $n\times 2$ data matrix and $\mathbf X_c$ is the centered data matrix.
(Note: I am using notation and terminology that slightly differs from their paper but that I find clearer).
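This identity can be checked numerically: for any two-cluster assignment, the K-means loss equals $\operatorname{tr}(\mathbf G) - \mathbf q^\top \mathbf G \mathbf q$, i.e. $-\mathbf q^\top \mathbf G \mathbf q$ up to an additive constant. A small NumPy sketch with arbitrary data and an arbitrary (not optimized) assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
Xc = X - X.mean(axis=0)                 # centered data matrix
G = Xc @ Xc.T                           # n x n Gram matrix

labels = np.arange(30) % 2              # an arbitrary 2-cluster assignment
n = len(labels)
n1, n2 = (labels == 0).sum(), (labels == 1).sum()

# Ding & He's cluster indicator vector: unit length, elements sum to zero
q = np.where(labels == 0, np.sqrt(n2 / (n * n1)), -np.sqrt(n1 / (n * n2)))
assert np.isclose(q.sum(), 0) and np.isclose(q @ q, 1)

# K-means loss: within-cluster sum of squared distances to the centroids
loss = sum(((Xc[labels == k] - Xc[labels == k].mean(axis=0)) ** 2).sum()
           for k in (0, 1))

# the loss equals trace(G) - q' G q, so minimizing it maximizes q' G q
assert np.isclose(loss, np.trace(G) - q @ G @ q)
```

The identity holds for any assignment, not just the optimal one, which is what makes the reformulation as a constrained maximization of $\mathbf q^\top \mathbf G \mathbf q$ possible.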
So the K-means solution $\mathbf q$ is a centered unit vector maximizing $\mathbf q^\top \mathbf G \mathbf q$. It is easy to show that the first principal component (when normalized to have unit sum of squares) is the leading eigenvector of the Gram matrix, i.e. it is also a centered unit vector $\mathbf p$ maximizing $\mathbf p^\top \mathbf G \mathbf p$. The only difference is that $\mathbf q$ is additionally constrained to have only two different values whereas $\mathbf p$ does not have this constraint.
In other words, K-means and PCA maximize the same objective function, with the only difference being that K-means has additional "categorical" constraint.
It stands to reason that most of the time the K-means (constrained) and PCA (unconstrained) solutions will be pretty close to each other, as we saw above in the simulation, but one should not expect them to be identical. Taking $\mathbf p$ and setting all its negative elements equal to $-\sqrt{n_1/nn_2}$ and all its positive elements to $\sqrt{n_2/nn_1}$ will generally not give exactly $\mathbf q$.
Ding & He seem to understand this well because they formulate their theorem as follows:
Theorem 2.2. For K-means clustering where $K= 2$, the continuous solution of the cluster indicator vector is the [first] principal component
Note the words "continuous solution". After proving this theorem they additionally comment that PCA can be used to initialize K-means iterations, which makes total sense given that we expect $\mathbf q$ to be close to $\mathbf p$. But one still needs to perform the iterations, because the two are not identical.
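That initialization idea is easy to sketch: threshold the PC1 scores at zero to get a starting assignment, then run Lloyd iterations from there. A minimal NumPy version on two hypothetical, well-separated Gaussian clusters (this is an illustration, not the code from their paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal((0, 0), 1, size=(50, 2)),
               rng.normal((6, 0), 1, size=(50, 2))])
Xc = X - X.mean(axis=0)

# PCA-based initialization: sign of the score on the first PC
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
labels = (Xc @ Vt[0] > 0).astype(int)

# a few Lloyd iterations starting from the PCA assignment
for _ in range(10):
    centroids = np.array([Xc[labels == k].mean(axis=0) for k in (0, 1)])
    labels = ((Xc[:, None, :] - centroids) ** 2).sum(-1).argmin(axis=1)
```

With well-separated clusters the PCA start is already close to the K-means optimum, so the iterations converge almost immediately.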
However, Ding & He then go on to develop a more general treatment for $K>2$ and end up formulating Theorem 3.3 as
Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...].
I did not go through the math of Section 3, but I believe that this theorem in fact also refers to the "continuous solution" of K-means, i.e. its statement should read "cluster centroid space of the continuous solution of K-means is spanned [...]".
Ding & He, however, do not make this important qualification, and moreover write in their abstract that
Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. Equivalently, we show that the subspace spanned by the cluster centroids are given by spectral expansion of the data covariance matrix truncated at $K-1$ terms.
The first sentence is absolutely correct, but the second one is not.
It is not clear to me if this is (very) sloppy writing or a genuine mistake. I have very politely emailed both authors asking for clarification. (Update two months later: I have never heard back from them.)

Matlab simulation code
figure('Position', [100 100 1200 600])
n = 50;
Sigma = [2 1.8; 1.8 2];
for i=1:4
means = [0 0; i*2 0];
rng(42)
X = [bsxfun(@plus, means(1,:), randn(n,2) * chol(Sigma)); ...
bsxfun(@plus, means(2,:), randn(n,2) * chol(Sigma))];
X = bsxfun(@minus, X, mean(X));
[U,S,V] = svd(X,0);
[ind, centroids] = kmeans(X,2, 'Replicates', 100);
subplot(2,4,i)
scatter(X(:,1), X(:,2), [], [0 0 0])
subplot(2,4,i+4)
hold on
scatter(X(ind==1,1), X(ind==1,2), [], [1 0 0])
scatter(X(ind==2,1), X(ind==2,2), [], [0 0 1])
plot([-1 1]*10*V(1,1), [-1 1]*10*V(2,1), 'k', 'LineWidth', 2)
plot(centroids(1,1), centroids(1,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4)
plot(centroids(1,1), centroids(1,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2)
plot(centroids(2,1), centroids(2,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4)
plot(centroids(2,1), centroids(2,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2)
plot([-1 1]*5*V(1,2), [-1 1]*5*V(2,2), 'k--')
end
for i=1:8
subplot(2,4,i)
axis([-8 8 -8 8])
axis square
set(gca,'xtick',[],'ytick',[])
end
|
I decided to translate a book but I am having a hard time designing my own LaTeX template for the book
Quantum Mechanics by Leonard Susskind & Art Friedman. Below is one page: I wonder what kind of LaTeX book class the author uses?
closed as too broad by Henri Menke, ChrisS, TeXnician, Johannes_B, CarLaTeX Sep 8 '17 at 5:47
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
This recreates the looks of your image (not exactly obviously):
\documentclass[%
  numbers=endperiod% For the point after the number, in your image it is used in headmarks but not at the actual headings
  ,a5paper%
  ,fontsize=12pt%
  ,egregdoesnotlikesansseriftitles%
]{scrbook}
\usepackage[markcase=upper]{scrlayer-scrpage}
\usepackage{amsmath}
\usepackage[a5paper,margin=2cm]{geometry}
\clearpairofpagestyles
\ihead*{\headmark}% don't use the star if the chapter-pages should be head-less
\ohead*{\pagemark}
\automark[chapter]{chapter}% both right and left chapters
\automark*[section]{}% if there is a section, the right head contains the section
\usepackage{blindtext}
\begin{document}
\chapter{foo}
\cleardoublepage
\noindent
Or even more explicitly, we can combine three terms into a single matrix:
\begin{equation}
  \sigma_n = \begin{pmatrix} n_z & (n_x - in_y)\\ (n_x+in_y) & -n_z \end{pmatrix}.
  \label{eq:sigma}
\end{equation}
What is this good for? Not much, until we find the eigenvectors and
eigenvalues of $\sigma_n$. But once we do that, we will know the possible
outcomes of a measurement along the direction of $\hat{n}$. And we will also
be able to calculate probabilities for those outcomes. In other words, we will
have a complete picture of spin measurements in three-dimensional space. That
is pretty darn cool, if I say so myself.
\section{Reaping the Results}
We are now positioned to make some real calculations, something that should make
your inner physicist jump for joy. Let's look at the special case where
$\hat{n}$ lies in the $x$-$z$ plane, which is the plane of this page. Since
$\hat{n}$ is a unit vector, we can write
\begin{align*}
  n_z = \cos\,\theta\\
  n_x = \sin\,\theta\\
  n_y = 0,
\end{align*}
where $\theta$ is the angle between the $z$ axis and the $\hat{n}$ axis.
Plugging these values into Eq.~\ref{eq:sigma}, we can write
\begin{equation*}
  \sigma_n = \begin{pmatrix} \cos\,\theta & \sin\,\theta\\ \sin\,\theta & -\cos\,\theta \end{pmatrix}.
\end{equation*}
\blinddocument
\end{document}
|
GR8677 #23
Problem
Advanced Topics: Solid State
Actually, one can figure this one out with only knowledge of lower div baby physics.
(A) Electrical conductivities for conductors, semiconductors, and insulators go (in general) like this: $\sigma_{\text{conductor}} > \sigma_{\text{semiconductor}} > \sigma_{\text{insulator}}$. Thus, copper should be more conductive than silicon. This is true (but one is trying to find a false statement).
(B) The resistivity $\rho \propto T$, thus the conductivity $\sigma = 1/\rho \propto 1/T$. As $T$ increases, $\sigma$ decreases. This is true.
(C) Silicon is a semiconductor, and thus it probably does not follow the same relations as (B). In fact, semiconductors have a negative temperature coefficient of resistivity. Thus, $d\rho/dT < 0$, which implies that $\sigma$ increases for a temperature increase.
(D) Doping a conductor like copper will just make it cheap. Think of cheap wire.
(E) Doping a semiconductor, however, can make it more conductive.
Comments
jumbocrab 2010-11-12 16:11:11 What if you added insulative impurities to the silicon, such as rubber? Wouldn't that decrease its conductivity
flyboy621 2010-11-13 13:53:31 You have to look at words like "always" in the answers. Even if you could find some impurity that would decrease the conductivity of silicon, that still wouldn't make (E) a true statement. If there exists ANY impurity that increases the conductivity of silicon, then (E) is not true (and hence the correct choice).
dmn322 2017-09-24 11:44:16 Will even silver make copper less conductive? apr2010 2010-04-09 07:05:27 (E)
pam d 2011-09-23 20:52:49 Nope
Kyle M 2012-09-19 14:02:56 E is the correct answer. thebigshow500 2008-10-13 21:53:29 Whether you understand, just know that doping a semiconductor will push itself to be more conductive. Learn it as physicists, or the engineers are going to laugh at us! :)
garserdt216 2015-08-15 05:01:49 Hahaha! Glad we're in this together.... bkardon 2007-10-05 20:07:42 This makes lots of sense to me, but to be argumentative, I wonder - how can we know that doping copper with some metal, like gold, can't increase its conductivity? Or does 'doping' specifically refer to adding some non-metallic contaminant? Just wondering.
Jeremy 2007-10-08 17:40:47 This is actually a really good point/question. As I understand, an impurity is any atom or molecule (regardless of conductivity) added to a pure substance. I think the key point might be that only trace amounts are added. While the new atoms/molecules may still contribute free electrons, they change the local crystal structure, and this is probably seen as an obstacle from the perspective of the free electrons. That's my guess, but I am also interested in a solid answer.
dean 2008-10-09 21:24:27 Copper conducts well because the fermi level is smackdab in the middle of a band, meaning there is essentially no energy cost to pop electrons out of their states so they can scoot around in the metal.
Semiconductors have a gap in between the valence band and the conduction band, and the fermi level lands somewhere in between. There are a few holes in the valence band and a few electrons in the conduction band, but not enough to make an electron gas.
Doping, if I understand correctly, drops a few levels in the band gap, so that it is easier to change energy level, leading to better conduction.
Doping copper should result in a few extra levels in the middle of a band. This might add a little to the conductivity, but shouldn't be nearly as effective as doping a semiconductor. Apparently there are other reasons for doping copper, but I'm not very familiar with them.
As a sidenote, doping copper with gold shouldn't do much at all, they are in the same column.
Ami 2009-10-11 08:33:36 I believe Jeremy ans was better. In the copper there is no significant gain in number of conductive electron but decrease in their sweeping velocity because of the impurities seen as obstacles.
Ami 2009-10-11 11:13:14 Sorry I meant drift velocity.
|
Per the title, other than using a general purpose LP solver, is there an approach for solving systems of inequalities over variables $x_i, \ldots, x_k$ where inequalities have the form $\sum_{i \in I} x_i < \sum_{j \in J} x_j$? What about the special case of inequalities that form a total order over the sums of the members of the power set of $\{x_i, \ldots, x_k\}$?
For your first question, without the total order, the answer to your question is that it's essentially as hard as linear programming. Here's an outline of a proof.
First, let's establish a variable $x_1>0$, which we call $\epsilon$. Now, let's choose another variable $x_i$, which we will call $1$. We want to make sure that $$\epsilon \ll 1\, .$$ To do this, consider the inequalities $$x_1 < x_2,$$ $$x_1 + x_2 < x_3,$$ $$x_2+x_3 < x_4,$$ and so on. With a long enough chain, this will tell us that $Nx_1 < x_i$, or $\epsilon < 1/N$, for some very large $N$ ($N$ is a Fibonacci number, and so grows exponentially in $i$).
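To see the exponential growth concretely: writing the chain's implication as $x_i > b_i\,x_1$, the lower bounds $b_i$ satisfy the Fibonacci recurrence. A quick sketch (the chain length of 12 here is arbitrary):

```python
# Lower bounds b_i with x_i > b_i * x_1, implied by the chain
# x_1 < x_2, x_1 + x_2 < x_3, x_2 + x_3 < x_4, ...
b = [1, 1]                      # x_1 >= 1*x_1 and x_2 > 1*x_1
for _ in range(10):
    b.append(b[-2] + b[-1])     # x_{i+1} > x_{i-1} + x_i > (b_{i-1} + b_i) * x_1
print(b)                        # Fibonacci numbers: 1, 1, 2, 3, 5, 8, ...
```

So a chain of length $i$ forces $x_i > F_i\, x_1$, i.e. $\epsilon < 1/F_i$, with only $i$ inequalities.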
We can now manufacture a linear program with integer coefficients. If we want a coefficient of 3 on $x_t$, we add the inequalities $$x_t < x_{t'} < x_{t''} < x_t + \epsilon$$ and let $x_t + x_{t'} + x_{t''}$ stand in for $3x_t$. If you want larger coefficients, you can get them by expressing the coefficients in binary notation, and making inequalities that guarantee that $x_u \approx 2x_t$, $x_v \approx 2x_u$, and so on. To get the right-hand side, we do the same with the variable $x_i = 1$. This technique lets us use linear programs of the OP's form to approximately check feasibility for arbitrary linear programs with integer coefficients, a task which is essentially as hard as linear programming.
I don't know how to analyze the second question, asking about the case where there's a total order on all subsets.
|
Let $\mu^\star$ be a real-valued function defined on the power set of the positive integers $\mathbf{N}^+$ such that for all $X,Y\subseteq \mathbf{N}^+$ the following axioms hold:
(F1) $\mu^\star(\mathbf{N}^+)=1$;
(F2) $\mu^\star(X) \le \mu^\star(Y)$ if $X\subseteq Y$;
(F3) $\mu^\star(X\cup Y) \le \mu^\star(X)+\mu^\star(Y)$;
(F4) $\mu^\star(\{kx:x \in X\})=\mu^\star(X)/k$ for all $k \in \mathbf{N}^+$;
(F5) $\mu^\star(\{x+h: x \in X\})=\mu^\star(X)$ for all $h \in \mathbf{N}^+$.
A function of this type is said to be an
(arithmetic) upper density. The set of these functions include the upper asymptotic, Banach, logarithmic, analytic, Polya, Buck densities and many others. Related questions on these type of functions can be found here and here.
At this point we can define its associated lower density $\mu_\star$ for all $X\subseteq \mathbf{N}^+$ by $$\mu_\star(X)=1-\mu^\star(X^c).$$ Now, all examples I have in mind are superadditive functions, namely $$ \mu_\star(X\cup Y) \ge \mu_\star(X)+\mu_\star(Y) $$ whenever $X$ and $Y$ are disjoint subsets of $\mathbf{N}^+$. Does this property hold in general?
After an easy manipulation, the question turns out to be equivalent to the following:
Question.Let $\mu^\star$ be an upper density on $\mathbf{N}^+$, that is, a function satisfying axioms (F1)-(F5). Is it true that if $X,Y$ are subsets of $\mathbf{N}^+$ such that $X\cup Y=\mathbf{N}^+$ then $$ 1+\mu^\star(X\cap Y) \le \mu^\star(X)+\mu^\star(Y)? $$
In turn, this can be viewed as a strengthening of (F3) above; moreover, it would imply that the induced density $\mu$, which can be seen as the restriction of $\mu^\star$ to $\{X\subseteq \mathbf{N}^+:\mu^\star(X)=\mu_\star(X)\}$, is additive.
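For the plain (additive) asymptotic density, the inequality in the Question holds with equality. A quick finite-truncation check on a pair of hypothetical test sets (unions of residue classes mod 4 covering $\mathbf{N}^+$, chosen purely for illustration):

```python
N = 100_000

def d(pred):
    # asymptotic density, approximated by truncation at N
    # (exact here, since the test sets are unions of residue classes and 4 | N)
    return sum(pred(n) for n in range(1, N + 1)) / N

X = lambda n: n % 4 in (0, 1)         # hypothetical test sets
Y = lambda n: n % 4 in (1, 2, 3)      # together they cover all of N+

lhs = 1 + d(lambda n: X(n) and Y(n))  # 1 + d(X ∩ Y) = 1.25
rhs = d(X) + d(Y)                     # 0.5 + 0.75   = 1.25
assert lhs <= rhs + 1e-12             # equality, since d is additive here
```

Of course this only illustrates the inequality for one (additive) member of the family; the Question asks whether it follows from (F1)-(F5) alone.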
|
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
|
It looks like you're new here. If you want to get involved, click one of these buttons!
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the
power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system.
This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in
other posets, too, like join \(\vee\) and meet \(\wedge\).
We could march much further in this direction. I won't, but try it yourself!
Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.
I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an
observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \).
This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are
three such functions! And they're related in a beautiful way!
The most fundamental is this:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \).
The inverse image is also called the
preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.
The inverse image gives a monotone function
$$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\}
\subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).
Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). Here it is:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be
$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \).
The image is often written as \(f(S)\), but I'm using the notation of
Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".
The image gives a monotone function
$$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then
$$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \}
\subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$

But here's the cool part:

Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have
$$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)
This is great! But there's also
another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define
$$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \).
Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).
What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have
$$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the
existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints!
This was discovered by Bill Lawvere in a revolutionary paper.
By now this observation is part of a big story that "explains" logic using category theory.
Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.
Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
|
EMP models are defined by both the usual content of a GAMS model and annotations found in a simple text file named
emp.info (aka the EMP info file). It is often most convenient to create this file via the GAMS put writing facility. The annotations primarily serve to
define the model (e.g. to specify that a variable
u is really the dual multiplier for a constraint
g) but can also specify how a solver should process the model. The annotations make use of EMP keywords to do this.
A simple example will serve as illustration. Consider the following NLP:
\begin{equation} \tag{1} \begin{array}{lll} \textrm{Min}_{x,y,z} & -3x + xy \\ \text{s.t.} & x + y \leq 1 \\ & x + y - z = 2 \\ & x, y \geq 0 \\ \end{array} \end{equation}
We will use EMP annotations to automatically generate the first order conditions (KKT conditions) of this NLP and thus reformulate the NLP as an MCP:
Variables f, z;
Positive Variables x, y;
Equations g, h, defobj;

g..      x + y =l= 1;
h..      x + y - z =e= 2;
defobj.. f =e= -3*x + x*y;

Model comp / defobj, g, h /;
File info / '%emp.info%' /;
putclose info / 'modeltype mcp';
solve comp using EMP minimizing f;
Observe that the model is defined in the usual way and the file
emp.info contains just one line:
modeltype mcp. The EMP keyword
modeltype indicates that the value following the keyword is the model type to be used for the reformulation. In this example the model type is
mcp. Here this specification is required: the sole point of our EMP annotations is to generate an MCP and not (as is usually the case) to define the model. Usually, the model algebra and annotations together imply the type of the reformulated model and so no
modeltype specification is required or wanted. Finally, note that the model type in the solve statement is
EMP: this is typical.
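For orientation, the MCP produced by this reformulation consists of the first-order conditions of (1). The system below is a hand-derived sketch with multipliers \(\lambda\) (for g) and \(\mu\) (for h); the sign conventions may differ from what JAMS actually emits:

\begin{equation} \tag{2} \begin{array}{l} 0 \le x \;\perp\; -3 + y + \lambda + \mu \ge 0 \\ 0 \le y \;\perp\; x + \lambda + \mu \ge 0 \\ z \text{ free}, \quad -\mu = 0 \\ 0 \le \lambda \;\perp\; 1 - x - y \ge 0 \\ \mu \text{ free}, \quad x + y - z - 2 = 0 \end{array} \end{equation}

Here \(a \perp b\) denotes complementarity: at least one of \(a\), \(b\) is zero.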
The solver JAMS implements the EMP framework. It processes the model and the annotations, automatically reformulates the original EMP model as a model of a different (more easily solved) type, passes the reformulated model on to an appropriate subsolver, and maps the resulting solution back into the original problem space.
In case users wish to inspect the (scalar) reformulated model, the JAMS option
FileName may be used to specify the name of the file containing this model. Adding the following lines
before the solve statement in the GAMS code above will cause the MCP reformulation to be saved in the file
myReform.gms.
File empopt / 'jams.opt' /;comp.optfile = 1;putclose empopt / 'FileName myReform.gms';
The listing file will contain some additional information - the
EMP Summary - as part of the output for each EMP model solved. We provide details on the EMP summary for each reformulation that we discuss below.
|
GR8677 #27
Problem
Quantum Mechanics: Spin
Spin explains a lot of things.
(A) Remember orbitals? Whether a shell is full or not determines the properties of each column of the periodic table. A full shell has all electron spins paired together, while a partially filled or empty shell doesn't have that. So, spin is definitely in the Periodic Table.
(B) The specific heat of metals differs if one calculates it using the Fermi-Dirac or Bose-Einstein distributions; the first is used for fermions and the second for bosons. So, spin plays a role here.
(C) The Zeeman effect has to do with splitting caused by spin.
(D) The deflection of a moving electron is due to the magnetic field contribution to the Lorentz Force. This is a classical non-spin related phenomenon, on first analysis. This is the best choice.
(E) Fine structure has to do with splitting caused by spin.
Comments
QuantumCat 2014-09-02 13:43:05 Remember the Stern-Gerlach experiment sent silver ions through a magnetic field.
VKBhartiya 2013-09-10 05:49:32 I would say that fine structure arises from relativistic and quantum mechanical (spin) effects. This simply means that in this limit degenerate levels get split. So this option is not correct at all.
Manuel Abad 2012-04-04 13:57:16 Nevertheless, there's a coupling between the magnetic moment (related to spin by the gyromagnetic ratio) and the uniform magnetic field, which contributes to the Hamiltonian operator. Thus, there MUST be a contribution to electron deflection coming from spin, am I wrong? If not, then ALL of the options involve spin. Then why, again, is (D) the right choice?
mpdude8 2012-04-15 20:47:38 I'd pick D on the basis that I learned this fact in 9th grade physics, long, long before I knew anything about spin.
C and E are both related to fine structure, and thus, I eliminate both. Both can't be right. The periodic table is broken up quite naturally by the s, p, d, and f orbitals, which are dictated by spin.
I narrowed it down to B and D. Had I not taken a stat mech course, I wouldn't have eliminated B. You can sort of "guess" D by the fact that the motion of an electron in a field is explained by high school physics.
Not a rigorous answer, but in a multiple choice environment, it really doesn't matter.
drizzo01 2012-10-23 12:05:52 To be clear, s, p, d orbitals refer to the ORBITAL angular momentum, and not the spin. That being said, the stability of these orbitals is related to whether or not they are "filled", and you can only "fill" them with spin-1/2 particles (as in, fermions). As such, spin contributes to the shape periodic table in that the placement of an element on the periodic table dictates how one might add or subtract fermions from the neutral atom's orbitals to create a more stable state (say, through chemical bonding)
llama 2013-10-16 19:47:02 The question states "qualitatively significant" though, and the qualitatively significant factor for an electron moving in a magnetic field is simply the charge. And I'm not sure that spin has even a small effect on the motion of a free electron in a homogeneous magnetic field - consider that the Stern-Gerlach experiment requires inhomogeneous fields.
pam d 2011-09-23 21:23:06 Ah, wittensdog, if only we were a part of the same pgre generation.
physics_gre 2009-12-24 02:47:50 The answer given by jeka is totally wrong, because the Stern-Gerlach experiment uses a non-uniform magnetic field.
jeka 2007-02-17 08:30:53 The specific heat of a metal at low temperatures is proportional to $T$, whereas the specific heat of the lattice is proportional to $T^3$. At low temperatures the heat capacity of the lattice is much less than that of the electron gas, which can be explained only if the electron possesses spin.
The anomalous Zeeman effect can be explained if there is a magnetic moment due to spin.
It also explains the Stern-Gerlach effect.
The fine structure of atomic spectra is due to the quantum-mechanical addition of orbital angular momentum and spin.
The first variant, (A), doesn't relate to spin, because the structure of the periodic table corresponds to the increasing charge of the nucleus, that is, the number of protons in it.
So, the right answer is (A).
StrangeQuark 2007-07-01 12:19:53 Not entirely. If the periodic table were based just on the increasing charge of the nucleus, then why is it not just a straight line? Why is helium not directly after hydrogen, why all the extra space between? Answer: SPIN
Blake7 2007-07-23 18:42:28 The magnetic field used in Stern-Gerlach is not uniform.
Spin gives us the shell structure of the periodic chart with features like halogens, noble gases, alkali metals and transition elements.
Yosun and StrangeQuark are correct.
FortranMan 2008-10-19 12:39:37 The structure of the periodic table (or its shape, if you will) corresponds to the orbital shells s, p, d, f. Each element is grouped by its highest complete or incomplete orbital. This was first discovered empirically through chemistry, by categorizing the chemical similarities and differences between different elements. If the structure of the periodic table just corresponded to the nucleus's charge (or its number of protons), then it would just be a linear list rather than the strange assortment of boxes we are used to seeing.
wittensdog 2009-07-25 17:13:28 The Stern-Gerlach experiment does not involve moving electrons in a magnetic field; it involves neutral silver atoms with unpaired electrons. In fact, the reason that the silver atoms must be neutral is to avoid large-scale deflections due to the interaction of a moving charged particle with a magnetic field, which is indeed a very classical effect, which does not require spin for an explanation. The idea in the Stern-Gerlach experiment is that the unpaired electron results in a non-zero magnetic dipole moment, due to the electron's spin, which is why the silver atom is deflected. The way that the electron's spin affects atomic energy levels and other aspects of atomic structure is a huge part of why the periodic table has the structure that it does. The number of valence electrons is very important for various considerations in chemistry, and the fact that the electron has spin 1/2 affects this. Also, the answers to all of these questions are provided with the released tests. These tests are checked many times by experts, and it is highly unlikely that an answer provided by ETS would be incorrect.
VKBhartiya 2013-09-10 05:56:20 I agree with your answer, wittensdog.
|
Journal of Operator Theory
Volume 46, Issue 3, Supplementary 2001, pp. 605-618.
Composition operators between Nevanlinna classes and Bergman spaces with weights
Authors: Hans Jarchow (1) and Jie Xiao (2)
Author institutions: (1) Institut für Mathematik, Universität Zürich, CH-8057 Zürich, Switzerland
(2) Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve Blvd. West, H3G 1M8 Montreal, Qc, Canada
Summary: We investigate composition operators between spaces of analytic functions on the unit disk $\mathbb{D}$ in the complex plane. The spaces we consider are the weighted Nevanlinna class $\mathcal{N}_\alpha$, which consists of all analytic functions $f$ on $\mathbb{D}$ such that $\int_{\mathbb{D}}\log^+ |f(z)|\,(1-|z|^2)^\alpha \,\mathrm{d}x\,\mathrm{d}y<\infty$, and the corresponding weighted Bergman spaces $\mathcal{A}^p_\alpha$, $-1<\alpha<\infty$, $0<p<\infty$. Let $X$ and $Y$ be any two of these spaces, with parameters $\beta>-1$, $0<q<\infty$ for $Y$. We characterize, in function-theoretic terms, when the composition operator $C_\varphi: f\mapsto f\circ\varphi$ induced by an analytic function $\varphi:\mathbb{D}\to\mathbb{D}$ defines an operator $X\to Y$ which is continuous, respectively compact, respectively order bounded.
Contents Full-Text PDF
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem showing that the limit of $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^n\,dx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous? ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
Let $G$ be a finite group. Consider the set $$X = \bigcup_{H \le G} G/H$$ which is a disjoint union of left cosets of subgroups $H$ of $G$. Then $G$ acts on $X$ by left multiplication, and the number $|X/G|$ of orbits is the number of subgroups of $G$. I want to apply Burnside's Lemma in this situation $$|X/G| = \frac{1}{|G|} \sum_{g \in G} |X^g|$$ where $X^g = \{ x \in X | g \cdot x = x\}$, to maybe get a "formula" for the number of subgroups of $G$. For this I need to "compute" $|X^g|$. Is there any other nice description of this quantity? Thanks for your help!
We have $|X^g| = |\{g'H \in X| g\cdot g' \cdot H = g' \cdot H\}|$, but how to proceed?
Edit The reason I suspect such a formula can be computed is the group $G=C_n$, for which we have:
$$\tau(n)=\frac{1}{n}\sum_{k=0}^{n-1}\sigma(\gcd(n,k))$$
Also, using the Lagarias inequality, one can show that an upper bound on $\tau(n)$ is equivalent to RH.
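The cyclic case can be checked numerically. The helper names `sigma` and `tau` below are mine; this is a sanity check, not a proof:

```python
from math import gcd

# Check that sum_{k=0}^{n-1} sigma(gcd(n, k)) = n * tau(n),
# i.e. the Burnside average equals the number of subgroups of C_n.
def sigma(n):  # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def tau(n):    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for n in range(1, 80):
    assert sum(sigma(gcd(n, k)) for k in range(n)) == n * tau(n)
print("identity holds for all n < 80")
```

Note that Python's `math.gcd(n, 0)` returns `n`, matching the $k=0$ term $\sigma(n)$ in the sum.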
|
1d diffusion equation
Integrating the diffusion equation,
$$ \frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2}, $$
with a constant diffusion coefficient D using forward Euler for time and the finite difference approximation for space,
$$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta x^2} ( u_{i+1}^{t} + u_{i-1}^{t} - 2u_i^{t} ), $$
leads to the conservation of $\bar{u}=\sum_i \Delta x\, u_i$ over time (see animation 1 and figure 1), because reflective Neumann boundary conditions,$\partial_x u=0$, are employed at the borders (forward differences: $u_i = u_{i \pm 1}$):
$$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta x^2} \cdot ( u_{i\pm1}^{t} - u_{i}^{t} ) $$
Space and time are discretized with $\Delta x=0.01$ and $\Delta t=10^{-7}$. The diffusion coefficient is $D=10$.
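The 1d scheme with its reflective ends can be sketched in a few lines; the Gaussian initial condition and the step count below are arbitrary choices of mine:

```python
from math import exp

# Explicit 1d diffusion with reflective (Neumann) ends; Σ Δx·u stays constant.
D, dx, dt, n = 10.0, 0.01, 1e-7, 1000
u = [exp(-((i * dx - 5.0) ** 2) / (2 * 0.5 ** 2)) for i in range(n)]
total0 = dx * sum(u)

c = D * dt / dx ** 2          # = 0.01, well below the explicit stability limit 1/2
for _ in range(1000):
    un = u[:]
    for i in range(1, n - 1):
        un[i] = u[i] + c * (u[i + 1] + u[i - 1] - 2 * u[i])
    un[0] = u[0] + c * (u[1] - u[0])        # ghost node u_{-1} = u_0
    un[-1] = u[-1] + c * (u[-2] - u[-1])    # ghost node u_{n} = u_{n-1}
    u = un

drift = abs(dx * sum(u) - total0) / total0
print(drift < 1e-9)
```

Summing the update over all cells makes the flux terms telescope to the two boundary terms, which the ghost-node conditions cancel exactly, so the drift is pure rounding error.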
radial diffusion equation
In 2d polar coordinates $(r,\phi)$, the Laplacian is given by:
$$ \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial^2 u}{\partial \phi^2}. $$
In case of an axisymmetric distribution $u(r,\phi)=u(r)$, the Laplacian reduces to:
$$ \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} $$
Forward Euler and finite difference approximation with central differences for the advective term leads to (derivation):
$$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \big( u_{i+1}^{t} + u_{i-1}^{t} - 2u_i^{t} \big) + \frac{D}{r_i}\frac{\Delta t}{2 \Delta r} \big( u_{i+1}^{t} - u_{i-1}^{t} \big) \\ = u_i^t + D\frac{\Delta t}{\Delta r^2} \big( (1+0.5/i)u_{i+1}^{t} + (1-0.5/i)u_{i-1}^{t} -2u_i^{t} \big), $$ with $r_i = i\,\Delta r$.
The same approximation is obtained with the Finite Volume Method (derivation). The boundary condition at the origin exploits the rotational symmetry and thus is $\partial_r u = 0$, leading to (central differences: $u_{i-1}=u_{i+1}$)
$$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \cdot 4( u_{i+1}^{t} - u_i^{t} ) $$
At the other boundary, Neumann conditions $\partial_r u = 0$ are employed and realized as (central differences: $u_{i-1}=u_{i+1}$):
$$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \cdot 2( u_{i-1}^{t} - u_{i}^{t} ) $$
However, there is a problem: Conservation of u is violated (see animation 2 and figure 2).
Over time the total amount of u grows as calculated via
$$ 2\pi \int_\Omega u(r) r dr = 2\pi \sum_{i=0}^{n} u_i\; (i \,\Delta r) \; \cdot \; \Delta r, $$
where $(i \,\Delta r)$ is the discretized radius from 2d polar coordinates. The dashed line gives the analytical result and the solid line the numerical result. Numerical parameters $\Delta r=\Delta x$, $\Delta t$ and $D$ are as before.
Question
Why is conservation of u violated when going from the 1d system to the 1d radial (reduced from 2d polar coordinates)? What can I do to regain the conservation?
Finite volume methods lead to the same discretization scheme as shown above. Robin boundaries (as used for conservation in advection equations) are not applicable, since the flux $j$ at the boundaries is given by $j=r\partial_r u$, which leads to the already employed Neumann boundary conditions $\partial_r u=0$ at $r=0$ and $r=r_{end}$.
Edit:
While testing a few things, I came up with an MWE in c++ for others interested to try. Compilation instructions are given at the top. It reproduces the conservation violation problem :(
// test program to check conservation in radial diffusion
// compilation: g++ -Wall radial_diffusion.cpp -std=c++11 -fopenmp -O3 -o radial_diffusion.exe
#include <fstream>    // std::ofstream
#include <string>     // std::string
#include <cmath>      // exp, M_PI
#include <cstdio>     // printf, fopen
#include <algorithm>  // std::swap

int main(){
    // initialize variables
    std::string name = "/tmp/u_sum.dat";
    int nx = 1000;
    size_t nsteps = 100000;
    double d  = 10;
    double dr = 0.01;
    double dt = 1e-7;
    double diffcoeff = d*dt/(dr*dr);
    double *u    = new double[nx];
    double *unew = new double[nx];
    double *usum = new double[nsteps]();
    double usum0;

    // initial condition
    float mu_x   = 0*nx;     // mean x
    float xsigma = 0.01*nx;  // variance x
    for(int x=0; x<nx; x++)
        u[x] = exp(-0.25*((x-mu_x)*(x-mu_x)/(xsigma*xsigma)));

    // time evolution
    for(size_t step=0; step<nsteps; step++){
        #pragma omp parallel for
        for(int x=0; x<nx; x++){
            int left  = x-1;
            int right = x+1;
            // central differences; l'Hospital at r->0 with du/dr -> 0
            if(x==0){
                unew[x] = u[x] + diffcoeff*( 4*(u[right] - u[x]) );
            }else if(x==nx-1){
                unew[x] = u[x] + diffcoeff*( 2*(u[left] - u[x]) );
            }else{
                unew[x] = u[x] + diffcoeff*( u[right]*(1+0.5/x) + u[left]*(1-0.5/x) - 2*u[x] );
            }
        }
        // sum up to check conservation
        usum0 = 0;
        for(int x=0; x<nx; x++) usum0 += 2*M_PI*dr*dr*x*unew[x];
        if(!(step%10000)) printf("%12.12f\n", usum0);
        for(int x=0; x<nx; x++) usum[step] += 2*M_PI*dr*dr*x*unew[x];
        // update u, unew
        std::swap(u, unew);
    }

    // save results
    FILE *fp;
    if((fp=fopen(name.c_str(), "w"))==NULL){
        printf("Cannot open file.\n");
    }
    for(size_t step=0; step<nsteps; step++) fprintf(fp, "%12.12f\n", usum[step]);
    fclose(fp);

    // free memory
    delete[] u;
    delete[] unew;
    delete[] usum;
}
Update
In the paper "High-order schemes for cylindrical/spherical geometries with cylindrical/spherical symmetry" by Wang et al. (2013), the authors propose the following scheme:
$$ \frac{d u_i}{dt} = \frac{1}{\Delta V} (r_{i+\frac{1}{2}}j_{i+\frac{1}{2}} - r_{i-\frac{1}{2}}j_{i-\frac{1}{2}}), \\ \Delta V = \frac{1}{2}(r^2_{i+\frac{1}{2}} - r^2_{i-\frac{1}{2}}), \\ j_i = \nabla u_i = \frac{1}{2\Delta r}(u_{i+1} - u_{i-1}). $$ The half index, $i+\frac{1}{2}$, implies an arithmetic mean. Is this scheme applicable here? Is the gradient discretized correctly?
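One way to regain exact conservation is a flux-form (finite-volume) update on a cell-centered grid, so the boundary faces carry zero flux and the mass sum telescopes. The sketch below uses my own grid choice, $r_i = (i+\tfrac12)\Delta r$, which keeps $r=0$ on a face rather than a cell center; it is an illustration of the idea, not the scheme from the paper:

```python
from math import exp, pi

# Conservative finite-volume radial diffusion: cell centers r_i = (i+1/2)Δr,
# fluxes F on faces, zero flux at r = 0 and at the outer wall.
D, dr, dt, n = 10.0, 0.01, 1e-7, 1000
r = [(i + 0.5) * dr for i in range(n)]           # cell centers
u = [exp(-ri ** 2 / (2 * 0.1 ** 2)) for ri in r] # Gaussian bump near the axis
mass0 = sum(2 * pi * r[i] * u[i] * dr for i in range(n))

rf = [i * dr for i in range(n + 1)]              # face radii, rf[0] = 0
for _ in range(1000):
    # face fluxes F_{i+1/2} = D (u_{i+1} - u_i)/Δr; boundary faces carry zero flux
    F = [0.0] + [D * (u[i + 1] - u[i]) / dr for i in range(n - 1)] + [0.0]
    u = [u[i] + dt / (r[i] * dr) * (rf[i + 1] * F[i + 1] - rf[i] * F[i])
         for i in range(n)]

mass = sum(2 * pi * r[i] * u[i] * dr for i in range(n))
print(abs(mass - mass0) / mass0 < 1e-9)
```

Multiplying each update by $r_i \Delta r$ and summing shows the face terms $r_{i\pm1/2}F_{i\pm1/2}$ cancel pairwise, leaving only the two boundary fluxes, which are zero; so $\sum_i 2\pi r_i u_i \Delta r$ is conserved to rounding error.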
|
By Joannes Vermorel, February 2012
More accurate demand forecasts generate savings as far as inventory is concerned. This article quantifies those savings for inventories with turnovers lower than 15. We adopt a viewpoint where the extra accuracy is entirely invested in lowering inventory levels while keeping stockout rates unchanged.
The formula
The detail of the proof is given below, but let's start with the final result. Let's introduce the following variables:
$V$ the total inventory value.
$H$ the yearly carrying cost (percentage), which represents the sum of all the friction costs associated with the inventory.
$\sigma$ the forecast error of the system in place, expressed in unit MAE (mean absolute error). The definition of this measure is given below.
$\sigma_n$ the forecast error of the new system being benchmarked (hopefully lower than $\sigma$).
The yearly benefit $B$ of revising the forecasts is given by:$$B=V H \left(\sigma - \sigma_n \right)$$
Unit MAE
The formula introduced here works as long as errors are measured over the lead time and made homogeneous to a percentage with respect to the total sales during the lead time.
Although the MAPE (Mean Absolute Percentage Error) measured over the lead time would fit this definition, we strongly advise against using the MAPE here. Indeed, the MAPE gives erratic measurements when slow movers are present in the inventory. Since this article focuses on inventories with low turnover, the existence of slow movers is a quasi-certainty.
In order to compute the unit MAE (i.e. a measure homogeneous to a percentage), let's introduce:
$y_i$ the actual demand for the item $i$, for the lead time duration.
$\hat{y}_i$ the demand forecast for the item $i$, for the lead time duration.
For the consistency of the measurement, we assume that the same starting date $t$ is used for all items. Then, for a set of items $i$, the unit MAE can be written as:$$\sigma = \frac{\sum_i |y_i - \hat{y}_i|}{\sum_i y_i}$$ This value is homogeneous to a percentage and behaves essentially like the MAE. Contrary to the MAPE, it is not negatively impacted by slow movers, i.e. items where $y_i = 0$ for the period being considered.
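A toy illustration with made-up numbers, including two slow movers, shows why the measure stays well behaved:

```python
# Unit MAE over the lead time: σ = Σ|y_i - ŷ_i| / Σ y_i
y    = [120, 0, 45, 3, 0]    # actual demand per item; items 2 and 5 are slow movers
yhat = [100, 2, 50, 5, 1]    # forecasts
sigma = sum(abs(a - f) for a, f in zip(y, yhat)) / sum(y)
print(round(sigma, 4))       # 30/168 ≈ 0.1786
```

The zero-demand items only add their absolute errors to the numerator; a per-item MAPE would divide by $y_i = 0$ and blow up.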
Practical example
Let's consider a large B2B retail network of professional equipment that can obtain a 20% reduction of the relative forecast error through a new forecasting system.
$V = 100{,}000{,}000$ € (100 million euros)
$H = 0.2$ (20% yearly friction cost on inventory)
$\sigma = 0.2$ (old system has 20% error)
$\sigma_n = 0.16$ (new system has 16% error)
Based on the formula above, we obtain a gain of $B=800{,}000$ € per year.
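The arithmetic can be reproduced directly:

```python
# Savings formula B = V·H·(σ − σ_n) applied to the example figures above.
V = 100_000_000          # total inventory value (€)
H = 0.2                  # yearly carrying cost (fraction)
sigma, sigma_n = 0.20, 0.16
B = V * H * (sigma - sigma_n)
print(round(B))          # 800000 € per year
```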
Proof of the formula
In order to prove the result given above, let's introduce a systematic lowering bias of $\sigma - \sigma_n$ percent into all forecasts produced by the new forecasting system. By introducing this bias, we are:
increasing the error of all underforecasts by $\sigma - \sigma_n$ percent.
lowering the average error of overforecasts (however the quantification is unclear).
Dismissing the improvement brought by the bias on overforecasts, we see that, in the worst case, the accuracy of the new - and now biased - forecasting system is degraded by $\sigma - \sigma_n$ percent, which results in an overall accuracy that remains lower than or equal to $\sigma$.
Then, we note that the total amount of inventory $V$ is proportional to the lead demand. The behavior is explicit when using a safety stock model for determining inventory levels, but it basically applies to alternative methodologies as well.
By lowering the forecasts by $\sigma - \sigma_n$ percent, we are thus applying a similar reduction to the amount of inventory $V$. Then, since the accuracy of the biased system remains at or below $\sigma$, the stockout frequency should also stay no higher than that of the old system.
Finally, we have shown that, based on a more accurate forecast, it is possible to maintain an inventory level lower by $\sigma - \sigma_n$ percent that does not generate more stockouts - because the forecasts remain better than or equal (accuracy-wise) to those of the old system.
Thus, the inventory reduction is $V \left(\sigma - \sigma_n \right)$. Considering the total yearly friction costs $H$, this reduction generates savings equal to $B=V H \left(\sigma - \sigma_n \right)$.
Misconceptions about carrying costs
The variable $H$ should include all friction costs involved with the possession of inventory. In particular, a misconception that we routinely observe consists of stating that the value of $H$ is between 4% and 6%. However, that is only the cost for the company to fund its working capital by borrowing money from the bank.
It's easy to turn cash into inventory, the challenge is to turn inventory back into cash.
Taking into account only the strict financial cost vastly underestimates the real cost of inventory:
The storage itself typically adds an overhead of 2% to 5% on a yearly basis.
Obsolescence costs account for 10% to 20% on a yearly basis for nearly all kinds of manufactured products.
Thus a 20% yearly overhead is typically a rather sensible friction percentage for most finished-product inventories.
Lokad gotcha's
For inventories with low turnover, native quantile forecasts typically deliver superior results as far as accuracy is concerned. Indeed, classic mean forecasts behave poorly when it comes to intermittent demand. Don't hesitate to benchmark your current inventory practices against the forecasts generated through our webapp.
|
The Concurrency of the Altitudes in a Triangle A Trigonometric Proof Dušan Vallo February 2012
Let \(ABC\) be a triangle. Using the standard notations, we denote the altitudes \(h_a = AH_a\), \(h_b = BH_b\), \(h_c = CH_c\).
Theorem
The three altitudes of a triangle are concurrent.
Proof
First observe that in a right-angled triangle \(ABC\) the altitudes meet at the right-angle vertex. Now, suppose that triangle \(ABC\) is not right-angled and denote \(D = h_a \cap h_b\), \(E = h_a\cap h_c\). We wish to show that \(D\) coincides with \(E\).
The triangles \(ABH_a\), \(ACH_a\) are right-angled and we easily derive
(1)
\(h_a = c \cdot \sin B = b \cdot \sin C\).
From the right-angled triangles \(ACH_a\), \(ABH_b\) we obtain
(2)
\(CH_a = b \cdot \cos C,\) \(AH_b = c \cdot \cos A.\)
The triangles \(CEH_a\), \(ADH_b\) are also right-angled and so \(\angle ADH_b = \angle C\), \(\angle CEH_a = \angle B\). From (2), we obtain
\(\displaystyle AD=\frac{c\cdot \cos A}{\sin C}\), \(\displaystyle EH_a=\frac{b\cdot \cos C}{\tan B}\).
Let \(x=DE\). Then, for the altitude \(AH_a\),
\(AH_a = AD + x + EH_a\).
Using (1), this can be rewritten as
(3)
\(\displaystyle c\cdot\sin B=c\cdot\frac{\cos A}{\sin C}+x+b\cdot\frac{\cos C}{\tan B}\).
Next we apply the theorem about the sum of angles in a triangle and well-known formulas
\(\cos (X+Y)=\cos X\cdot\cos Y-\sin X\cdot\sin Y\),
\(\cos (\pi -X)=-\cos X.\)
We use the latter in (3) first with \(X = A=\pi - (B+C)\) to eventually arrive at
\(\displaystyle x=(c\cdot\sin B-b\cdot\sin C)\cdot\frac{1}{\tan B\cdot\tan C}\).
By (1), the first factor equals \(c\cdot\sin B-b\cdot\sin C=h_a-h_a=0\), implying \(x=0\), as required. This completes the proof.
Copyright © 1996-2018 Alexander Bogomolny
|
Write the Taylor series of $\text{Log}(1+w)$ with center at $w=0$ on $|w|<1$;
check that if $|z-2|<1$, then $|z|>1$. (If you have difficulties in checking this formally, try to draw a picture of the situation). Use these facts to compute
$$ \int_C \text{Log}\left(1+\frac{1}{z}\right)dz\, $$ $$ C:w(\theta) = 2 + (1/2)e^{i\theta},\quad \theta \in [0,2\pi] $$
My attempt so far:
I know the Taylor series of Log(1+w).
(I am pretty sure) I know what |z-2| < 1 looks like on the complex plane, and |z| > 1 as well, but how do those two hold at the same time? The region |z| > 1 contains the disk |z-2| < 1 (centered at 2, radius 1), but that's it; they certainly aren't equal.
Writing out the Taylor series expansion of Log(1+(1/z)) seems reminiscent of a Laurent series with all positive-power coefficients equal to zero, but given our simple closed curve, the only singularity is at z = 0, so I don't see anything resembling the use of residues or the like.
I am sorry I am not fluent in LaTeX to formalize my question, but thanks in advance.
EDIT: I have not done any problems in a long time that simply invoke Cauchy-Goursat, but IIRC, if f(z) is holomorphic on the interior of our curve of integration (and writing out several Taylor expansion terms of Log(1+(1/z)) suggests it is), there are no singularities inside our curve, so is the integral simply zero? I doubt this, but just throwing my ideas out there.
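If that Cauchy-Goursat reasoning is right the integral should vanish, since the curve stays inside |z-2| < 1, where |z| > 1 keeps 1/z in the unit disk and Log(1+1/z) holomorphic. A quick numerical sanity check (a sketch, not a proof):

```python
import numpy as np

# Contour C: w(t) = 2 + (1/2) e^{it}, t in [0, 2*pi)
n = 20000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = 2.0 + 0.5 * np.exp(1j * t)
f = np.log(1.0 + 1.0 / z)        # principal branch; analytic near z = 2
dz = 0.5j * np.exp(1j * t)       # dz/dt along the parametrization
integral = np.sum(f * dz) * (2.0 * np.pi / n)  # periodic Riemann sum
# |integral| comes out numerically zero, consistent with Cauchy-Goursat
```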
|
Let $f$ be a map whose domain is $X$. If $f$ satisfies the property that for all $x\in X$, $$f(f(x))=f(x)\text{,}$$ is there any standard name for such a function? Not sure if "projection" is the answer.
This is called an idempotent map. More generally, one can talk about idempotent elements: given a set $S$ with a binary operator $*: S \times S \to S$, an element $x \in S$ is called
idempotent if $x * x = x$. (Idempotent maps are idempotent elements in the endomorphism monoid of some object.)
In linear algebra, idempotent linear operators on a vector space are sometimes called "projections" or "projectors".
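A quick numerical illustration of both remarks (a minimal sketch in Python):

```python
import numpy as np

# An orthogonal projection matrix is idempotent: P @ P == P.
v = np.array([3.0, 4.0])
P = np.outer(v, v) / v.dot(v)   # projection onto the line spanned by v
assert np.allclose(P @ P, P)    # applying twice = applying once

# Ordinary functions can be idempotent too, e.g. abs on the reals:
assert abs(abs(-7)) == abs(-7)
```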
I think the term for that would be "idempotent."
|
Abstract
A comprehensive design of wavefront-splitting interferometers (WSIs) is introduced for integration into wafer light circuits and optical fiber systems. The WSI detects the coherent interference of two wavefronts after being equally split by a single buried resonator and collected into a single-mode waveguide. WSIs offer a sensor as compact as Fabry-Perot interferometers (FPIs), with the additional benefits of strong visibility contrast and high sensitivity to external parameters, as expected with Mach-Zehnder interferometers. Theoretical models and finite-difference time-domain simulations show the WSI spectral responses to be $2\times$ to $120\times$ more sensitive to changes in refractive index, temperature, and strain than comparable Bragg grating waveguides and FPIs. Femtosecond laser irradiation with selective chemical etching provided a flexible means for 3-D geometric structuring and waveguide integration below the surface, delivering precise WSIs with a small $\sim 12$-nm rms surface roughness. Temperature and vacuum sensing were demonstrated with high sensing resolution ($0.8\,^{\circ}\mathrm{C}$ and $1.8\times 10^{-5}$ RIU) and sensitivity (60.6 pm/$^{\circ}\mathrm{C}$ and 2800 nm/RIU), matching theoretically anticipated values. Such high-finesse optical elements open a new realm of optical sensing and integrated optical circuit concepts without relying on tedious nanoprecision assembly methods or the use of large optical components.
© 2015 IEEE
|
Motivation:
There is a persistent belief among evolutionary biologists that chance played a significant role in biological evolution. This would imply that life and its building blocks could have been significantly different. In particular, some protein scientists suspect that evolution could have used as few as 10 amino acids to build life rather than the original 20. We may arrive at such a conclusion by considering the minimum number of letters required to fold a protein [1].
However, there are a couple of limitations with such analyses. First, they ignore second-order, third-order and fourth-order constraints on the structure of amino acids, which quickly lead to a combinatorial explosion in the number of constraints facing any minimal set of amino acids. Second, they ignore the time and space constraints on the search problem of finding the last universal common ancestor containing the set of 20 amino acids that we know.
By taking these two points into consideration we find that there was actually very little room for chance, leading us to a biological equivalent of the anthropic principle.
The argument for sufficiency:
There isn’t much to be said about sufficiency except that the diversity of multicellular life forms is an existence proof that 20 amino acids are effectively sufficient. Establishing necessity is a lot harder.
The compositional structure of life forms:
It’s worth noting that biological systems have the following compositional structure:
\begin{equation} \text{amino acids}(\mathcal{A}) \rightarrow \text{proteins}(\mathcal{P}) \rightarrow \text{cells}(\mathcal{C}) \rightarrow \text{eukarya}(\mathcal{E}) \end{equation}
where each level of abstraction is consistent with but not reducible to the levels of abstraction below it. These may also be represented as a hierarchy of nonlinear transformations:
\begin{equation} \mathcal{A} \overset{f_1}{\rightarrow} \mathcal{P} \overset{f_2}{\rightarrow} \mathcal{C} \overset{f_3}{\rightarrow} \mathcal{E} \end{equation}
\begin{equation} E = f_3 \circ \mathcal{C} \end{equation}
\begin{equation} E = f_3 \circ f_2 \circ \mathcal{P} \end{equation}
\begin{equation} E = f_3 \circ f_2 \circ f_1 \circ \mathcal{A} \end{equation}
where the nonlinearity of the $f_i$ implies that eukaryotes are more sensitive to variations in their basic set of amino acids than to variations in cell type. In other words, if $f_2$ and $f_3$ are nonlinear mappings then $f_3 \circ f_2$ is most probably even more nonlinear in its behaviour.
We may proceed in this manner with a stability analysis of functions on trees [2]. However, given that the functional form of the $f_i$ isn’t exactly known, a combinatorial analysis may be a better approach. Specifically, we may try to infer the number of independent constraints on $\mathcal{A}$ given the number of independent constraints on $\mathcal{E}$.
Complexity and Robustness:
We may make the reasonable assumption that common life forms persist because they are robust. From an algorithmic perspective, one way for a pattern to be robust is if it is maximally informative. In other words, given a biological pattern $L$, the Kolmogorov Complexity of $L$ is given by:
\begin{equation} 3 \leq K(L) \approx \lvert L \rvert \end{equation}
where $\lvert L \rvert$ is the length of $L$ and $3$ is a reasonable lower bound.
In order to apply this method to our analysis of compositional structures we must choose a representative from each set: $E \in \mathcal{E}$, $C \in \mathcal{C}$, $P \subset \mathcal{P}$.
Using the principle that mundane life forms must be robust and the fact that biological systems have a compositional structure, we define these representatives as follows:
\begin{equation} E := \text{most common eukaryote} \end{equation}
\begin{equation} C := \text{most common cell type in } E \end{equation}
\begin{equation} P := \text{set of proteins found in } C \end{equation}
\begin{equation} \mathcal{A} := \text{set of 20 amino acids} \end{equation}
Now, let’s suppose that the number of independent constraints on the reproduction of $E$ is approximately given by:
\begin{equation} K(E) = \text{Kolmogorov Complexity of genome of } E \end{equation}
\begin{equation} K(E) \geq 3 \end{equation}
If each independent constraint on $E$ may be expressed in terms of constraints on $C$, each independent constraint on $C$ in terms of constraints on $P$, and each constraint on $P$ in terms of the amino acids $\mathcal{A}$, then the number of constraints on $\mathcal{A}$ must be on the order of:
\begin{equation} \mathcal{L} = K(A)^{K(P)^{K(C)^{K(E)}}} \geq e^{e^{e^{e}}} > 10^{1000} \end{equation}
assuming that:
\begin{equation} \min(K(\mathcal{A}),K(P),K(C),K(E)) \geq 3 \end{equation}
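The magnitude claim in the chain above can be sanity-checked by comparing logarithms rather than evaluating the tower directly (this assumes the tower $e^{e^{e^{e}}}$ is parsed right-associatively):

```python
import math

# log10(e^x) = x * log10(e), with x = e^(e^e); this avoids overflow.
x = math.exp(math.exp(math.e))        # e^(e^e), about 3.8e6
log10_tower = x * math.log10(math.e)  # log10 of e^(e^(e^e)), about 1.7e6
assert log10_tower > 1000             # hence e^(e^(e^e)) > 10^1000
```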
At present I can’t fully explain the origin of the iterated exponential but I think it arises naturally in complex systems with emergent behaviour. The basic idea is that you can think of unfolding higher-level abstractions into more fundamental lower-level abstractions in higher-dimensional spaces using a sequence of multipartite graphs.
Edges that define interactions in one space become points in another.
The number of independent constraints on the fundamental set of amino acids:
It follows that finding a fundamental set of amino acids that simultaneously satisfies all constraints is a multi-objective discrete optimisation problem in a search space that is at least as large as:
\begin{equation} 2^{\mathcal{L}} > 2^{10^{1000}} \end{equation}
since for each independent constraint there are at least two options. The probability that this set of amino acids was discovered by chance is probably small… but exactly how small?
Time and Space constraints on biological evolution:
I’d like to make a compelling argument that the time-bounded search space for the 20 amino acids we know is much smaller than $2^{\mathcal{L}}$.
Consider that the smallest organism weighs on the order of one picogram. If there are less than $10^{20}$ kilograms of living organisms at any moment on any Earth-like planet, bacteria reproduce every 20 minutes, and the universe is less than $10^{10}$ years old, then the effective search space for a universal common ancestor containing all 20 amino acids is less than:
\begin{equation} S = \text{time} \times \text{space} \leq 10^{20} \cdot 10^{10} \cdot 10^{15} \cdot 365 \cdot 24 \cdot 3 \leq 10^{50} \end{equation}
so the probability that the 20 amino acids were discovered by chance has to be much smaller than:
\begin{equation} \frac{S}{2^\mathcal{L}} < \frac{1}{10^{1000}} \end{equation}
which suggests that either a highly effective search procedure was responsible for their discovery or that fundamental laws of physics impose regularities on the search space for amino acids and thus considerably simplify the optimisation problem. Both of these ideas lead us to the conclusion that there must be a biological equivalent to the anthropic principle in cosmology.
Discussion:
The above calculations started as back-of-the-envelope calculations and I think they can be refined. One approach would be to think of this as a problem in programming language design. What if we allowed programming languages for dynamic control problems to evolve in such a way that higher-level languages emerge? Would we find that the more primitive instruction set is more sensitive to random variation?
Alternatively, it’s a good idea to take a closer look at the problem from the perspective of protein folding. How many amino acids are required to cover the most fundamental types of protein interactions?
References: Ke Fan and Wei Wang. What is the minimum number of letters required to fold a protein? Journal of Molecular Biology. Roozbeh Farhoodi, Khashayar Filom, Ilenna Simone Jones, Konrad Paul Kording. On functions computed on trees. Arxiv. 2019. Yi Liu and Stephen Freeland. On the evolution of the standard amino-acid alphabet. Genome Biology. 2006. H. James Cleaves. The origin of the biologically coded amino acids. Journal of Theoretical Biology. 2009. Matthias Granold et al. Modern diversification of the amino acid repertoire driven by oxygen. PNAS. 2017.
|
Is there a formula to calculate the distribution of the mean waiting time for a server which processes a workload of $N$ jobs and behaves like an M/G/1 queue?
I would like to be able to answer the following: what is the chance that for a workload of $N$ items, the mean waiting time is above X. If taking the distribution of the mean is too troublesome, can we relax the problem to give the distribution of the waiting times for $N$ jobs (although I guess this does not relax the problem that much).
I hope the question is clear.
Update
Based on the comments I see that some things are unclear. I hope context of my problem will make it more clear.
I have made a simulator for a server which takes jobs as input with inter-arrival times which are exponentially distributed. The job service times follow a general distribution. Basically I use random samples to mimic an M/G/1 queue. For large numbers of jobs I see the mean waiting time converge to the value given by the following formula for the expected waiting time [1]:
$$ E[W] = \frac{\lambda \, E[\tau^2]}{2(1-\rho)}$$
Where $E[\tau^2]$ is the second moment of the service time.
However, when I have a smaller number of jobs the mean waiting time can vary a lot between runs. Is there a formula which captures this variance as a function of the service time, the arrival rate, and, most importantly, the number of jobs? Basically this would allow me to verify whether my simulator is correct.
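Short of a closed-form transient result, one way to quantify this run-to-run variance is Monte Carlo over many independent runs of the same queue logic. Below is a minimal sketch using the Lindley recursion; the M/M/1 service distribution, the rates, and the threshold X = 1.0 are placeholder assumptions, not part of the question:

```python
import random

def mean_wait_mg1(n_jobs, lam, service):
    """Mean waiting time of n_jobs in one M/G/1 run, via the
    Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    w, total = 0.0, 0.0
    for _ in range(n_jobs):
        total += w
        w = max(0.0, w + service() - random.expovariate(lam))
    return total / n_jobs

random.seed(1)
lam, mu = 1.0, 2.0   # rho = 0.5; for M/M/1 this gives E[W] = 0.5
runs = [mean_wait_mg1(100, lam, lambda: random.expovariate(mu))
        for _ in range(500)]
# Empirical estimate of P(mean waiting time of 100 jobs > X)
frac_above = sum(m > 1.0 for m in runs) / len(runs)
```

Histogramming `runs` gives the empirical distribution of the mean waiting time for a given workload size, which is exactly the quantity asked about.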
|
Degree $n$: $39$
Transitive number $t$: $21$
Group: $C_{13}\times C_{13}:C_3$
Parity: $1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,28,22,11,35,20,9,31,24,2,38,19,13,39,25,10,37,23,12,33,16,7,32,18,4,30,21,5,34,15,8,36,14,3,29,26,6,27,17), (1,21,27,8,23,34,7,24,33,13,17,38,11,15,28,3,16,36,4,19,32,10,22,39,9,14,35,6,18,29,5,25,30,12,20,37,2,26,31)
$|\Aut(F/K)|$: $13$
Galois groups for stem field(s), by $|G/N|$:
3: $C_3$
13: $C_{13}$
39: $C_{13}:C_3$, $C_{39}$
Resolvents shown for degrees $\leq 47$
Degree 3: $C_3$
Degree 13: None
39T21 x 3
Siblings are shown with degree $\leq 47$
A number field with this Galois group has no arithmetically equivalent fields.
There are 91 conjugacy classes of elements. Data not shown.
Order: $507=3 \cdot 13^{2}$ Cyclic: No Abelian: No Solvable: Yes GAP id: [507, 3]
Character table: Data not available.
|
You could accomplish this using a secret sharing scheme, which doesn't require any public key cryptography. An $(n,k)$ secret sharing scheme allows a secret to be divided up into $n$ shares such that the secret can be recovered from any $k$ shares. However, if you only have $k - 1$ shares you cannot recover any information about the secret.
Here, we will use a simple additive $(3,3)$ secret sharing scheme over a finite field $\mathbb{F}$. Given a secret $s \in \mathbb{F}$, we can split it into three shares:
$$[s]_1, [s]_2, [s]_3 \in_R \mathbb{F} \mid s = [s]_1 + [s]_2 + [s]_3$$
Each of the three shares is uniformly distributed at random, so an adversary holding only one or two shares learns nothing about the secret $s$. Recovering the secret from all three shares is simple: just add them all together. A nice property of this scheme is that it's linear, so we can add two secrets together by adding their shares together:
$$s_a + s_b = ([s_a]_1 + [s_b]_1) + ([s_a]_2 + [s_b]_2) + ([s_a]_3 + [s_b]_3)$$
This means that we can do additions on the secret values while they are secret-shared, and thus we can compute simple functions without revealing the secret inputs! In fact, there are even (complicated) protocols for doing other operations on these secret shares, such as multiplication and exponentiation.
The protocol
Alice wants to send her secret $s_a$ and Bob wants to send his secret $s_b$ such that Charlie can only learn the sum $s' = s_a + s_b$ but not the individual values. We assume here that all of the players are connected by secure authenticated channels.
Alice shares her secret $s_a$ by:
- Keeping $[s_a]_A$ for herself
- Sending $[s_a]_B$ to Bob
- Sending $[s_a]_C$ to Charlie

Bob shares his secret $s_b$ the same way:
- Sending $[s_b]_A$ to Alice
- Keeping $[s_b]_B$ for himself
- Sending $[s_b]_C$ to Charlie

Now we want each party to compute $s' = s_a + s_b$ on their local shares:
- Alice computes $[s']_A = [s_a]_A + [s_b]_A$
- Bob computes $[s']_B = [s_a]_B + [s_b]_B$
- Charlie computes $[s']_C = [s_a]_C + [s_b]_C$

Finally, Alice and Bob open their result towards Charlie:
- Alice sends Charlie $[s']_A$
- Bob sends Charlie $[s']_B$

Charlie now has the following information:
- One share of $s_a$: $[s_a]_C$
- One share of $s_b$: $[s_b]_C$
- Three shares of $s'$: $[s']_A$, $[s']_B$, $[s']_C$
Charlie doesn't have enough shares to recover $s_a$ or $s_b$, but he can easily recover $s'$ as:
$$s' = [s']_A + [s']_B + [s']_C$$
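The whole protocol fits in a few lines of code. Here is a sketch over a prime field (the particular modulus and the use of Python's `secrets` module are implementation choices, not part of the scheme itself):

```python
import secrets

P = 2**61 - 1  # a prime modulus for the field F_p (any prime works)

def share(s):
    """Split s into three uniformly random additive shares over F_p."""
    s1, s2 = secrets.randbelow(P), secrets.randbelow(P)
    return s1, s2, (s - s1 - s2) % P

def reconstruct(shares):
    """Recover the secret: just add all shares mod p."""
    return sum(shares) % P

sa, sb = 1234, 5678
a1, a2, a3 = share(sa)  # Alice keeps a1, sends a2 to Bob, a3 to Charlie
b1, b2, b3 = share(sb)  # Bob keeps b2, sends b1 to Alice, b3 to Charlie

# Each party adds its local shares of s' = sa + sb (linearity)
c1, c2, c3 = (a1 + b1) % P, (a2 + b2) % P, (a3 + b3) % P

# Alice and Bob open c1, c2 towards Charlie, who learns only the sum
assert reconstruct((c1, c2, c3)) == (sa + sb) % P
```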
Note that this simplistic scheme is not robust, because Alice (or Bob) could lie and provide shares such that $[s_a]_A + [s_a]_B + [s_a]_C \ne s_a$. To protect against this, you'd want to use a verifiable secret sharing scheme so that the players can detect an inconsistent sharing.
This protocol is interactive, because it requires Alice and Bob to send their $[s_a]_B$ and $[s_b]_A$ values to each other. With some minor modifications, it could be made non-interactive.
|
Another great feature of SDC Verifier is described here. Connections made with bolts according to
EN 1993-1-8 (2005) Chapter 3 can now be checked with SDC Verifier directly within your standard FEA program. For the EN 1993-1-8 bolt check the following checks can be performed:
Shear connection
A. bearing type; B. slip-resistance at serviceability; C. slip-resistance at ultimate;
Tension connection
D. non-preloaded; E. preloaded.
For shear and tension together
Table 3.2: Categories of bolted connections
Category / Criteria / Remarks

Shear connections:

A. bearing type: \(F_{v,Ed} \leq F_{v,Rd}\); \(F_{v,Ed} \leq F_{b,Rd}\). No preloading required. Bolt classes from 4.6 to 10.9 may be used.

B. slip-resistant at serviceability: \(F_{v,Ed,ser} \leq F_{s,Rd,ser}\); \(F_{v,Ed} \leq F_{v,Rd}\); \(F_{v,Ed} \leq F_{b,Rd}\). Preloaded 8.8 or 10.9 bolts should be used. For slip resistance at serviceability see 3.9.

C. slip-resistant at ultimate: \(F_{v,Ed} \leq F_{s,Rd}\); \(F_{v,Ed} \leq F_{b,Rd}\); \(\sum F_{v,Ed} \leq N_{net,Rd}\). Preloaded 8.8 or 10.9 bolts should be used. For slip resistance at ultimate see 3.9; for \(N_{net,Rd}\) see 3.4.1(1) c).

Tension connections:

D. non-preloaded: \(F_{t,Ed} \leq F_{t,Rd}\); \(F_{t,Ed} \leq B_{p,Rd}\). No preloading required. Bolt classes from 4.6 to 10.9 may be used. For \(B_{p,Rd}\) see Table 3.4.

E. preloaded: \(F_{t,Ed} \leq F_{t,Rd}\); \(F_{t,Ed} \leq B_{p,Rd}\). Preloaded 8.8 or 10.9 bolts should be used. For \(B_{p,Rd}\) see Table 3.4.

The design tensile force \(F_{t,Ed}\) should include any force due to prying action, see 3.11. Bolts subjected to both shear force and tensile force should also satisfy the criteria given in Table 3.4.
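To make the category A and category D limits concrete, here is a rough sketch of the underlying resistance formulas and the combined shear-plus-tension criterion of Table 3.4. It is simplified (bearing and punching-shear resistances are omitted, and \(\gamma_{M2} = 1.25\) is assumed); it illustrates the check, not SDC Verifier's actual implementation:

```python
GAMMA_M2 = 1.25  # partial safety factor (national annexes may differ)

def bolt_utilization(F_v_Ed, F_t_Ed, f_ub, A, A_s, alpha_v=0.6):
    """Utilization of a non-preloaded bolt under combined shear and
    tension per EN 1993-1-8 (simplified: bearing and punching-shear
    checks omitted). Forces in N, stresses in N/mm^2, areas in mm^2."""
    F_v_Rd = alpha_v * f_ub * A / GAMMA_M2  # shear resistance
    F_t_Rd = 0.9 * f_ub * A_s / GAMMA_M2    # tension resistance
    u_shear = F_v_Ed / F_v_Rd
    u_tension = F_t_Ed / F_t_Rd
    # Combined criterion of Table 3.4
    u_combined = F_v_Ed / F_v_Rd + F_t_Ed / (1.4 * F_t_Rd)
    return max(u_shear, u_tension, u_combined)

# M20 class 8.8: f_ub = 800 N/mm^2, gross area A = 314 mm^2,
# tensile stress area A_s = 245 mm^2; example loads are made up
u = bolt_utilization(F_v_Ed=50e3, F_t_Ed=60e3, f_ub=800, A=314, A_s=245)
```

A utilization above 1.0 means the bolt fails the check; the maximum over all criteria corresponds to the total utilization factor reported in the result tables below.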
Axial and shear forces are automatically extracted from the FEA model (Ansys, FEMAP or Simcenter 3D/NX CAE) by SDC Verifier; the bolt diameter is taken by default from the property, but the user of course has the possibility to override this.
In the following example a check on the M20 bolts, class 8.8, is shown. The properties and locations of the bolts are taken from the model setup report (an automatically generated report with a complete description of the FEA model):
The nominal values of the yield strength \(f_{yb}\) and the ultimate tensile strength \(f_{ub}\) for bolts are taken from the table after selection of the bolt class of the element:
Table 3.1: Nominal values of the yield strength \(f_{yb}\) and the ultimate tensile strength \(f_{ub}\) for bolts

Bolt class: 4.6, 4.8, 5.6, 5.8, 6.8, 8.8, 10.9
\(f_{yb}\) (N/mm²): 240, 320, 300, 400, 480, 640, 900
\(f_{ub}\) (N/mm²): 400, 400, 500, 500, 600, 800, 1000
In the following example the bolt connection is checked without controlled preloading or pretension, but this can be selected as well by choosing the relevant check category, see below:
Picture of Eurocode 3 Bolt menu with the tables and plots used in this article
The table below includes formulas for design resistance calculation used in the SDC check.
Because controlled preloading is not applied, the bolts in the example will be checked according to the limits of category A: bearing type for the shear loading, and category D: non-preloaded of Table 3.2 for the tension loading.
Detailed results of the bolt check according to EN 1993-1-8 (2005) are shown in the table.
The bolt IDs (= the element numbers of the bolt elements):
The table below shows the worst results on the bolt selection, together with the IDs of the bolts producing them.
The total utilization factor shows the maximum results of the checks on both shear and axial loading of the bolt (=last column of the previous table)
With the help of the calculation core of SDC Verifier this check can be repeated for different load situations. In the example below 5 more load situations and a load group are created. The load sets below consist of a factor times a unit force (for ease of explanation, the unit force results in 0.71 N axial load and 0.71 N shear load in each bolt).
All load combinations combined into the load group
By showing the load group result, the maximum load situation of every bolt is reported by plotting the maximum utilization of the bolt check for all the load situations in the load group:
The table below shows the design load (axial and shear force) on bolt ID 3 over all loads. The last row shows the result for the load group, which in this case equals the worst load set, load set 5.
Detailed results over loads for bolt ID 3 are shown in the table below.
This is another example of how to speed up the verification of FEA results according to standards, rules and regulations. With the help of SDC Verifier it is very easy to perform a bolt check according to EN 1993-1-8 (2005); all checks are done directly within the FEA model and the results can be included in the generated overall result report.
No more tedious procedures of selecting the bolt elements, exporting the bolt results for all the different load situations to a spreadsheet, setting up a bolt check and summarizing all results in the spreadsheet, and updating this procedure after each design update.
Interested to see how the FEA result report of this small example looks? Download the pdf-version of the result report here
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
|
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

March 7, TBA

March 14, TBA

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
|
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724
Like this:
[/url][/wiki][/url]
[/wiki]
[/url][/code]
Many different combinations work. To reproduce, paste the above into a new post and click "preview".
x₁=ηx
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
I wonder if this works on other sites? (Remove/Change )
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Related:[url=http://a.com/]
[/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote:
Related:
[
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places.
Here, I'll fix it:
[/wiki][url]conwaylife.com[/url]
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
It appears I fixed @Saka's open <div>.
toroidalet Posts: 1018 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up.
Apart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these:
-Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish.
-Saka Quote: A combination of a diluted Aidan Mode and quotes; leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages.
-Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes the bottom bar to the side. The signature gets coded.
-Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by the software. Leaves the rest of the page white and quotes. Replies and the post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.
Last edited by Saka
on June 21st, 2017, 10:51 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
I actually laughed at the terminology.
"IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways,
[/wiki]
I like making rules
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it...
-Fluffykitty Pusher
Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Screenshot?
New one yay.
-Aidan Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
Last edited by Saka
on June 21st, 2017, 10:20 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
The testing grounds have now become similar to actual military testing grounds.
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
We also have this thread. Also,
is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]
Last edited by fluffykitty
on June 22nd, 2017, 11:50 am, edited 1 time in total.
83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact:
oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar.
EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale.
Code: Select all
x = 8, y = 10, rule = B3/S23
3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo!
No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this
^
What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this
(Sorry I'm having a lot of fun with this)
cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy
Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]
fg
Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
[/code]
Is a pinch broken
Doesn’t this thread belong in the sandbox?
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm
Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Now it's half an aidan mode testing grounds.
Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
|
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = \frac{1}{2}(1 + \sqrt{17})\). We also show the integral \(q\)-expansion of the trace form.
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
\( p \) Sign
\( 2 \) \( -1 \)
\( 3 \) \( -1 \)
\( 19 \) \( -1 \)
\( 53 \) \( -1 \)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(6042))\):
\( T_{5}^{2} - T_{5} - 4 \), \( T_{7}^{2} + T_{7} - 4 \), \( T_{11} + 4 \)
|
Good day, I am attempting an optional exercise and I am finding it hard to interpret the problem in terms of matrices and vectors.
Coin 1 has probability 0.4 of coming up heads, and coin 2 has probability 0.8 of coming up heads. The coin to be flipped initially is equally likely to be coin 1 or coin 2. Thereafter, if the flipped coin shows heads then coin 1 is chosen for the next flip, and if the flipped coin shows tails then coin 2 is chosen for the next flip.
Let $X_0$ be the coin chosen for the initial flip, and, for $n \geq 1$, let $X_n$ be the coin chosen for the $n$th flip after the initial flip.
(a) Explain why $X_0,X_1,X_2, . . .$ is a Markov chain. Write down its statespace and its transition matrix.
(b) Let $p^{(n)}$ be the probability row vector giving the distribution of $X_n$. Find $p^{(0)}, p^{(1)}, p^{(2)}$.
(c) Write down the probability that coin 1 is chosen for the second flip after the initial flip.
(d) Find the probability that coin 1 is chosen for the second and third flip after the initial flip.
What I thought of is that the state space is $S=(1,2)$ and the transition matrix would be
$$\begin{pmatrix} 0.4 & 0.6\\\ 0.8 & 0.2\end{pmatrix}$$
After doing this, I think the rest of the questions will be easy to approach
Using this, I get that $p^{(0)}=( \frac{1}{2}, \frac{1}{2})$ as the initial toss has equal probability of being in state 1 or 2.
Using the answer below, I get that
$p^{(1)}=( 0.6, 0.4)$
$p^{(2)}=( 0.56, 0.44)$
The answer to $c)$ is then $0.6$.
But the last part ($d)$) is still puzzling me.
I feel like there is some Bayes involved and that the answer is not simply $0.6*0.56$.
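The last part needs no Bayes, only the Markov property: conditioned on $X_2$, the coin for the third flip depends only on $X_2$, so $P(X_2 = 1, X_3 = 1) = P(X_2 = 1)\,P(1 \to 1)$ rather than $0.6 \times 0.56$. A minimal numpy sketch of the whole computation (matrix and vectors exactly as defined in the question):

```python
import numpy as np

# Transition matrix: rows = current coin, columns = next coin.
# Coin 1 shows heads w.p. 0.4 (-> coin 1), tails w.p. 0.6 (-> coin 2), etc.
P = np.array([[0.4, 0.6],
              [0.8, 0.2]])

p0 = np.array([0.5, 0.5])   # initial coin equally likely
p1 = p0 @ P                 # distribution of X_1: (0.6, 0.4)
p2 = p1 @ P                 # distribution of X_2: (0.56, 0.44)

# (d) Markov property: P(X_2 = coin1 and X_3 = coin1) = P(X_2 = coin1) * P(1 -> 1)
joint = p2[0] * P[0, 0]
```

So the answer to (d) is $0.56 \times 0.4 = 0.224$, confirming the suspicion that $0.6 \times 0.56$ is not correct.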
|
Let $\ \mathbf N = \{1\ 2\ \ldots\}\ $ be the set of natural numbers. Let $\ f : \mathbf N\rightarrow\mathbf N\ $ be an arbitrary function, and $\ \forall_{n\in\mathbf N}\, F(n)\ :=\ \max_{k = 1\ldots n}\, f(k)$.
Let's assume that, with respect to a fixed universal Turing machine, there exists at least one algorithm which computes $\ f,\ $ and let $\ ||f||_A(n)\ $ be the number of Turing operations which compute $\ f(n)\ $ by algorithm $\ A$.
By a polynomial $\ \mathbf N\rightarrow\mathbf N\ $ I mean a function which differs from a (true real) polynomial by less than 1 for almost all natural numbers $\ n\in\mathbf N$.
DEFINITION 1 Function $\ f\ $ is called a fast counter $\ \Leftarrow:\Rightarrow\ $ there exists an algorithm $\ A\ $ and a polynomial $\ p : \mathbf N\rightarrow \mathbf N\ $ such that
$$\forall_{n\in\mathbf N}\ \ ||f||_A(n)\ \le\ \frac{p(n)}{n!}\cdot F(n) $$
DEFINITION 2 Function $\ f\ $ is called a slow counter $\ \Leftarrow:\Rightarrow\ $ for every algorithm $\ A\ $ there exists a polynomial $\ q : \mathbf N\rightarrow \mathbf N\ $ such that
$$\forall_{n\in\mathbf N}\ \ ||f||_A(n)\ \ge\ \frac{F(n)}{n!\cdot q(n)}$$
DEFINITION 3 Function $\ f\ $ is called an algorithmic counter $\ \Leftarrow:\Rightarrow\ \ f\ $ is both a fast and a slow counter.
QUESTION. Let $\ pos(n)\ $ be the number of all partial orders on the integer interval $\ \{0\ \ldots\ n\!-\!1\}.\ $ Is the function $\ pos\ $ an algorithmic counter?
A similar question holds for the number of quasi-orders (i.e. of topologies).
|
Definition:Boolean Algebra/Definition 1
Furthermore, these operations are required to satisfy the following axioms:
\((BA_1 \ 0)\) $:$ $S$ is closed under $\vee$, $\wedge$ and $\neg$
\((BA_1 \ 1)\) $:$ Both $\vee$ and $\wedge$ are commutative
\((BA_1 \ 2)\) $:$ Both $\vee$ and $\wedge$ distribute over the other
\((BA_1 \ 3)\) $:$ Both $\vee$ and $\wedge$ have identities $\bot$ and $\top$ respectively
\((BA_1 \ 4)\) $:$ $\forall a \in S: a \vee \neg a = \top, a \wedge \neg a = \bot$
The operations $\vee$ and $\wedge$ are called join and meet, respectively.
The identities $\bot$ and $\top$ are called bottom and top, respectively.
The operation $\neg$ is called complementation.
$\begin{array}{c|cc} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \\ \end{array} \qquad \begin{array}{c|cc} \times & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \\ \end{array}$
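The axioms $(BA_1\ 1)$ through $(BA_1\ 4)$ can be machine-checked on the two-element Boolean algebra suggested by the tables above, taking $\bot =$ False and $\top =$ True. A small Python sketch, purely illustrative:

```python
from itertools import product

S = [False, True]                  # bottom = False, top = True
join = lambda a, b: a or b         # the join, ∨
meet = lambda a, b: a and b        # the meet, ∧
neg = lambda a: not a              # complementation, ¬

for a, b, c in product(S, repeat=3):
    # (BA_1 1): commutativity of join and meet
    assert join(a, b) == join(b, a) and meet(a, b) == meet(b, a)
    # (BA_1 2): each operation distributes over the other
    assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
    assert join(a, meet(b, c)) == meet(join(a, b), join(a, c))
for a in S:
    # (BA_1 3): identities, and (BA_1 4): complements
    assert join(a, False) == a and meet(a, True) == a
    assert join(a, neg(a)) == True and meet(a, neg(a)) == False
```

Closure $(BA_1\ 0)$ is automatic here, since the Python boolean operations on `S` only ever return values of `S`.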
Some sources refer to a Boolean algebra as:
or
both of which terms already have a different definition on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Other common notations for the elements of a Boolean algebra include: $0$ and $1$ for $\bot$ and $\top$, respectively $a'$ for $\neg a$.
When this convention is used, $0$ is called zero, and $1$ is called one or unit.
Also see: Results about Boolean algebras can be found here.
Source of Name
This entry was named for George Boole.
|
In Fig. 7, two equal circles, with centres O and O', touch each other at X. OO' produced meets the circle with centre O' at A. AC is tangent to the circle with centre O, at the point C. O'D is perpendicular to AC. Find the value of \[\frac{DO'}{CO}\]. Answer:
Given, AC is tangent to the circle with centre O and O'D is perpendicular to AC. Then \[\angle ACO=90{}^\circ \] and \[\angle ADO'=90{}^\circ \]. Also \[\angle CAO=\angle DAO'\] (\[\because \] common angle), so \[\therefore \Delta \,AO'D\sim\Delta \,AOC\] \[\Rightarrow \frac{AO'}{AO}=\frac{DO'}{CO}\]. Since \[AX=2AO'\] and \[OX=AO'\], we have \[AO=3AO'\], so \[\frac{DO'}{CO}=\frac{AO'}{3AO'}=\frac{1}{3}\]
|
I have $k$ distinct prime numbers $\ell_1 < \dots <\ell_k$, and for each $i=1,\dots,k$, a subset $A_i$ of $\mathbb Z / \ell_i \mathbb Z$. Let $L=\ell_1 \cdots \ell_k$. Now using the Chinese remainder theorem, $\mathbb Z/ L \mathbb Z = \prod_{i=1}^k \mathbb Z /\ell_i \mathbb Z$, hence the subset $\prod_{i=1}^k {A_i}$ of the RHS is identified with a subset $A$ of $\mathbb Z/L \mathbb Z$ (that's what I call a "product set").
Now consider an interval $I$ of $\mathbb Z / L \mathbb Z$. I would say (that's admittedly a vague definition) that $I$ and $A$ are "approximately independent" if $\frac{|A \cap I|}{|I|}$ is close to $\frac{|A|}{L}$. For example, when $I = \mathbb Z/L\mathbb Z$, these two fractions are trivially equal.
Now it seems natural to believe that when $|I|$ is large with respect to the $\ell_i$, even if it is small with respect to $L=\prod_i \ell_i$, then $A$ is approximately independent of $I$.
Is this intuition true in some sense?
I apologize for this vague question: I think it makes sense as it is, and it seems that any of my attempts to make it more precise will result in something false or trivial. I also have the frustrating impression that I have already seen this problem, or something very close to it (that is, concerning the "independence" of product sets in $\mathbb Z / L \mathbb Z$ with other natural subsets of $\mathbb Z / L \mathbb Z$), discussed somewhere, perhaps even on MO. But I am not able to recall where, and missing even a name or keywords for this problem, I don't know where to look. So any reference or name for this question is welcome. (PS: I don't even know how to tag this question. Please feel free to change tags.)
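The intuition is easy to probe numerically on a toy example. In the sketch below the primes, the residue sets $A_i$, and the sampling stride are arbitrary illustrative choices, not part of the question:

```python
from math import prod

primes = [3, 5, 7, 11]                           # hypothetical choice of l_i
L = prod(primes)                                 # L = 1155
A_sets = [set(range(p // 2)) for p in primes]    # an A_i inside each Z/l_i Z
# CRT identification: x in A iff x mod l_i lies in A_i for every i
A = {x for x in range(L)
     if all(x % p in Ai for p, Ai in zip(primes, A_sets))}
density = len(A) / L                             # the global ratio |A| / L

def discrepancy(m):
    """Worst sampled deviation of |A ∩ I| / |I| from |A| / L over
    intervals I = {t, ..., t+m-1} (mod L), t running over a coarse grid."""
    return max(abs(sum(1 for x in range(t, t + m) if x % L in A) / m - density)
               for t in range(0, L, 97))
```

For $m = L$ the discrepancy is exactly zero (each full-length window hits every residue once); for windows much longer than $\max_i \ell_i$ but shorter than $L$ the sampled ratios already stay within a few percent of the global density, consistent with the intuition in the question.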
|
How many ordered pairs of real numbers $(x,y)$ are there such that
$$\begin{cases} x^2 - y^2 + \dfrac{\pi x+\phi y}{x^2+y^2} = \sqrt{2}, \\ 2xy + \dfrac{\phi x-\pi y}{x^2 + y^2} = 0? \end{cases}$$
(Here $\phi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$, and $\pi$ is pi, which is approximately $3.14159$.)
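One way to explore the problem numerically: writing $z = x + iy$, the two equations are exactly the real and imaginary parts of $z^2 + (\pi + \phi i)/z = \sqrt 2$, so for $z \neq 0$ the system is equivalent to the cubic $z^3 - \sqrt 2\, z + (\pi + \phi i) = 0$. A quick numerical check (an exploration, not the intended pencil-and-paper solution):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2          # golden ratio
c = np.pi + phi * 1j

# Multiplying z^2 + c/z = sqrt(2) by z gives z^3 - sqrt(2) z + c = 0.
roots = np.roots([1, 0, -np.sqrt(2), c])

# Sanity checks: each root is nonzero and actually solves the cubic.
residuals = [abs(z**3 - np.sqrt(2) * z + c) for z in roots]
pairs = [(z.real, z.imag) for z in roots]   # candidate ordered pairs (x, y)
```

Each nonzero root $z$ of the cubic yields one ordered pair $(x, y)$, so counting distinct roots answers the question.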
|
Summary: Clarifying the concept of RKHS
I have been reading a lot about Reproducing Kernel Hilbert Spaces mainly because of their application in machine learning. I do not have a formal background in topology, took linear algebra as an undergrad but mainly have encountered things such as, inner product, norm, vector space, orthogonality, and independence with respect to these ideas associated in Physics. I think I am making headway but wanted to clarify before I go further.
-While in physics we generally think of an inner product as a dot product giving the angle between two vectors (with the exception of quantum mechanics, where inner products play a more abstract role in the sense of operators), it is really a more general mathematical concept endowed over a vector space ##\mathbf V##, such that ##\mathbf V \times \mathbf V \rightarrow \mathbb R , \mathbb C## with the properties of conjugate symmetry, linearity, and positive definiteness.
Now let's say I have a topological space in ##\mathbb R^\infty##; additionally I have a vector space ##\mathbf V \subset \mathbb R^\infty## endowed with an inner product which is defined over ##\mathbf V## but not all of ##\mathbb R^\infty##.
Now here is where I have a little bit of uncertainty. Since the inner product is not defined over the whole topological space, I cannot use the basis of the topological space to decompose ##\mathbf V## into an independent combination of basis vectors in ##\mathbb R^\infty##. Here is where I think the "magic" happens with RKHS. Given the defined inner product over ##\mathbf V##, I can define a unique set of spanning vectors ##\mathbf K## with which I can express any vector ##v \in \mathbf V## as a linear combination of the vectors in ##\mathbf K##, so that ##v=\alpha\mathbf K##. Furthermore there is an orthonormal basis ##\mathbf U## over ##\mathbf V## such that ##\mathbf K=\mathbf U\mathbf U^t##, allowing me to express any vector in ##\mathbf V## as ##v=\beta\mathbf U##.
I just want to correct any misinterpretations as I go forward in figuring out how this allows the "kernel trick" to work in SVMs and other ML algorithms. Any help would be great!
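On the "kernel trick" the post is building toward: the practical point is that a positive-semidefinite kernel function supplies all the inner products ##\langle\phi(x_i),\phi(x_j)\rangle## without ever constructing the feature map ##\phi##, which for the RBF kernel lives in an infinite-dimensional RKHS. A minimal numerical illustration, assuming an RBF kernel and plain numpy (not tied to any specific ML library):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))      # 5 sample points in R^3
K = rbf_kernel(X, X)

# K is symmetric positive semi-definite -- exactly the property that lets
# its entries act as inner products <phi(x_i), phi(x_j)> in some RKHS,
# so any algorithm written purely in terms of such inner products
# (SVMs, kernel PCA, ...) can consume K directly.
eigvals = np.linalg.eigvalsh(K)
```

Here the feature map is never materialized: the kernel evaluations stand in for all the inner products the algorithm needs.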
|
Now showing items 1-5 of 5
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
Now showing items 1-2 of 2
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
|
GR9677 #74
Alternate Solutions
asdfman 2009-11-05 00:11:38 Used POE (process of elimination).
Expect there to be a change in entropy - E is out.
Know the typical integral to find S will involve a logarithm - A is out.
Expect the entropy to increase for the system - D is out.
Based on the numbers, expect there to be a ratio as an argument of the logarithm - C is tentatively out.
I'd come back to it if there was time and run the math to see if B is actually correct. If not, this logic narrows it down to 2 choices.
Comments
physicsphysics 2011-10-11 08:21:33 I think this problem is a little bit tricky. To obtain the equation form of integral, the system should be changed reversibly. If this system is irreversible, the entropy change of this system is - zero.
physicsphysics 2011-10-11 08:24:13 Sorry. I confused the two cases. Just the reversible case is zero; the irreversible case is S >= 0.
mrTrig 2010-11-05 13:50:32 yosun, you can reduce the logarithms much faster by simply knowing that addition of two logarithms results in multiplication of their arguments.
dstahlke 2009-10-09 10:53:51 But isn't it true that dS >= dQ/T, with equality only in the case where the process is reversible? This process doesn't seem reversible to me.
kroner 2009-10-11 20:16:00 In the context of the whole system it's not a reversible process. From that perspective dQ = 0 so you are correct that dS > dQ/T = 0. Clearly the change in entropy is positive.
But considering the two objects separately, they're each undergoing a reversible process (being uniformly heated or cooled). The change in entropy for each can be found by setting dS = dQ/T where dQ is the heat flowing into that object. Then you sum the changes contributed by each object.
vinograd19 2007-02-02 10:27:17 I think there is a mistake in the solution. One integral is positive, the other is negative. But in the solution they are both positive.
hungrychemist 2007-10-07 21:34:32 The solution is correct. The sign of the integral is determined solely by the limits of integration. Notice that ln(3/5) itself is negative, as expected.
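For the standard two-body setup the comments describe (equal heat capacities C, equilibration at Tf = (T1 + T2)/2), the two integrals of dQ/T give dS = C ln(Tf/T1) + C ln(Tf/T2): one term positive, one negative, and the sum nonnegative by AM-GM. A hedged sketch of this calculation; the exam problem's specific temperatures are not reproduced here:

```python
from math import log

def delta_S_two_bodies(C, T1, T2):
    """Total entropy change when two identical bodies of heat capacity C
    are brought into thermal contact and equilibrate at Tf = (T1 + T2)/2.
    Each body's change is found via the reversible path: dS = C ln(Tf/Ti)."""
    Tf = (T1 + T2) / 2
    dS_1 = C * log(Tf / T1)   # positive if body 1 is the cold one
    dS_2 = C * log(Tf / T2)   # negative if body 2 is the hot one
    return dS_1 + dS_2        # = C ln(Tf^2 / (T1 T2)) >= 0 by AM-GM
```

This matches the comment thread: each integral's sign comes from its limits, while the total, C ln(Tf²/(T1·T2)), is strictly positive for unequal starting temperatures.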
|
Difference between revisions of "Moser-lower.tex"
Line 207: Line 207: −
Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$. If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=
+
Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$. If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=$ case, the set
−
(0
+
$$ B = \{ (0 ), (0 5 ), (1 ), (1 ), (1 ), (4), (2 4 2), (3 ), (3 2 ), (2), (3 ), (4 0 ), (1 1), (0),
+
(1) \}$$
generates the lower bound $c'_{8,3} \geq 2902$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding the four elements $11333333, 33113333, 33331133, 33333311$ from $\Gamma_{2,0,6}$ one can increase the lower bound slightly to $2906$.
Revision as of 11:54, 18 June 2009
\section{Lower bounds for the Moser problem}\label{moser-lower-sec}
In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ 2 entries (see Section \ref{notation-sec} for definition), is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation}
holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and
applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$. Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or
semispheres or applying elementary results from coding theory.
Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and that $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$.
As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low}
c'_{n,3} \geq \binom{n}{i-1}
2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. \end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}.
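These numerical values are easy to sanity-check. A small Python sketch (the function names are ours) evaluates \eqref{cin} at $i=\lfloor 2n/3\rfloor$ as above, and maximises \eqref{cn3-low} over $i$:

```python
from math import comb

def sphere_bound(n):
    # |S_{i,n}| = C(n,i) * 2^i, evaluated at i = floor(2n/3) as in the text
    i = 2 * n // 3
    return comb(n, i) * 2 ** i

def semisphere_bound(n):
    # max over i of C(n+1,i) * 2^(i-1), the semisphere bound (cn3-low)
    return max(comb(n + 1, i) * 2 ** (i - 1) for i in range(1, n + 2))

for n in range(2, 7):
    print(n, sphere_bound(n), semisphere_bound(n))
# n = 3..6 reproduce 12, 24, 80, 240; n = 2..6 reproduce 6, 16, 40, 120, 336
```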
The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$
and minimal distance $d$.
Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right). \end{equation}
With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }}
Generally, $A(n,1)=2^n$, $A(n,2)=2^{n-1}$, $A(n-1,2e-1)=A(n,2e)$, and $A(n,d)=2$ if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html
For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344. \end{array} \] With k=3 \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With k=4 \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\]
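These sums are easy to reproduce; a small Python sketch of \eqref{cnchvatal} for a fixed $k$, using only the tabulated $A(n,d)$ entries needed here:

```python
from math import comb

# A(n, d): size of the largest binary code of length n and minimum distance d
# (values copied from the table above)
A = {(2, 1): 4, (3, 1): 8, (3, 2): 4, (4, 1): 16, (4, 2): 8, (4, 3): 2,
     (5, 1): 32, (5, 2): 16, (5, 3): 4, (6, 2): 32, (6, 3): 8,
     (7, 3): 16, (7, 4): 8, (8, 4): 16}

def chvatal_bound(n, k):
    # right-hand side of (cnchvatal) for a fixed k
    return sum(comb(n, j) * A[(n - j, k - j + 1)] for j in range(k + 1))

print(chvatal_bound(4, 2))  # 42
print(chvatal_bound(5, 2))  # 124
print(chvatal_bound(6, 2))  # 344
print(chvatal_bound(7, 3))  # 960
print(chvatal_bound(8, 3))  # 2832
```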
It should be pointed out that these bounds are all even numbers, so the fact that $c'_{4,3}=43$ is odd already shows that this lower bound cannot in general give the optimum.
The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best value of the constant $C$ in equation \eqref{cpn3} known to date can be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$ Chv\'{a}tal \cite{chvatal1} proved that the expression on the right hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so that the refinement described above gains only a constant factor over the initial construction.
For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}. A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following:
Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$:
\begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize}
This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above.
A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles''
$(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 0 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$. Asymptotically, this set occupies about two thirds of the spheres $S_{i,n}$, $S_{i+1,n}$ and one third of the spheres $S_{i+2,n}, S_{i+3,n}$, and (setting $i$ close to $n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \times \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions.
An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}:
\begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_{n,3}$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure}
More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}.
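For very small $n$ the same optimisation can be reproduced by exhaustive search rather than an integer program. The sketch below uses our own encoding of the isosceles-triangle constraint exactly as stated above, and is feasible only for tiny $n$:

```python
from math import factorial

def weight(t):
    # |Gamma_{a,b,c}| = n!/(a! b! c!)
    a, b, c = t
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

def best_AB(n):
    cells = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    # forbidden configurations (a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c),
    # with r,s >= 0 not both zero; r = s gives the degenerate "vertical" pairs
    forbidden = []
    for a in range(n + 1):
        for b in range(n + 1 - a):
            for c in range(n - a - b):  # leaves m = r + s >= 1
                m = n - a - b - c
                for r in range(m + 1):
                    forbidden.append({(a + r, b, c + m - r),
                                      (a + m - r, b, c + r),
                                      (a, b + m, c)})
    best = 0
    for mask in range(1 << len(cells)):
        chosen = {cells[i] for i in range(len(cells)) if mask >> i & 1}
        if any(tri <= chosen for tri in forbidden):
            continue
        best = max(best, sum(weight(t) for t in chosen))
    return best

print([best_AB(n) for n in range(1, 4)])  # [2, 6, 16], the first table entries
```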
This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}.
\begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure}
Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$. If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=8$ case, the set $$ B = \{ (0 3 5), (0 5 3), (1 3 4), (1 4 3), (2 1 5), (2 2 4), (2 4 2), (3 0 5), (3 2 3), (3 3 2), (3 5 0), (4 0 4), (4 1 3), (4 3 1), (4 4 0), (5 2 1) \}$$ generates the lower bound $c'_{8,3} \geq 2902$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding the four elements $11333333, 33113333, 33331133, 33333311$ from $\Gamma_{2,0,6}$ one can increase the lower bound slightly to $2906$.
However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions:
\begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition}
\begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2}/\log^{1/2} n)$. If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = -\alpha - \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof}
|
Let us reformulate OP's question as follows:
Give a proof that a local coordinate transformation $x^{\mu} \to y^{\rho}=y^{\rho}(x)$ between two local coordinate systems (on a 3+1 dimensional Lorentzian manifold) must be affine if the metric $g_{\mu\nu}$ in both coordinate systems happens to be the constant flat Minkowski form $\eta_{\mu\nu}$.
Here we will present a proof that works both with Minkowski and Euclidean signature; in fact for any signature and for any finite non-zero number of dimensions, as long as the metric $g_{\mu\nu}$ is invertible.
1) Let us first recall the transformation property of the inverse metric $g^{\mu\nu}$, which is a contravariant $(2,0)$ symmetric tensor,
$$ \frac{\partial y^{\rho}}{\partial x^{\mu}} g^{\mu\nu}_{(x)}\frac{\partial y^{\sigma}}{\partial x^{\nu}}~=~g^{\rho\sigma}_{(y)}, $$
where $x^{\mu} \to y^{\rho}=y^{\rho}(x)$ is a local coordinate transformation. Recall that the metric $g_{\mu\nu}=\eta_{\mu\nu}$ is the flat constant metric in both coordinate systems. So we can write
$$ \frac{\partial y^{\rho}}{\partial x^{\mu}} \eta^{\mu\nu}\frac{\partial y^{\sigma}}{\partial x^{\nu}}~=~\eta^{\rho\sigma}. \qquad (1) $$
2) Let us assume that the local coordinate transformation is real analytic
$$y^{\rho} ~=~ a^{(0)\rho} + a^{(1)\rho}_{\mu} x^{\mu} + \frac{1}{2} a^{(2)\rho}_{\mu\nu}x^{\mu}x^{\nu} + \frac{1}{3!} a^{(3)\rho}_{\mu\nu\lambda}x^{\mu} x^{\nu} x^{\lambda} + \ldots. $$
By possibly performing an appropriate translation we will from now on assume without loss of generality that the constant shift $ a^{(0)\rho} =0 $ is zero.
3) To the zeroth order in $x$, the equation $(1)$ reads
$$ a^{(1)\rho}_{\mu} \eta^{\mu\nu}a^{(1)\sigma}_{\nu}~=~\eta^{\rho\sigma}, $$
which not surprisingly says that the matrix $a^{(1)\rho}_{\mu}$ is a Lorentz (or an orthogonal) matrix, respectively. By possibly performing an appropriate "rotation", we will from now on assume without loss of generality that the constant matrix
$$ a^{(1)\rho}_{\mu}~=~\delta^{\rho}_{\mu} $$
is the unit matrix.
4) In the following, it will be convenient to lower the index of the $y^{\sigma}$ coordinate as
$$y_{\rho}~:=~\eta_{\rho\sigma}y^{\sigma}.$$
Then the local coordinate transformation becomes
$$y_{\rho} ~=~ \eta_{\rho\mu} x^{\mu} + \frac{1}{2} a^{(2)}_{\rho,\mu\nu}x^{\mu}x^{\nu} + \frac{1}{3!} a^{(3)}_{\rho,\mu\nu\lambda}x^{\mu} x^{\nu} x^{\lambda}+ \ldots$$$$+\frac{1}{n!} a^{(n)}_{\rho,\mu_1\ldots\mu_n}x^{\mu_1} \cdots x^{\mu_n}+ \ldots. $$
5) To the first order in $x$, the equation $(1)$ reads
$$ a^{(2)}_{\rho,\sigma\mu}+a^{(2)}_{\sigma,\rho\mu}~=~0.$$
That is, $a^{(2)}_{\rho,\mu\nu}$ is symmetric in $\mu\leftrightarrow \nu$, but antisymmetric in $\rho\leftrightarrow \mu$. It is not hard to see (by applying the symmetry and the antisymmetry property in alternating order three times each), that the second order coefficients $a^{(2)}_{\rho,\mu\nu}=0$ must vanish.
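The vanishing claim in this step — a tensor symmetric in its last two indices and antisymmetric in its first two must be zero — can also be checked by brute-force linear algebra: impose both conditions as a linear system and verify that its solution space is trivial. A small numpy sketch (the choice $d=4$ is ours; any dimension behaves the same):

```python
import itertools
import numpy as np

d = 4  # spacetime dimension; the argument is dimension-independent
idx = list(itertools.product(range(d), repeat=3))
pos = {t: k for k, t in enumerate(idx)}

rows = []
for (r, m, n) in idx:
    # symmetric in the last two indices: a[r,m,n] - a[r,n,m] = 0
    row = np.zeros(len(idx)); row[pos[(r, m, n)]] += 1; row[pos[(r, n, m)]] -= 1
    rows.append(row)
    # antisymmetric in the first two indices: a[r,m,n] + a[m,r,n] = 0
    row = np.zeros(len(idx)); row[pos[(r, m, n)]] += 1; row[pos[(m, r, n)]] += 1
    rows.append(row)

A = np.array(rows)
null_dim = len(idx) - np.linalg.matrix_rank(A)
print(null_dim)  # 0: the only tensor with both symmetries is identically zero
```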
6) To the second order in $x$, the equation $(1)$ reads
$$ a^{(3)}_{\rho,\sigma\mu\nu}+a^{(3)}_{\sigma,\rho\mu\nu}~=~0.$$
That is, $a^{(3)}_{\rho,\mu\nu\lambda}$ is symmetric in $\mu\leftrightarrow \nu\leftrightarrow \lambda $, but antisymmetric in $\rho\leftrightarrow \mu$. For fixed $\lambda$, we can again reach the conclusion $a^{(3)}_{\rho,\mu\nu\lambda}=0$.
7) Similarly, we conclude inductively that the higher order coefficients $a^{(n)}_{\rho,\mu_1\ldots\mu_n}=0$ must vanish as well. So $y^{\mu}= x^{\mu}$. Q.E.D.
|
Let us take as a vocabulary the $\in$ relation (is an element of), and a single unary predicate $C$, where $Cx$ is read "$x$ is constructible" or "$x$ is a constructible set" (I'm making this up, but the term seems appropriate). We may then write down an alternative set theory with three intuitive axioms (one of them an axiom schema):
Extensionality: $$\forall X \,\forall X'\;:\; (\forall x\; x \in X \longleftrightarrow x \in X') \to X = X'.$$
(I.e.: Objects with the same elements are equal.)
Schema of construction: For any formula $\varphi$ which
does not contain $C$, if $\varphi$ has free variables $x$ and $\overline{y} = (y_1, y_2, \ldots, y_k)$, IF $$ \forall \overline{y} \, \forall x \;:\; (Cy_1 \land \cdots \land Cy_k \land \varphi(x,\overline{y})) \to Cx $$ THEN $$ \forall \overline{y} \, \exists X \, \forall x \; :\;x \in X \longleftrightarrow \varphi(x,\overline{y}). $$ In other words, if it is possible to deduce that $x$ is constructible from only the fact that the $y_i$ are constructible and $\varphi(x,\overline{y})$, then the set $X = \{x \mid \varphi(x,\overline{y})\}$ exists.
Constructible sets are those with constructible elements:
$$\forall X \; : \; CX \longleftrightarrow (\forall x \;:\; x \in X \to Cx).$$
It seems to me I can deduce many of the axioms of ZF from these: at least, pairing, union, power set, and specification.* So I am thinking there must be a contradiction lurking somewhere. The question:
(i) Is there an (obvious) contradiction in these three axioms?
For instance, we could try to encode Russell's paradox. It can't translate directly, since if we just assume "$x \in x$", it doesn't follow that $Cx$. And one cannot play tricks with "the set of all constructible sets that don't contain themselves" because $C$ is not allowed in $\varphi$ in the comprehension schema.
However, there does seem to be something fishy going on: in ZFC, well-founded induction is a theorem, and well-founded induction cannot be true here or else we could prove that all sets are constructible (using axiom 3). At that point, (2) reduces to unrestricted comprehension and the theory becomes inconsistent.
I would also like to know:
(ii). Is this theory similar to any existing alternative set theories?
When I wrote down these axioms this afternoon, I was trying to formalize the intuitive justification for the axioms of ZFC, namely, that every axiom constructs bigger sets out of smaller sets which have already been defined.
*For pairing, if $A$ and $B$ are constructible, and $x = A$ or $x = B$, then $x$ is constructible. The other axioms I mentioned (union, power set, and specification) use axiom (3): For union, if $A$ is constructible and $x \in a$ for some $a \in A$, then $x$ is constructible. For power set, if $A$ is constructible and $B \subseteq A$, then every $x \in B$ is constructible, so $B$ is constructible. For specification, if $x \in A$ and $\varphi(x)$, and $A$ is constructible, then in particular $x \in A$, so $x$ is constructible.
|
Let $\Omega$ be the closed ball $B_1(0)$ in $\mathbb{R}^n$ with the metric $d$ induced by the Euclidean norm. Suppose the mapping $T: \Omega \to \Omega$ satisfies
$d(Tx,Ty) \leq d(x,y)$ for all $x,y \in \Omega$
Prove that there exists at least one fixed point of $T$. Hint: consider the map $T_k = (1-\frac{1}{k})T$.
So I first start off by proving that $T_k$ is a contraction; consider
$|T_k(x) - T_k(y)| = |(1-\frac{1}{k})T(x) - (1-\frac{1}{k})T(y)|$
$=|(1-\frac{1}{k})(T(x)-T(y))|$
$\leq |1-\frac{1}{k}||T(x)-T(y)|$
However, the answer says that
$|T_k(x) - T_k(y)| \leq (1-\frac{1}{k})^2|x-y|$
Where did the square come from? Is it because we're dealing with the Euclidean norm, or did I do something wrong? Any help would be greatly appreciated.
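For intuition about the hint (not a proof), one can watch it in action numerically: take $T$ to be a rotation — an isometry of the closed unit ball, so $d(Tx,Ty)=d(x,y)$ — and iterate each contraction $T_k$; by Banach's theorem $T_k$ has a unique fixed point, and these approach a fixed point of $T$ (here the origin). The map and constants below are made up purely for illustration:

```python
import numpy as np

theta = 0.7  # T = rotation by theta, a distance-preserving map of the ball
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def fixed_point_of_Tk(k, iters=2000):
    # T_k = (1 - 1/k) T is a contraction with ratio (1 - 1/k) < 1,
    # so simple iteration converges to its unique fixed point
    x = np.array([0.9, 0.1])
    for _ in range(iters):
        x = (1 - 1 / k) * (R @ x)
    return x

for k in (10, 100, 1000):
    print(k, np.linalg.norm(fixed_point_of_Tk(k)))
# the fixed points of T_k tend to 0, the fixed point of T itself
```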
|
Left Distributive and Commutative implies Distributive
Theorem
Let $\struct {S, \circ, *}$ be an algebraic structure.
Let $\circ$ be left distributive over $*$.
Let $\circ$ be commutative.
Then $\circ$ is distributive over $*$.
Proof
Let $a, b, c \in S$.
Then
\begin{align*} \paren {a * b} \circ c &= c \circ \paren {a * b} && \text{$\circ$ is commutative} \\ &= \paren {c \circ a} * \paren {c \circ b} && \text{$\circ$ is left distributive over $*$} \\ &= \paren {a \circ c} * \paren {b \circ c} && \text{$\circ$ is commutative} \end{align*}
So $\circ$ is right distributive over $*$. Since $\circ$ is also left distributive over $*$, it is distributive over $*$.
$\blacksquare$
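A quick computational check of the theorem on a concrete structure (our choice of example: integers with $a \circ b = a + b$, which is commutative, and $a * b = \min(a, b)$, over which $+$ is left distributive):

```python
import itertools

# a∘b = a + b (commutative), a*b = min(a, b);
# '+' is left distributive over min: a + min(b, c) == min(a + b, a + c)
circ = lambda a, b: a + b
star = min

S = range(-5, 6)
for a, b, c in itertools.product(S, repeat=3):
    assert circ(a, star(b, c)) == star(circ(a, b), circ(a, c))  # left distributive
    assert circ(a, b) == circ(b, a)                             # commutative
    # conclusion of the theorem: right distributivity follows
    assert circ(star(a, b), c) == star(circ(a, c), circ(b, c))
print("verified on", len(S) ** 3, "triples")
```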
|
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
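(Aside: the identity is easy to check by brute force for small $n$ — a quick Python sketch counting representations over all integers, with signs and order:)

```python
from itertools import product
from math import isqrt

def r4(n):
    # representations n = a^2 + b^2 + c^2 + d^2; signs and order count
    count, m = 0, isqrt(n)
    for a, b, c in product(range(-m, m + 1), repeat=3):
        rest = n - a * a - b * b - c * c
        if rest < 0:
            continue
        d = isqrt(rest)
        if d * d == rest:
            count += 1 if d == 0 else 2  # d = 0, or the pair ±d
    return count

def jacobi(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

for n in range(1, 30):
    assert r4(n) == jacobi(n)
print("four-square count formula checked for n < 30")
```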
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really started with me horsing around not even knowing what category of math you call it. actually, i'll show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ has no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, \Im z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops where integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
|
This is a nice question, as it confronts a very replicable and common experience with a well established yet seemingly contradictory fact. As you expected, the smell of metal has nothing to do with the metal actually getting into your nose, as most metals have far too low of a vapor pressure at ordinary temperatures to allow direct detection. The ...
Let's consider, for example, a tetrahedral Ni(II) complex ($\mathrm{d^8}$), like $\ce{[NiCl4]^2-}$. According to hybridisation theory, the central nickel ion has $\mathrm{sp^3}$ hybridisation, the four $\mathrm{sp^3}$-type orbitals are filled by electrons from the chloride ligands, and the 3d orbitals are not involved in bonding. Already there are several ...
There is an explanation for this that can be generalized, which dips a little into quantum chemistry: the idea of pairing energy. I'm sure you can look up the specifics, but basically in comparing the possible configurations of $\ce{Nb}$, we see the choice of either pairing electrons at a lower energy, or of separating them at higher energy, ...
In addition to the general rules of how electronic configurations of atoms and ions are calculated, the elements from the d-block (aka the transition metals) obey one special rule:In general, electrons are removed from the valence-shell s-orbitals before they are removed from valence d-orbitals when transition metals are ionized.(I took this ...
As I understand this, there are basically two effects at work here. When you populate an s orbital, you add a significant amount of electron density close to the nucleus. This screens the attractive charge of the nucleus from the d orbitals, making them higher in energy (and more radially diffuse). The difference in energy between putting all the electrons ...
The geometry of the complex changes going from $\ce{[NiCl4]^2-}$ to $\ce{[PdCl4]^2-}$. Clearly this cannot be due to any change in the ligand since it is the same in both cases. It is the other factor, the metal, that leads to the difference.Consider the splitting of the $\mathrm{d}$ orbitals in a generic $\mathrm{d^8}$ complex. If it were to adopt a ...
You're right--it's got to do with them being transition metals (usually). Transition metal ions form coordination complexes. Their empty $d$ orbitals accept lone pairs from other molecules (called "ligands") and form larger molecules (though we don't call them that--we call them "complexes"). When put in water, the ligand is $\ce{H2O}$, and you get complexes ...
Disclaimer: I now believe this answer to be fully incorrect. Please consider un-upvoting it and/or downvoting it. I do not like seeing incorrect answers at +22. However, I will leave it up for now. It is a reflection of what is taught in many undergraduate-level textbooks or courses. However, there have been criticisms of this particular graph in ...
On negative oxidation states, in generalAlthough it's usually a topic that's covered relatively late in a chemistry education, negative oxidation states for transition metals[1] are actually quite alright. On the Wikipedia list of oxidation states, there are quite a number of negative oxidation states. Some textbooks have tables which only show positive ...
You are absolutely correct; it is all about the metal's electrons and their d orbitals. Transition elements are usually characterised by having d orbitals. Now, when the metal is not bonded to anything else, these d orbitals are degenerate, meaning that they all have the same energy level. However, when the metal starts bonding with other ligands, ...
This is just a confirmation to Aesin's answer...Say, we take copper. The expected electronic configuration (as we blindly fill the d-orbitals along the period) is $\ce{[Ar] 3d^9 4s^2}$, whereas the real configuration is $\ce{[Ar] 3d^{10} 4s^1}$. There is a famous interpretation for this, that d-orbitals are more stable when half-filled and completely-...
Selection rulesThe intensity of the transition from a state $\mathrm{i}$ to a state $\mathrm{f}$ is governed by the transition dipole moment $\mu_{\mathrm{fi}}$ (strictly, it is proportional to $|\mu_{\mathrm{fi}}|^2$):$$\iint \Psi_\mathrm{f}^*\hat{\mu}\Psi_\mathrm{i}\,\mathrm{d}\tau \,\mathrm{d}\omega \tag{1}$$where $\mathrm{d}\tau$ is the usual ...
These species usually do not exist in nature, but they can be synthesized.Silver has been reduced in liquid ammonia to give $\ce{Ag-}$.A lot of anionic metal carbonyl complexes $\ce{M(CO)_{n}^{m-}}$ have been synthesized:-1$\ce{[V(CO)6]-}$, $\ce{[Nb(CO)6]-}$, $\ce{[Ta(CO)6]-}$, $\ce{[Mn(CO)5]-}$, $\ce{[Ir(CO)4]-}$, $\ce{[Co(CO)4]-}$, $\ce{[Rh(CO)4]-}$...
The answer simply has to do with the accessibility of the high +6 oxidation state.In Cr, the 3d electrons drop in energy extremely rapidly as you remove electrons. So, it is much harder to remove multiple electrons one after another; the only Cr(VI) compounds that we know of are paired with extremely hard bases like the oxide ion, viz. CrO3, CrO42−, Cr2O72−...
Absorption of a photon typically results in a vibrationally excited higher electronic state of the same multiplicity.$$\ce{S_0 ->[$h\nu_\mathrm{ex}$] S_1}$$In most cases, the excited state deactivates through internal conversion in a radiationless process via vibrational energy exchange with solvent molecules. No light is emitted here, but the ...
The partially full d-orbitals in transition metals have energy splittings that happen to lie in the visible range. Depending on the arrangement of substituents (known as ligands) that attach to them, the electron energies split according to crystal field theory. Similar splitting in the s or p orbitals produce gaps in the ultraviolet, and any visible light ...
Let’s take a look at a qualitative MO scheme for a tetrahedral transition metal complex whose ligands have three p-type orbitals each. On the left of figure 1 you have the metal orbitals ($\mathrm{3d}$, $\mathrm{4s}$ and $\mathrm{4p}$) and on the right the twelve degenerate ligand p-orbitals (transform as $\mathrm{a_1 + e + t_1 + 2t_2}$). Only orbitals of ...
Yes, $^{56}\ce{Fe}$ has the most stable nucleus, and $\ce{He}$ is the most chemically inert element. These are different and unrelated qualities, pretty much like physical fitness and intelligence in a man. As for structural stability, there is no such thing in chemistry (there is one in architecture and another in mathematics, but those are out of scope of ...
The electronic configuration has nothing to do with it. The reduction potentials of $\ce{Ni^3+}/\ce{Ni^2+}$, $\ce{Cu^3+}/\ce{Cu^2+}$ and $\ce{Zn^3+}/\ce{Zn^2+}$, if they have been/could be measured, would be even greater.The reduction potential for $\ce{M^3+}/\ce{M^2+}$ is most dependent upon the third ionisation energy. If $I_3$ is large then it will be ...
Perhaps this shouldn't be counted as an answer, but since this topic has been resurrected, I'd like to point to Cann.[1] He explains the apparent stability of half-filled and filled subshells by invoking exchange energy (actually more of a decrease in destabilization due to smaller-than-expected electron-electron repulsions).According to him, there is a ...
What is the structure of $\ce{FeSO4 \cdot NO}$ that is formed when $\ce{NO}$ is passed through ferrous sulphate solution? The structure is octahedral. The Fe ion is at the center of the octahedron. Five water molecules and the NO molecule occupy the vertices of the octahedron. Sulfate is a separate spectator ion. The overall charge of the iron ...
Yes, it is all about the absorption of light at specific wavelength.Azobenzene, the parent compound has an absorption maximum around $\lambda$= 430 nm in the visible spectrum.The interesting part is: The absorption can be tuned by substitution of the arenes. This is done before the azo coupling.Some examples are Allura Red (1), Chrysoine Resorcinol (2),...
It is very convenient to use crystal field theory to discuss this.It is usually assumed that in octahedral coordination the energy levels of the five d-orbitals are split, with two orbitals ($d_{z^2}$ and $d_{x^2-y^2}$) well above the other three. The splitting is assumed to be large enough to overcome electron pairing energy.The first six electrons ...
Usually when adding electrons based on the Aufbau principle, you go from one element to the next highest one, e.g. from $\ce{Ti}: \ce{[Ar] 4s^2 3d^2}$ to $\ce{V: [Ar] 4s^2 3d^3}$. Thus you add not only an electron but also a proton to your atom.When you remove electrons to get to a cation, you only remove electrons. Thus it is a different situation, with ...
The question of anomalous electronic configurations, meaning $\mathrm{s^1}$ or $\mathrm{s^0}$ in one case (Pd) is very badly explained in textbooks.For example, the anomalous configuration of Cr ($\mathrm{3d^5~4s^1}$) is typically explained as being due to "half-filled subshell stability". This is wrong for several reasons. First of all there is nothing ...
You have to think about the whole process. When a metal loses electrons to make a metal ion the following happens:The metallic bonds holding the metal atoms together are broken.The metal atom loses the electrons.The resulting metal ion is hydrated.In your analysis you are only focusing on step 2. The enthalpy and entropy of the entire process factor ...
Although less common than transition metal complexes, sodium does form complexes with some ligands, particularly oxygen based ligands.Aqua complexes are formed in aqueous solution, the most common being $\ce{[Na(H2O)6]+}$.Sodium forms many complexes with crown ethers, cryptands and other related ligands. For example, 15-crown-5:
We sometimes call this type of complex 'pseudotetrahedral' since there is an isomerism from a tetrahedral to a square planar complex possible. I was unable to find the original work here but this link gives some information. As you already mentioned there are two strong and two weak ligands so it's hard to tell how strong the ligand field splitting will be. ...
|
Welcome to part 4 of this AR series. In the previous chapter, you saw how homography makes it possible to draw onto a projected planar surface. This chapter will extend the previously calculated homography into a form which allows drawing 3D objects into the scene.
The program related to this chapter is CameraPoseVideoTestApp. You can download the whole project right here.
The structure here is the same as in the previous chapter: first the equations, then a practical example at the end. Don’t be stressed by the number of parameters and variables. It’s not that difficult once it comes to coding.
Camera and Homography
A camera is a device which projects points from 3D space onto a 2D plane. For this project, I have chosen the classical pinhole camera model, without worrying about lens distortions. This model makes point projection as simple as a matrix-vector multiplication in homogeneous coordinates (arrows on top of the lowercase letters denote vectors).
\[
\vec{p_{2D}}=P\vec{p_{3D}} \\ \begin{bmatrix} wx_{2D} \\ wy_{2D} \\ w \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ z_{3D} \\ 1 \end{bmatrix} \]
\(P\) is called the projection matrix and has 3 rows and 4 columns; this shape realizes the dimension drop onto the projection plane.
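As a toy illustration (a minimal NumPy sketch of my own, not code from the project), projecting a point is just this matrix-vector product followed by division by the homogeneous scale \(w\):

```python
import numpy as np

def project(P, p3d):
    """Project a 3D point to 2D with a 3x4 projection matrix (homogeneous coords)."""
    ph = np.append(p3d, 1.0)   # [x3d, y3d, z3d, 1]
    wxy = P @ ph               # [w*x2d, w*y2d, w]
    return wxy[:2] / wxy[2]    # divide out the homogeneous scale w

# A degenerate "camera" that just drops z and divides by it (f = 1, center at 0)
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(project(P, np.array([2.0, 4.0, 2.0])))  # -> [1. 2.]
```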
The camera is a physical device with some properties, most notably its focal length. What is important is that these properties are constant for a given camera (assuming you are not zooming). This is the internal set of properties. Then there is an external set of properties: the position and orientation of the camera.
This can be reflected in matrix language by decomposing the matrix \(P\) into a 3×3 calibration matrix \(K\) (internal matrix with the camera properties), and a 3×4 view matrix \(V\) (external matrix with the camera position and rotation). These matrices are sometimes called the intrinsic and extrinsic matrices. You can drill them down into the following form.
\[
P=KV=K[R|T]=K[R_1|R_2|R_3|T]= \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \] \(f_x\) and \(f_y\) are the focal lengths along the respective axes. \(s\) is a skew factor. \(c_x\) and \(c_y\) are the coordinates of the camera's principal point. \(R\) is the camera rotation matrix; \(R_1,R_2,R_3\) are its columns and \(r_{ab}\) its elements. The rotation matrix is orthonormal (its columns are unit vectors, orthogonal to each other). Remember this one, because it will be discussed later. \(T\) is the camera translation vector with elements \(t_x,t_y,t_z\).

Calibration Matrix
All the elements of matrix \(K\) are properties of the camera. One way to get them is to do a proper calibration; if you want to do that, OpenCV contains plenty of material on the topic. I just picked the values manually as follows.
\(f_x,f_y=400\) or \(800\), \(s=0\), \(c_x,c_y=\) center of the input image (for a 640×480 image, these are 320 and 240).

Relation with Homography
To show you how camera pose and homography are related, let’s start with writing down the equations for point projection.
\[ \begin{bmatrix} wx_{2D} \\ wy_{2D} \\ w \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ z_{3D} \\ 1 \end{bmatrix} = K[R|T]\begin{bmatrix} x_{3D} \\ y_{3D} \\ z_{3D} \\ 1 \end{bmatrix} = \\ = K[R_1|R_2|R_3|T]\begin{bmatrix} x_{3D} \\ y_{3D} \\ z_{3D} \\ 1 \end{bmatrix} = \\ = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ z_{3D} \\ 1 \end{bmatrix} \]
If \(z_{3D}=0\), then the equations look like this.
\[ \begin{bmatrix} wx_{2D} \\ wy_{2D} \\ w \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ 0 \\ 1 \end{bmatrix} = K[R|T]\begin{bmatrix} x_{3D} \\ y_{3D} \\ 0 \\ 1 \end{bmatrix} = \\ = K[R_1|R_2|R_3|T]\begin{bmatrix} x_{3D} \\ y_{3D} \\ 0 \\ 1 \end{bmatrix} = \\ = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ 0 \\ 1 \end{bmatrix} \]
Then you can carry out the matrix multiplication to figure out that you can drop the third column of the rotation matrix and the z coordinate of the 3D point and still get the same result (reminder: you can do this only if \(z_{3D}=0\), otherwise it won’t work). This gives you the following. \[ \begin{bmatrix} wx_{2D} \\ wy_{2D} \\ w \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{14} \\ p_{21} & p_{22} & p_{24} \\ p_{31} & p_{32} & p_{34} \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ 1 \end{bmatrix} = K[R_1|R_2|T]\begin{bmatrix} x_{3D} \\ y_{3D} \\ 1 \end{bmatrix} = \\ = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ r_{31} & r_{32} & t_z \end{bmatrix} \begin{bmatrix} x_{3D} \\ y_{3D} \\ 1 \end{bmatrix} \]
Now note that \([R_1|R_2|T]\) is a 3×3 matrix, and at the same time you can consider \(K[R_1|R_2|T]=H\) from the previous chapter. That’s how homography is related to the camera projection, and that’s also why you could project points onto the \(z=0\) plane without worrying about the camera's internal parameters at all.
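You can verify this column-dropping numerically. Here is a NumPy sketch with a made-up \(K\), rotation and translation (not values from the project): any point with \(z_{3D}=0\) projects identically through the full 3×4 matrix \(P\) and through the 3×3 matrix \(H\).

```python
import numpy as np

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0,   0.0,   1.0]])

# A small rotation about the z-axis plus a translation in front of the camera
c, s = np.cos(0.1), np.sin(0.1)
RT = np.array([[c, -s, 0.0, 0.2],
               [s,  c, 0.0, 0.1],
               [0.0, 0.0, 1.0, 2.0]])

P = K @ RT               # full 3x4 projection matrix
H = K @ RT[:, [0, 1, 3]] # drop the R3 column -> 3x3 homography

p = np.array([0.3, -0.4, 0.0, 1.0])       # a point on the z = 0 plane
print(np.allclose(P @ p, H @ np.array([0.3, -0.4, 1.0])))  # -> True
```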
Extending Homography
Now let's go for the full camera pose. The easiest way seems to be to calculate \([R_1|R_2 |T]=K^{-1}H\), then set \(R_3=R_1\times R_2\) and have the full \([R|T]\) matrix.
Unfortunately, this doesn’t work. Remember, a little above I mentioned that the matrix \(R\) is orthonormal? \(K\) and \(H\) come out of estimations and carry errors, so it is not guaranteed that the \(R_1\) and \(R_2\) obtained in this way are orthonormal. That would make the final image look weird. Therefore, let’s make them orthonormal.
The implementation of the following text is available inside the Ar class, method estimateMvMatrix. Here I would like to refer you to the “Augmented Reality with Python and OpenCV” article written by Juan Gallostra. This is where I first discovered the method I am going to describe now.
Let’s start by constructing \([G_1|G_2|G_3 ]=K^{-1}H\). In the implementation, you will also see that I am negating the homography matrix before plugging it into the equation. That’s because a real pinhole camera would project a flipped image, but there is no flipping here.
Now \([G_1|G_2|G_3]\) is only close to the desired \([R_1|R_2|T]\), because it is still an estimate. Therefore \([G_1|G_2|G_3]\) is nearly, but not exactly, orthonormal. Then you can write:
\[
l=\sqrt{\| G_1 \| \| G_2 \|} ,\ \ G_1'=\frac{G_1}{l} ,\ \ G_2'=\frac{G_2}{l} ,\ \ G_3'=\frac{G_3}{l} \\ \vec{c}=G_1' + G_2' ,\ \ \vec{p}=G_1' \times G_2' ,\ \ \vec{d}=\vec{c} \times \vec{p} \\ R_1=\frac{1}{\sqrt{2}}\left( \frac{\vec{c}}{\| \vec{c} \| } + \frac{\vec{d}}{\| \vec{d} \| } \right) ,\ \ R_2=\frac{1}{\sqrt{2}}\left( \frac{\vec{c}}{\| \vec{c} \| } - \frac{\vec{d}}{\| \vec{d} \| } \right) \\ R_3=R_1 \times R_2 ,\ \ T=G_3' \]
Then you can stack these vectors as columns to get the final 3×4 matrix \(V=[R_1|R_2|R_3|T]\). Finally, compute \(P=KV\) and start projecting points.
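The whole recovery can be sketched in a few lines of NumPy. This mirrors the formulas above; the actual implementation lives in the Ar class's estimateMvMatrix method, so treat this as an illustrative re-implementation, not the project code.

```python
import numpy as np

def estimate_view_matrix(K, H):
    """Recover V = [R1|R2|R3|T] from a homography, following the formulas above.
    H is negated here, matching the sign flip mentioned for the pinhole projection."""
    G = np.linalg.inv(K) @ (-H)  # columns G1, G2, G3
    l = np.sqrt(np.linalg.norm(G[:, 0]) * np.linalg.norm(G[:, 1]))
    G = G / l                    # G1', G2', G3'
    c = G[:, 0] + G[:, 1]
    p = np.cross(G[:, 0], G[:, 1])
    d = np.cross(c, p)
    R1 = (c / np.linalg.norm(c) + d / np.linalg.norm(d)) / np.sqrt(2)
    R2 = (c / np.linalg.norm(c) - d / np.linalg.norm(d)) / np.sqrt(2)
    R3 = np.cross(R1, R2)        # the dropped third rotation column
    return np.column_stack([R1, R2, R3, G[:, 2]])
```

If \(K\) and \(H\) were exact, this would return the exact \([R|T]\); with noisy estimates it returns the nearest well-behaved rotation in the sense of the construction above.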
Summary
Now you know how to draw 3D objects into the scene. So far, all the drawing has been done through simple image operations, which is useful only for basic demos. In the last chapter, you will discover how to hook the whole thing up with video and OpenGL to make funkier stuff.
|
There is a widely accepted opinion that the Axiom of Countable Choice (further,
ACC)
$$ \forall n\in \mathbb{N} . \exists x \in X . \varphi [n, x] \implies \exists f: \mathbb{N} \longrightarrow X . \forall n \in \mathbb{N} . \varphi [n, f(n)] $$
is justified constructively due to certain interpretations of intuitionistic logic (for example,
BHK). For example, this question already highlights an interpretation of ACC. My question is more of a "practical" one: the best I hope for is to see an example of algorithm extraction from a constructive theorem which used ACC. ACC means that if for every natural $n$ there is a proof that $\varphi [n, x]$ holds for some $x$ in a set $X$, then there exists a function that, given $n$ as input, produces an $x$ such that $\varphi [n, x]$ holds. The catch is, a constructive proof of the premise already provides a certain procedure for finding $x$ with the property $\varphi [n, x]$.
There have been, however, doubts in accepting
ACC (or dependent choice) in constructive mathematics (see, for example, 1, 2, 3, 4, 5, 6). Since the question whether to accept ACC (or dependent choice) is rather metamathematical, I'd like to discuss its meaning in terms of algorithms (or computer programs).
Roughly speaking, if you "put" a proof on paper and everything is constructive, and say you used
ACC, does it mean that this somehow hinders turning the proof into a program?
The things I learned from the literature are:
(1) Some important constructive results rely on ACC or even dependent choice. This concerns the Fundamental Theorem of Algebra, the existence of bases of Hilbert spaces, the existence of square roots of complex numbers, and some facts about complete metric spaces. For example, Bridges, Richman and Schuster showed that Bishop's lemma uses at least a certain weaker form of ACC: Lemma. Let $A$ be a non-empty located complete subset of a metric space $X$ and $x$ some point in $X$. Then there exists a point $a \in A$ such that $\rho(x, a) > 0 \implies \rho(x, A) > 0$. (2) There is a distinction between general Cauchy sequences and the so-called modulated Cauchy sequences. The former have the form:
$$ \forall n \in \mathbb{N}. \exists N \in \mathbb{N}. \forall k,m \ge N. |x_k - x_m | \le \frac{1}{n} $$
whereas the latter have a so-called
convergence modulus $\mathcal{N}: \mathbb{N} \rightarrow \mathbb{N}$:
$$ \forall n \in \mathbb{N}. \forall k,m \ge \mathcal{N}(n). |x_k - x_m | \le \frac{1}{n} $$
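To see why the modulus matters computationally, here is a toy Python sketch (my own illustration, not tied to any of the cited systems): with an explicit modulus $\mathcal{N}$, one can extract a rational approximation of the limit to any prescribed accuracy, with no choice involved.

```python
from fractions import Fraction

def x(m):
    """Partial sums x_m = sum_{k<=m} 1/k!, a Cauchy sequence converging to e."""
    s, fact = Fraction(0), 1
    for k in range(m + 1):
        s += Fraction(1, fact)
        fact *= k + 1
    return s

def modulus(n):
    """A convergence modulus N(n): for all k, m >= N(n), |x_k - x_m| <= 1/n.
    Uses the crude tail bound sum_{k>N} 1/k! <= 2/(N+1)!."""
    N, fact = 0, 1  # invariant: fact = (N+1)!
    while Fraction(2, fact) > Fraction(1, n):
        N += 1
        fact *= N + 1
    return N

# The modulus turns "the limit exists" into an algorithm:
approx = x(modulus(1000))  # a rational within 1/1000 of e
```

Without the modulus, "for every $n$ there exists $N$" only asserts existence; ACC is what lets one package those existence proofs into a single function like `modulus`.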
It is easy to see that both are equivalent as long as
ACC holds. In particular, the Cauchy and Dedekind reals are isomorphic; consequently, the Cauchy reals are Cauchy complete. This is not the case if ACC does not hold, as was pointed out by Lubarsky. However, every modulated Cauchy sequence of modulated Cauchy reals converges to a modulated Cauchy real. (3) ACC has been given various interpretations: for example, as a non-deterministic algorithm, a black box, etc. Some even claimed that ACC is in some sense responsible for "discontinuities" in computation. (4) There does not seem to be a proof assistant which allows program extraction under ACC. In Coq, one can define ACC in the
Type universe, but in
Prop it has to be an axiom and Coq cannot crack it open to extract a program. I am aware only of results on "computational" content of
ACC which use bar induction/recursion, fan theorem, Gödel numbering etc. (5) In some formal systems, ACC and even stronger axioms of choice can be proven, for example, in Martin-Löf's type theory. Another example is this system of Ye, which is even weaker than Bishop's constructive mathematics. Theorem 11, item 16 has a form similar to a choice axiom. Consequently, Ye's Lemma 27 is Bishop's lemma. Now, my question is
What exactly is
ACC and what exactly is an extracted program from ACC? Are there any practical examples? What does it mean for a "realized" ACC to be a "black box" program? Particular question: why does one need ACC to prove Bishop's lemma? Take, for example, the construction in Lemma 27 of Ye. He explicitly constructs a Cauchy sequence, which is even modulated in an evident way, and shows the implication. Where exactly is the ambiguity of choice?
|
Difference between revisions of "Timeline of prime gap bounds"
[http://www.cs.cmu.edu/~xfxie/project/admissible/k0/sol_varpi600d7m5_3473955908.mpl 3,473,955,908]? [m=5]* ([http://terrytao.wordpress.com/2014/04/14/polymath8b-x-writing-the-paper-and-chasing-down-loose-ends/#comment-302031 xfxie])
|398,646? [m=2]*
25,816,462? [m=3]* ([http://terrytao.wordpress.com/2014/04/14/polymath8b-x-writing-the-paper-and-chasing-down-loose-ends/#comment-302101 Sutherland])
84,449,123,072? [m=5]* ([http://terrytao.wordpress.com/2014/04/14/polymath8b-x-writing-the-paper-and-chasing-down-loose-ends/#comment-302101 Sutherland])
| Redoing the m=2,3,4,5 computations using the confirmed MPZ estimates rather than the unconfirmed ones
|}
Revision as of 23:04, 19 April 2014
Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison)
58,885,998? (Tao)
59,093,364 (Morrison)
57,554,086 (Morrison)
Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison)
2,618,607 (Morrison)
48,112,378 (Morrison)
42,543,038 (Morrison)
42,342,946 (Morrison)
Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison)
4,802,222 (Morrison)
Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu)
1/240?? (v08ltu)
4,801,744 (Sutherland)
4,788,240 (Sutherland)
Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz)
4,717,560 (Sutherland)
397,110? (Sutherland)
4,656,298 (Sutherland)
389,922 (Sutherland)
388,310 (Sutherland)
388,284 (Castryck)
388,248 (Sutherland)
387,982 (Castryck)
387,974 (Castryck)
[math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance.
[math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve
Jun 6 387,960 (Angeltveit)
387,904 (Angeltveit)
Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7
26,024? (v08ltu)
387,534 (pedant-Sutherland)
Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland)
285,752 (pedant-Sutherland)
values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here.
An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired.
Jun 12 22,951 (Tao/v08ltu)
22,949 (Harcos)
249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu)
6,329? (Harcos)
6,329 (v08ltu)
60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu)
5,672? (v08ltu)
5,459? (v08ltu)
5,454? (v08ltu)
5,453? (v08ltu)
60,740 (xfxie)
58,866? (Sun)
53,898? (Sun)
53,842? (Sun)
A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu)
5,453? (v08ltu)
5,452? (v08ltu)
53,774? (Sun)
53,672*? (Sun)
Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao)
[math]148\varpi + 33\delta \lt 1[/math]? (Tao)
Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu)
1,467 (v08ltu)
12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu)
[math]140\varpi + 32 \delta \lt 1[/math]? (Tao)
1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes)
1,007? (Hannes)
10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen)
[math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao)
962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao)
873? (Hannes)
Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao)
Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility
Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao)
632 (Harcos)
4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma)
12 [EH] (Maynard)
Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard)
5 [EH] (Maynard)
600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard)
582#*? (Nielsen)
59,451 [m=2]#? (Nielsen)
42,392 [m=2]? (Nielsen)
356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie)
448#*? (Nielsen)
43,134 [m=2]#? (Nielsen)
698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen)
10,000,000? [m=3] (Tao)
1,700,000? [m=3] (Tao)
38,000? [m=2] (Tao)
300#? (Clark-Jarvis)
182,087,080? [m=3] (Sutherland)
179,933,380? [m=3] (Sutherland)
More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20
55#? (Nielsen)
36,000? [m=2] (xfxie)
175,225,874? [m=3] (Sutherland)
27,398,976? [m=3] (Sutherland)
Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck)
75,000,000? [m=4] (Castryck)
3,400,000,000? [m=5] (Castryck)
5,511? [EH] [m=3] (Sutherland)
2,114,964#? [m=3] (Sutherland)
309,954? [EH] [m=5] (Sutherland)
395,154? [m=2] (Sutherland)
1,523,781,850? [m=4] (Sutherland)
82,575,303,678? [m=5] (Sutherland)
A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland)
1,512,832,950? [m=4] (Sutherland)
2,186,561,568#? [m=4] (Sutherland)
131,161,149,090#? [m=5] (Sutherland)
Dec 24 474,320? [EH] [m=4] (Sutherland)
1,497,901,734? [m=4] (Sutherland)
Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54# (Nielsen) 270# (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Feb 22 51?# (Nielsen) 252?# (Clark-Jarvis) More efficient matrix computation allows for higher degrees to be used Mar 4 Jan 6 computations confirmed Apr 14 50?# (Nielsen) 246?# (Clark-Jarvis) A 2-week computer calculation! Apr 17 35,410? [m=2]* (xfxie) 398,646? [m=2]* v
25,816,462? [m=3]* (Sutherland)
1,541,858,666? [m=4]* (Sutherland)
84,449,123,072? [m=5]* (Sutherland)
Redoing the m=2,3,4,5 computations using the confirmed MPZ estimates rather than the unconfirmed ones Apr 18 398,244? [m=2]* (Sutherland) Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [GEH] - bound is conditional the generalized Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted
See also the article on finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
|
GTU First Year Engineering (Semester 2)
Vector Calculus and Linear Algebra June 2014
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1(a).1 The number of solutions of the system of equations AX = 0 where A is a singular matrix is
(a) 0
(b) 1
(c) 2
(d) infinite
1 M
1(a).2 Let A be a unitary matrix, then \(A^{-1}\) is
(a) \(A\)
(b) \(\bar{A}\)
(c) \(A^{T}\)
(d) \(\bar{A}^{T}\)
1 M
1(a).3 Let W = span\(\{\cos^{2}x,\ \sin^{2}x,\ \cos 2x\}\) then the dimension of W is
(a) 0
(b) 1
(c) 2
(d) 3
1 M
1(a).4 Let \(P_{2}\) be the vector space of all polynomials with degree less than or equal to two, then the dimension of \(P_{2}\) is
(a) 1
(b) 2
(c) 3
(d) 4
1 M
1(a).5 The column vectors of an orthogonal matrix are
(a) Orthogonal
(b) Orthonormal
(c) dependent
(d) none of these
1 M
1(a).6 Let \(T:R^{2}\rightarrow R^{2}\) be a linear transformation defined by T(x,y) = (y,x), then it is
(a) one to one
(b) onto
(c) both
(d) neither
1 M
1(a).7 Let \(T:R^{3}\rightarrow R^{3}\) be a linear transformation defined by T(x,y,z) = (x,z,0), then the dimension of R(T) is
(a) 0
(b) 1
(c) 2
(d) 3
1 M
2(a) Solve the following system of equations using Gauss Elimination method
\[2x_{1}+x_{2}+2x_{3}+x_{4}=6 , 6x_{1}-x_{2}+6x_{3}+12x_{4}=36\] \[4x_{1}+3x_{2}+3x_{3}-3x_{4}=1 , 2x_{1}+2x_{2}-x_{3}+x_{4}=10\]
\[2x_{1}+x_{2}+2x_{3}+x_{4}=6 , 6x_{1}-x_{2}+6x_{3}+12x_{4}=36\]
\[4x_{1}+3x_{2}+3x_{3}-3x_{4}=1 , 2x_{1}+2x_{2}-x_{3}+x_{4}=10\]
5 M
2(b) Find the inverse of \[\begin{bmatrix} 1 & 2& 3 &1 \\ 1& 3 & 3 &2 \\ 2& 4 & 3 & 3\\ 1 & 1 & 1 & 1 \end{bmatrix}\] using Gauss Jordan method.
5 M
2(b).1 If \[\left \| u+v \right \|^{2} =\left \| u \right \|^{2}+\left \| v \right \|^{2} \] then u and v are
(a) parallel
(b) perpendicular
(c) dependent
(d) none of these
1 M
2(b).2 \[\left \| u+v \right \|^{2}-\left \| u- v \right \|^{2}\] is
(a) \(\langle u,v \rangle\)
(b) \(2\langle u,v \rangle\)
(c) \(3\langle u,v \rangle\)
(d) \(4\langle u,v \rangle\)
1 M
2(b).3 Let \(T:R^{3}\rightarrow R^{3}\) be a one to one linear transformation, then the dimension of ker(T) is
(a) 0
(b) 1
(c) 2
(d) 3
1 M
2(b).4 Let A = \[\begin{bmatrix} 2 &1 \\ 2&3 \end{bmatrix}\] then the eigen values of \(A^{2}\) are
(a) 1,2
(b) 1,4
(c) 1,6
(d) 1,16
1 M
2(b).5 Let A = \[\begin{bmatrix} 2 &1 \\ 2&3 \end{bmatrix}\] then the eigen values of A+3I are
(a) 1,2
(b) 2,5
(c) 3,6
(d) 4,7
1 M
2(b).6 div \(\bar{r}\) is
(a) 0
(b) 1
(c) 2
(d) 3
1 M
2(b).7 If the value of line integral \[\int_{c} \bar{F} .\bar{dr}\] does not depend on path C then \[\bar{F}\] is
(a) solenoidal
(b) incompressible
(c) irrotational
(d) none of these
1 M
2(c) Express \[\begin{bmatrix} 4+2i &7 & 3-i\\ 0& 3i & -2\\ 5+3i&-7+i & 9+6i \end{bmatrix}\] as the sum of a hermitian and a skew-hermitian matrix
4 M
3(a) Let V be the set of all ordered pairs of real numbers with vector addition defined as \[(x_{1},y_{1})+(x_{2},y_{2})=(x_{1}+x_{2}+1,y_{1}+y_{2}+1)\] Show that the first five axioms for vector addition are satisfied. Clearly mention the zero vector and additive inverse.
5 M
3(b) Find a basis for the subspace of \(P_{2}\) spanned by the vectors \[1+x,\ x^{2},\ -2+2x^{2},\ -3x\]
5 M
3(c) Express the matrix \[\begin{bmatrix} 5 & 1\\ -1& 9 \end{bmatrix}\] as linear combination of \[\begin{bmatrix} 1& -1\\ 0& 3 \end{bmatrix},\begin{bmatrix} 1&1\\ 0& 2 \end{bmatrix},\begin{bmatrix} 2& 2\\ -1&1 \end{bmatrix}\]
4 M
4(a) Consider the basis \(S=\{v_{1},v_{2}\}\) for \(R^{2}\) where \(v_{1}=(1,1)\) and \(v_{2}=(2,3)\). Let \(T: R^{2}\rightarrow P_{2}\) be the linear transformation such that \(T(v_{1}) =2-3x+x^{2}\) and \(T(v_{2})=1-x^{2}\), then find the formula for T(a,b).
5 M
4(b) Verify the Rank-Nullity theorem for the linear transformation \(T:R^{4}\rightarrow R^{3}\) defined by \[T(x_{1},x_{2},x_{3},x_{4})=(4x_{1}+x_{2}-2x_{3}-3x_{4},\\ 2x_{1}+x_{2}+x_{3}-4x_{4},\ 6x_{1}-9x_{3}+9x_{4}) \]
5 M
4(c) Find the algebraic and geometric multiplicity of each of the eigen value of \[\begin{bmatrix} 0 &1 &1 \\ 1& 0 &1 \\ 1&1 &0 \end{bmatrix}\]
4 M
5(a) For A =\[\begin{bmatrix} a_{1} & b_{1}\\ c_{1}& d_{1} \end{bmatrix}\] and B = \[\begin{bmatrix} a_{2} & b_{2}\\ c_{2}& d_{2} \end{bmatrix}\] let the inner product on \(M_{22}\) be defined as \(\langle A,B\rangle = a_{1}a_{2}+b_{1}b_{2}+c_{1}c_{2}+d_{1}d_{2}\). Let A = \[\begin{bmatrix} 2 & 6\\ 1 & -3 \end{bmatrix}\] and B = \[\begin{bmatrix} 3 & 2\\ 1 & 0 \end{bmatrix}\] then verify the Cauchy-Schwarz inequality and find the angle between A and B.
5 M
5(b) Let \(R^{3}\) have the inner product defined by \(\langle (x_{1},x_{2},x_{3}),(y_{1},y_{2},y_{3})\rangle = x_{1}y_{1}+2x_{2}y_{2}+3x_{3}y_{3}\). Apply the Gram-Schmidt process to transform the vectors (1,1,1), (1,1,0) and (1,0,0) into orthonormal vectors.
5 M
5(c) Find a basis for the orthogonal complement of the subspace spanned by the vectors (2,-1,1,3,0), (1,2,0,1,-2), (4,3,1,5,-4), (3,1,2,-1,1) and (2,-1,2,-2,3)
4 M
6(a) Verify the Cayley-Hamilton theorem for A = \[\begin{bmatrix} 6 & -1 & 1\\ -2 & 5& -1\\ 2 & 1 & 7 \end{bmatrix}\] and hence find \(A^{4}\).
5 M
6(b) Show that the vector field \[\bar{F} =(y\sin z-\sin x)i + (x\sin z + 2yz)j +(xy \cos z+ y^{2})k\] is conservative and find the corresponding scalar potential.
5 M
6(c) Find the directional derivative of \(x^{2}y^{2}z^{2}\) at (1,1,-1) along a direction equally inclined with the coordinate axes.
4 M
7(a) Verify Green's Theorem for \[\int_{c} (3x -8y^{2})dx+(4y-6xy)dy\] where C is the boundary of the triangle with vertices (0, 0), (1, 0) and (0, 1)
5 M
7(b) Verify Stokes' Theorem for \[\bar{F} = (x+y)i +(y+z)j-xk\] and S is the surface of the plane 2x + y + z = 2 which is in the first octant.
5 M
7(c) Find the work done when a force \[ \bar{F}=(x^2-y^2+x)i-(2xy+y)j \] moves a particle in the XY plane from (0,0) to (1,1) along the parabola \(y^{2}=x\).
4 M
|
As every MO user knows, and can easily prove, the inverse of the matrix $\begin{pmatrix} a & b \\\ c & d \end{pmatrix}$ is $\frac{1}{ad - bc} \begin{pmatrix} d & -b \\\ -c & a \end{pmatrix}$. This can be proved, for example, by writing the inverse as $ \begin{pmatrix} r & s \\\ t & u \end{pmatrix}$ and solving the resulting system of four equations in four variables.
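As a quick sanity check, the formula is also easy to verify mechanically; here is a small Python sketch (the helper name `inv2` is just an illustrative choice):

```python
from fractions import Fraction

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap the diagonal, negate the off-diagonal."""
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# Check A * A^{-1} = I for a sample integer matrix with det = 1.
a, b, c, d = 2, 7, 1, 4
(p, q), (r, s) = inv2(a, b, c, d)
assert (a * p + b * r, a * q + b * s) == (1, 0)
assert (c * p + d * r, c * q + d * s) == (0, 1)
```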
As a grad student, when studying the theory of modular forms, I repeatedly forgot this formula (do you switch the $a$ and $d$ and invert the sign of $b$ and $c$... or was it the other way around?) and continually had to rederive it. Much later, it occurred to me that it was better to remember the formula was obvious in a couple of special cases such as $\begin{pmatrix} 1 & b \\\ 0 & 1 \end{pmatrix}$, and diagonal matrices, for which the geometric intuition is simple. One can also remember this as a special case of the adjugate matrix.
Is there some way to just write down $\frac{1}{ad - bc} \begin{pmatrix} d & -b \\\ -c & a \end{pmatrix}$, even in the case where $ad - bc = 1$, by pure thought -- without having to compute? In particular, is there some geometric intuition, in terms of a linear transformation on a two-dimensional vector space, that renders this fact crystal clear?
Or might I as well be asking how to remember that $43 \times 87$ is equal to 3741 and not 3731?
Thank you! --Frank
|
Note that the value of the series
$$\sum_{n=1}^{\infty}B_n\sin(nx)\tag{1}$$
for $x=0$ is always zero, regardless of the coefficients $B_n$ (assuming convergence). Furthermore, since the functions $\sin(nx)$ are odd, (1) can only represent odd functions. So you cannot represent general functions with the series (1). This is why you also need the cosine terms. As an alternative, you could as well use a representation with phases in the argument of the sine:
$$f(x)=A_0+\sum_{n=1}^{\infty}C_n\sin(nx+\phi_n)\tag{2}$$
In this case you don't need any cosine terms because of the phases $\phi_n$. The representation in (2) is equivalent to the series in your question using sine and cosine terms. You can see this by using the identity $\sin(\alpha+\beta)=\cos\alpha\sin\beta+\sin\alpha\cos\beta$:
$$\sum_{n=1}^{\infty}C_n\sin(nx+\phi_n)=\sum_{n=1}^{\infty}\left[C_n\sin\phi_n\cos(nx)+C_n\cos\phi_n\sin(nx)\right]$$
So the coefficients $A_n$ and $B_n$ in your question are
$$A_n=C_n\sin\phi_n\\B_n=C_n\cos\phi_n$$
And the other way around you have
$$C_n=\sqrt{A_n^2+B_n^2}\\\phi_n=\arctan\left(\frac{A_n}{B_n}\right)\quad(\pm\pi)$$
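These conversion formulas are easy to verify numerically; a small Python sketch (the function name is made up, and `atan2` takes care of the $\pm\pi$ correction):

```python
import math

def to_amplitude_phase(A, B):
    """Convert A*cos(nx) + B*sin(nx) into C*sin(nx + phi)."""
    C = math.hypot(A, B)        # sqrt(A^2 + B^2)
    phi = math.atan2(A, B)      # atan2 resolves the quadrant, i.e. the ±π term
    return C, phi

A, B, n, x = 3.0, 4.0, 2, 0.7   # arbitrary sample values
C, phi = to_amplitude_phase(A, B)
lhs = A * math.cos(n * x) + B * math.sin(n * x)
rhs = C * math.sin(n * x + phi)
assert abs(lhs - rhs) < 1e-12
```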
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
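As a concrete illustration of the quoted definition, the antichain property is easy to check by machine for a finite poset; the choice of poset below (divisibility on small integers) is an arbitrary example:

```python
# a ≤ b in this example poset iff a divides b.
def leq(a, b):
    return b % a == 0

def is_antichain(subset):
    """True iff every pair of distinct elements is incomparable."""
    return all(not (leq(a, b) or leq(b, a))
               for a in subset for b in subset if a != b)

assert is_antichain({4, 6, 9})       # pairwise non-dividing
assert not is_antichain({2, 4, 5})   # 2 divides 4, so 2 and 4 are comparable
```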
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
|
Abstract:
The main contribution of this article is to show that within the context of adaptive agents the Causal Path Entropy and Empowerment are equivalent only in deterministic environments. In non-deterministic environments, it is shown that the Causal Path Entropy generally under-estimates the number of intrinsic options available to an agent unlike Empowerment. In fact, it is shown that the difference between Causal Path Entropy and Empowerment can’t be increased without diminishing Empowerment.
Causal Path Entropy for adaptive agents:
We shall start by defining the notion of Causal Path Entropy, introduced in [1], show how it simplifies to a conditional Shannon entropy for digital organisms, and extend it so that it explicitly accounts for actions taken by an organism.
For any open thermodynamic system such as a biological organism we may treat phase-space paths taken by the system over a time interval as microstates and partition them into macrostates using the equivalence relation:
\begin{equation} x(t) \sim x’(t) \iff x(0)=x’(0) \end{equation}
As a result, we can identify each macrostate with a present system state \(x(0)\).
We may then define the Causal Path Entropy of the macrostate associated with the present system state \(x(0)\) as the path integral:
\begin{equation} S_c(X_i,\tau)=-k_B \int_{x(t)} P(x(t)|x(0))\ln P(x(t)|x(0)) Dx(t) \end{equation}
where \(k_B\) is the Boltzmann constant, and it must be noted that in order to calculate \(S_c\) we need the state-transition probability distribution \(P(x(t)|x(0))\), which corresponds to an exact simulator of the agent’s environment. Given that this is generally unknown to the agent at the instant \(t=0\), macrostates are generally unknown to the agent as well, and therefore it’s more epistemically sound to denote the Causal Path Entropy as:
\begin{equation} S_c(x(0))=-k_B \int_{x(t)} p(x(t)|x(0))\ln p(x(t)|x(0)) Dx(t) \end{equation}
where \(p(x(t)|x(0))\) denotes a subjective state-transition probability distribution.
Now, if the organism is digital (i.e. simulated by a Turing machine) we may drop the Boltzmann constant, and a discrete phase-space implies that \(S_c\) simplifies to the Shannon ‘path entropy’ \(-\sum_{x_n} p(x_n|x_0)\ln p(x_n|x_0)\).
In order for the calculation of Causal Path Entropy to be useful for a digital organism we must explicitly account for its agency, which is determined by its capacity for rational action in the world. If the agent has a discrete action space and a discrete state space, then:
\begin{equation} p(x_n|x_0)= \frac{p(x_n,a_{1:n}|x_0)}{p(a_{1:n}|x_n,x_0)} \end{equation}
where \(a_{1:n}=(a_1,\ldots,a_n)\) is an n-tuple of actions and \(x_n\) is the state reached after n steps.
Now, we may note that the numerator of this expression may be expressed in terms of the agent’s conditional distribution \(w(a_{1:n}|x_0)\) over n-step action sequences:
\begin{equation} p(x_n,a_{1:n}|x_0)= w(a_{1:n}|x_0)p(x_n|a_{1:n},x_0) \end{equation}
By combining the last two equations we have:
\begin{equation} p(x_n|x_0)= \frac{w(a_{1:n}|x_0)p(x_n|a_{1:n},x_0)}{p(a_{1:n}|x_n,x_0)} \end{equation}
Using this expression for \(p(x_n|x_0)\), the Causal Path Entropy becomes:
\begin{equation} S_c(x_0) = \max\limits_{w} \mathbb{E} \big[ \ln \big( \frac{p(a_{1:n}|x_n,x_0)}{w(a_{1:n}|x_0)p(x_n|a_{1:n},x_0)}\big)\big] \end{equation}
Hence, we have:
\begin{equation} S_c(x_0) = \max\limits_{w} \big[H(a_{1:n}|x_0)-H(a_{1:n}|x_n,x_0) +H(x_n|a_{1:n},x_0)\big] \end{equation}
This decomposition shall be useful in analysing the difference between the Causal Path Entropy and Empowerment, which we shall now introduce.
Empowerment:
We shall introduce the n-step empowerment as was done in [3], where it is defined by searching for the maximal mutual information, conditional on a starting state \(x_0\), between a sequence of actions \(a_{1:n}\) and the final state reached \(x_n\):
\begin{equation} \xi(x_0) = \max\limits_{w} I(a_{1:n},x_n|x_0)=\max\limits_{w} \mathbb{E} \big[ \ln \big( \frac{p(a_{1:n},x_n|x_0)}{w(a_{1:n}|x_0)p(x_n|x_0)}\big)\big] \end{equation}
Hence, the Empowerment may be expressed as the difference of two conditional Shannon entropies:
\begin{equation} \xi(x_0) = \max\limits_{w} \big[H(a_{1:n}|x_0)-H(a_{1:n}|x_n,x_0) \big] \end{equation}
Analysis of equivalence:
If we combine the two previous results we find that the Causal Path Entropy at \(x_0\) may be expressed in terms of the Empowerment at \(x_0\):
\begin{equation} S_c(x_0) = \xi(x_0) + \max\limits_{w} \big[H(x_n|a_{1:n},x_0)\big] \end{equation}
Therefore, in order to have equivalence we must have:
\begin{equation} H(x_n|a_{1:n},x_0) = 0 \end{equation}
which is true if and only if every transition probability \(p(x_n|a_{1:n},x_0)\) equals either 0 or 1, and this is the case only in deterministic environments.
It must be noted that in deterministic environments this simplifies to:
\begin{equation} S_c(x_0) = \xi(x_0) = \ln N_{x_0} \end{equation}
where \(N_{x_0}\) represents the number of intrinsic options available at \(x_0\).
Discussion:
Whenever the environment is non-deterministic we must have \(H(x_n|a_{1:n},x_0) > 0\), so the Causal Path Entropy provides intrinsic compensation for:

1. Exploring unpredictable environments.
2. Exploring unknown environments.
3. Unreliable actuators.
4. Unreliable sensors.
To be precise, maximisation of \(H(x_n|a_{1:n},x_0)\) corresponds to making actions maximally uninformative about the terminal state \(x_n\). It follows that the Causal Path Entropy does less than accurately measure an agent’s number of intrinsic options. This is especially clear if we use the previous expressions to re-formulate the Empowerment of the agent at \(x_0\):
\begin{equation} \xi(x_0) = S_c(x_0) - \max\limits_{w} \big[H(x_n|a_{1:n},x_0)\big] = \max\limits_{w} \big[H(x_n|x_0) -H(x_n|a_{1:n},x_0)\big] \end{equation}
From this we deduce that in non-deterministic environments the difference between Causal Path Entropy and Empowerment, i.e. \(\max_{w} H(x_n|a_{1:n},x_0)\), can’t be increased without diminishing Empowerment.
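To make the gap concrete, one can brute-force both quantities for a toy one-step world with two actions and two states (this example is not from the original article; the channel matrices are made up, and the search over the action distribution \(w\) is a simple grid):

```python
import math
from itertools import product

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

def scores(channel, q):
    """channel[a][x] = p(x|a); q = w(a=0). Returns (I(A;X), H(X|A)) under w."""
    w = [q, 1.0 - q]
    px = [sum(w[a] * channel[a][x] for a in range(2)) for x in range(2)]
    mi = sum(w[a] * channel[a][x] * math.log(channel[a][x] / px[x])
             for a, x in product(range(2), range(2))
             if w[a] > 0 and channel[a][x] > 0 and px[x] > 0)
    hxa = sum(w[a] * entropy(channel[a]) for a in range(2))
    return mi, hxa

def empowerment_and_cpe(channel, grid=1000):
    # Empowerment = max_w I(A;X); the one-step Causal Path Entropy, per the
    # decomposition above, = max_w [I(A;X) + H(X|A)] = max_w H(X).
    emp = max(scores(channel, i / grid)[0] for i in range(grid + 1))
    cpe = max(sum(scores(channel, i / grid)) for i in range(grid + 1))
    return emp, cpe

det   = [[1.0, 0.0], [0.0, 1.0]]   # deterministic channel
noisy = [[0.9, 0.1], [0.2, 0.8]]   # non-deterministic channel

e_det, s_det = empowerment_and_cpe(det)
e_noisy, s_noisy = empowerment_and_cpe(noisy)
assert abs(e_det - s_det) < 1e-9   # equal in the deterministic case
assert s_noisy > e_noisy           # CPE strictly exceeds Empowerment otherwise
```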
References:
1. Wissner-Gross, A. D. & Freer, C. E. (2013) Causal Entropic Forces. Physical Review Letters.
2. Salge, C., Glackin, C. & Polani, D. Empowerment-An Introduction. arXiv.
3. Mohamed, S. & Rezende, D. Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning. arXiv.
|
Inspired by the ancient Greek mythological creature Chimera (χίμαιρα) which had a lion’s head, a goat’s body and a serpent’s tail, Abrams and Strogatz [1] coined the term “chimera states” for the counterintuitive self-organized phenomenon in which
synchronous and desynchronous oscillatory behavior coexist in the same system, first found by Kuramoto and Battogtokh [2]. It is believed that these states may be related to unihemispheric sleep, observed in birds and dolphins, as they are found to sleep with one eye open, meaning that half of their brain is synchronized whilst the other half is desynchronized.
Chimera states were previously analyzed in complex networks. However, they have not been extensively studied in
modular networks, where interactions within and across modules are attributed to different types of links that play their own roles in the self-organized dynamics. Here, we consider the neural network of the C.elegans soil worm, equipped with electrical and chemical neural synapses for communication. Using a community detection method, we split its neural network into six interconnected communities as shown below:
We also assume that neurons obey chaotic bursting dynamics given by the Hindmarsh-Rose system and are connected with electrical synapses (dark gray links) within their communities and with chemical (cyan links) across them. The described network-organized system has the form,
\[
\begin{aligned}
\dot{p}_i &= q_i - a p_i^3 + b p_i^2 - n_i + I_{\text{ext}} + {\color{#222A2B} g_{el}\sum_{j=1}^{N}L_{ij}H(p_j)} - {\color{#45B6B6} g_{ch}(p_i-V_{\text{syn}})\sum_{j=1}^{N}T_{ij}S(p_j)},\\[2pt]
\dot{q}_i &= c - d p_i^2 - q_i,\\[2pt]
\dot{n}_i &= r[s(p_i - p_0) - n_i],
\end{aligned}
\]
where (i=1,\ldots,N) is the neuron index, (p_i) is the membrane potential of the (i)-th neuron, (q_i) is associated with the fast current and (n_i) with the slow current. The parameters are chosen such that the system exhibits a multi-scale chaotic behavior characterized as
spike bursting. (r) modulates the slow dynamics of the system so that each neuron lies in the chaotic regime (see [3] for details).
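As a rough illustration of the single-neuron dynamics (not from the original article), one can integrate an uncoupled Hindmarsh-Rose neuron with forward Euler; the parameter values below are commonly used ones for chaotic bursting and are an assumption rather than necessarily the exact values of [3]:

```python
# Commonly used Hindmarsh-Rose parameters for chaotic bursting (assumed).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
s, p0, r = 4.0, -1.6, 0.005
I_ext = 3.25

def step(p, q, n, dt=0.005):
    # Forward-Euler step of a single, uncoupled neuron (g_el = g_ch = 0).
    dp = q - a * p**3 + b * p**2 - n + I_ext
    dq = c - d * p**2 - q
    dn = r * (s * (p - p0) - n)
    return p + dt * dp, q + dt * dq, n + dt * dn

p, q, n = -1.5, 0.0, 0.0
trace = []
for _ in range(400000):            # 2000 time units
    p, q, n = step(p, q, n)
    trace.append(p)

# Bursting shows up as repeated upward crossings of a spike threshold.
spikes = sum(1 for u, v in zip(trace, trace[1:]) if u < 1.0 <= v)
assert spikes >= 3
```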
The connectivity structure of the
electrical synapses is described in terms of the Laplacian matrix (\mathbf{L}). The strength of the electrical coupling is given by the parameter (g_{el}) and its functionality is governed by the linear function (H(p)=p).
The connectivity structure of the
chemical synapses is described in terms of the adjacency matrix (\mathbf{T}). The chemical coupling is nonlinear and its functionality is described by the sigmoidal function (S(p)={1+\exp[-\lambda(p-\theta_{\text{syn}})]}^{-1}\,), which acts as a continuous mechanism for the activation and deactivation of the chemical synapses. The coupling strength associated to this type of synapses is (g_{ch}).
The coaction of these synapses with the dynamics was studied in [3] and revealed that they were able to drive the dynamics to the emergence of chimera-like states, evidenced by the coexistence of strongly synchronized and desynchronized communities of neurons. A topological analysis of the network’s structure has revealed that the most populated communities drive this peculiar phenomenon, being the most influential among all (see [3] for more details).
[Animations: synchronous oscillations, desynchronous oscillations, and a chimera-like state.]
Further reading:
- Y. Kuramoto and D. Battogtokh: Coexistence of Coherence and Incoherence in Nonlocally Coupled Phase Oscillators. Nonlinear Phenomena in Complex Systems 5(4), 380-385 (2002). [journal]
- D. M. Abrams and S. H. Strogatz: Chimera States for Coupled Oscillators. Phys. Rev. Lett. 93, 174102 (2004). [PRL]
- J. Hizanidis, N. E. Kouvaris, G. Zamora-Lopez, A. Diaz-Guilera and C. Antonopoulos: Chimera-like states in modular neural networks. Scientific Reports 6, 19845 (2016). [Scientific Reports] [arXiv]
|
Note first that the first $=$ (equals) in $\frac{dl(\theta)}{d\theta} = 0 = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$ should be interpreted as an "is set to", that is, we set $\frac{dl(\theta)}{d\theta} = 0$. Given that (apparently) $\frac{dl(\theta)}{d\theta} = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$, $\frac{dl(\theta)}{d\theta} = 0$ is equivalent to $0 = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$.
Now, let's apply some basic linear algebra:
\begin{align}0 &= −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta) \iff \\0 &= −(0−2X^TY + X^TX \theta) \iff \\0 &= −0 + 2X^TY - X^TX \theta \iff \\0 &= 2X^TY - X^TX \theta \iff \\X^TX \theta &= 2X^TY \iff \\(X^TX)^{-1}(X^TX) \theta &= (X^TX)^{-1}2X^TY \iff \\\theta &= (X^TX)^{-1}2X^TY\end{align}
Now, you can ignore the $2$: it only appears because the quoted derivative is missing a matching factor of $2$ in the $X^TX\theta$ term (differentiating $\theta^TX^TX\theta$ yields $2X^TX\theta$); once that is restored, the $2$s cancel and we get the usual estimator $\theta = (X^TX)^{-1}X^TY$. In any case, a constant factor does not influence the result of the optimization.
Note that using $\hat{\theta}$ instead of $\theta$ is just to indicate that what we will get is an "estimate" of the real $\theta$, because of round off errors during the computations, etc.
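As a quick numerical check (not part of the derivation above), the estimator $\hat{\theta}=(X^TX)^{-1}X^TY$ can be verified on synthetic noiseless data using NumPy:

```python
import numpy as np

# Solve the normal equations X^T X theta = X^T y on synthetic noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta                 # no noise, so theta is recovered exactly

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(theta_hat, true_theta)
```

Using `np.linalg.solve` on the normal equations avoids forming an explicit matrix inverse, which is both cheaper and numerically safer.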
|
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the
power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system.
This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in
other posets, too, like join \(\vee\) and meet \(\wedge\).
We could march much further in this direction. I won't, but try it yourself!
Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.
I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an
observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \).
This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are
three such functions! And they're related in a beautiful way!
The most fundamental is this:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \).
The inverse image is also called the
preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.
The inverse image gives a monotone function
$$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\}
\subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).
Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). Here it is:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be
$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \).
The image is often written as \(f(S)\), but I'm using the notation of
Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".
The image gives a monotone function
$$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then
$$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \}
\subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have
$$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)
This is great! But there's also
another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define
$$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \).
Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).
What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have
$$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the
existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints!
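For a finite example, all of this can be checked by brute force. Here's a small Python sketch (the sets and the function are arbitrary choices, and the helper names `shriek`, `star`, `lowerstar` are ad hoc) that verifies both adjunctions, \(f_{!} \dashv f^{\ast}\) and \(f^{\ast} \dashv f_{\ast}\), on every pair of subsets:

```python
from itertools import combinations

# A concrete check of the two adjunctions, for a small example f: X -> Y.
# shriek = f_! ("there exists"), star = f^* (preimage), lowerstar = f_* ("for all").
X, Y = {1, 2, 3, 4}, {'a', 'b', 'c'}   # note 'c' has no preimage
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}

def subsets(U):
    U = list(U)
    return [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def shriek(S):     # f_!(S): y such that y = f(x) for SOME x in S
    return frozenset(f[x] for x in S)

def star(T):       # f^*(T): x such that f(x) lies in T
    return frozenset(x for x in X if f[x] in T)

def lowerstar(S):  # f_*(S): y such that x in S for ALL x with f(x) = y
    return frozenset(y for y in Y if all(x in S for x in X if f[x] == y))

for S in subsets(X):
    for T in subsets(Y):
        assert (shriek(S) <= T) == (S <= star(T))     # f_! is left adjoint to f^*
        assert (star(T) <= S) == (T <= lowerstar(S))  # f_* is right adjoint to f^*
```

Note that `lowerstar` is "generous" about elements of \(Y\) with empty preimage: they belong to \(f_{\ast}(S)\) vacuously, for any \(S\).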
This was discovered by Bill Lawvere in this revolutionary paper:
By now this observation is part of a big story that "explains" logic using category theory.
Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.
Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
|
I'm following the derivation from Finite Element Method using Matlab 2nd Edition, pg 311-315, which derives the local stiffness matrix for planar isotropic linear elasticity as follows:
Force Balance Equations
$\frac{\partial\sigma_x}{\partial x}+\frac{\partial\tau_{xy}}{\partial y} + f_x=0$ $\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\sigma_y}{\partial y} + f_y=0$
Using the Galerkin method, we multiply the first and second equations by test functions $w_1$ and $w_2$, respectively, and integrate over the domain $\Omega$. Integrating by parts, I see that we obtain the weak formulation:
$$\int_\Omega \begin{bmatrix} \frac{\partial w_1}{\partial x} & 0 & \frac{\partial w_1}{\partial y}\\ 0 & \frac{\partial w_2}{\partial y} & \frac{\partial w_2}{\partial x} \end{bmatrix} \begin{bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{bmatrix} = \int_\Omega \begin{bmatrix} w_1 f_x \\ w_2 f_y \end{bmatrix} + \int_{\Gamma} \begin{bmatrix} w_1 \Phi_x \\ w_2 \Phi_y \end{bmatrix}$$ where $\Gamma$ is the portion of the boundary with the Neumann (traction) boundary conditions $\Phi_x$ and $\Phi_y$ in the $x$ and $y$ directions.
Let's just consider the integral on the left hand side of this equation. Using the linear isotropic stress strain relationship in two dimensions we can rewrite this equation as
$$\int_\Omega M D \epsilon$$
where
$M=\begin{bmatrix} \frac{\partial w_1}{\partial x} & 0 & \frac{\partial w_1}{\partial y}\\ 0 & \frac{\partial w_2}{\partial y} & \frac{\partial w_2}{\partial x}\end{bmatrix}$, $D=\frac{E}{1-\nu^2} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \frac{1-\nu}{2} \end{bmatrix}$, and $\epsilon = \begin{bmatrix} \frac{\partial u}{\partial x} \\ \frac{\partial v}{\partial y} \\ \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \end{bmatrix}$.
Suppose the domain is tessellated into triangular elements. Consider a single element $e$ and the three basis functions (unit hat functions) on this element, $H_i(x,y)$ for $i=1,2,3$. We can characterize the displacement functions on this element as $u(x,y)=\sum_{i=1}^3 u_iH_i$ and $v(x,y)=\sum_{i=1}^3 v_iH_i$; then we can rewrite
$$\epsilon=Bd$$
where $B=\begin{bmatrix} \frac{\partial H_1}{\partial x} & 0 & \frac{\partial H_2}{\partial x} & 0 & \frac{\partial H_3}{\partial x} & 0 \\ 0 & \frac{\partial H_1}{\partial y} & 0 & \frac{\partial H_2}{\partial y} & 0 & \frac{\partial H_3}{\partial y} \\ \frac{\partial H_1}{\partial y} & \frac{\partial H_1}{\partial x} & \frac{\partial H_2}{\partial y} & \frac{\partial H_2}{\partial x} & \frac{\partial H_3}{\partial y} & \frac{\partial H_3}{\partial x}\end{bmatrix}$ and $d=\begin{bmatrix} u_1 \\ v_1 \\ u_2 \\ v_2 \\ u_3 \\ v_3 \end{bmatrix}$.
Thus, we can write the integral over each element as
$$\int_e M D \epsilon= \int_e MDBd.$$
The author claims that the matrix $M$ becomes $B^T$ when we only consider the test functions equivalent to basis functions with support on element $e$. That is, when $w_1,w_2 = H_i$ for $i=1,2,3$ we obtain
$$\int_e B^TDBd$$
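Independently of the $M \to B^T$ question, the element matrix $\int_e B^TDBd$ is easy to compute concretely for a linear triangle, since $B$ and $D$ are constant on the element. Here is a minimal numpy sketch (the coordinates and the values of $E$ and $\nu$ are illustrative assumptions, plane stress):

```python
import numpy as np

# Illustrative linear triangle; coordinates, E and nu are assumed values.
x = np.array([0.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0])

# For linear hat functions H_i on a triangle the gradients are constant:
# dH_i/dx = b_i/(2A), dH_i/dy = c_i/(2A), with b_i = y_j - y_k, c_i = x_k - x_j
# (cyclic indices) and A the signed triangle area.
b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
A = 0.5 * ((x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0]))

B = np.zeros((3, 6))
B[0, 0::2] = b / (2 * A)   # row 1: dH_i/dx against the u dofs
B[1, 1::2] = c / (2 * A)   # row 2: dH_i/dy against the v dofs
B[2, 0::2] = c / (2 * A)   # row 3: shear strain terms
B[2, 1::2] = b / (2 * A)

E, nu = 1.0, 0.3           # plane-stress material matrix D
D = E / (1 - nu**2) * np.array([[1, nu, 0],
                                [nu, 1, 0],
                                [0, 0, (1 - nu) / 2]])

ke = A * B.T @ D @ B       # integrand is constant, so the integral is A * B^T D B
assert np.allclose(ke, ke.T)                          # stiffness matrix is symmetric
assert np.allclose(ke @ np.tile([1.0, 0.0], 3), 0.0)  # rigid x-translation gives no force
```

The two assertions are standard sanity checks: symmetry follows from $B^TDB$, and a rigid-body translation produces zero strain and hence zero nodal forces.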
It's not immediately obvious to me why the matrix $M$ becomes $B^T$ over the element $e$. How can I arrive at this conclusion just by letting the test functions be the basis functions? Any help with this would be greatly appreciated! :)
|
Difference between revisions of "Main Page"
Revision as of 20:56, 11 February 2009

The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
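These definitions are easy to check by brute force for tiny [math]n[/math]. The following Python sketch (not part of the project itself, just a sanity check) enumerates all combinatorial lines in [math][3]^2[/math] and searches for the largest line-free subset:

```python
from itertools import combinations, product

def lines(n):
    # Every template over {1,2,3,'x'} with at least one wildcard 'x' gives a
    # combinatorial line: substitute 1, 2, 3 for all wildcards simultaneously.
    for tpl in product((1, 2, 3, 'x'), repeat=n):
        if 'x' in tpl:
            yield tuple(tuple(s if t == 'x' else t for t in tpl)
                        for s in (1, 2, 3))

def line_free(subset, n):
    s = set(subset)
    return not any(all(p in s for p in line) for line in lines(n))

points = list(product((1, 2, 3), repeat=2))
c2 = max(k for k in range(len(points) + 1)
         if any(line_free(c, 2) for c in combinations(points, k)))
print(c2)  # -> 6: the largest line-free subset of [3]^2 has 6 points
```

For [math]n=2[/math] this confirms [math]c_2 = 6[/math]: removing the three diagonal points [math](1,1),(2,2),(3,3)[/math] kills all seven lines, and no two points can cover all seven.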
The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Threads

(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (final call)
(500-599) TBA
(600-699) A reading seminar on density Hales-Jewett (active)
A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.
Unsolved questions

Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.

IP-Szemeredi (a weaker problem than DHJ)

Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^n[/math].)
Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.
The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.
Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length [math]d[/math]. There is a one-to-one mapping to [math][4]^d[/math]: given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math], [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement.
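The encoding in that proof can be spelled out concretely. This sketch (helper names are made up for the example) decodes the four points of a combinatorial line in [math][4]^d[/math] and checks that they form the corners of a square in the Cartesian product:

```python
# z_i in {0,1,2,3} encodes the pair (x_i, y_i) as in the text.
code = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}
decode = {v: k for k, v in code.items()}

def line_points(template):
    # template over {0,1,2,3,'*'}: substitute each symbol for every '*' at once
    return [tuple(s if t == '*' else t for t in template) for s in range(4)]

# A combinatorial line in [4]^d decodes to (x,y), (x+d,y), (x,y+d), (x+d,y+d),
# with the difference d supported exactly on the wildcard coordinates.
tmpl = (1, '*', 0, '*', 3)  # an arbitrary example template with d = 5
corners = [tuple(zip(*(decode[z] for z in p))) for p in line_points(tmpl)]
xs = {c[0] for c in corners}  # the x-strings that occur
ys = {c[1] for c in corners}  # the y-strings that occur
assert len(xs) == 2 and len(ys) == 2   # exactly two x-values and two y-values
assert len(set(corners)) == 4          # all four corners of the square appear
```

Substituting 0, 1, 2, 3 for the wildcards toggles exactly the combinations (neither, x only, y only, both), which is why the four decoded points are the four corners.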
Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product.
This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do.
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.
O'Donnell.35: Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]

are equal to one of the following (each bracketed column read top to bottom):

[ 0 0 ]   [ 0 0 ]   [ 0 1 ]   [ 1 0 ]   [ 1 1 ]   [ 1 1 ]
[ 0 0 ] , [ 0 1 ] , [ 0 1 ] , [ 1 0 ] , [ 1 0 ] , [ 1 1 ]
[ 0 0 ]   [ 1 0 ]   [ 0 1 ]   [ 1 0 ]   [ 0 1 ]   [ 1 1 ]

?
McCutcheon.469: IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
High-dimensional Sperner
Kalai.29: There is an analogue for Sperner but with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?); those are the usual suspects.)
Fourier approach
Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy.
Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again.
The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient.
You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.

DHJ for dense subsets of a random set
Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Bibliography

H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988), Discrete Math. 75 (1989), no. 1-3, 227–241.

R. McCutcheon, "The conclusion of the proof of the density Hales-Jewett theorem for k=3", unpublished.

H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J. Anal. Math. 57 (1991), 64–119.
|
20 July 2017:
1) The paper Blake and I wrote on [A compositional framework for reaction networks](https://arxiv.org/abs/1704.02051) has been accepted by _Reviews in Mathematical Physics_. The best part: it seems no corrections were demanded!
2) More news from Daniel, who is at the big annual category theory conference:
> Some updates from Vancouver:
> I ran into Rick Blute who was my masters adviser and the editor of TAC that I sent my Spans of Cospans paper to. He told me he recently sent an email to the referee reminding them to get me a report. We'll see how that goes.
> I gave my talk on Lack and Rosicky's Notions of Lawvere Theories paper today. I got great feedback from Emily Riehl, so that was nice. I presented believing the entire time that this white haired and white bearded man sitting front and center was Bill Lawvere (he was supposed to be here) but it turned out to be Michael Barr. Oddly, I was not the only one of the Kan seminar group to make this mistake.
> Us Kan folk, along with some alumni from the first instance of the online seminar went out to dinner and it turns out Christina Vasilakopoulou was one of these alumni, so we got to chat a bit.
3) James Haydon is applying decorated cospans to computer science:
> What I've done is set up a framework for composing coroutines (= asynchronous cooperating processes) using a category of decorated cospans. Furthermore, I've implemented the whole thing in code: actual composition of concurrent processes via pushouts!
> As an underlying category I take typed channel contexts; this represents a support for a pi-calculus process: channel names and types which it may read and write from. The morphisms map names while respecting the typing structure.
> For such a context \\(X\\), I define a restricted pi-calculus \\(\Pi(X)\\), which is the set of well typed pi-calculi processes that may only read and write to the channel names specified in \\(X\\).
> This defines a monoidal functor
> \[ \Pi : (\mathrm{TyCh}, +) \to (\mathrm{Set}, \times) \]
> with the required properties to form a category of decorated cospans. I have implemented all this in the Idris programming language, the source code is here:
> https://github.com/jameshaydon/cospanProc
> I've experimented with several examples and I think this provides a nice framework for organising code, and composing processes in a safe way. While you compose the processes, you simultaneously compute, via the pushout, the communication interface the resulting process will expose.
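The composition step James describes, gluing two processes along a shared channel interface via a pushout, can be illustrated in miniature. This is not his Idris code; it is just a toy pushout of finite sets in Python, with all names invented for the example:

```python
# Composing two cospans of finite sets  X -> A <- Y  and  Y -> B <- Z
# means taking the pushout of  A <- Y -> B: the disjoint union A + B with
# f(y) glued to g(y) for every y in the shared interface Y.
def pushout(A, B, f, g, Y):
    # union-find over the disjoint union, tagging elements by their side
    parent = {('A', a): ('A', a) for a in A} | {('B', b): ('B', b) for b in B}

    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v

    for y in Y:                      # glue f(y) ~ g(y)
        parent[find(('A', f[y]))] = find(('B', g[y]))
    return {v: find(v) for v in parent}  # each element -> its glued class

# Example: interfaces are channel names; the shared channel 'y' identifies
# port a2 of the first process with port b1 of the second.
classes = pushout({'a1', 'a2'}, {'b1', 'b2'},
                  {'y': 'a2'}, {'y': 'b1'}, {'y'})
assert classes[('A', 'a2')] == classes[('B', 'b1')]   # glued together
assert classes[('A', 'a1')] != classes[('B', 'b2')]   # everything else stays apart
```

The point of the construction is exactly what James notes: the pushout simultaneously computes the composite and the communication interface it exposes.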
|
Diophantine approximation, metric theory of
The branch in number theory whose subject is the study of metric properties of numbers with special approximation properties (cf. Diophantine approximations; Metric theory of numbers). One of the first theorems of the theory was Khinchin's theorem [1], [2] which, in its modern form [3], may be stated as follows. Let $\phi(q)$ be a monotone decreasing function, defined for integers $q \ge 1$. Then the inequalities $\Vert \alpha q \Vert < \phi(q)$ have an infinite number of solutions in integers $q \ge 1$ for almost-all real numbers $\alpha$ if the series $$ \sum_{q=1}^\infty \phi(q) \tag{1} $$ diverges, and have only a finite number of solutions if this series converges (here and in what follows, $\Vert \alpha \Vert$ is the distance from $\alpha$ to the nearest integer, i.e. $$ \Vert x \Vert = \min_n |x-n|\,, $$ where $\min$ is taken over all integers $n \in \mathbf{Z}$; the term "almost-all" refers to Lebesgue measure in the respective space). The theorem describes the accuracy of the approximation of almost-all real numbers by rational fractions. For example, for almost-all $\alpha$ there exists an infinite number of rational approximations $p/q$ satisfying the inequality $$ \left\vert{ \alpha - \frac{p}{q} }\right\vert < \frac{1}{q^2 \log q} $$ whereas the inequality $$ \left\vert{ \alpha - \frac{p}{q} }\right\vert < \frac{1}{q^2 (\log q)^{1+\epsilon}} $$ has for any $\epsilon>0$ an infinite number of solutions only for a set of numbers $\alpha$ of measure zero.
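As a purely numerical illustration (not part of the entry), one can count solutions of $\Vert \alpha q \Vert < \phi(q)$ for a sample $\alpha$. With $\phi(q)=1/q$ the condition is $|\alpha - p/q| < 1/q^2$, which by Dirichlet's classical theorem has infinitely many solutions for every irrational $\alpha$; the cutoff $N$ and the choice $\alpha=\pi$ below are arbitrary:

```python
import math

def dist_to_int(t):
    # ||t||: distance from t to the nearest integer
    return abs(t - round(t))

# Count q <= N with ||alpha*q|| < 1/q, i.e. |alpha - p/q| < 1/q^2,
# for the sample value alpha = pi.
alpha, N = math.pi, 100_000
solutions = [q for q in range(1, N + 1) if dist_to_int(alpha * q) < 1 / q]
print(len(solutions), solutions[:5])
```

The list picks up, among others, the continued-fraction convergent denominators of $\pi$ (7, 113, 33102, ...), since every convergent $p/q$ satisfies $|\alpha - p/q| < 1/q^2$.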
The generalization of this theorem to simultaneous approximations [3] is as follows. The system of inequalities $$ \max(\Vert \alpha_1 q \Vert, \ldots, \Vert \alpha_n q \Vert) < \phi(q) \tag{2} $$ has a finite or infinite number of solutions for almost-all $(\alpha_1,\ldots,\alpha_n) \in \mathbf{R}^n$ depending on whether the series $$ \sum_{q=1}^\infty \phi(q)^n \tag{3} $$ converges or diverges.
More extensive generalizations refer to systems of inequalities in several integer variables [5].
A distinguishing feature of Khinchin's theorem and its many generalizations is the fact that the property of "convergence-divergence" of series of the types (1), (3) serves as a criterion of the corresponding order of the approximation applying to a set of numbers of measure zero or to almost-all numbers. It is a kind of "zero-one" law for the metric theory of Diophantine approximations. Another characteristic of these generalizations is the fact that the metric property of the numbers involved refers to a measure defined throughout the space containing the numbers which participate in the approximation, and that the measure of the space is defined as the product of the measures in the coordinate spaces. For instance, in the case of system (2) one speaks about an approximation of $n$ "independent" numbers and about the Lebesgue measure in $\mathbf{R}^n = \mathbf{R} \times\cdots\times \mathbf{R}$ ($n$ times). In this connection this part of the theory received the name of metric theory of Diophantine approximations of independent variables. It has been fairly thoroughly developed, but a number of unsolved problems still (1988) remain. One such problem concerns the conditions which must be imposed on a sequence of measurable sets $A(q)$, $q=1,2,\ldots$ in the interval $[0,1]$ for the convergence or divergence of the series $\sum_q |A(q)|$ to correspond to the condition $\alpha q \in A(q) \pmod{1}$ to be satisfied a finite or infinite number of times for almost-all $\alpha$. A similar problem arises for the system of numbers $(\alpha_1 q,\ldots,\alpha_n q)$ [4].
The metric theory of Diophantine approximations of dependent variables, which is of a later date, immediately gave rise to several fundamental and characteristic problems [5]. The first one originated in the theory of transcendental numbers (Mahler's conjecture) and concerned simultaneous rational approximations to a system of numbers $t,\ldots,t^n$ for almost-all $t$ for any fixed natural number $n$. A recent result obtained on this subject runs as follows. Let $\phi(q)>0$ be a monotone decreasing function for which the series $$ \sum_{q=1}^\infty \frac{\phi(q)}{q} $$ converges. Then the system of inequalities $$ \max(\Vert t q \Vert, \ldots, \Vert t^n q \Vert) < \frac{\phi(q)^n}{q^n} $$ has only a finite number of solutions in integers $q \ge 1$ for almost-all $t$ [7].
This theorem confirms that it is possible to approximate by rational numbers almost-all the points of a curve $\Gamma \subset \mathbf{R}^n$. Considerations of more general manifolds in $\mathbf{R}^n$ will yield similar results.
If almost-all (in the sense of the measure on $\Gamma$) points $(\alpha_1,\ldots,\alpha_n)$ of the manifold $\Gamma$ are such that the system (2) with $\phi(q) = q^{-1/n-\epsilon}$ has a finite number of solutions in integers $q\ge1$ for any $\epsilon>0$, then $\Gamma$ is said to be
extremal, i.e. almost-all points permit only the worst simultaneous approximation by rational numbers. Schmidt's theorem says that if $\Gamma$ is a curve in $\mathbf{R}^2$ with non-zero curvature at almost-all its points, it is extremal [8].
The method of trigonometric sums (cf. Trigonometric sums, method of; see also Vinogradov method) makes it possible to detect extremality of very general manifolds $\Gamma$ in $\mathbf{R}^n$, under the condition that the topological dimension $\dim\Gamma \ge n/2$. If, on the other hand, $\dim\Gamma < n/2$, the extremal manifold cannot be quite general, and its structure should be fairly definite [9].
References
[1] A. [A.Ya. Khinchin] Khintchine, "Zur metrischen Theorie der diophantischen Approximationen" Math. Z., 24 (1926) pp. 706–714
[2] A.Ya. [A.Ya. Khinchin] Khintchine, "Kettenbrüche", Teubner (1956) (Translated from Russian)
[3] J.W.S. Cassels, "An introduction to diophantine approximation", Cambridge Univ. Press (1957)
[4] J.W.S. Cassels, "Some metrical theorems in diophantine approximation I" Proc. Cambridge Philos. Soc., 46 : 2 (1950) pp. 209–218
[5] V.G. Sprindzhuk, "Mahler's problem in metric number theory", Amer. Math. Soc. (1969) (Translated from Russian)
[6] V.G. Sprindzhuk, "New applications of analytic and $p$-adic methods in Diophantine approximations", Proc. Internat. Congress Mathematicians (Nice, 1970), 1, Gauthier-Villars (1971) pp. 505–509
[7] A. Baker, "On a theorem of Sprindžuk" Proc. Roy. Soc. Ser. A, 292 : 1428 (1966) pp. 92–104
[8] W. Schmidt, "Metrische Sätze über simultane Approximation abhängiger Grössen" Monatsh. Math., 68 : 2 (1964) pp. 154–166
[9] V.G. Sprindzhuk, "The method of trigonometric sums in the metric theory of diophantine approximations of dependent quantities" Proc. Steklov Inst. Math., 128 : 2 (1972) pp. 251–270; Trudy Mat. Inst. Steklov., 128 : 2 (1972) pp. 212–228
[10] V.G. Sprindzhuk, "The metric theory of Diophantine approximations", Current problems of analytic number theory, Minsk (1974) pp. 178–198 (In Russian)

How to Cite This Entry:
Diophantine approximation, metric theory of.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Diophantine_approximation,_metric_theory_of&oldid=42930
|
3x3 matrix subtraction calculator

This calculator uses two $3\times 3$ matrices $A$ and $B$ and calculates the difference between $A$ and $B$. It is an online math tool specially programmed to perform matrix subtraction between two $3\times 3$ matrices.
Matrices are a powerful tool in mathematics, science and life. Matrices are everywhere and they have significant applications. For example, a spreadsheet such as Excel, or a written table, represents a matrix. The word "matrix" is Latin and means "womb". The term was introduced by J. J. Sylvester (an English mathematician) in 1850. The first need for matrices arose in the study of systems of simultaneous linear equations.
A matrix is a rectangular array of numbers, arranged in the following way $$A=\left( \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right)=\left[ \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right]$$ There are two notations for matrices: parentheses or box brackets. The terms in the matrix are called its entries or its elements. Matrices are most often denoted by upper-case letters, while the corresponding lower-case letters, with two subscript indices, denote their elements. For example, matrices are denoted by $A,B,\ldots,Z$ and their elements by $a_{11}$ or $a_{1,1}$, etc. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. The size of a matrix is given by the number of rows and the number of columns that it contains. A matrix with $m$ rows and $n$ columns is called an $m\times n$ matrix. In this case $m$ and $n$ are its dimensions. If a matrix consists of only one row, it is called a row matrix. If a matrix consists of only one column, it is called a column matrix. A matrix which contains only zeros as elements is called a zero matrix.
Matrices $A$ and $B$ can be subtracted if and only if their sizes are equal. Such matrices are called commensurate for addition or subtraction. The difference $A-B$ of two $m\times n$ matrices is equal to the sum $A + (-B)$, where $-B$ represents the additive inverse of the matrix $B$. So, the difference is a matrix $C=A-B$ with elements $$c_{ij}=a_{ij}+(-b_{ij})=a_{ij}-b_{ij}$$ The matrix $C$ has the same size as the matrices $A$ and $B$. This means each element in $C$ is equal to the difference between the elements of $A$ and $B$ that are located in corresponding places. For example, $c_{13}=a_{13}-b_{13}$. If two matrices have different sizes, their difference is not defined. The subtraction of matrices is a non-commutative operation, i.e. $A-B\ne B-A$. Subtracting two matrices is very similar to adding them, the only difference being that corresponding elements are subtracted.
A $3\times 3$ matrix has $3$ columns and $3$ rows. For example, the difference between two $3\times 3$ matrices $A$ and $B$ is a matrix $C$ such that $$\begin{align} C=&\left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right)- \left( \begin{array}{ccc} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} &b_{32} & b_{33} \\ \end{array} \right)= \left(\begin{array}{ccc} a_{11}-b_{11}& a_{12}-b_{12}& a_{13}-b_{13} \\ a_{21}-b_{21} &a_{22}-b_{22}& a_{23}-b_{23}\\ a_{31}-b_{31} &a_{32}-b_{32} & a_{33}-b_{33} \\ \end{array}\right)\end{align}$$ For example, let us find the difference $A-B$ for $$A=\left( \begin{array}{ccc} 10 & 20 & 10 \\ 4 & 5 & 6 \\ 2 & 3 & 5 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{ccc} 3 & 2 & 4 \\ 3 & 3 & 9 \\ 4 & 4 & 2 \\ \end{array} \right)$$ Using the matrix subtraction formula, the difference between the matrices $A$ and $B$ is the matrix $$A-B=\left( \begin{array}{ccc} 10-3 & 20-2 & 10-4 \\ 4-3 & 5-3 & 6-9 \\ 2-4 & 3-4 & 5-2 \\ \end{array} \right)=\left( \begin{array}{ccc} 7 & 18 & 6 \\ 1 & 2 & -3 \\ -2 & -1 & 3 \\ \end{array} \right)$$ The $3\times 3$ matrix subtraction work with steps shows the complete step-by-step calculation for finding the difference of two $3\times3$ matrices $A$ and $B$ using the matrix subtraction formula. For any other matrices, just supply elements of $2$ matrices in terms of a real numbers and click on the Generate Work button. The grade school students may use this 3x3 matrix subtraction calculator to generate the work, verify the results of subtracting matrices derived by hand, or do their homework problems efficiently.
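The worked example above can be reproduced in a couple of lines; here is a small sketch using numpy (just one convenient choice of library), whose `-` operator performs exactly this elementwise subtraction:

```python
import numpy as np

# The worked example from the text, checked with elementwise subtraction.
A = np.array([[10, 20, 10],
              [ 4,  5,  6],
              [ 2,  3,  5]])
B = np.array([[3, 2, 4],
              [3, 3, 9],
              [4, 4, 2]])
C = A - B
print(C)
# [[ 7 18  6]
#  [ 1  2 -3]
#  [-2 -1  3]]
assert not np.array_equal(A - B, B - A)  # subtraction is non-commutative
```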
Matrices are applied in many subjects beyond mathematics. For example, designing computer game graphics, 3D geometry and visualization, social networks, representing real world data, economics, robotics and automation, etc.
Practice Problem 1 : Find the difference between the matrices $$C=\left( \begin{array}{ccc} 3.8 & 9 & 1.9 \\ 0 & 1.5 & -6.6 \\ 12 & -9.7 & 5 \\ \end{array} \right)\quad\mbox{and}\quad D=\left( \begin{array}{ccc} 2.8 & 0 & 4.1 \\ 3 & -7.6 & 0 \\ 5 & 8.4 & 2 \\ \end{array} \right)$$ Practice Problem 2 : The following two matrices represent studies to find what types of sports are most popular and how they could attract more people. Data are given in percent.
|
Question #2f185

Answer:
Any body may be acted upon by a number of forces in different directions. These forces add up or cancel each other, either fully or partially. The net force is the amount of force that is effective on the body: it is not any single force acting on it, but the vector sum of all the forces acting on the body.
Explanation:
Forces obey the principle of superposition, i.e. the resultant force at any point in space is equal to the vector sum of all the forces acting at that point.
For example, consider an object resting on a table top: it is acted upon by gravity, but it is still at rest. This is due to the equal and opposite normal force from the table. In this case, two forces act on the body, but there is no net force, as one totally cancels the other.
When you're pushing a box along the floor, it is acted upon by the frictional force from the floor and by the force you exert on it; gravity and the normal reaction are among the other forces. The force you exert overpowers the frictional force, which partially cancels it, while gravity and the normal reaction totally cancel each other in this case.
The box is acted upon by several forces, but the net force on it is the partially cancelled force you have been exerting on the body. The partial cancellation comes from the friction that opposes your force.
Another point to note is that if there is a net force on the body, the body is sure to accelerate: it will gain or lose speed, or change direction.
Answer:
The net force is the vector sum of all the forces acting on an object. Explanation:
Whenever a number of forces act on an object and the vector sum of all the forces is not balanced, we have a resultant force. This is called
net force. A net force is capable of accelerating a mass. The acceleration could be linear or angular, or both.
In a state of equilibrium, the net force acting on an object is zero. The object does not accelerate.
The net force causes changes in the motion of the object, described by the following expressions. Linear acceleration of the center of mass: $\vec a = \vec F/m$, where $\vec F$ is the net force and $m$ is the mass of the object. Angular acceleration of the body: $\vec \alpha = \vec \tau / I$, where $\vec \tau$ is the resultant torque and $I$ is the moment of inertia of the body. Torque, a vector quantity, is caused by a net force $\vec F$ applied at a position $\vec r$ with respect to some reference point: $\vec \tau = \vec r \times \vec F$, so that $|\vec \tau| = |\vec r|\,|\vec F|\sin\theta$.
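The superposition principle described above can be illustrated with a minimal numerical sketch (the force components and the mass are made-up values, chosen to mirror the box-pushing example):

```python
# Net force is the vector sum of the individual forces, F_net = sum(F_i);
# the resulting acceleration follows Newton's second law, a = F_net / m.
forces = [(10.0, 0.0),   # your push
          (-4.0, 0.0),   # friction, partially cancelling the push
          (0.0, -9.8),   # gravity
          (0.0, 9.8)]    # normal reaction, totally cancelling gravity
m = 2.0  # kg

f_net = (sum(f[0] for f in forces), sum(f[1] for f in forces))
a = (f_net[0] / m, f_net[1] / m)
print(f_net)  # (6.0, 0.0)
print(a)      # (3.0, 0.0)
```

Gravity and the normal reaction cancel exactly; only the partially cancelled push survives as the net force.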
|
Now showing items 1-10 of 26
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
|
Revista Matemática Iberoamericana
Volume 27, Issue 1, 2011, pp. 93–122 DOI: 10.4171/RMI/631
Published online: 2011-04-30
Isoperimetry for spherically symmetric log-concave probability measures
Nolwen Huet (1)
(1) Université de Toulouse, France
We prove an isoperimetric inequality for probability measures $\mu$ on $\mathbb{R}^n$ with density proportional to $\exp(-\phi(\lambda |x|))$, where $|x|$ is the Euclidean norm on $\mathbb{R}^n$ and $\phi$ is a non-decreasing convex function. It applies in particular when $\phi(x)=x^\alpha$ with $\alpha \ge 1$. Under mild assumptions on $\phi$, the inequality is dimension-free if $\lambda$ is chosen such that the covariance of $\mu$ is the identity.
Keywords: Isoperimetric inequalities, log-concave measures
Huet Nolwen: Isoperimetry for spherically symmetric log-concave probability measures.
Rev. Mat. Iberoam. 27 (2011), 93-122. doi: 10.4171/RMI/631
|
EDIT: As pointed out in the comments, the following proof is found in Knuth’s book, The Art of Computer Programming, Vol. 3, and is attributed to MacMahon.
There are several nice series expansions involving basic permutation statistics which I’ve been reviewing lately. One of these involves the number of
inversions of a permutation.
An
inversion of a permutation of the numbers $1,2,\ldots,n$ is a pair of numbers which appear “out of order” in the sequence, that is, with the larger number appearing first. For instance, for $\DeclareMathOperator{\maj}{maj} \DeclareMathOperator{\inv}{inv} n=4$, the permutation $3,1,4,2$ has three inversions: $(3,1)$, $(3,2)$, and $(4,2)$. We write $$\inv(3142)=3.$$
It is not hard to show that
$$(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1})=\sum q^{\inv(w)}$$ where the sum ranges over all permutations $w$ of $\{1,2,\ldots,n\}$. (See page 3 of this post for a proof.) The product on the left side is called the $n$th $q$-factorial.
The remarkable thing is that there is a similar expansion of the $q$-factorial in terms of the
major index. Define a descent of a permutation to be an index $i$ for which the $i$th number is greater than the $(i+1)$st. The major index is defined to be the sum of the descent positions. For instance, the permutation $3,1,4,2$ has two descents, in positions $1$ and $3$, so the major index is $4$. We write $$\maj(3142)=4.$$
With this notation, we have
\begin{equation}(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1})=\sum_{w\in S_n} q^{\maj(w)}\end{equation} as well, implying that the number of permutations with $k$ inversions is equal to the number of permutations with major index $k$, for any $k$. There is a direct bijection showing this called the Foata bijection.
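The equidistribution of $\inv$ and $\maj$ can be spot-checked by brute force for small $n$ (a quick sketch; the function names are my own):

```python
from itertools import permutations
from collections import Counter

def inv(w):
    """Number of inversions: pairs (w[i], w[j]) with i < j and w[i] > w[j]."""
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def maj(w):
    """Major index: sum of the (1-indexed) descent positions."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

n = 4
perms = list(permutations(range(1, n + 1)))
print(inv((3, 1, 4, 2)), maj((3, 1, 4, 2)))                  # 3 4
# Same number of permutations with each statistic value k:
print(Counter(map(inv, perms)) == Counter(map(maj, perms)))  # True
```

The matching `Counter`s say exactly that, for every $k$, the number of permutations with $k$ inversions equals the number with major index $k$.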
I was seeking a way to prove equation (1) directly, without the Foata bijection. But the simplest proof I was able to find in the literature, in Stanley’s
Enumerative Combinatorics, proved it by showing something much more general and then simply stating it as a corollary. Wishing to see a direct proof of the result, I went through Stanley's more general proof step-by-step to see what was going on in this special case. I thought I would share it here since I had a hard time finding it elsewhere. (Turn to page 2.)
|
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Now showing items 1-2 of 2
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
|
I'm looking for a formal definition of
Cluster of Solutions. My current understanding is the following. Let $x$ be a boolean assignment on $n$ variables. Let $f: \{ 0,1 \} ^n \to \mathbb{N}$ be a function that, given a boolean assignment $x$ on $n$ variables, just returns the natural number $i \in [0, 2^n-1]$ corresponding to $x$. Let $g: \mathbb{N} \to \{ 0,1 \} ^n$ be the inverse of $f$: given a natural number $i$, $g$ returns the corresponding solution (i.e. the binary encoding of $i$ in $n$ bits). Now, a Cluster of Solutions is a set $S$ of solutions such that, for each solution $x \in S$ and for each solution $y \in S$, it's the case that $g(i) \in S$ for each $i \in (f(x), f(y))$. Less formally, a Cluster of Solutions is a set whose solutions are "packed", i.e. "there are no non-solutions among the solutions". Is this definition correct?
I think a cluster of solutions is a maximal set of solutions $T$ s.t. you can reach every $\tau' \in T$ from every other $\tau \in T$ by a sequence of solutions $\{\tau_i\}_{0\leq i\leq n}$ ($\tau = \tau_0$ and $\tau' = \tau_n$) where the Hamming distance between each consecutive pair of solutions is bounded; i.e., they are just the connected components in the graph where two solutions are adjacent iff the Hamming distance between them is less than the bound.
See these notes by Dimitris Achlioptas (or papers on
statistical physics and random k-SAT).
I think a possible alternative definition of a solution cluster could be the following: a solution cluster is a set of satisfying Boolean assignments inside a ball of some given radius, where the distance metric is the Hamming distance between two satisfying assignments. This would enable a compact representation of each cluster by giving its center and radius.
|
Beatty Sequences
Let $r$ be a positive irrational number. Set $s = 1/r,$ and define two sequences: $a_{n} = n(r + 1)$ and $b_{n} = n(s + 1)$, $n \gt 0.$
Obviously, all terms of both sequences are irrational. In particular, none of them is integer. A remarkable theorem discovered by Sam Beatty in 1926 states that, for any integer $N,$ there is exactly one element from the union $\{a_{n}\}\cup \{b_{n}\}$ that lies in the interval $(N, N + 1)$.
This property is very remarkable for the following reason. By definition, for a non-integer $\alpha,$ $N \lt \alpha \lt (N + 1)$ is the same as $\lfloor \alpha\rfloor = N.$ Thus Beatty's theorem asserts that the union of sequences of whole parts $\{\lfloor a_{n}\rfloor\}$ and $\{\lfloor b_{n}\rfloor\}$ covers the set of natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}.$ It's a simple matter to show that Beatty's sequences $\{a_{n}\}$ and $\{b_{n}\}$ do not intersect. But more than that, no two of their combined terms fall into the same interval $(N,N+1),\;$ with $N\;$ an integer:
$\{\lfloor a_{n}\rfloor\}\cup \{\lfloor b_{n}\rfloor\} = \mathbb{N}$ and $\{\lfloor a_{n}\rfloor\}\cap\{\lfloor b_{n}\rfloor\} = \emptyset ,$
which means that the sequences of whole parts complement each other in $\mathbb{N}.$ In this context, two sequences of integers that complement each other in
$\mathbb{N}$ are called complementary, and Beatty's theorem shows a surprising way to generate such complementary sequences.
Let's prove Beatty's theorem. One elegant proof was published in 1927 by A. Ostrowski and A. C. Aitken. The proof appears in Ross Honsberger's Ingenuity in Mathematics (MAA, 1970, pp 94-95.) While reading the book, I realized that Beatty's theorem is related to the problem of distribution of fractions on a unit interval that has been discussed elsewhere. In fact, Ostrowski and Aitken's proof was readily adaptable to the latter problem. It was then natural to ask whether the original proof for the problem of distribution of fractions might have bearings on Beatty's theorem.
Thinking along these lines led to a curious inequality that sheds some light on the distribution of Beatty's sequences on the number line.
Proof (Beatty's Theorem)
Let $N$ be an integer. There are $\lfloor N/(r + 1)\rfloor$ terms of the first sequence less than $N$, and $\lfloor N/(s + 1)\rfloor$ such terms from the second sequence. Since none of the $a_{n}$ or $b_{n}$ is an integer,
(1)
$\begin{align}&N/(r + 1) - 1 \lt \lfloor N/(r + 1)\rfloor \lt N/(r + 1)\\ &N/(s + 1) - 1 \lt \lfloor N/(s + 1)\rfloor \lt N/(s + 1). \end{align}$
Note that
(2)
$\displaystyle\frac{1}{r+1}+\frac{1}{s+1}=\frac{1}{r+1}+\frac{1}{\displaystyle\frac{1}{r}+1}=\frac{1}{r+1}+\frac{r}{r+1}=1.$
Adding up (1) we thus get
$N - 2 \lt \lfloor N/(r + 1)\rfloor + \lfloor N/(s + 1)\rfloor \lt N,$
which implies $\lfloor N/(r + 1)\rfloor + \lfloor N/(s + 1)\rfloor = N - 1.$ Replacing $N$ with $N + 1$, we see that exactly one term from the union $\{a_{n}\}\cup\{b_{n}\}$ is added. This naturally belongs to the interval $(N, N + 1).$
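Beatty's theorem is easy to check numerically (a quick sketch; $r=\sqrt2$ is an arbitrary choice of positive irrational):

```python
import math

r = math.sqrt(2)  # any positive irrational works here
s = 1 / r
M = 500           # check the naturals 1..M

a = {math.floor(n * (r + 1)) for n in range(1, M + 1)}
b = {math.floor(n * (s + 1)) for n in range(1, M + 1)}

# restrict to 1..M, where both sequences have been fully generated
A = {k for k in a if k <= M}
B = {k for k in b if k <= M}
print(A & B == set())                 # disjoint
print(A | B == set(range(1, M + 1)))  # together they cover 1..M
```

Note $1/(r+1) + 1/(s+1) = 1$ holds automatically since $s = 1/r$, which is exactly the hypothesis of the theorem.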
Note: elsewhere, there's another proof of Beatty's theorem.
Let's see how the points $a_{n}$ and $b_{n}$ may be distributed on the number line. Mark all points of the two sequences. We are interested in the pairs of adjacent points. The distance between $a_{n+1}$ and $a_{n}$ equals $r + 1$, which is greater than $1.$ And the same is true for the second sequence. Which proves that between any two adjacent points that belong to the same sequence there's always an integer.
If, on the other hand, points $a_{n}$ and $b_{m}$ are adjacent, we may consider a linear combination $\alpha a_{n} + \beta b_{m},$ where $\alpha , \beta \gt 0,$ and $\alpha + \beta = 1.$ All such combinations lie between $a_{n}$ and $b_{m}.$ In view of (2), we can take $\alpha = 1/(r + 1)$ and $\beta = 1/(s + 1)$. The result, $n + m,$ is an integer that lies between $a_{n}$ and $b_{m}.$
Copyright © 1996-2018 Alexander Bogomolny
Sequences $\{a_{n}\}$ and $\{b_{n}\}$ do not intersect. Indeed assume they have a common element: $a_{i} = b_{j}$, or explicitly: $i(r + 1) = j(s + 1)$. Note that since $rs = 1$,
$\displaystyle\frac{r+1}{s+1}=\frac{r+1}{\displaystyle\frac{1}{r}+1}=\frac{r+1}{(r+1)/r}=r.$
Therefore, $i(r + 1) = j(s + 1)$ would imply
$\displaystyle r=\frac{r+1}{s+1}=\frac{j}{i},$
which, for an irrational $r$ and rational $j/i,$ is impossible.
|
$\newcommand{\s}{\overset{\text{sgn}}=}\newcommand{\Dx}{\text{Dx}}\newcommand{\logDx}{\text{logDx}}\newcommand{\DlogDx}{\text{DlogDx}}\newcommand{\DDDlogDx}{\text{DDDlogDx}}\newcommand{\DDDDDlogDx}{\text{DDDDDlogDx}}\newcommand{\dif}{\text{dif}}\newcommand{\Ddif}{\text{Ddif}}\newcommand{\R}{\mathbb{R}}$Let us show that the inequality in question holds for all real $n\ge5$; the cases when $n\in\{1,2,3,4\}$ are verified directly. By a comment of Pietro Mayer, without loss of generality $0<x<1$. We shall reduce the problem to the completely algorithmic problem of checking sign patterns of several polynomials in $n,x$, of total degrees $\le11$. This reduction is done in a few steps:
Step 1: Eliminating $(\frac{1+x}2)^n$: The inequality in question can be rewritten as \begin{equation} u(x):=u_n(x):=n \ln \left(\frac{x^n+1}{x^{n-1}+1}\right) -\ln \left(x^n+1-z^n\right)\ge0, \end{equation}where $z:=z_x:=\frac{1+x}2$. Note that \begin{multline*} u'(x)\frac{x (1+x)}n \left(x^{1-n}+x^n+x+1\right) \left(x^n+1-z^n\right) \\ =\Dx:=\left(n \left(1-x^2\right)+\left(x^{2-n}-1\right) \left(1+x^n\right)\right) z^n-(n-1) \left(1-x^2\right) \left(1+x^n\right), \end{multline*}so that \begin{equation} u'(x)\s\Dx\s\logDx(x), \end{equation}where $\s$ denotes the equality in sign and \begin{equation} \logDx(x):=\logDx_n(x):=n \ln z-\ln \frac{(n-1) \left(1-x^2\right) \left(1+x^n\right)}{n \left(1-x^2\right)+\left(x^{2-n}-1\right) \left(1+x^n\right)}. \end{equation}Here and in the sequel, $\Dx$, $\logDx$, etc. are atomic, "indivisible" symbols; $\Dx$ refers to the derivative (of $u$) in $x$, $\logDx$ refers to a certain kind of logarithmic modification of $\Dx$, etc. Next, let \begin{multline*} \DlogDx(x):=\DlogDx_n(x):= \\ \logDx'(x)(1-x) (1+x) x^{n-1} \left(1+x^n\right) \left(n \left(1-x^2\right)+\left(x^{2-n}-1\right) \left(1+x^n\right)\right) \\ =n^2 (x-1)^2 (x+1) \left(x-x^n\right) x^{n-2}-2 \left(x^n-1\right) \left(x^n+1\right)^2+\frac{n (x-1) \left(x^n+1\right)^2 \left(x^n+x\right)}{x}.\end{multline*}So, we get a polynomial in $x^n$ of degree $3$ over the field $\R(n,x)$ of all real rational functions in $n,x$.
Step 2: Reducing the degree from $3$ to $2$: Let \begin{multline*} \DDDlogDx(x):= \DlogDx''(x) x^{3 - 3 n}\\ =x^{3-3 n} (n (n x-n+2 x+2) (n^2 x^2-n^2+n x^2+2 n-1) x^{n-3} \\ -2 (n-1) n (x-1) (2 n^2 x^2-2 n^2+n x^2-2 n x+3 n+2 x) x^{2 n-4}+n (3 n-1) (3 n x-3 n-6 x+2) x^{3 n-3}) \\ \s\DlogDx''(x). \end{multline*}Taking the second derivative $\DlogDx''(x)$ of the polynomial $\DlogDx(x)$ in $x^n$ over $\R(n,x)$ kills the free term of that polynomial. Thus, we get the polynomial $\DDDlogDx(x)$ of degree $2$ in $x^{-n}$ over $\R(n,x)$.
Step 3: Reducing the degree from $2$ to $1$: Let \begin{equation} \DDDDDlogDx(x):= \frac{\DDDlogDx''(x)}{2 (n - 1) n^2 x^{-3 - 2 n}} = A_n(x) - x^n B_n(x), \end{equation}\begin{equation} A_n(x):=\left(2 n^3+3 n^2-5 n-6\right) x^4+\left(-2 n^3+3 n^2+3 n-2\right) x^3+\left(-2 n^3-n^2+5 n-2\right) x^2+\left(2 n^3-5 n^2+n+2\right) x, \end{equation}\begin{equation} B_n(x):=2 n^3+3 n^2-\left(-2 n^3+5 n^2-n-2\right) x^3-\left(2 n^3+n^2-5 n+2\right) x^2-\left(2 n^3-3 n^2-3 n+2\right) x-5 n-6, \end{equation}so that \begin{equation} \DDDlogDx''(x)\s A_n(x) - x^n B_n(x). \end{equation}Thus, we get the polynomial $\DDDDDlogDx(x)$ of degree $1$ in $x^n$ over $\R(n,x)$.
Step 4: Reducing the degree from $1$ to $0$: We can see that (under the conditions $n\ge5$ and $0<x<1$, assumed everywhere here) $B_n(x)>0$. So, $\DDDlogDx''(x)<0$ whenever $A_n(x)\le0$.
Further, let \begin{equation} \dif(x) = \dif_n(x) :=\ln\frac{A_n(x)}{B_n(x)} - n \ln x\s A_n(x) - x^n B_n(x)\s \DDDlogDx''(x)\end{equation}wherever $A_n(x)>0$, and then \begin{multline*} \Ddif(x) = \Ddif_n(x) :=\dif'(x)\frac{A_n(x)B_n(x)}{(n+1)(n-2)} \\ =-4 n^5 (x-1)^4 (x+1)^2+4 n^4 (x-1)^4 (x+1)^2+n^3 (x-1)^2 \left(15 x^4+16 x^3-10 x^2+16 x+15\right) \\ -4 n^2 \left(x^2-1\right)^2 \left(5 x^2+x+5\right)+n \left(-x^6+30 x^5+41 x^4-44 x^3+41 x^2+30 x-1\right) \\ +2 \left(3 x^6-6 x^5-11 x^4-36 x^3-11 x^2-6 x+3\right) \\ \s\dif'(x), \end{multline*}finally getting a polynomial in $n,x$.
Now we need to trace the above steps back:
Looking back at the polynomial $A_n(x)$, (for $x\in(0,1)$) we find that $A_n(x)\le0$ iff $x_1\le x\le x_2$, where $x_1=x_1(n)$ and $x_2=x_2(n)$ are the two roots of $A_n(x)$ in $(0,1)$ such that $x_1<x_2$.
Further, $\Ddif<0$ and hence $\dif'<0$ on $(0,x_1]$; and $\Ddif>0$ and hence $\dif'>0$ on $[x_2,1)$. So, $\dif$ decreases on $(0, x_1]$ and increases on $[x_2, 1)$. So, $\dif$ is $+-$ on $(0, x_1]$ (that is, $\dif$ can switch sign at most once on $(0, x_1]$, and only from $+$ to $-$). Similarly, $\dif$ is $-+$ on $[x_2, 1)$.
But also $\dif(1)=0$. So, actually $\dif<0$ on $[x_2, 1)$.
So, $\DDDlogDx''$ is $+-$ on $(0, x_1]$ and $\DDDlogDx'' < 0$ on $[x_2, 1)$.
Also, $A < 0$ and hence $\DDDlogDx'' < 0$ on $[x_1, x_2]$. So, $\DDDlogDx''$ is $+-$ on $(0, 1)$. So, $\DDDlogDx$ is convex-concave on $(0, 1)$. Also, $\DDDlogDx(1)=0$.
So, $\DDDlogDx$ is $+-+$ on $(0, 1)$. So, $\DlogDx$ is convex-concave-convex on $(0, 1)$. Also, $\DlogDx(1)=\DlogDx'(1)=\DlogDx''(1)=0>-8n(n^2-1)=\DlogDx'''(1)$ and $\DlogDx(0+)=2-n<0$. So, $\DlogDx$ is $-+$; so, $\logDx$ is decreasing-increasing.
Also, $\logDx(1-)=0$. So, $\logDx$ is $+-$, and hence so is $\Dx$ (with $z = \frac{1 + x}2$).
Recalling that $u'(x)\s\Dx$, we see that $u_n(x)$ is increasing-decreasing (in $x\in(0,1)$). Also, $u_n(0)=-\ln(1 - 2^{-n})>0$ and $u_n(1)=0$.
Thus, $u>0$, which concludes the proof.
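As a sanity check of the conclusion $u_n(x)>0$ for $n\ge5$ and $0<x<1$, one can evaluate $u$ on a grid (a rough numerical sketch, not part of the proof; a small tolerance absorbs floating-point rounding near $x=1$, where $u$ is tiny):

```python
import math

def u(n, x):
    """u_n(x) = n*ln((x^n+1)/(x^(n-1)+1)) - ln(x^n + 1 - z^n), z = (1+x)/2."""
    z = (1 + x) / 2
    return n * math.log((x**n + 1) / (x**(n - 1) + 1)) - math.log(x**n + 1 - z**n)

# u should stay positive on (0,1) for all n >= 5
ok = all(u(n, k / 100) > -1e-9 for n in range(5, 20) for k in range(1, 100))
print(ok)
```

The values near $x=1$ are on the order of $10^{-6}$ or smaller, consistent with $u_n(1)=0$.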
|
The 2x2 matrix addition and subtraction calculator uses two $2\times 2$ matrices $A$ and $B$ and calculates both their sum $A+B$ and their difference $A-B$. It is an online math tool specially programmed to perform matrix addition and subtraction between two $2\times 2$ matrices.
Matrices are a powerful tool in mathematics, science and life. Matrices are everywhere and they have significant applications. For example, a spreadsheet such as Excel, or a written table, represents a matrix. The word "matrix" is Latin and means "womb". The term was introduced by J. J. Sylvester (English mathematician) in 1850. The first need for matrices arose in the study of systems of simultaneous linear equations.
A matrix is a rectangular array of numbers, arranged in the following way $$A=\left( \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right)=\left[ \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right]$$ There are two notations for matrices: parentheses or box brackets. The terms in the matrix are called its entries or its elements. Matrices are most often denoted by upper-case letters, while the corresponding lower-case letters, with two subscript indices, are the elements of matrices. For example, matrices are denoted by $A,B,\ldots,Z$ and their elements by $a_{11}$ or $a_{1,1}$, etc. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. The size of a matrix is determined by the number of rows and columns that it contains. A matrix with $m$ rows and $n$ columns is called an $m\times n$ matrix. In this case $m$ and $n$ are its dimensions. If a matrix consists of only one row, it is called a row matrix. If a matrix consists of only one column, it is called a column matrix. A matrix which contains only zeros as elements is called a zero matrix.
Matrices $A$ and $B$ can be added if and only if their sizes are equal. Such matrices are called
commensurate for addition or subtraction. Their sum is a matrix $C=A+B$ with elements $$c_{ij}=a_{ij}+b_{ij}$$ The sum matrix has the same size as the matrices $A$ and $B$. This means each element in $C$ is equal to the sum of the elements in $A$ and $B$ that are located in corresponding places. For example, $c_{12}=a_{12}+b_{12}$. If two matrices have different sizes, their sum is not defined. It is easy to prove that $A+B=B+A$; in other words, matrix addition is a commutative operation. A $2\times 2$ matrix has $2$ columns and $2$ rows. For example, the sum of two $2\times 2$ matrices $A$ and $B$ is a matrix $C$ such that $$\begin{align} &C=\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array} \right)+ \left( \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \\ \end{array} \right)= \left(\begin{array}{cc} a_{11}+b_{11}& a_{12}+b_{12} \\ a_{21}+b_{21} &a_{22}+b_{22}\\ \end{array}\right)\end{align}$$ For example, let us find the sum for $$A=\left( \begin{array}{cc} 5 & 8 \\ 3 & 8 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{cc} 3 & 8 \\ 8 & 9 \\ \end{array} \right)$$ Using the matrix addition formula, the sum of the matrices $A$ and $B$ is the matrix $$A+B=\left( \begin{array}{cc} 5+3 & 8+8 \\ 3+8 & 8+9 \\ \end{array} \right)=\left( \begin{array}{cc} 8 & 16 \\ 11 & 17 \\ \end{array} \right)$$
Matrices $A$ and $B$ can be subtracted if and only if their sizes are equal. The difference $A-B$ of two $m\times n$ matrices is equal to the sum $A + (-B)$, where $-B$ represents the additive inverse of the matrix $B$. So, the difference is a matrix $C=A-B$ with elements $$c_{ij}=a_{ij}+(-b_{ij})=a_{ij}-b_{ij}$$ The matrix $C$ has the same size as the matrices $A$ and $B$. This means each element in $C$ is equal to the difference between the elements of $A$ and $B$ that are located in corresponding places. For example, $c_{12}=a_{12}-b_{12}$. If two matrices have different sizes, their difference is not defined. The subtraction of matrices is a non-commutative operation, i.e. $A-B\ne B-A$ in general. Subtracting two matrices is very similar to adding matrices, with the only difference being that corresponding elements are subtracted.
A $2\times 2$ matrix has $2$ columns and $2$ rows. For example, the difference between two $2\times 2$ matrices $A$ and $B$ is a matrix $C$ such that $$\begin{align} C=&\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array} \right)- \left( \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \\ \end{array} \right)= \left(\begin{array}{cc} a_{11}-b_{11}& a_{12}-b_{12} \\ a_{21}-b_{21} &a_{22}-b_{22}\\ \end{array}\right)\end{align}$$ Let us apply the $2\times 2$ matrix subtraction formula for finding the difference $A-B$ for $$A=\left( \begin{array}{cc} 5 & 8 \\ 3 & 8 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{cc} 3 & 8 \\ 8 & 9 \\ \end{array} \right)$$ Therefore, the difference between the matrices $A$ and $B$ is the matrix $$A-B=\left( \begin{array}{cc} 5-3 & 8-8 \\ 3-8 & 8-9 \\ \end{array} \right)=\left( \begin{array}{cc} 2 & 0 \\ -5 & -1 \\ \end{array} \right)$$ The $2\times 2$ Matrix Addition and Subtraction work with steps shows the complete step-by-step calculation for finding the sum and difference of two $2\times2$ matrices $A$ and $B$ using the matrix addition and subtraction formulas. For any other matrices, just supply elements of $2$ matrices whose elements are real numbers and click on the GENERATE WORK button. Grade school students may use this $2\times 2$ Matrix Addition and Subtraction to generate the work, verify the results of addition and subtracting matrices derived by hand, or do their homework problems efficiently. This calculator can be of interest for solving linear equations and some other mathematical and real life problems.
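Both worked examples above can be verified with a few lines of code (a minimal sketch in plain Python):

```python
def add(A, B):
    """Entrywise sum of two equally sized matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def subtract(A, B):
    """Entrywise difference of two equally sized matrices."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[5, 8], [3, 8]]
B = [[3, 8], [8, 9]]
print(add(A, B))       # [[8, 16], [11, 17]]
print(subtract(A, B))  # [[2, 0], [-5, -1]]
```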
Transformations in two-dimensional Euclidean plane can be represented by $2\times 2$ matrices, and ordered pairs (coordinates) can be represented by $2 \times 1$ matrices. Dilation, translation, axes reflections, reflection across the $x$-axis, reflection across the $y$-axis, reflection across the line $y=x$, rotation, rotation of $90^o$ counterclockwise around the origin, rotation of $180^o$ counterclockwise around the origin, etc, use $2\times 2$ matrix operations.
Practice Problem 1 : Find the sum $A+B$ and difference $B-A$ of matrices $$A=\left( \begin{array}{cc} 2 & 10 \\ 15 & -6 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{cc} -13 & -4 \\ 13 & 0 \\ \end{array} \right)$$ Practice Problem 2 : Translate the vertex matrix $\left( \begin{array}{cc} 1 & 3 \\ 5 & -6 \\ \end{array} \right)$ by the matrix $\left( \begin{array}{cc} -1 & 2 \\ -1 & 2 \\ \end{array} \right)$. The 2x2 matrix addition and subtraction calculator, formula, example calculation (work with steps), real world problems and practice problems would be very useful for grade school students (K-12 education) to understand the addition and subtraction of two or more matrices. Using this concept, they will be able to look at real life situations and transform them into mathematical models.
|
I know implication (—>) is used for conditionals like "if x is true then b will be true", but sometimes implication is used in other kinds of sentences. For example, "All A's are B's": ∀X (a(X) ⇒ b(X)). I don't understand why implication is used here. And if implication is necessary here, then why is implication not used in the example written below? "Some A's are B's": ∃X (a(X) & b(X))
When we state in English that "All As are Bs", this means that we gain information: as soon as we observe an A, we can immediately deduce that it must also be a B. These are the kinds of situations where we use an implication. So, this would be written in formal logic as:
$$\forall X \left( A(X) \rightarrow B(X) \right)$$
When we state in English that "Some As are Bs", we do not gain any new information just from observing that something is an A; we cannot deduce anything about that A. It might happen to be one of the As that simultaneously is a B, but it also might happen not to be one of those examples. So, it would be wrong to use an implication here. The only information that the English sentence gives us is that there is at least one thing somewhere that happens to be an A as well as a B, which is written formally as:
$$\exists X (A(X) \land B(X))$$
Suppose that we would have written the following in logic:
$$\exists X (A(X) \rightarrow B(X))$$
This would be translated to English as follows:
There exists some $X$ such that,
if it is an $A$, it is also a $B$.
That bolded part is very important there. Note that this logical statement is also true as soon as I find one example $X$ that is
not an $A$. For example, the following statement is true in the real world:
There exists some human $X$ such that, if $X$ can fly, $X$ can also shoot fireballs from his or her hands.
(this is true in the real world, because I can come up with many examples of humans who cannot fly)
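A small finite-domain check makes the difference concrete. The domain and predicates below are invented purely for illustration:

```python
# Evaluating the two renderings of "Some As are Bs" over a tiny finite
# domain. The domain and predicates are made up for this example.
domain = ["ant", "bee", "rock"]
A = {"ant", "bee"}   # A(x): x is an animal
B = {"rock"}         # B(x): x is a rock -- so no A is a B

# Correct rendering: exists x with A(x) AND B(x).
some_A_are_B = any(x in A and x in B for x in domain)

# Mistranslation: exists x with A(x) -> B(x), i.e. (not A(x)) or B(x).
# "rock" satisfies the implication vacuously, so this comes out true
# even though no A is actually a B.
exists_implication = any(x not in A or x in B for x in domain)

print(some_A_are_B, exists_implication)  # False True
```

The vacuously-true witness ("rock") plays exactly the role of the non-flying human in the fireball example.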
|
Motivation:
Rich Sutton’s Bitter Lesson for AI essentially argues that we should focus on meta-methods that scale well with compute instead of trying to understand the structure and function of biological minds. The latter, according to him, are
endlessly complex and therefore unlikely to scale. Furthermore, Rich Sutton (who works at Deep Mind) considers Deep Mind’s work on AlphaGo an exemplary model of AI research.
After reading Shimon Whiteson’s detailed rebuttal as well as David Sussillo’s reflection on loss functions, I think it’s time to re-evaluate the scientific value of AlphaGo Zero research and the carbon footprint of a win-at-any-cost research culture. In particular, I’d like to address the following questions:
What kinds of real problems can be solved with AlphaGo Zero algorithms? What is the true cost of AlphaGo Zero? (i.e. the carbon footprint) Does Google’s carbon offsetting scheme accomplish more than virtue signalling? Should AI researchers be noting their carbon footprint in their publications? Finally, might energetic constraints present an opportunity for better AI research?
I haven’t seen these questions addressed in one manuscript, but I believe they are related and timely, hence this article. Now, I’d like to add that we can’t seriously entertain notions of
safe AI without carefully developing environmentally friendly AI, especially when the only thing that we know for certain is that we will do exponentially more FLOPs (i.e. computations) in the future.
AlphaGo Zero’s contribution to humanity:
Before measuring the carbon footprint of AlphaGo Zero it’s a good idea to remind ourselves of the types of environments this ‘meta-method’ can handle:
- The environment must have deterministic dynamics, which simplifies planning considerably.
- The environment must be fully observable, which rules out large and complex environments.
- A perfect simulator must be available to the agent, which rules out any biologically plausible environments.
- Evaluation is simple and objective: win/lose. For biological organisms all rewards are internal and subjective.
- Static state-spaces and action-spaces: so we can’t generalise… not even to Go where …
These constraints effectively rule out the application of AlphaGo Zero’s algorithms to any practical problem in robotics because perfect simulators are non-existent. But, it may be used to solve any two-person board game which is a historic achievement and a great publicity stunt for Google assuming that the carbon footprint of this project is reasonable. This consideration is doubly important when you take into account the influence of Deep Mind on the modern AI research culture.
However, before estimating the metric tonnes of CO2 blasted into the atmosphere by Deep Mind let’s consider a related question. How much would it cost an entity outside of Google to replicate this type of research?
The cost of AlphaGo Zero in US dollars:
In ‘Mastering the game of Go without human knowledge’ [2] they had both a three day experiment as well as a forty day experiment. Let’s start with the three day experiment.
The three day experiment: Over 72 hours, ~ 5 million games were played. Each move of self-play used ~ 0.4 seconds of computer time and each self-play machine consisted of 4 TPUs. How many self-play machines were used?
If the average game has 200 moves we have:
\begin{equation} \frac{72\cdot 60 \cdot 60 \cdot N_{SP}}{200 \cdot 5 \cdot 10^6} \approx 0.4 \implies N_{SP} \approx 1500 \end{equation}
Given that each self-play machine contained 4 TPUs we have:
\begin{equation} N_{TPU} = 4 \cdot N_{SP} \approx 6000 \end{equation}
If we use Google’s TPU pricing as of March 2019, the cost for an organisation outside of Google to replicate this experiment is therefore:
\begin{equation} \text{Cost} > 6000 \cdot 72 \cdot 4.5 \approx 2 \cdot 10^6 \quad \text{US dollars} \end{equation}
The forty day experiment:
For the forty day experiment one thing that’s different is that the policy network has twice as many layers so, as Dan Huang pointed out in his article, it’s reasonable to infer that twice the amount of time was used per move. So ~ 0.8 seconds rather than ~ 0.4 seconds.
Over 40 days, ~ 29 million games were played. Each move of self-play used ~ 0.8 seconds of computer time where each self-play machine consisted of 4 TPUs. How many self-play machines were used?
If the average game has 200 moves we have:
\begin{equation} \frac{40 \cdot 24 \cdot 3600 \cdot N_{SP}}{200 \cdot 29 \cdot 10^6} \approx 0.8 \implies N_{SP} \approx 1300 \end{equation}
Given that each self-play machine contained 4 TPUs we have:
\begin{equation} N_{TPU} = 4 \cdot N_{SP} \approx 5000 \end{equation}
If we use Google’s TPU pricing as of March 2019, the cost for an organisation outside of Google to replicate this experiment is therefore:
\begin{equation} \text{Cost} > 5000 \cdot 960 \cdot 4.5 \approx 2 \cdot 10^7 \quad \text{US dollars} \end{equation}
It goes without saying that this is well outside the budget of any AI lab in academia.
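The two estimates above can be reproduced in a few lines of Python. The $4.5/TPU-hour price and the 200-moves-per-game figure are the assumptions already stated in the text:

```python
# Back-of-the-envelope replication of the machine-count and cost
# estimates above.
TPU_PRICE = 4.5  # USD per TPU-hour (Google Cloud, March 2019, as assumed)

def self_play_machines(hours, games, sec_per_move, moves_per_game=200):
    # total machine-seconds of self-play / wall-clock seconds available
    return games * moves_per_game * sec_per_move / (hours * 3600)

n3 = self_play_machines(hours=72, games=5e6, sec_per_move=0.4)
n40 = self_play_machines(hours=40 * 24, games=29e6, sec_per_move=0.8)
print(round(n3), round(n40))  # 1543 1343 -- i.e. ~1500 and ~1300 machines

cost3 = 6000 * 72 * TPU_PRICE    # ~2e6 USD (6000 TPUs for 72 hours)
cost40 = 5000 * 960 * TPU_PRICE  # ~2e7 USD (5000 TPUs for 960 hours)
print(cost3, cost40)  # 1944000.0 21600000.0
```

Both machine counts round to the ~1500 and ~1300 self-play machines derived in the equations above.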
The true cost of Google’s 40 day experiment:
This in my opinion is the more important calculation. While it’s not at all clear that AI research will ‘save the world’ in the long term, in the short term what is certain is that compute-intensive AI experiments have a non-trivial carbon footprint. So I think it would be wise to use our energy budget carefully and, realistically, the only way to do this is to calculate the carbon footprint of any AI research project and place it on the front page of your research paper. Meanwhile, let’s proceed with the calculation.
The nature of this calculation involves first converting TPU-hours into kilowatt-hours (kWh) and then converting this value to metric tonnes of CO2:
~5000 TPUs were used for 960 hours. ~40 Watts per TPU according to [6].
This means that we have:
\begin{equation} \text{kWh} = \frac{5000 \cdot 960 \cdot 40}{1000} \approx 1.9 \cdot 10^5 \end{equation}
This is approximately 23 American homes’ electricity for a year according to the EPA.
In the USA, where Google Cloud TPUs are located, we have ~0.5 kg of CO2/kWh, so AlphaGo Zero was responsible for releasing approximately 96 tonnes of CO2 into the atmosphere.
To appreciate the significance of 96 tonnes of CO2 over 40 days: this is approximately equivalent to 1000 hours of air travel, and also approximately the carbon footprint of 23 American homes for a year. Relatively speaking, this is a large footprint for a board game ‘experiment’ that lasts 40 days.
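The conversion chain is short enough to script, using the assumptions above (~40 W per TPU from [6], ~0.5 kg CO2 per kWh in the US):

```python
# Converting TPU-hours to kWh, then to metric tonnes of CO2, with the
# assumptions used in the text: ~40 W per TPU and ~0.5 kg CO2 per kWh.
tpus, hours, watts_per_tpu = 5000, 960, 40
kwh = tpus * hours * watts_per_tpu / 1000   # watt-hours -> kWh
co2_tonnes = kwh * 0.5 / 1000               # kg -> metric tonnes
print(kwh, co2_tonnes)  # 192000.0 96.0
```

That is the ~1.9 × 10⁵ kWh and ~96 tonnes quoted above.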
Is this reasonable? At this point a Googler might start talking to me about Google’s carbon offsetting scheme.
Google’s carbon offsetting scheme:
I don’t have much time for this section because Google’s carbon offsetting scheme is basically a joke but let’s break it down anyway:
According to Google, the Google Cloud is supposedly 100% sustainable because Google purchases an equal amount of renewable energy for the total amount of energy used by their Cloud infrastructure.
If you check the charts of Urs Hölzle, the Senior VP of technical infrastructure at Google, this means that they buy a lot of wind (~92%) and some solar (~8%).
Let’s suppose we can take these points at face value. Does this carbon offsetting scheme actually work out?
David J. C. MacKay, a giant of 20th-century machine learning, would probably be rolling in his grave right now, because he spent the last part of his life carefully assessing the potential contribution of wind and solar to humanity’s energy budget [7]. He was in fact
Scientific Advisor to the Department of Energy and Climate Change, and his essential contribution was to explain that the fundamental limits to wind and solar energy technologies aren’t technological; we are talking about hard physical limits. I will refer the reader to ‘Sustainable Energy - without the hot air’ by David J. C. MacKay, which is freely available online, rather than repeat his thorough calculations here.
Unfortunately, no combination of wind and solar energy can provide energy security for a country with the USA’s energy requirements. In the best case scenario, Google’s carbon offsetting scheme is thinly veiled virtue signalling. What then are the serious clean energy solutions?
Past the year 2050 it’s possible to make a strong case for nuclear fusion as being necessary for human civilisation to continue. Between now and the day we figure out how to engineer reliable nuclear fusion reactors we should use our energy budget wisely.
Boltzmann’s razor:
According to various sources the human brain uses ~20 watts, which is incredibly efficient compared to the 200 kilowatts used by 5000 TPUs. In other words, AlphaGo Zero was ten thousand times less energy efficient than a human being for a comparable result. I don’t see how this is a strong argument for scalability at all.
The human brain isn’t an outlier. All biological organisms are energy efficient because they must first survive the second law of thermodynamics which is a minimum energy principle. Now, there are two ways organisms perform computations in an economical manner that I am aware of:
Morphological computation:
a. If you check the work of Tad McGeer [8] you will realise that it’s possible to build a walking robot without any electronics that simply exploits the laws of classical mechanics. It does computations by virtue of having a body. Some researchers might say that this is an instance of embodied cognition [12].
b. Romain Brette and his collaborators have been working on a project that involves a
swimming neuron. This is an organism, the Paramecium, that has a single cell yet it’s capable of navigation, hunting, and procreation in very complex environments. How does the Paramecium do this? What is the reward function? Is it doing reinforcement learning?
The role of development:
a. If you consider any growing organism you will realise that its state space and action space are rapidly changing. This should make learning very hard. Yet, development is in some sense a form of curriculum learning and makes learning simpler.
b. I must add that during development the brain of the organism is rapidly changing. Shouldn’t this make learning impossible?
Morphospaces and developmental trajectories are fundamentally physical considerations. In some fundamental way organisms succeed in reorganizing physics locally. Termites in the desert construct mounds whose physical behavior is consistent with but not reducible to the physics of sand. Birds build nests whose physics isn’t reducible to its constituent parts. The resulting systems do
computations in an economical manner by taking thermodynamics into account.
This is why energy efficiency is both a challenge and an opportunity. It will force researchers to recognize the importance of understanding the biophysics of organisms at every scale where such biophysics contributes to survival. If I may distill this into a single principle, I would call it
Boltzmann’s razor: Given two comparably effective intelligent systems, focus on the research and development of the one which consumes less energy.
Naturally, the more economical system would be capable of accomplishing more tasks given the same amount of energy.
Discussion:
Of the AI researchers I have discussed the above issues with I noted a bimodal distribution. Roughly 30% agreed with me and roughly 70% pushed back really hard. Among the counter-arguments of the second group I remember the following:
- If you force AI researchers to reduce their carbon footprint you will kill AI research.
- Why do you care about what Google does? It’s their own money and they can do whatever they want with it.
- You’re not a real AI researcher anyway. Why do you care about things outside your field?
I think these are all terrible arguments. Regarding the ad hominem, like many masters students I’m 1.5 years away from starting a PhD. I have already met a potential PhD supervisor that I have been in touch with since 2017. I will add that last year I worked as a consultant on an object detection project where I engineered a state-of-the-art object detection system inspired by Polygon RNN for a Central European computer vision company using only one NVIDIA GTX 1080 Ti [10]. Part of this system is on Github.
So not only do I know what I’m talking about but I have experience building reliable systems in a resourceful manner. In fact, resourcefulness is a direct implication of
Boltzmann’s razor.
References:
R. Sutton. The Bitter Lesson. 2019.
D. Silver et al. Mastering the game of Go without human knowledge. 2017.
A. Karpathy. AlphaGo, in context. 2017.
D. Huang. How much did AlphaGo Zero cost? 2018.
The Twitter thread of Shimon Whiteson: https://twitter.com/shimon8282/status/1106534178676506624
This tweet by David Sussillo: https://twitter.com/SussilloDavid/status/1106643708626137089
Google Inc. In-Datacenter Performance Analysis of a Tensor Processing Unit. 2017.
D. J. C. MacKay. Sustainable Energy - without the hot air. 2008.
T. McGeer. Passive Dynamic Walking. 1990.
L. Castrejon et al. Annotating Object Instances with a Polygon-RNN. 2017.
D. B. Chklovskii & C. F. Stevens. Wiring optimization in the brain. 2000.
G. Montufar et al. A Theory of Cheap Control in Embodied Systems. 2014.
|
Image Processing
Image editing software like Photoshop provides many image effects, such as blur, sharpen, and edge detection. These effects are commonly implemented through a kernel, a matrix describing a certain image effect. The kernel is applied to an image through convolution: flipping both the rows and columns of the kernel, then, at each position, multiplying the overlapping entries and summing the products.
For example, let’s say we have the $4 \times 4$ image $ \begin{bmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{bmatrix} $ and the $2 \times 2$ kernel $ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}. $
After convolution, the resulting $3 \times 3$ image would be:\[ \begin{bmatrix} 4a+3b+2e+1f & 4b+3c+2f+1g & 4c+3d+2g+1h \\ 4e+3f+2i+1j & 4f+3g+2j+1k & 4g+3h+2k+1l \\ 4i+3j+2m+1n & 4j+3k+2n+1o & 4k+3l+2o+1p \end{bmatrix} \]
A real-world example of this is the blur effect:
\[ \Rightarrow \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \Rightarrow \]
Photo by Michael Plotke cc by-sa 3.0
Input
The first line of the input contains four integers, $H$, the height of the image, $W$, the width of the image, $N$, the height of the kernel, and $M$, the width of the kernel ($1 \leq H \leq 20$, $1 \leq W \leq 20$, $1 \leq N \leq H$, $1 \leq M \leq W$).
Each of the next $H$ lines contains $W$ integers, between $0$ and $100$ inclusive, representing the image.
Each of the next $N$ lines contains $M$ integers, between $0$ and $100$ inclusive, representing the kernel.
Output
Output the resulting image after convolution, consisting of $H-N+1$ lines, each with $W-M+1$ integers.
Sample Input 1
4 4 2 2
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
1 2
3 4

Sample Output 1
26 36 46
66 76 86
106 116 126
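The problem statement does not prescribe a language; here is a straightforward Python sketch of the convolution and the I/O handling (function names are my own):

```python
def convolve(image, kernel):
    # Flip the kernel along both axes, then slide it over the image,
    # summing the products of the overlapping entries.
    flipped = [row[::-1] for row in kernel[::-1]]
    H, W = len(image), len(image[0])
    N, M = len(kernel), len(kernel[0])
    return [[sum(flipped[r][c] * image[i + r][j + c]
                 for r in range(N) for c in range(M))
             for j in range(W - M + 1)]
            for i in range(H - N + 1)]

def solve(text):
    # Parse H, W, N, M, then the image rows, then the kernel rows.
    data = list(map(int, text.split()))
    H, W, N, M = data[:4]
    body = data[4:]
    image = [body[i * W:(i + 1) * W] for i in range(H)]
    kernel = [body[H * W + i * M:H * W + (i + 1) * M] for i in range(N)]
    return "\n".join(" ".join(map(str, row))
                     for row in convolve(image, kernel))

# As a judge submission one would run:
#   import sys; print(solve(sys.stdin.read()))
```

On the sample above, `solve` reproduces the expected 3 × 3 output (the top-left entry is 4·1 + 3·2 + 2·5 + 1·6 = 26).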
|
The mass of the Moon is 7.342×10²² kg. One ton is 10³ kg. How much is thousands of tons? Let's say you have thousands of thousands of tons. That's one million tons, or 10⁹ kg. This is still some ten thousand billion times less than the mass of the Moon.
(source: Diego Delso)
Just as a comparison, one of the largest mines that ever operated on Earth, Chuquicamata in Chile, produced much more than "thousands of tons":
...it remains the mine with by far the largest total production of approximately 29 million tonnes of copper to the end of 2007...
And even though it's a big hole in the ground, it is completely negligible relative to the mass of the Earth (or the Moon for that matter).
To answer your question:
Does mining huge amounts of resources on Moon will change its orbit?
The answer is
no.
EDIT - the below is wrong because I don't know physics as much as I thought
Let's assume you mined 100 Chuquicamatas on the Moon and removed this mass to build your colonisation fleet. Let's ignore effects of momentum and potential energies etc. The orbital velocity is defined by $v=\sqrt{GM/r}$, where $M$ is the mass. Assuming a constant orbital radius, the ratio of new to old velocity is $v'/v = \sqrt{M_0 / M_1}$.
Let's plug in some numbers:
$v' = \sqrt{\frac{7.342×10^{22}}{7.342×10^{22} - 100\times26×10^6}} = 1.0000000000000178$
The Moon's orbit is going to be 1.0000000000000178 times faster.
EDIT - so let's change it a bit.
Since $M$ in the equation $v=\sqrt{GM/r}$ is the mass of Earth, the mass of the Moon means nothing. Then the velocity to radius ratio is fixed regardless. It will not change a thing!
However, we can still calculate the mass change percentage. It will be:
$\frac{7.342×10^{22} - 100\times26×10^6}{7.342×10^{22}} = 0.99999999999996458$
The Moon's new mass will be 99.999999999996458% of the Moon's old mass. Completely meaningless.
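The mass-change ratio can be reproduced directly, using the figures exactly as they appear above (100 Chuquicamata-scale mines at 26×10⁶ each, plugged into the formula as written, against the Moon's mass in kg):

```python
# Reproducing the mass-change ratio above, with the figures exactly as
# written in the text (no unit conversion applied, matching the formula).
M_MOON = 7.342e22
MINED = 100 * 26e6
ratio = (M_MOON - MINED) / M_MOON
print(ratio)  # ~0.99999999999996
```

The difference from 1 is on the order of 10⁻¹⁴, which is why the answer calls it completely negligible.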
Just as an aside, there is often talk of mining asteroids, moons, and all kinds of other extraterrestrial bodies. For example, as you said, the Moon has plenty of helium-3. Asteroids have plenty of precious metals like platinum or iridium. But this is pointless, because you know what place has even more helium, platinum, and iridium? Earth. Earth has much more. And it's easier to mine because you don't need to build spaceships and facilities in hostile environments to do it.
EDIT 2 - space mining
Some comments mentioned that getting things out of the Moon is easier than it is on Earth, and that you can dump things on the Moon without environmental impact. It doesn't work like this.
In films and video games you "mine a resource" (e.g. iridium) from a planetary body. In real life, you build the mining facilities, you build the refining and smelting facilities, you need people to do it, even if you have robots you need people to fix the robots, you need to feed the people, entertain the people, you need a constant supply of consumables to refine the stuff. And this is only the "resource". You don't build spaceships out of helium. You build them out of steel/aluminium/carbon-composites. So you need to mine that as well, and you need to smelt that as well. You also need to build everything on the Moon because otherwise you need to transport all your resources to another place, so you need factories. To do all of that, you are going to need quite a lot of population. And then the environmental factor becomes important. Your mining will generate huge amounts of dust (combination of dryness and low gravity). This just doesn't work.
|
Consider $k$ Hermitian matrices $A_1,\ldots,A_k$ of dimension $d \times d$. Their joint numerical range (JNR) $L(A_1,\ldots,A_k)$ is defined as $$ L(A_1,\ldots,A_k) = \left\{ \left( \operatorname{Tr} \rho A_1,\ldots, \operatorname{Tr} \rho A_k \right): \rho \in \Omega_d \right\}. $$ Because $\rho$ ranges over all mixed states, the joint numerical range is a convex body in a $k$-dimensional space.
This classification comes from [1]. Consider two Hermitian matrices $A_1$ and $A_2$ of size $3 \times 3$. Then there are four possible shapes of the JNR.
The classification for three matrices is taken from [2] (see there for details). Such JNRs must obey a set of rules, and all configurations permitted by these rules are realized (see Figure 2 of [2]). Let us denote by $e$ the number of ellipses in the boundary and by $s$ the number of segments.
Additionally, in the qutrit case further configurations of the JNR are possible.
|
I have exciting news today: The first ever joint paper by Monks, Monks, Monks, and Monks has been accepted for publication in Discrete Mathematics.
These four Monks’s are my two brothers, my father, and myself. We worked together last summer on the notorious $3x+1$ conjecture (also known as the Collatz conjecture), an open problem which is so easy to state that a child can understand the question, and yet it has stumped mathematicians for over 70 years.
The $3x+1$ problem asks the following: Suppose we start with a positive integer, and if it is odd then multiply it by $3$ and add $1$, and if it is even, divide it by $2$. Then repeat this process as long as you can. Do you eventually reach the integer $1$, no matter what you started with?
For instance, starting with $5$, it is odd, so we apply $3x+1$. We get $16$, which is even, so we divide by $2$. We get $8$, and then $4$, and then $2$, and then $1$. So yes, in this case, we eventually end up at $1$.
That’s it! It’s an addictive and fun problem to play around with (see http://xkcd.com/710/), but it is frustratingly difficult to solve completely.
So far, it is known that all numbers less than $20\cdot 2^{58}$, or about $5.8$ billion
billion, eventually reach $1$ when this process is applied. (See Silva’s and Roosendaal’s websites, where calculations are continually being run to verify the conjecture for higher and higher integers.)
But what about the general case?
Let’s define the Collatz function $T$ on the positive integers by:
$T(x)=\begin{cases}\frac{3x+1}{2} & x\text{ is odd} \\ \frac{x}{2} & x\text{ is even}\end{cases}$.
Then the conjecture states that the sequence $x, T(x), T(T(x)), T(T(T(x))),\ldots$ has $1$ as a term. Notice that $T(1)=2$ and $T(2)=1$, so $1$ is a cyclic point of $T$.
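A quick computational sketch of $T$ and its orbits (Python, chosen here just for illustration):

```python
def T(x):
    # The Collatz function: (3x+1)/2 for odd x, x/2 for even x.
    return (3 * x + 1) // 2 if x % 2 else x // 2

def orbit(x):
    # Follow x, T(x), T(T(x)), ... until the orbit first reaches 1.
    seq = [x]
    while seq[-1] != 1:
        seq.append(T(seq[-1]))
    return seq

print(orbit(7))  # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```

One can also use this to sanity-check the mod-9 result mentioned later in the post, though for orbits that reach $1$ the check is automatic: the only preimage of $1$ under $T$ is $2$, and $2 \equiv 2 \pmod 9$, so the theorem's real content concerns orbits not yet known to reach $1$.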
We can draw a graph on the positive integers in which we connect $x$ to $T(x)$ with an arrow for all $x$, and color it red if $x$ is odd and black if $x$ is even. The portion of the graph near $1$ looks like this:
We just want to show that this graph is connected – that there are no other components with very large integers that don’t connect back to $1$.
Last summer, we started out with some previous ideas and partial progress. In 2006, one member of our family research team, my brother Ken M. Monks, demonstrated that it suffices to prove the conjecture for some arithmetic sequence in the Collatz graph. (The paper is available here.) With this as a starting point, we worked to understand how arithmetic sequences are distributed across the Collatz graph. Where do they occur? How far apart are their elements?
By the end of the summer, we realized that there was some beautiful structure in the Collatz graph lying before us. We proved several surprising number-theoretic results, for instance, that every $T$-orbit must contain an integer congruent to $2$ modulo $9$.
We’re not sure if this will lead to a proof of the conjecture, but we found a few big gemstones that might give us a boost in the right direction. A preprint of the paper can be found here:
Enjoy, and feel free to post a comment with any ideas on the conjecture you may have!
|
Kakeya problem
Define a
Kakeya set to be a subset [math]A[/math] of [math][3]^n\equiv{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, I also found [math]k_3=13[/math] and [math]k_4\le 27[/math]. I suspect that, indeed, [math]k_4=27[/math] holds (meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements), and I am very curious to know whether [math]k_5=53[/math]: notice the pattern in
[math]3,7,13,27,53,\ldots[/math]
We have the trivial inequalities
[math]k_n\le k_{n+1}\le 3k_n.[/math]
The Cartesian product of two Kakeya sets is another Kakeya set; this implies that [math]k_{n+m} \leq k_m k_n[/math]. This implies that [math]k_n^{1/n}[/math] converges to a limit as n goes to infinity.
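The small values [math]k_1=3[/math] and [math]k_2=7[/math] quoted above can be confirmed by exhaustive search. A Python sketch (the language is chosen just for illustration; naive enumeration becomes impractical beyond [math]n=2[/math]):

```python
from itertools import combinations, product

def is_kakeya(A, n):
    # A is Kakeya if it contains a full line {a, a+d, a+2d} for every
    # direction d != 0; d and 2d give the same lines, so we test one
    # representative per direction class.
    pts = set(A)
    dirs, seen = [], set()
    for d in product(range(3), repeat=n):
        if d == (0,) * n or d in seen:
            continue
        seen.add(d)
        seen.add(tuple((2 * x) % 3 for x in d))
        dirs.append(d)
    return all(
        any(all(tuple((a[i] + k * d[i]) % 3 for i in range(n)) in pts
                for k in range(3))
            for a in product(range(3), repeat=n))
        for d in dirs)

def smallest_kakeya_size(n):
    # Exhaustive search over subsets of F_3^n in increasing size.
    points = list(product(range(3), repeat=n))
    for size in range(1, 3 ** n + 1):
        for A in combinations(points, size):
            if is_kakeya(A, n):
                return size
```

Here `smallest_kakeya_size(2)` returns 7, confirming that no 6-element Kakeya set exists in [math]{\mathbb F}_3^2[/math].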
General lower bounds
Dvir, Kopparty, Saraf, and Sudan showed that [math]k_n \geq 3^n / 2^n[/math].
We have
[math]k_n(k_n-1)\ge 3(3^n-1)[/math]
since for each [math]d\in {\mathbb F}_3^n\setminus\{0\}[/math] there are at least three ordered pairs of elements of a Kakeya set with difference [math]d[/math]. (I actually can improve the lower bound to something like [math]k_n\gg 3^{0.51n}[/math].)
For instance, we can use the "bush" argument. There are [math]N := (3^n-1)/2[/math] different directions. Take a line in every direction, let E be the union of these lines, and let [math]\mu[/math] be the maximum multiplicity of these lines (i.e. the largest number of lines that are concurrent at a point). On the one hand, from double counting we see that E has cardinality at least [math]3N/\mu[/math]. On the other hand, by considering the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that E has cardinality at least [math]2\mu+1[/math]. If we minimise [math]\max(3N/\mu, 2\mu+1)[/math] over all possible values of [math]\mu[/math] one obtains approximately [math]\sqrt{6N} \approx 3^{(n+1)/2}[/math] as a lower bound of |E|, which is asymptotically better than [math](3/2)^n[/math].
Or, we can use the "slices" argument. Let [math]A, B, C \subset ({\Bbb Z}/3{\Bbb Z})^{n-1}[/math] be the three slices of a Kakeya set E. We can form a graph G between A and B by connecting A and B by an edge if there is a line in E joining A and B. The restricted sumset [math]\{a+b: (a,b) \in G \}[/math] is essentially C, while the difference set [math]\{a-b: (a,b) \in G \}[/math] is all of [math]({\Bbb Z}/3{\Bbb Z})^{n-1}[/math]. Using an estimate from this paper of Katz-Tao, we conclude that [math]3^{n-1} \leq \max(|A|,|B|,|C|)^{11/6}[/math], leading to the bound [math]|E| \geq 3^{6(n-1)/11}[/math], which is asymptotically better still.
General upper bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
Question: can the upper bound be strengthened to [math]k_{n+1}\le 2k_n+1[/math]?
Another construction uses the "slices" idea and a construction of Imre Ruzsa. Let [math]A, B \subset [3]^n[/math] be the set of strings with [math]n/3+O(\sqrt{n})[/math] 1's, [math]2n/3+O(\sqrt{n})[/math] 0's, and no 2's; let [math]C \subset [3]^n[/math] be the set of strings with [math]2n/3+O(\sqrt{n})[/math] 2's, [math]n/3+O(\sqrt{n})[/math] 0's, and no 1's, and let [math]E = \{0\} \times A \cup \{1\} \times B \cup \{2\} \times C[/math]. From Stirling's formula we have [math]|E| = (27/4 + o(1))^{n/3}[/math]. Now I claim that for most [math]t \in [3]^{n-1}[/math], there exists an algebraic line in the direction (1,t). Indeed, typically t will have [math]n/3+O(\sqrt{n})[/math] 0s, [math]n/3+O(\sqrt{n})[/math] 1s, and [math]n/3+O(\sqrt{n})[/math] 2s, thus [math]t = e + 2f[/math] where e and f are strings with [math]n/3 + O(\sqrt{n})[/math] 1s and no 2s, with the 1-sets of e and f being disjoint. One then checks that the line [math](0,f), (1,e), (2,2e+2f)[/math] lies in E.
This is already a positive fraction of directions in E. One can use the random rotations trick to get the rest of the directions in E (losing a polynomial factor in n).
Putting all this together, I think we have
[math](3^{6/11} + o(1))^n \leq k_n \leq ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207\ldots+o(1))^n \leq k_n \leq (1.88988+o(1))^n[/math]
|
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724
Like this:
[/url][/wiki][/url]
[/wiki]
[/url][/code]
Many different combinations work. To reproduce, paste the above into a new post and click "preview".
x₁=ηx
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
I wonder if this works on other sites? (Remove/Change )
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Related:[url=http://a.com/]
[/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
A for awesome
Saka wrote:
Related:
[
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places.
Here, I'll fix it:
[/wiki][url]conwaylife.com[/url]
A for awesome
It appears I fixed @Saka's open <div>.
toroidalet Posts: 1018 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
A for awesome
toroidalet wrote:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka
Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones eith unique effects popped up.
Appart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these: -Adian Mode: A combination of url, wiki, and code tags that leaves the page shaterred in pieces. Future replies are large and centered, making the page look somewhat old-ish. -Saka Quote: A combination of a dilluted Aidan Mode and quotes, leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages. -Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes bottom bar to the side. Signature gets coded. -Aidan Superqoute: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by software. Leaves the rest of the page white and quotes. Replies and post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.
Last edited by Saka
on June 21st, 2017, 10:51 pm, edited 1 time in total.
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
I actually laughed at the terminology.
"IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways,
[/wiki]
I like making rules
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it...
-Fluffykitty Pusher
Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Screenshot?
New one yay.
-Aidan Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
Last edited by Saka
on June 21st, 2017, 10:20 pm, edited 1 time in total.
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
The testing grounds have now become similar to actual military testing grounds.
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
We also have this thread. Also,
is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]
Last edited by fluffykitty
on June 22nd, 2017, 11:50 am, edited 1 time in total.
I like making rules
83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact:
oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar.
EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale.
Code: Select all
x = 8, y = 10, rule = B3/S23
3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo!
No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this
^
What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this
(Sorry I'm having a lot of fun with this)
^
What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy
Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]
fg
Moosey Posts: 2482 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
[/code]
Is a pinch broken
Doesn’t this thread belong in the sandbox?
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm
Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Moosey Posts: 2482 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Now it's half an aidan mode testing grounds.
Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
|
Using TeX Notation
Note: You are currently viewing documentation for Moodle 2.0.
While there are quite a few notational systems employed for the purpose of representing Math notation, Moodle core provides a TeX filter that can be configured to employ identified latex, dvips and convert binaries to display a gif or png representation of a Tex expression (and hopefully will soon be able to take advantage of newer Tex distributions which will rely on latex and dvipng.) The Moodle core Tex filter falls back to the use of MimeTex if these binaries can't be located. Note that the core TeX filter is not the only way to display Tex expressions in Moodle and the discussion of Mathematics tools address a variety of other solutions. Additionally, the results obtained from the Moodle Tex or algebra filters are dependent on the Tex binaries you have installed.
If you use the Moodle native TeX Notation, you have to realize that this is not the only way of using TeX in Moodle, and there are quite a few other "flavors" in the TeX world. You must also accept that the Moodle implementation of TeX is very limited, and a lot of things that work in other varieties of TeX and LaTeX will not work in TeX Notation. For example, there are three major TeX modes, but the Moodle core TeX filter employs only one. To make matters even more confusing, Moodle Docs now use TeX Live, which uses the delimiters <math>statement</math> to denote TeX statements, yet these pages demonstrate the use of the $$ statement $$ token that implements TeX in Moodle's native TeX Notation. Essentially, what may work in one TeX implementation may not work in another - yet a lot of the actual maths coding is exactly the same, no matter how it is denoted.
TeX itself is felt by some to present a significant learning curve, and the internet offers a number of tutorials. A.J. Hildebrand, a Math professor at UIUC offers resources and a tutorial here that you may find helpful. However, the basics of TeX can be mastered quite quickly.
There are a number of Maths tools available and probably one of the more useful tools for using Tex in Moodle (or elsewhere for that matter), is Dragmath which will allow you to use a GUI constructor to build your expression, and then insert it in the format you choose.
There is now, for Moodle 2.x, an Advanced Maths Tools plugin.
Contents
NOTICE
The discussion in these pages is centred entirely upon using the TeX Notation filter in Moodle. The code examples given are proven to work inside Moodle, but much of the discussion that addresses Tex syntax can be generalized to other applications that parse Tex. So far, all items tested work in both Moodle 1.9.x and Moodle 2.0 without change or further refinement.
Using TeX Notation with the Moodle Tex filter
For the most part, the TeX Notation has been built using a sub-set of characters from the TeX "default" character set. The trouble is there does not seem to be a "default" character set for TeX. This is one of the most confusing aspects of using TeX Notation in Moodle. When we realise that the documentation we are using is related to the creation of printed documents, and we want to use TeX on line, in Moodle, then further problems occur. There are no environment statements to be made. There are few \begins and \ends. If you go to Administration > Modules > Filters > Filter Manager you will see what filters have been enabled. If you then go to the TeX Notation page, the default preamble is editable via the text box. Using this tool you can add in or subtract font packages and other packages, change the default font package, etc.
Language Conventions
To invoke the TeX filter, use the $$ symbols to open and close statements. To invoke a particular command or control sequence, use the backslash, \. A typical control sequence looks like:
$$ x\ =\ \frac{\sqrt{144}}{2}\ \times\ (y\ +\ 12) $$
This renders a fraction and a square root. Additional spaces are placed into the equation by using the \ without a trailing character. Escape characters, of which there are a few in Moodle's TeX Notation, have the \ in front of them. These are usually set aside for reserved characters. NOTE: it also appears that different Moodles will produce different results with regard to spacing, so it may require a bit of trial and error to get right.
Available Characters
There seem to be a number of differences between the characters that are available in TeX Notation and those described in MimeTeX. There are also great discrepancies between what the TeX and LaTeX manuals tell you is available and what actually is. If you are using such manuals or web sites, they are likely to be more confusing than helpful. Using TeX Notation 2 shows a lot of what is available, but not all.
Windows and TeX
Using TeX in Windows is simpler than it used to be. Download and install a TeX for Windows program, like MikTeX. (MikTeX is probably the most useful at this point in time for Windows Users.) Also, a graphics package is required for rendering the scripted TeX statements into images, and probably the most readily available and easily installable program useful for this purpose would be ImageMagick. While there may be better programs available for these purposes, these are the most immediately useful.
Recommendation: The programs are easily installed and configured. Be aware, however, that for XP you can use the Program Files folder to install the graphics program, but not the texmf folder; it should be installed as a stand-alone folder. It has been noted that in Windows Vista or Windows 7 you should never use system folders for LaTeX distributions, GhostScript or ImageMagick.
Known Issue
The introduction of PHP 5.3 caused some TeX rendering to break in Moodle. The flaw was traced back to the way deprecated functions were reported. Moodle Tracker responded with this:
In file
filter/tex/pix.php line 29: error_reporting(E_ALL);
corrupts TeX images with php 5.3.X and there are at least 2 easy ways to fix it:
1) To totally remove line 29 and error reporting from that file
2) To change line 29 to:
error_reporting(E_ALL & ~E_DEPRECATED);
so that php 5.3.X does not show the deprecated functions warnings that otherwise would prevent showing of TeX images.
Reserved Characters and Keywords
Most characters and numbers on the keyboard can be used at their default value. As with any computing language, though, there are a set of reserved characters and keywords that are used by the program for its own purposes. TeX Notation is no different, but it does have a very small set of Reserved Characters. This will not be a complete list of reserved characters, but some of these are:
@ # $ % ^ & * ( ) .
To use these characters in an equation just place the \ in front of them \$ or \%. If you want to use the backslash, just use \backslash. The only exception here seems to be the &, ampersand. See the characters listed in Using TeX Notation 2 for more details.
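For instance (an illustrative statement of ours, not from the official docs; the exact rendering depends on your TeX binaries), escaped reserved characters can be mixed freely inside a statement:

$$ 25\% \ of \ \$16 \ = \ \$4 $$

Without the backslashes, the % and $ characters would be taken at their reserved values instead of being printed.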
The keywords are different. There is only one that is of concern here, and that is the word "line". If the \line control sequence, or command, is not properly implemented, then the consequences can be indeterminate. Controlling lines is an adventure of its own, so getting it right when written is important; you can always reposition the line, but you might get it really wrong if you do not use it with some caution.
To use the \line control sequence, go to your text editor, open the filter\lib.php file. In this file look for the array
$tex_blacklist = array(, and in the list of words that follow, you will see the word "\line". Delete the word, with its enclosing single quote marks and trailing comma, from the list.
If you do not have direct access to the server, then the whole thing becomes problematic. You may want to download and install your own Moodle and develop things there rather than on your production site. You can write things out in your own Moodle, render the statement, save the image and then upload it to your production site. If something goes seriously wrong, it is easier to restart your own Moodle than it is your production site.
WARNING: If you get a dimension wrong on a line, it can actually prevent you from creating any new TeX. You will see the offending control sequence in its natural state. Everything that you have written to that point will work, but nothing you have written after that will.
Maths Mode
A full TeX version has three modes, a "text mode", an "inline maths mode" and a "maths display mode" but Moodle seems to stay in "inline maths mode". Perhaps a better description of what happens is that Moodle allows a writer to enter "inline maths mode" when the $$ symbols are written and leave it when the $$ symbol appears a second time. Moodle does not appear to use the "maths display mode". The command sequences beginning with the $$ are not current practice in any major version of TeX, and why they are used and work in Moodle is not an issue for discussion here. Current common practice in most other flavours of TeX uses a different set of command initiation sequences.
Superscripts, Subscripts and Roots
Superscripts are recorded using the caret, ^, symbol. An example for a Maths class might be:
$$ 4^2 \ \times \ 4^3 \ = 4^5 $$
This is a shorthand way of saying: (4 x 4) x (4 x 4 x 4) = (4 x 4 x 4 x 4 x 4) or 16 x 64 = 1024.
Subscripts are similar, but use the underscore character.
$$ 3x_2 \ \times \ 2x_3 $$
This is OK if you want superscripts or subscripts, but square roots are a little different. This uses a control sequence.
$$ \sqrt{64} \ = \ 8 $$
You can also take this a little further, but adding in a control character. You may ask a question like:
$$ If \ \sqrt[n]{1024} \ = \ 4, \ what \ is \ the \ value \ of \ n? $$
Using these different commands allows you to develop equations like:
$$ The \sqrt{64} \ \times \ 2 \ \times \ 4^3 \ = \ 1024 $$
Superscripts, Subscripts and roots can also be noted in Matrices.
Fractions
Fractions in TeX are actually simple, as long as you remember the rules.
$$ \frac{numerator}{denominator} $$
For example:
$$ \frac{5}{10} \ is \ equal \ to \ \frac{1}{2}.$$
With fractions (as with other commands) the curly brackets can be nested, so that, for example, you can implement negative exponents in fractions:
$$\frac {5^{-2}}{3}$$
$$\left(\frac{3}{4}\right)^{-3}$$
$$\frac{3}{4^{-3}}$$
You likely do not want to use $$\frac{3}{4}^{-3}$$, as the exponent is set against the fraction as a whole rather than the denominator.
You can also use fractions and negative exponents in Matrices.
Brackets
As students advance through Maths, they come into contact with brackets. Algebraic notation depends heavily on brackets. The usual keyboard values of ( and ) are useful, for example:
$$ d = 2 \ \times \ (4 \ - \ j) $$
Usually, these brackets are enough for most formulae, but they will not be in some circumstances. Plain brackets do not grow with the content they enclose, so around tall elements such as fractions they look too small. Compare:
$$ 4x^3 \ + \ (x \ + \ \frac{42}{1 + x^4}) $$
with:
$$ 4x^3 \ + \ \left(x \ + \ \frac{42}{1 + x^4}\right) $$
A simple change using the \left( and \right) symbols instead. Note the actual bracket is both named and presented. Brackets are almost essential in Matrices.
Ellipsis
The ellipsis is a simple code:
$$ x_1, \ x_2, \ \ldots, \ x_n $$
A more practical application could be:
Question:
"Add together all the numbers from 1 38. What is an elegant and simple solution to this problem? Can you create an algebraic function to explain this solution? Will your solution work for all numbers?"
Answer: The question uses an even number to demonstrate a mathematical process and generate an algebraic formula.
An algebraic function might read something like:
$$ t \ = \ \frac{n}{2} \ \times \ (n \ + \ 1) $$
Where t = total and n = the last number.
The solution is that, using the largest and the smallest numbers, the numbers are added and then multiplied by the number of different combinations to produce the same result adding the first and last numbers.
The answer depends on the number of pairs, n/2, being a whole number. Therefore, the solution will not work for an odd range of numbers, only an even range.
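The pairing argument can be checked numerically (a quick sketch of ours, not part of the original lesson):

```python
# Sum 1..38 directly, then via the pairing trick:
# (first + last) added n/2 times, i.e. (1 + 38) * (38 / 2).
n = 38
direct = sum(range(1, n + 1))
paired = (1 + n) * (n // 2)
print(direct, paired)  # 741 741
```

The two results agree, and the formula only works this cleanly because n = 38 gives a whole number of pairs.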
|
Union of Right-Total Relations is Right-Total
Jump to navigation Jump to search
Theorem
Let $\mathcal R_1$ and $\mathcal R_2$ be right-total relations.
Then $\mathcal R_1 \cup \mathcal R_2$ is right-total.
Proof
Define the predicates $L$ and $R$ by:
$\map L X \iff \text {$X$ is left-total}$ $\map R X \iff \text {$X$ is right-total}$
\(\displaystyle \map R {\mathcal R_1} \land \map R {\mathcal R_2}\)
\(\displaystyle \leadsto \map L {\mathcal R_1^{-1} } \land \map L {\mathcal R_2^{-1} }\) (Inverse of Right-Total Relation is Left-Total)
\(\displaystyle \leadsto \map L {\mathcal R_1^{-1} \cup \mathcal R_2^{-1} }\) (Union of Left-Total Relations is Left-Total)
\(\displaystyle \leadsto \map L {\paren {\mathcal R_1 \cup \mathcal R_2}^{-1} }\) (Union of Inverse is Inverse of Union)
\(\displaystyle \leadsto \map R {\mathcal R_1 \cup \mathcal R_2}\) (Inverse of Right-Total Relation is Left-Total)
$\blacksquare$
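As an illustrative sketch (the helper name and the finite example are ours, not part of the proof), the theorem can be sanity-checked on small finite relations:

```python
# A relation R ⊆ S × T is right-total over codomain T if every t in T
# appears as the second coordinate of some pair in R.
def is_right_total(R, T):
    return all(any(b == t for (_, b) in R) for t in T)

T = {1, 2, 3}
R1 = {("a", 1), ("a", 2), ("b", 3)}   # right-total over T
R2 = {("c", 1), ("d", 2), ("d", 3)}   # right-total over T

assert is_right_total(R1, T) and is_right_total(R2, T)
assert is_right_total(R1 | R2, T)     # the union is right-total too
```

Of course a finite check is no substitute for the proof above; it only illustrates the statement.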
|
Definition:Polynomial Ring
Contents
1 One Indeterminate
2 Multiple Indeterminates
3 Terminology
4 Notation
5 Equivalence of definitions
6 Also defined as
7 Also known as
8 Also see
One Indeterminate
Let $R$ be a commutative ring with unity.
A polynomial ring over $R$ is an ordered triple $\left({S, \iota, X}\right)$ where:
$S$ is a commutative ring with unity
$\iota : R \to S$ is a unital ring homomorphism, called the canonical embedding
$X$ is an element of $S$, called the indeterminate
that can be defined in several ways:
Let $R^{\left({\N}\right)}$ be the ring of sequences of finite support over $R$.
Let $\iota : R \to R^{\left({\N}\right)}$ be the mapping defined as:
$\iota \left({r}\right) = \left \langle {r, 0, 0, \ldots}\right \rangle$.
Let $X$ be the sequence $\left \langle {0, 1, 0, \ldots}\right \rangle$.
The polynomial ring over $R$ is the ordered triple $\left({R^{\left({\N}\right)}, \iota, X}\right)$.
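The ring-of-sequences construction above can be sketched concretely (a toy illustration of ours over the integers; a dict of nonzero indices stands in for a sequence of finite support, and the convolution product makes $X$ behave as an indeterminate):

```python
from collections import defaultdict

def iota(r):
    # canonical embedding: r -> <r, 0, 0, ...>
    return {0: r} if r != 0 else {}

X = {1: 1}  # the sequence <0, 1, 0, ...>

def add(p, q):
    out = defaultdict(int)
    for s in (p, q):
        for k, v in s.items():
            out[k] += v
    return {k: v for k, v in out.items() if v != 0}

def mul(p, q):
    # convolution product of finite-support sequences
    out = defaultdict(int)
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] += a * b
    return {k: v for k, v in out.items() if v != 0}

# X * X is <0, 0, 1, ...>, the monomial of degree 2
assert mul(X, X) == {2: 1}
# 3 + 2*X corresponds to the sequence <3, 2, 0, ...>
assert add(iota(3), mul(iota(2), X)) == {0: 3, 1: 2}
```

The dict representation is exactly "finite support": only finitely many indices carry a nonzero coefficient.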
Let $\N$ denote the additive monoid of natural numbers.
Let $R \left[{\N}\right]$ be the monoid ring of $\N$ over $R$.
The polynomial ring over $R$ is the ordered triple $\left({R \left[{\N}\right], \iota, X}\right)$ where:
$X \in R \left[{\N}\right]$ is the standard basis element associated to $1 \in \N$.
$\iota : R \to R \left[{\N}\right]$ is the canonical mapping.
For every pointed $R$-algebra $(A, \kappa, a)$ there exists a unique pointed algebra homomorphism $h : S \to A$, called the evaluation homomorphism.
This is known as the
universal property of a polynomial ring. Multiple Indeterminates
Let $R$ be a commutative ring with unity.
Let $I$ be a set.
A polynomial ring over $R$ in $I$ indeterminates is an ordered triple $\left({S, \iota, f}\right)$ where:
$S$ is a commutative ring with unity
$\iota : R \to S$ is a unital ring homomorphism, called the canonical embedding
$f : I \to S$ is a family, whose image consists of indeterminates
that can be defined in several ways:
Let $R \left[{\left\{{X_i: i \in I}\right\}}\right]$ be the ring of polynomial forms in $\left\{{X_i: i \in I}\right\}$.
The polynomial ring in $I$ indeterminates over $R$ is the ordered triple $\left({\left({A, +, \circ}\right), \iota, \left\{ {X_i: i \in I}\right\} }\right)$
Terminology
Single indeterminate
Let $\left({S, \iota, X}\right)$ be a polynomial ring over $R$.
The indeterminate of $\left({S, \iota, X}\right)$ is the term $X$.
Multiple Indeterminates
Let $I$ be a set.
Let $\left({S, \iota, f}\right)$ be a polynomial ring over $R$ in $I$ indeterminates.
The unital ring homomorphism $\iota$ is called the canonical embedding into the polynomial ring.
Multiple Indeterminates
Let $I$ be a set.
The unital ring homomorphism $\iota$ is called the canonical embedding into the polynomial ring.
The embedding $\iota$ is then implicit.
Equivalence of definitions Also defined as
It is common for an author to define a polynomial ring using a specific construction, and refer to other constructions as the polynomial ring. At $\mathsf{Pr} \infty \mathsf{fWiki}$ we deliberately do not favor any construction, all the more so because at some point the choice becomes irrelevant.
It is also common to call any ring isomorphic to a polynomial ring a polynomial ring. For the precise meaning of this, see Ring Isomorphic to Polynomial Ring is Polynomial Ring.
Also known as
The polynomial ring in one indeterminate over $R$ is often referred to as the polynomial ring over $R$.
That is, if no reference is given to the number of indeterminates, it is assumed to be $1$.
Also see
Results about polynomial rings can be found here.
|
A System of Two Equations Replete with Squares
Problem
Solve in real numbers the system
$\displaystyle 16x^2+25y^2=400,\qquad x^2+y^2=\frac{(4x^2+5y^2)^2}{400}.$
Solution
Let, for simplicity, $a=x^2,\,b=y^2.\,$ The system can be written as
$\displaystyle \left\{\begin{align}&16a+25b=400\\&a+b=\frac{(4a+5b)^2}{400}\end{align}\right.$
The second equation transforms into $400(a+b)=(4a+5b)^2.\,$ Replacing $400\,$ with $16a+25b\,$ from the first equation gives
$(16a+25b)(a+b)=(4a+5b)^2,$
i.e.,
$16a^2+25ab+16ab+25b^2=16a^2+40ab+25b^2,$
which simplifies to $ab=0,\,$ same as $xy=0.\,$ Note that $x,y\,$ can't vanish simultaneously. Thus, two cases: either $x=0\,$ or $y=0.\,$ The first case gives solutions $(0,\pm 4),\,$ the second $(\pm 5, 0).$
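A quick numerical sketch (ours, not part of the original solution) confirms that all four pairs satisfy the system:

```python
# Verify the solutions (0, ±4) and (±5, 0) against the original system
# 16x² + 25y² = 400 and x² + y² = (4x² + 5y²)² / 400.
for x, y in [(0, 4), (0, -4), (5, 0), (-5, 0)]:
    a, b = x**2, y**2
    assert 16*a + 25*b == 400
    assert a + b == (4*a + 5*b)**2 / 400
print("all four solutions check out")
```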
Acknowledgment
|
Crane Standard EN 13001-3-1+A2 Limit States and proof of competence of steel structure is implemented in SDC Verifier. The standard is to be used together with EN 13001-1 and EN 13001-2. EN 13001-3-1 deals only with the limit state method; the allowable stress method is reliable only in specific cases – for cranes where all masses act only unfavourably, with a linear relationship between load actions and load effects.
To account for the uncertainty of fatigue strength values and possible consequences of fatigue damage – fatigue strength specific resistance factor should be specified:
Accessibility for inspection | Fail-safe detail | Non fail-safe detail, without hazards for persons (b) | Non fail-safe detail, with hazards for persons
Detail accessible without disassembly | 1,0 | 1,05 | 1,15
Detail accessible by disassembly | 1,05 | 1,10 | 1,20
Non-accessible detail | N/A (a) | 1,15 | 1,25
Characteristic fatigue strength (fatigue strength at 2 million cycles) and slope constant have to be defined to calculate limit design stress range:
\(\Delta \sigma _{Rd} =\frac{\Delta \sigma _{c}}{\gamma _{mf} \times \sqrt[m]{s_{m}}}\)
Where
\(\Delta \sigma _{c}\) — is the characteristic fatigue strength (Annex D and Annex H of the standard)
\(m\) — is the slope constant of the log Δσ – log N curve (Annex D and Annex H of the EN 13001 standard)
\(\gamma _{mf}\) — is the fatigue strength specific resistance factor (see table above)
\(s_{m}\) — is the stress history parameter

Characteristic fatigue strength
Fatigue Strength Parallel to the weld:
See how SDC Verifier calculates fatigue strength parallel to the weld
Detail No. 3.7, m = 3: normal stress in weld direction.
Δσ_c = 180: continuous weld, quality level B
Δσ_c = 140: continuous weld, quality level C
Δσ_c = 80: intermittent weld, quality level C

Fatigue Strength Perpendicular to the weld for welded parts:
See how SDC Verifier calculates fatigue strength perpendicular to the weld
Detail No. 3.9, m = 3: stress in the loaded plate at weld toe, where
\( \sigma _{w} = F/(2\times a\times l)\)
Δσ_c = 45: basic conditions
Δσ_c = 71: quality level B
Δσ_c = 63: quality level C

Fatigue Strength Perpendicular to the weld for non-welded parts:
Detail No. 3.28: continuous component to which parts are welded transversally.
Δσ_c = 112: double fillet weld, quality level B*
Δσ_c = 100: double fillet weld, quality level B
Δσ_c = 90: double fillet weld, quality level C
Δσ_c = 80: single fillet weld, quality level B, C
Δσ_c = 80: partial penetration V-weld on remaining backing, quality level B, C

Fatigue Strength in Shear direction to the welds:
See how SDC Verifier calculates fatigue strength in shear direction to the weld
Detail No. 3.34, m = 5: continuous groove weld, single or double fillet weld under uniform shear flow (Δτ_c in N/mm²).
Δτ_c = 112: with full penetration
Δτ_c = 90: partial penetration
SDC Verifier defines all fatigue strength classifications.
For the Direct use of stress history option, SDC Verifier automatically calculates the stress history for slope constants 3 and 5 and uses it to calculate the limit design stress range. Alternatively, the user can manually set class S for different directions (parallel, perpendicular to the weld).
SDC Verifier calculates cumulative damage based on the Palmgren-Miner rule:
\(\sum ^{k}_{i=1}\frac{n_{i}}{N_{i}} \ =\ C\)
Where
\(n_{i}\) — is the number of cycles accumulated at stress \(S_{i}\)
\(C\) — is the fraction of life consumed by exposure to the cycles at the different stress levels
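The two calculations above can be sketched as follows (an illustrative Python sketch with made-up input values, not figures from the standard or from SDC Verifier):

```python
# Limit design stress range: Δσ_Rd = Δσ_c / (γ_mf * s_m**(1/m))
delta_sigma_c = 180.0   # characteristic fatigue strength, N/mm² (example value)
gamma_mf = 1.10         # fatigue strength specific resistance factor (example)
s_m = 0.5               # stress history parameter (example)
m = 3                   # slope constant
delta_sigma_rd = delta_sigma_c / (gamma_mf * s_m ** (1.0 / m))
print(round(delta_sigma_rd, 1))  # 206.2 N/mm²

# Palmgren-Miner cumulative damage: C = Σ n_i / N_i
cycles =   [1.0e5, 5.0e4]   # n_i applied at each stress level (example)
capacity = [2.0e6, 5.0e5]   # N_i endurable at each stress level (example)
C = sum(n / N for n, N in zip(cycles, capacity))
print(round(C, 3))  # 0.15
```

A lower stress history parameter or a higher resistance factor both shrink the permissible stress range, as the formula suggests.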
|
GR9277 #71
Alternate Solutions
Herminso 2009-09-22 19:12:05 For $T \to 0$, all of the systems are in the lowest energy state (minimum entropy), so the only choice is (B). ramparts 2009-08-15 20:35:32 All the solutions are so complicated! :( Here's a way for us stat mech idiots using boundary conditions:
For $T \to \infty$ (or epsilon = 0), this should go to $N_0/2$. That leaves B and E. (and technically A but that's clearly wrong :) ).
Now E_2 > E_1 so the answer has to be less than for positive epsilon. That leaves B. moonrazor 2006-03-27 13:37:23 (GR9277-71) Alternate Solution with less assumptions:
Using P(E) = (1/Z)exp[-E/kT] we have
P(E_1) = exp[-E_1/kT] / (exp[-E_1/kT] + exp[-E_2/kT])
= 1 / (1 + exp[-(E_2-E_1)/kT])
= 1 / (1 + exp[-\epsilon /kT])
LaTeX
\begin{eqnarray*}
P(E_{1}) &=& \frac{e^{-E_{1}/kT}}{e^{-E_{1}/kT} + e^{-E_{2}/kT}}\\
&=& \frac{1}{1 + e^{-(E_{2}-E_{1})/kT}}\\
&=& \frac{1}{1 + e^{-\epsilon/kT}}
\end{eqnarray*}
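A quick numerical check of this distribution (our own sketch, with units chosen so that $k = \epsilon = N_0 = 1$) confirms both temperature limits:

```python
import math

def n_E1(T, N0=1.0, eps=1.0, k=1.0):
    # Average number of subsystems in the lower state E1 (answer B):
    # N0 / (1 + exp(-eps / (k T))), with eps = E2 - E1 > 0.
    return N0 / (1.0 + math.exp(-eps / (k * T)))

# Low-T limit: everything sits in the lower state -> N0.
assert abs(n_E1(T=0.01) - 1.0) < 1e-6
# High-T limit: the two states are equally populated -> N0 / 2.
assert abs(n_E1(T=1e6) - 0.5) < 1e-3
```

The same boundary-condition reasoning used in the alternate solutions drops out of the formula directly.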
radicaltyro 2006-10-31 14:05:37 Hi moonrazor. Wrap your latex in single dollar signs.
Comments
juliano lorenso 2014-10-09 17:14:43 This is how I see it ..
1) As T--> 0, energy distribution for fermions must reduce to 1/2. So, in our case, it has to be proportional to N0/2.
2) E1 is proportional to +epsilon and E2 is proportional to -epsilon
Since we were asked to find the average number of subsystems in the state of energy E1, we can safely eliminate D and E because they have -epsilon and we want +epsilon
A is correct if we're talking about classical systems in normal conditions, so we eliminate it
B is incorrect because as T-->0, we get N0 and that's nonsense
So the correct answer is (C) .. If we apply T-->0 we get N0/2
Voila!!!!
Herminso 2009-09-22 19:08:11 I think the answer has to be more than $N_0/2$ for $\epsilon > 0$. Remember that the graph of $e^{-\epsilon/kT}$ is below 1 for $\epsilon$ positive. Thus $1 + e^{-\epsilon/kT} < 2$. The answer (B) holds.
shak 2010-10-10 09:25:14 what about D? it also goes to N/2 when epsilon=0.
why did u ignore it?
asdfuogh 2011-10-05 17:54:19 You're right, D) also goes to N0/2 when T=>inf. But then, we'd ignore that one when we think about T=>zero because that one blows up, and it shouldn't go bigger than N0. engageengage 2009-01-07 20:05:25 I believe what you have down is not the correct fermi-dirac distribution, which shouldn't have a minus sign in front of the energy in the denominator. The other way to get this answer with few assumptions is just to say the partition function is:
Then, to find the number of subsystems in E1, you have to multiply the probability of being in that state by the number of total particles. From basic Boltzmann statistics, you know that: $P(E_1) = e^{-E_1/kT}/Z$
Multiplying through by the inverse of the Boltzmann factor that was used, you get: $P(E_1) = 1/(1 + e^{-\epsilon/kT})$
Multiply by the number of particles and you have answer (B)
antigravity 2010-11-09 08:22:29 I agree.. We can't assume a Fermi-Dirac distribution (it can be anything)..
However it has been mentioned that the system is at high temperature where all the three distributions merge.
So take it as the Boltzmann one.. Fv4 2008-10-04 09:43:14 as epsilon -> $\infty$, the number of subsystems at E1 should approach $N_0$. Thus, only B) can be the answer. hassanctech 2007-09-30 22:02:11 Correct me if I'm wrong, but if E2-E1 = e > 0 then let's look at the limit as e --> infinity. This means that E2 is of infinite energy and you would expect all the subsystems to be in the E1 state. Choice B is the only one for which e --> infinity reduces to N0.
Blue Quark 2007-10-31 19:22:45 Am I missing something?
As T approaches infinity all the particles should be in the highest energy state, E2.
For (b), as T approaches infinity the average number of particles in the lower energy state, E1, is equal to No. So as the temperature increases more particles are dropping to lower energy states?
Jeremy 2007-11-01 16:58:13 Blue Quark,
Check your math on the limit. The high temperature limit is in fact $N_0/2$. Here's a qualitative idea of why it's true. At low temperatures, most of the subsystems are in the lowest energy state because there is not enough energy for a transition (taking $kT$ to represent the amount of energy available to the system). Of course, there may be a few subsystems in excited states, but I'm speaking about the typical case - what the "big picture" looks like. Well, as $T$ increases, the available energy increases, and more and more subsystems will be in their excited states. For $T \to \infty$, the limitation imposed by energy is completely removed, so that each subsystem is equally likely to be in any of its available states. This is why we expect to find half of the subsystems in the lower state and half in the higher state. Also note that this macrostate maximizes entropy, whereas putting all subsystems in one state (for example the one with higher energy) would minimize entropy. And one last thing, this understanding allows you to answer #73 (from this test) immediately - no equations needed!
Jeremy 2007-11-13 11:14:08 I finally found the other problem that my previous post churns out an instant answer for! It's GR8677 #67. So you can see everything one page, here's what it says:
__________
67. A large isolated system of weakly interacting particles is in thermal equilibrium. Each particle has only 3 possible nondegenerate states of energies , , and . When the system is at an absolute temperature , where is Boltzmann's constant, the average energy of each particle is
(A)
(B)
(C)
(D)
(E)
__________
Using the reasoning in my previous post, we know that the 3 states are equally occupied, so the average energy for any individual particle is just the average energy of the possible states: .
Here is a link to the page discussing that problem, where others have already mentioned this shortcut: http://grephysics.net/ans/8677/67
Jeremy 2007-11-13 11:23:27 Why can I never get the link to work the first time? I must not be doing it right... but it works the second time. Anyone care to share a linking syntax that works?
Link
jmason86 2009-09-02 19:19:10 (D) would also satisfy T -> infinity then avg -> $N_0/2$. I guess you just have to remember that there is always that minus sign in the exponential when talking about the partition fct or Boltzmann distribution. Richard 2007-09-13 23:48:09 I too wanted to say that the posted solution is rather poor. This is a Boltzmann stat. problem: The probability for a state is the Boltzmann factor for that state divided by the Partition Function. What's with the, "let's assume that $E_1 = 0$"?
evanb 2008-06-25 12:56:18 It turns out that it doesn't matter what $E_1$ is equal to... it adds a common factor of $e^{-E_1/kT}$ to the numerator and to all the terms in the denominator. Better off setting it to zero, in my opinion, but in terms of the physics it doesn't make one lick of difference: it's like setting your h in mgh potential energy to 0 on the ground.
See radicaltyro's comment on moonrazor's solution for evidence. georgi 2007-08-26 22:07:45 the posted solution is actually just purely incorrect. a fermi dirac distribution assumes that the occupancy of a particular quantum state is either 0 or 1 in accordance with the pauli-exclusion principle. instead we are actually interested in a maxwell boltzmann distribution in which the particles (in our case subsystems) are distinguishable and any number can occupy either energy state. then we apply the partition function as posted by radicaltyro to obtain our solution.
FortranMan 2008-10-02 15:33:49 So because a system only has two states doesn't alone mean that it's Fermi-Dirac?
shak 2010-10-10 09:41:21 right!! moonrazor 2006-03-27 13:37:23 (GR9277-71) Alternate Solution with less assumptions:
Using P(E) = (1/Z)exp[-E/kT] we have
P(E_1) = exp[-E_1/kT] / (exp[-E_1/kT] + exp[-E_2/kT])
= 1 / (1 + exp[-(E_2-E_1)/kT])
= 1 / (1 + exp[-\epsilon /kT])
LaTeX
\begin{eqnarray*}
P(E_{1}) &=& \frac{e^{-E_{1}/kT}}{e^{-E_{1}/kT} + e^{-E_{2}/kT}}\\
&=& \frac{1}{1 + e^{-(E_{2}-E_{1})/kT}}\\
&=& \frac{1}{1 + e^{-\epsilon/kT}}
\end{eqnarray*}
radicaltyro 2006-10-31 14:05:37 Hi moonrazor. Wrap your latex in single dollar signs.
chemicalsoul 2009-11-04 20:58:32 This should be the standard solution Hon'ble Yosun.
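For readers who want to check the limits discussed in the thread numerically, here is a small Python sketch of answer (B), $N_1 = N_0/(1+e^{-\epsilon/kT})$; the helper name `n1` is mine, not from the thread:

```python
import math

def n1(N0, eps, kT):
    # answer (B): average number of subsystems in the lower state E1
    return N0 / (1 + math.exp(-eps / kT))

N0, eps = 1000, 1.0
assert abs(n1(N0, eps, 1e-3) - N0) < 1e-6      # T -> 0: all in the ground state
assert abs(n1(N0, eps, 1e6) - N0 / 2) < 1e-2   # T -> infinity: split half and half
# monotone in T, and never drops below N0/2 for eps > 0
assert N0 / 2 < n1(N0, eps, 10.0) < n1(N0, eps, 1.0)
```

This confirms both boundary-condition arguments above: all subsystems sit in E1 as T goes to 0, and exactly half do as T goes to infinity.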
|
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: The action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
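A quick numeric illustration of the claim (the fixed-point condition for the action is $ze^{2\pi i \cdot an} = z$, i.e. $e^{2\pi i \cdot an} = 1$, which no longer involves $\theta$); the helper name is mine:

```python
import cmath
import math

def act(n, z, a):
    # the Z-action on S^1: n . z = z * e^(2*pi*i*a*n)
    return z * cmath.exp(2j * cmath.pi * a * n)

z = cmath.exp(0.7j)

# rational a: with a = 1/3, n = 3 fixes every z, so the action is not free
assert abs(act(3, z, 1 / 3) - z) < 1e-9

# irrational a: no small nonzero n has a fixed point
a = math.sqrt(2)
assert all(abs(act(n, z, a) - z) > 1e-3 for n in range(1, 50))
```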
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
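The formula as stated (Jacobi's four-square theorem) can be checked by brute force for small $n$; a quick Python sketch, helper names mine:

```python
from math import isqrt

def r4(n):
    # ordered representations n = a^2 + b^2 + c^2 + d^2 over the integers
    count = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        for b in range(-isqrt(n), isqrt(n) + 1):
            s = n - a * a - b * b
            if s < 0:
                continue
            for c in range(-isqrt(s), isqrt(s) + 1):
                t = s - c * c
                d = isqrt(t)
                if d * d == t:
                    count += 2 if d else 1   # count d and -d, or just 0
    return count

def sigma(n, odd_only=False):
    return sum(d for d in range(1, n + 1)
               if n % d == 0 and (d % 2 == 1 or not odd_only))

for n in range(1, 30):
    expected = 8 * sigma(n) if n % 2 else 24 * sigma(n, odd_only=True)
    assert r4(n) == expected
```

For instance $r_4(1) = 8$ (the eight vectors $(\pm 1, 0, 0, 0)$ and permutations) matches $8\sigma(1) = 8$.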
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
|
GR9277 #72
Alternate Solutions
jmason86 2009-09-03 19:19:22 I did this one by limits and general test taking strategy.
Limits: Q --> finite value as T --> 0. Eliminates (D) and (E)
ETS will generally force you to choose between very similar answers. Eliminates (C)
With only (A) and (B) left, it is a good idea to guess. But if you can get a hunch, that is even better. It's probably totally flawed thinking, but I saw the lack of an exponential in the numerator for (B), which made it look similar to a Bose-Einstein distribution. The full expression in (A) reminded me of the Maxwell-Boltzmann distribution that comes from the problem statement. Choose (A)
Comments
BerkeleyEric 2010-09-17 17:04:59 If you remember that , then immediately you can see that there will have to be the squared sum in the denominator, so only A and B remain. And there still has to be the exponential in the numerator (a derivative won't get rid of that), so the answer must be A.
flyboy621 2010-10-22 16:11:48 +1
RebeccaJK42 2007-03-23 10:27:19 Where does the k in the numerator come from?
alpha 2007-03-30 22:37:37 k is Boltzmann constant.. The k on the numerator is actually canceled out by the in the denominator
Richard 2007-09-14 00:05:03 The fact of the matter is, you have a factor of
introduced by the derivative.
To make it look pretty, they multiplied the top and bottom by giving (with the extra ) the factor . Then of course, you have the . Andresito 2006-03-29 10:05:29 I could not obtain T^2 in the denominator. Is there an error in the solution provided by the ETS?
radicaltyro 2006-10-31 14:19:25 Hi Andresito,
Review your calculus and try again. The answer is correct.
Blue Quark 2007-11-01 08:00:11 Andresito you made the same mistake I did. The T is in the denominator, not the numerator. Thus when you differentiate you get a (-T^(-2)) in front of the exponential in addition to the normal e/k
drizzo01 2012-11-08 07:40:44 am I the only one who doesn't see a T in the denominator of the second term? I appreciate that T^2 would be in the denominator if there was a T to being with, but as far as I can see, the only T's are within the expression for the exponential, which don't come out upon taking a derivative.
drizzo01 2012-11-08 07:46:58 wait nvm, got it.
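The comments above pin down the shape of the answer: an exponential in the numerator, the squared sum and a $T^2$ in the denominator. The closed form below is my reconstruction from that description (not a quote of choice (A)); it is checked against a finite-difference derivative of the two-level energy, with helper names of my own choosing:

```python
import math

def upper_count(N0, eps, k, T):
    # subsystems in the upper level: N2 = N0 / (1 + e^(eps/kT))
    return N0 / (1 + math.exp(eps / (k * T)))

def dQdT(N0, eps, k, T):
    # reconstructed closed form:
    # N0 * eps^2 / (k T^2) * e^(eps/kT) / (e^(eps/kT) + 1)^2
    x = math.exp(eps / (k * T))
    return N0 * eps ** 2 / (k * T ** 2) * x / (x + 1) ** 2

N0, eps, k, T, h = 100.0, 1.0, 1.0, 0.7, 1e-6
numeric = (eps * upper_count(N0, eps, k, T + h)
           - eps * upper_count(N0, eps, k, T - h)) / (2 * h)
assert abs(numeric - dQdT(N0, eps, k, T)) < 1e-4
```

Note the $T^2$ in the denominator appears exactly as Blue Quark describes: differentiating $e^{\epsilon/kT}$ with respect to $T$ brings out a factor of $-\epsilon/kT^2$.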
|
Given the work of Turing and Feferman all arithmetical truths can be isolated through a transfinite progression of theories like $T_0=PA$, $T_{\beta+1}=T_\beta + CON(T_\beta)$ and $T_\lambda=\bigcup_{\mu\prec\lambda} T_\mu$ - when $\lambda$ is a limit ordinal - through all the recursive ordinals. What is the smallest ordinal $\sigma$ such that $T_\sigma$ proves CON(ZF)? How do such ordinals for arithmetical consistency statements align with proof theoretical ordinals?
Edit: My question does not ask for the proof theoretic ordinal of ZF.
Update: Phillip Welch gives a very readable account of such things as I hint to in comments concerning Feferman's work in an answer to a question here:
Update 2: My question was badly prepared, as evidenced also by the previous update and the comments in discussion. Noah Schweber kindly suggested that I unaccept his reply until more is clarified concerning my question as related to the Feferman style process I had in mind, and which through a detour into Shoenfield's recursive omega rule (non-constructively) captures all arithmetical truths. I would be surprised if Turing like collapses down to $\omega+1$ could occur in Feferman style processes.
|
I am given a group $G=\text{Span}(w,x,y,z)$ with relations defined by $$\begin{bmatrix}0&0&1&3\\-2&1&1&3\\-2&4&1&3\\0&-3&1&5\end{bmatrix}\begin{bmatrix}w\\x\\y\\z\end{bmatrix}=0.$$ I was wondering if anyone could give me any hints, or explain a simpler way to do this. I have figured out how to solve it, but the method seems slow. Maybe my method can be improved, or maybe there is another method entirely. Any help would be so welcomed.
I know that we can perform Gaussian Elimination* (we can only multiply rows by integer units) to the matrix to get an equivalent set of relations. So the above system after a hairy computation boils down to
\begin{equation}\begin{bmatrix}-2&1&0&0\\0&3&0&0\\0&0&1&3\\0&0&0&2\end{bmatrix}\begin{bmatrix}w\\x\\y\\z\end{bmatrix}=0.\end{equation}
As we cannot deduce $z=0$ from our relations, and $2z=0$ is a relation it follows that $z$ has order $2.$ Then since $$y+3z=y+z=0$$ it follows that $y=-z=z.$
As we cannot deduce $x=0$ from our relations, and $3x=0$ is a relation it follows that $x$ has order $3.$ Well then the relation $-2w+x=0$ tells us that$$-6w+3x=-6w=0.$$Then $6w=0$. As $x\in\text{Span}(w)$ we see that $\text{Ord}(w)\in\{3,6\}$. If $\text{Ord}(w)=3,$ then$$-2w+x=w+x=0,$$so we would have the relation $w+x=0.$ As $$\begin{bmatrix}1&1&0&0\end{bmatrix}$$is not in the span of the rows of
$$\begin{bmatrix}-2&1&0&0\\0&3&0&0\\0&0&1&3\\0&0&0&2\end{bmatrix}$$ we do not have the relation $w+x=0,$ hence $\text{Ord}(w)=6.$
Note that we've reduced the generating set to $w,z$, hence $G=\left<w\right>+\left<z\right>.$ If the sum were not direct, then since $\text{Ord}(z)=2$ we would have $z\in\left<w\right>.$ Since $\text{Ord}(w)=6$ this would imply $3w=z,$ or $3w+z=0.$ As $$\begin{bmatrix}3&0&0&1\end{bmatrix}$$ is not in the span of the rows of $$\begin{bmatrix}-2&1&0&0\\0&3&0&0\\0&0&1&3\\0&0&0&2\end{bmatrix}$$ this cannot happen, hence the sum is direct, that is $$G=\left<w\right>\oplus\left<z\right>\cong\mathbb{Z}_6\oplus\mathbb{Z}_2\cong\mathbb{Z}_3\oplus\mathbb{Z}_2\oplus\mathbb{Z}_2.$$
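As a cross-check of the final answer, the invariant factors of the relation matrix can be computed from gcds of its minors (the determinantal-divisor description of Smith normal form), which sidesteps the hairy row reduction entirely. A small Python sketch, with helper names of my own choosing:

```python
from itertools import combinations
from math import gcd

def det(m):
    # integer determinant by cofactor expansion (fine for tiny matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def invariant_factors(m):
    # d_k = gcd of all k x k minors; the k-th invariant factor is d_k / d_(k-1)
    rows, cols = len(m), len(m[0])
    d_prev, factors = 1, []
    for k in range(1, min(rows, cols) + 1):
        g = 0
        for rs in combinations(range(rows), k):
            for cs in combinations(range(cols), k):
                g = gcd(g, abs(det([[m[r][c] for c in cs] for r in rs])))
        if g == 0:
            break
        factors.append(g // d_prev)
        d_prev = g
    return factors

M = [[0, 0, 1, 3],
     [-2, 1, 1, 3],
     [-2, 4, 1, 3],
     [0, -3, 1, 5]]
print(invariant_factors(M))  # [1, 1, 2, 6]
```

The nontrivial invariant factors 2 and 6 give $G\cong\mathbb{Z}_2\oplus\mathbb{Z}_6$, agreeing with the answer above.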
|
The Grassmannian
The first thing we need to do to simplify our life is to get out of projective space. Recall that $\newcommand{\PP}{\mathbb{P}}
\newcommand{\CC}{\mathbb{C}} \newcommand{\RR}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Fl}{Fl} \DeclareMathOperator{\GL}{GL}\PP^m$ can be defined as the collection of lines through the origin in $\CC^{m+1}$. Furthermore, lines in $\PP^m$ correspond to planes through the origin in $\CC^{m+1}$, and so on.
In problem $3$ in the introduction, we are trying to find lines in $\PP^3$ with certain intersection properties. This translates to a problem about planes through the origin in $\CC^4$, which we refer to simply as $2$-dimensional subspaces of $\CC^4$. We wish to know which $2$-dimensional subspaces $V$ intersect each of four given $2$-dimensional subspaces $W_1,W_2, W_3, W_4$ in at least a line. Our strategy will be to consider the algebraic varieties $Z_i$, $i=1,\ldots,4$, of all possible $V$ intersecting $W_i$ in at least a line, and find the intersection $Z_1\cap Z_2\cap Z_3\cap Z_4$. Each $Z_i$ is an example of a
Schubert variety, a moduli space of subspaces of $\CC^m$ with specified intersection properties.
The simplest example of a Schubert variety, where we have no constraints on the subspaces, is the Grassmannian
$ \Gr^n(\CC^m)$ itself. Definition. The Grassmannian $\Gr^n(\CC^m)$ is the collection of codimension-$n$ subspaces of $\CC^m$. In what follows we will set $$r=m-n,$$ so that the codimension-$n$ subspaces have dimension $r$.
We will see later that the Grassmannian has the structure of an algebraic variety, and has two natural topologies that come in handy. For this reason we will call its elements the
points of $\Gr^n(\CC^m)$, even though they’re “actually” subspaces of $\CC^m$ of dimension $r=m-n$. It’s the same misfortune that causes us to refer to a line through the origin as a “point in projective space.”
Now, every point of the Grassmannian is the span of $r$ independent row vectors of length $m$, which we can arrange in an $r\times m$ matrix. For instance, the following represents a point in $\Gr^3(\CC^7)$.
$$\left[\begin{array}{ccccccc} 0 & -1 & -3 & -1 & 6 & -4 & 5 \\ 0 & 1 & 3 & 2 & -7 & 6 & -5 \\ 0 & 0 & 0 & 2 & -2 & 4 & -2 \end{array}\right]$$ Notice that we can perform elementary row operations on the matrix without changing the point of the Grassmannian it represents. Therefore:
Each point of the Grassmannian corresponds to a unique full-rank matrix in reduced row echelon form.
Let’s use the convention that the pivots will be in order from left to right and bottom to top.
Example. In the matrix above we can switch the second and third rows, and then add the third row to the first to get: $$\left[\begin{array}{ccccccc} 0 & 0 & 0 & 1 & -1 & -2 & 0 \\ 0 & 0 & 0 & 2 & -2 & 4 & -2 \\ 0 & 1 & 3 & 2 & -7 & 6 & -5 \\ \end{array}\right]$$ Here, the bottom left $1$ was used as the pivot to clear its column. We can now use the $2$ at the left of the middle row as our new pivot, by dividing that row by $2$ first, and adding or subtracting from the two other rows: $$\left[\begin{array}{ccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 & 2 & -1 \\ 0 & 1 & 3 & 0 & -5 & 2 & -3 \\ \end{array}\right]$$ Finally we can use the $1$ in the upper right corner to clear its column: $$\left[\begin{array}{ccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 & 2 & 0 \\ 0 & 1 & 3 & 0 & -5 & 2 & 0 \\ \end{array}\right],$$ and we are done.
In the preceding example, we were left with a reduced row echelon matrix in the form
$$\left[\begin{array}{ccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & \ast & \ast & 0 \\ 0 & 1 & \ast & 0 & \ast & \ast & 0 \\ \end{array}\right],$$ i.e. its leftmost $1$’s are in columns $2$, $4$, and $7$. The subset of the Grassmannian whose points have this particular form constitutes a Schubert cell.
Schubert varieties and cell complex structure
To make the previous discussion rigorous, we assign to the matrices of the form
$$\left[\begin{array}{ccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & \ast & \ast & 0 \\ 0 & 1 & \ast & 0 & \ast & \ast & 0 \\ \end{array}\right]$$ a partition – a nonincreasing sequence of nonnegative integers $\lambda=(\lambda_1,\ldots,\lambda_r)$ – as follows. Cut out the “upside-down staircase” from the left of the matrix, and let $\lambda_i$ be the distance from the end of the staircase to the $1$ in each row. In the matrix above, we get the partition $\lambda=(4,2,1)$. Notice that we always have $\lambda_1\ge \lambda_2\ge \cdots \ge \lambda_r$.
By identifying the partition with its Young diagram, we can alternatively define $\lambda$ as the complement in a $r\times n$ box (recall $n=m-r$) of the diagram $\mu$ defined by the $\ast$’s, where we place the $\ast$’s in the lower right corner. For instance:
Notice that every partition $\lambda$ we obtain in this manner must fit in the $r\times n$ box. For this reason, we will call it the Important Box. (Warning: this terminology is not standard.) Definition. The Schubert cell $\Omega_{\lambda}^\circ\subset \Gr^n(\CC^m)$ is the set of points whose row echelon matrix has corresponding partition $\lambda$.
Notice that since each $\ast$ can be filled with any complex number, we have $\Omega_{\lambda}^\circ\cong \CC^{r\cdot n-|\lambda|}$. Thus we can think of the Schubert cells as forming an open cover of the Grassmannian by affine subsets.
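The recipe for reading off $\lambda$ from the pivot positions can be automated. A short Python sketch, assuming the convention above with the rightmost pivot playing the role of $i=1$; the function names are mine:

```python
from fractions import Fraction

def rref_pivots(mat):
    # row-reduce over Q and return the pivot columns (0-based)
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols, pivots, r = len(mat), len(mat[0]), [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

def cell_partition(mat):
    # lambda_i = n + i - (distance of the i-th pivot from the right edge),
    # where the rightmost pivot is i = 1 and n is the codimension
    rows, cols = len(mat), len(mat[0])
    n = cols - rows
    pivots = sorted(rref_pivots(mat), reverse=True)
    return [n + i + 1 - (cols - j) for i, j in enumerate(pivots)]

M = [[0, -1, -3, -1, 6, -4, 5],
     [0, 1, 3, 2, -7, 6, -5],
     [0, 0, 0, 2, -2, 4, -2]]
print(cell_partition(M))  # [4, 2, 1]
```

Running this on the example matrix recovers exactly the partition $\lambda=(4,2,1)$ computed by hand above.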
More rigorously, the Grassmannian can be viewed as a projective variety by embedding $\Gr^n(\CC^m)$ in $\PP^{\binom{m}{r}-1}$ via the
Plücker embedding. To do so, order the $r$-element subsets $S$ of $\{1,2,\ldots,m\}$ arbitrarily and use this ordering to label the homogeneous coordinates $x_S$ of $\PP^{\binom{m}{r}-1}$. Now, given a point in the Grassmannian represented by a matrix $M$, let $x_S$ be the determinant of the $r\times r$ submatrix determined by the columns in the subset $S$. This determines a point in projective space since row operations can only change the coordinates up to a constant factor, and the coordinates cannot all be zero since the matrix has rank $r$.
One can show that the image is an algebraic subvariety of $\PP^{\binom{m}{r}-1}$, cut out by homogeneous quadratic relations known as the
Plücker relations. (See Miller and Sturmfels, chapter 14.) The Schubert cells form an open affine cover.
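The Plücker coordinates of the running example are easy to compute directly as maximal minors; a small Python sketch (names mine) that also checks that row operations only rescale the coordinate vector, so the point of $\PP^{\binom{m}{r}-1}$ is well defined:

```python
from itertools import combinations

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def plucker(M):
    # one homogeneous coordinate per 3-element column subset S:
    # the determinant of the 3x3 submatrix on those columns
    return [det3([[row[c] for c in S] for row in M])
            for S in combinations(range(len(M[0])), 3)]

M = [[0, -1, -3, -1, 6, -4, 5],
     [0, 1, 3, 2, -7, 6, -5],
     [0, 0, 0, 2, -2, 4, -2]]
p = plucker(M)
assert len(p) == 35   # C(7,3) coordinates: Gr^4(C^7) embeds in P^34
assert any(p)         # rank 3, so some coordinate is nonzero

# doubling a row doubles every minor: the projective point is unchanged
M2 = [M[0], [2 * x for x in M[1]], M[2]]
assert plucker(M2) == [2 * x for x in p]
```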
We are now in a position to define the Schubert varieties as closed subvarieties of the Grassmannian.
Definition. The standard Schubert variety corresponding to a partition $\lambda$, denoted $\Omega_\lambda$, is the closure $\overline{{\Omega_\lambda}^\circ}$ of the corresponding Schubert cell in the Grassmannian, taken with respect to the Zariski topology. Explicitly, $$\Omega_{\lambda}=\{V\in \mathrm{Gr}^n(\CC^m)\mid \dim V\cap \langle e_1,\ldots, e_{n+i-\lambda_i}\rangle \ge i\}.$$
In general, however, we can use a different basis than the standard basis $e_1,\cdots,e_m$ for $\CC^m$. Given a
complete flag, i.e. a chain of subspaces $$0=F_0\subset F_1\subset\cdots \subset F_m=\CC^m$$ where each $F_i$ has dimension $i$, we can define $$\Omega_{\lambda}(F_\bullet)=\{V\in \mathrm{Gr}^n(\CC^m)\mid \dim V\cap F_{n+i-\lambda_i}\ge i\}.$$ Remark. The numbers $n+i-\lambda_i$ are the positions of the $1$’s in the matrix starting from the right. Combinatorially, without drawing the matrix, these numbers can be obtained by adjoining an upright staircase to the end of the $r\times n$ Important Box that $\lambda$ is contained in, and computing the distances from the right boundary of $\lambda$ to the right boundary of the enlarged figure. Example. The Schubert variety $\Omega_{\square}(F_\bullet)\subset \Gr^{2}(\CC^4)$ is the collection of $2$-dimensional subspaces $V\subset \CC^4$ for which $\dim V\cap F_2\ge 1$, i.e. $V$ intersects another $2$-dimensional subspace (namely $F_2$) in at least a line.
By choosing four different flags $F^{(1)}_{\bullet},F^{(2)}_{\bullet},F^{(3)}_{\bullet},F^{(4)}_{\bullet}$, problem 3 becomes equivalent to finding the intersection of the Schubert varieties $$\Omega_{\square}(F^{(1)}_\bullet)\cap \Omega_{\square}(F^{(2)}_\bullet)\cap \Omega_{\square}(F^{(3)}_\bullet)\cap \Omega_{\square}(F^{(4)}_\bullet).$$
The CW complex structure
The Schubert varieties also give a CW complex structure on the Grassmannian for each complete flag as follows. Given a fixed flag, define the $0$-skeleton $X_0$ to be the $0$-dimensional Schubert variety $\Omega_{(n^r)}$. Define $X_2$ to be $X_0$ along with the $2$-cell (since we are working over $\CC$ and not $\RR$) formed by removing a corner square from the rectangular partition $(n^r)$, and the attaching map given by the closure in the Zariski topology on $\Gr^n(\CC^m)$. Continue in this manner to define the entire cell structure, $X_0\subset X_2\subset\cdots \subset X_{2nr}$.
This gives the second topology on the Grassmannian, and the one which is easier to work with in computing its cohomology.
|
I read a paper detailing the algebraic process of kernel PCA. I have a question though: the paper details the projection of new points onto the new eigenvectors in the feature space, but what if I want the eigenvectors themselves? I am performing an analysis that compares eigenvector magnitudes in a Gaussian inner product space.
The closest I found in the paper was the definition of the eigenvectors as a hypothetical linear combination of data vectors in the kernel space:
$$\mathbf{V} = \sum_{i = 1}^l \alpha_i \Phi\left(\mathbf{x}_i\right).$$
From the paper, it seems that you derive all such $\alpha$ with the relation
$$\ell\lambda\mathbf{\alpha}=K\mathbf{\alpha}$$ (where $\mathbf{\alpha}$ is supposed to look bold as a vector).
But even after you obtain $\alpha_i$, you can't compute the eigenvectors in the feature space from the previous formula because, in the case of a Gaussian kernel that operates on $\|\mathbf{x} - \mathbf{y}\|$, there are infinitely many dimensions in the "hidden" range of $\Phi$.
How can I compute $\mathbf{V}$, the eigenvectors in the range of $\mathbf{\Phi}$? Edit 0
I understand that we can't formalize eigenvectors in a infinite-dimensional space implied by a Gaussian-norm inner product kernel. What I actually want is the
length of the eigenvector. In the linear case, that would be the semimajor axes of the imaginary ellipse. I want to be able to have, for a "classification" PCA case as this one, a graph like the one on the left showing a curved (for lack of a better word) eigenvector projected down to the input space. I'd prefer a parametric form over which I may compute an arclength integral. Edit 1
Oops. I meant principal component, not eigenvector.
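In case it helps, here is a minimal numpy sketch of the standard kernel-PCA recipe (the function names, normalization convention, and toy data are mine). The component $\mathbf{V}_k$ is never materialized; instead $\boldsymbol{\alpha}_k$ is rescaled so that $\mathbf{V}_k$ has unit norm, using $\|\mathbf{V}_k\|^2 = \boldsymbol{\alpha}_k^\top K \boldsymbol{\alpha}_k$. The eigenvalue $\lambda_k$ of the covariance operator then measures the variance along $\mathbf{V}_k$, so $\sqrt{\lambda_k}$ is arguably the analogue of the semimajor-axis length asked about in Edit 0:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    l = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Center the kernel matrix (equivalent to centering Phi(x_i) in feature space)
    one = np.ones((l, l)) / l
    Kc = K - one @ K - K @ one + one @ K @ one
    # l * lambda * alpha = Kc * alpha  <=>  eigendecompose Kc
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_components]
    lambdas = evals[idx] / l          # variances along the components
    alphas = evecs[:, idx]
    # Rescale so each V_k has unit norm:
    # ||V_k||^2 = alpha_k^T Kc alpha_k = mu_k for a unit eigenvector alpha_k
    alphas = alphas / np.sqrt(evals[idx])
    return alphas, lambdas

def project(X_new, X_train, alphas, gamma=1.0):
    # Coordinate of Phi(x_new) along V_k is sum_i alpha_ik k(x_i, x_new)
    # (centering of the test kernel is omitted here for brevity)
    return rbf_kernel(X_new, X_train, gamma) @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
alphas, lambdas = kernel_pca(X, n_components=2)
print(project(X[:3], X, alphas).shape)  # (3, 2)
```

Note that projecting new points never needs explicit coordinates of $\Phi$ either, which is why a parametric form of the component in the input space isn't directly available; pre-image methods only approximate such a curve.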
|
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the
power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system.
This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in
other posets, too, like join \(\vee\) and meet \(\wedge\).
We could march much further in this direction. I won't, but try it yourself!
Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.
I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an
observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \).
This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are
three such functions! And they're related in a beautiful way!
The most fundamental is this:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \).
The inverse image is also called the
preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.
The inverse image gives a monotone function
$$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\}
\subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).
Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). Here it is:
Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be
$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \).
The image is often written as \(f(S)\), but I'm using the notation of
Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".
The image gives a monotone function
$$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then
$$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \}
\subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$

But here's the cool part:

Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have
$$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)
This is great! But there's also
another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define
$$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \).
Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).
What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have
$$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$

This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the
existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints!
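Since everything here is finite, all three maps and both adjunctions can be checked mechanically. A quick Python sketch (the function names and the example \(f\) are mine, not from the lecture):

```python
from itertools import chain, combinations

def inverse_image(f, X, S):
    # f*: subsets of Y -> subsets of X
    return {x for x in X if f(x) in S}

def image(f, S):
    # f_!: subsets of X -> subsets of Y ("for some")
    return {f(x) for x in S}

def direct_image(f, X, Y, S):
    # f_*: subsets of X -> subsets of Y ("for all"):
    # keep y iff every x mapping to y lies in S
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

def subsets(A):
    A = list(A)
    return [set(c) for c in chain.from_iterable(
        combinations(A, r) for r in range(len(A) + 1))]

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = lambda x: 'a' if x <= 2 else 'b'   # nothing maps to 'c'

# Verify both adjunctions on every pair of subsets:
for S in subsets(X):
    for T in subsets(Y):
        # f_! is left adjoint to f*:  f_!(S) <= T  iff  S <= f*(T)
        assert (image(f, S) <= T) == (S <= inverse_image(f, X, T))
        # f* is left adjoint to f_*:  f*(T) <= S  iff  T <= f_*(S)
        assert (inverse_image(f, X, T) <= S) == (T <= direct_image(f, X, Y, S))

# 'c' has empty preimage, so it lands in f_*(S) vacuously, for every S:
print('c' in direct_image(f, X, Y, set()))  # True
```

The last line previews the "for all" subtlety: a \(y\) with no preimages at all satisfies the universal condition vacuously, which is worth keeping in mind for Puzzle 25.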
This was discovered by Bill Lawvere in this revolutionary paper:
By now this observation is part of a big story that "explains" logic using category theory.
Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.
Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
|
I have a model whose estimate is a function (a finite number of points) over an interval. I am looking at the sum of the estimated function over these points: $\hat{\theta} := \sum_{i=0}^k \hat{f}(i)$, which is my parameter of interest.
Now, if I want a $95\%$ CI using resampling, I see two ways to do so:
First, I can draw $P$ resamples, compute $P$ estimated functions $\hat{f}_j$, and then for each function compute $\hat{\theta}_j$ as the sum of that function over the points. A $95\%$ CI could then be computed by taking the $2.5\%$ and $97.5\%$ quantiles of the sample $(\hat{\theta}_j)$.

Another way is to draw $P$ resamples and compute $P$ estimated functions $\hat{f}_j$. Then, for each point $i \in \{0, \dots, k\}$, we can compute a $95\%$ confidence interval $[l_i, u_i]$ from the sample $(\hat{f}_j(i))_{1 \leq j \leq P}$. We then have a $97.5\%$ upper function ($u(i) = u_i$) as well as a $2.5\%$ lower function ($l(i) = l_i$). A $95\%$ CI for $\theta$ could then be computed by summing the lower function and the upper function over the points.

These two methods yield very different CIs, and I was wondering which one is correct.

My opinion is that the first method is correct, but I can't see why the second would not be.
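To see how differently the two methods behave, here is a toy simulation (the setup and all numbers are mine, not from the question). Method 1 is the ordinary percentile bootstrap for $\hat{\theta}$. Method 2 sums pointwise bounds, which ignores that the $\hat{f}_j(i)$ rarely sit at their extreme quantiles at every point simultaneously, so it tends to come out conservative (roughly $\sqrt{k}$ times wider when the points are nearly independent, as here):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, P = 200, 10, 2000
f_true = np.sin(np.linspace(0, np.pi, k))          # unknown function on k points
Y = f_true + rng.normal(scale=0.5, size=(n, k))    # n noisy observations per point

f_hat = Y.mean(axis=0)                 # pointwise estimate of f
theta_hat = f_hat.sum()                # parameter of interest: theta = sum_i f(i)

# P bootstrap resamples of the rows
thetas = np.empty(P)
f_boot = np.empty((P, k))
for j in range(P):
    Yb = Y[rng.integers(0, n, size=n)]
    f_boot[j] = Yb.mean(axis=0)
    thetas[j] = f_boot[j].sum()

# Method 1: quantiles of the resampled sums
lo1, hi1 = np.quantile(thetas, [0.025, 0.975])

# Method 2: sum the pointwise quantile curves
lo2 = np.quantile(f_boot, 0.025, axis=0).sum()
hi2 = np.quantile(f_boot, 0.975, axis=0).sum()

print(f"method 1: [{lo1:.3f}, {hi1:.3f}]  width {hi1 - lo1:.3f}")
print(f"method 2: [{lo2:.3f}, {hi2:.3f}]  width {hi2 - lo2:.3f}")
```

The gap shrinks as the $\hat{f}_j(i)$ become more strongly positively correlated across $i$; only in the perfectly comonotonic limit would the two intervals roughly agree.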
|
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724
Like this:
[/url][/wiki][/url]
[/wiki]
[/url][/code]
Many different combinations work. To reproduce, paste the above into a new post and click "preview".
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}{2}$$
$$P_a=1-\frac{1}{\int^\infty_{t_0}p(t)^{l(t)}\,dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
I wonder if this works on other sites? (Remove/Change )
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Related:[url=http://a.com/]
[/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:

Saka wrote:
Related:
[
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places.
Here, I'll fix it:
[/wiki][url]conwaylife.com[/url]
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
It appears I fixed @Saka's open <div>.
toroidalet Posts: 1018 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett

A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:

toroidalet wrote:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up.

Apart from Aidan Mode, there are now: Saka Quote, Daniel Mode, and Aidan Superquote. We should write descriptions for these:
-Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish.
-Saka Quote: A combination of a diluted Aidan Mode and quotes; leaves an open div and blockquote that quotes the entire message and signature. Enough of them can quote entire pages.
-Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them. Pushes the bottom bar to the side. The signature gets coded.
-Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by the software. Leaves the rest of the page white and quoted. Replies and the post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.
Last edited by Saka
on June 21st, 2017, 10:51 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
I actually laughed at the terminology.
"IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways,
[/wiki]
I like making rules
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it...
-Fluffykitty Pusher
Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X

A for awesome wrote:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Screenshot?
New one yay.
-Aidan Bomb: The smallest ultra-page-breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
Last edited by Saka
on June 21st, 2017, 10:20 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
The testing grounds have now become similar to actual military testing grounds.
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
We also have this thread. Also,
is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]
Last edited by fluffykitty
on June 22nd, 2017, 11:50 am, edited 1 time in total.
I like making rules
83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact:
oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never possible to fully hide the entire page -- it will always be in a nested scrollbar.
EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale.
Code: Select all
x = 8, y = 10, rule = B3/S23
3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo!
No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this
^
What ever up there likely useless

Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this
(Sorry I'm having a lot of fun with this)
^
What ever up there likely useless

cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy
Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]
fg
Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
[/code]
Is a pinch broken
Doesn’t this thread belong in the sandbox?
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm
Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
77topaz wrote:Well, it started out as a thread to documents "Bugs & Errors" in the forum's code...
Now it's half an aidan mode testing grounds.
Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
|