I'm trying to develop some basic intuition here, so this comes mostly as a jumble of commentary/questions. Hope it's acceptable.
Helmholtz Free Energy: $A = -\beta^{-1}\ln Z$. I find this statement to be incredibly profound. Granted, I found it yesterday.
Suppose my system has one energy state with no degeneracy. $Z = e^{-\beta E_1}$, then $A = E_1$, which I suppose says if the system consists of one particle, all its internal energy is available for work. That's nice.
Now, if we introduce some degeneracy $\gamma$, we get $Z = \gamma e^{-\beta E_1}$, and so $A = E_1 - \beta^{-1}\ln \gamma$, and we have clearly lost some of our free energy to the degeneracy (i.e. to the fact that there are multiple microstates for our given macrostate, and so we have limited information about the actual configuration of the system, which is free to explore its microstates, limiting the energy we can get from it). So that's nice too.
We can go further by introducing more energies, so $Z = \sum_i \gamma_i e^{-\beta E_i}$, but nice analysis is confounded by my inability to deal coherently with sums in a logarithm. Though I managed to show that $A$ for such a multi-state system is strictly less than $\sum_i [E_i-\beta^{-1}\ln\gamma_i]$, i.e. less than the sum of the free energies for independent systems of a given energy $E_i$ and degeneracy $\gamma_i$. This result, however, requires $E_i > 0$, which I take for granted, and which makes plenty of sense.
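If it helps to see this numerically, here is a small sketch (my addition, not part of the original post; the energies and degeneracies are made-up example values) checking that $A$ for the multi-state system lies below the sum of the single-level free energies:

import numpy as np

beta = 1.0
E = np.array([1.0, 2.0, 3.0])   # assumed example energies, all positive
g = np.array([2.0, 4.0, 8.0])   # assumed example degeneracies

Z = np.sum(g * np.exp(-beta * E))
A = -np.log(Z) / beta                        # free energy of the combined system
A_separate = np.sum(E - np.log(g) / beta)    # sum of single-level free energies
print(A, A_separate)                         # A is strictly smaller, as argued above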
Now, what does it mean for A to be negative? Perhaps more importantly, how does one simply go about obtaining work from a system with some A (a practical question)? Or, perhaps even more importantly, is it this requirement that there be a second final state, seemingly of lower free energy, that makes $A$ itself not so significant, but rather $\Delta A$? And if so, what happens to the intuition about a system with only one state having exactly its energy as free-energy?
Your insights on these and related matters pertaining to legendary $Z$ and its relation to $A$, as well as pointers on where my thinking may be flawed or enlightened, are much appreciated.
|
The Annals of Probability Ann. Probab. Volume 23, Number 2 (1995), 501-521. Existence of Quasi-Stationary Distributions. A Renewal Dynamical Approach Abstract
We consider Markov processes on the positive integers for which the origin is an absorbing state. Quasi-stationary distributions (qsd's) are described as fixed points of a transformation $\Phi$ in the space of probability measures. Under the assumption that the absorption time at the origin, $R,$ of the process starting from state $x$ goes to infinity in probability as $x \rightarrow \infty$, we show that the existence of a $\operatorname{qsd}$ is equivalent to $E_xe^{\lambda R} < \infty$ for some positive $\lambda$ and $x$. We also prove that a subsequence of $\Phi^n\delta_x$ converges to a minimal $\operatorname{qsd}$. For a birth and death process we prove that $\Phi^n\delta_x$ converges along the full sequence to the minimal $\operatorname{qsd}$. The method is based on the study of the renewal process with interarrival times distributed as the absorption time of the Markov process with a given initial measure $\mu$. The key tool is the fact that the residual time in that renewal process has as stationary distribution the distribution of the absorption time of $\Phi\mu$.
Article information Source Ann. Probab., Volume 23, Number 2 (1995), 501-521. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176988277 Digital Object Identifier doi:10.1214/aop/1176988277 Mathematical Reviews number (MathSciNet) MR1334159 Zentralblatt MATH identifier 0827.60061 JSTOR links.jstor.org Citation
Ferrari, P. A.; Kesten, H.; Martinez, S.; Picco, P. Existence of Quasi-Stationary Distributions. A Renewal Dynamical Approach. Ann. Probab. 23 (1995), no. 2, 501--521. doi:10.1214/aop/1176988277. https://projecteuclid.org/euclid.aop/1176988277
|
Can somebody point me towards a derivation using
statistical physics for the fact that the Helmholtz free energy $F$ is minimised at equilibrium for a canonical system at constant temperature and volume?
I'll sketch the derivation in a classical setting. Let's say we have a system with $N$ degrees of freedom, with $N$ large (disclaimer: I won't discuss the limit properly).
The partition function at temperature $T=1/(k_B\beta)$ is given by the sum over all the configurations $C$ of the system $$ Z = \sum_C \mathrm{e}^{-\beta E_C} = \int\mathrm{d}E\; \mathrm{e}^{-\beta E} \sum_C \delta(E-E_C) = \int\mathrm{d}E\; \mathcal{N}_E\mathrm{e}^{-\beta E}, $$
where $\mathcal{N}_E$ is the number of states at energy $E$.
The entropy is defined as $S=k_B \ln \mathcal{N}_E$, and I will assume that energy and entropy are extensive, so that the densities $\varepsilon = E/N$, $s=S/N$ are well defined in the thermodynamic limit. We can rewrite the partition function as $$ Z = \int\mathrm{d}E\; \mathrm{e}^{-\beta E+\ln\mathcal{N}_E} = \int \mathrm{d}\varepsilon N \mathrm{e}^{-\beta N (\varepsilon-T s)}\ . $$
For large $N$ we can evaluate the integral by Laplace's method, to obtain the free energy minimisation principle $$ f = -\lim_{N\to \infty}\frac{1}{\beta N} \ln Z = -\lim_{N\to \infty}\frac{1}{\beta N} \max_\varepsilon[-\beta N(\varepsilon-T s)] = \min_\varepsilon (\varepsilon-Ts). $$
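As a sanity check on the Laplace-method step, here is a small numerical sketch (my addition, not part of the original answer) for a toy model of $N$ independent two-level units with energies $0$ and $1$ and $k_B=1$, where both the exact free energy density and the minimisation over $\varepsilon$ are easy to compute:

import numpy as np
from scipy.optimize import minimize_scalar

beta = 2.0
T = 1.0 / beta

def s(eps):
    # entropy density of independent two-level units (k_B = 1)
    return -(eps * np.log(eps) + (1 - eps) * np.log(1 - eps))

res = minimize_scalar(lambda e: e - T * s(e), bounds=(1e-9, 1 - 1e-9), method='bounded')
f_exact = -np.log(1 + np.exp(-beta)) / beta   # exact -(1/(beta N)) ln Z for this model
print(res.fun, f_exact)                        # both are about -0.0635: min_eps(eps - T s) reproduces f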
|
My other answer addressed the question in the title. Here I will try to address the two queries at the end of the OP question. @ybeltukov was the first to point out the key issue, which is that
ContourPlot works only when it finds sample points where the values of the function are greater than the contour level and points where they are less than it.
ContourPlot, like other plot functions, starts with a 2D rectangular grid of sample points determined by
PlotPoints, which is then recursively subdivided up to
MaxRecursion times. Where or when a recursive subdivision occurs depends on the function and the contour levels. Some discussion of how the subdivision works can be found in the answers to Specific initial sample points for 3D plots.
I hope to show that the answer to the first question about why increasing
PlotPoints results in a worse graph is mainly due to the alignment of the sample points with a certain region, the extremely small region where the function is less than
0.001. The region is so small that misalignment is possible even when
PlotPoints is set well over
100.
Auxiliary functions
There are a couple of helper functions (
cpGrid,
cpShow) below that create
ContourPlots of a function that show how the plot domain is subdivided together with the sample points at which the values of the function are less than the contour level. (See "Code dump" below at the end.)
The OP's functions, which I will call
f1 and
f2:
ClearAll[f1, f2];
SetAttributes[f1, Listable];
SetAttributes[f2, Listable];
f1[x_, y_] := (Cos[x] + Cos[y]) Exp[I x y] Exp[x/2];
f2[a_, a0_, k_, K0_] :=
a^2 Sech[(a a0)/2]^2 (-2 I (1 + E^(2 I a0 k)) k +
a (-1 + E^(2 I a0 k)) Tanh[(a a0)/2]) +
2 k (I E^(I a0 k) ((a - k) (a + k) Cos[a0 k] + (a^2 + k^2) Cos[
a0 K0]) + a (-1 + E^(2 I a0 k)) k Tanh[(a a0)/2]);
ContourPlot sampling
The table of plots below shows
PlotPoints settings of 3, 4, 5 points (rows) and
MaxRecursion settings of 0, 1, 2 (columns). The initial sample points lie on a rectangular grid, but the actual subdivision is made of triangles with a SW-NE bias. This is shown below by the gray
Mesh lines. The red points are sample points where the value of the function
Abs@f1 lies below the level
0.001. The other intersections are sample points where the value of the function is above the level. Where a red point is next to a gray intersection, one may observe an increase in the number of subdivisions.
{cp, samp} = cpGrid[Abs@f1@## &, {x, 0, 4 Pi}, {y, 0, 4 Pi}, 0.001, {3, 4, 5}];
cpShow[cp, samp, Abs@f1@## &, 0.001] // GraphicsGrid
There are several things to observe. The zeroes of
f1 lie on the lines where $x \pm y$ is a multiple of $\pi$. For the domain $[0,4\pi]\times[0,4\pi]$, the sample points will happen to lie on these lines when
PlotPoints is of the form $4n+1$. So we see red points for 5 points with no subdivision, but not initially in the other plots. Below are some more examples showing the contour plots with the initial sample points (
MaxRecursion -> 0) that exhibit the $4n+1$ pattern. Compare 100 vs. 101 and 25 vs. 101.
GraphicsGrid@Table[
With[{tolerance = 0.001, pp = p + 4 j},
Show[ContourPlot[
Abs[f1[x, y]] == tolerance , {x, 0, 4 Pi}, {y, 0, 4 Pi},
PlotPoints -> pp, MaxRecursion -> 0, PlotRange -> All,
PlotLabel -> pp], PlotRange -> {{0, 4 Pi}, {0, 4 Pi}}]
],
{j, {1, 2, 20}}, {p, 20, 23}
]
Now a single subdivision of every triangle would transform a grid of $n$ by $n$ points into one with $2n-1$ plot points in each direction, which, if repeated, would eventually bring all grids into the form of $4n+1$ points on a side; however, depending on the function and contour level, some triangles are subdivided twice and some only once. For example, there is a pair at approximately
{5, 8} in the plot for 3 points, 1 subdivision, whose shared hypotenuse was not divided even though the midpoint is a zero of the function. I do not know how
ContourPlot decides whether to subdivide, but the lack of a division here makes a gap in the contour. (One can check that the contour does not connect these points. A mesh line connecting two red points seems to be interpreted as being connected by a region in which the value of the function is less than
0.001.) In the next step where
MaxRecursion increases from 1 to 2, it is subdivided and the gap is closed. One observes a similar gap in the plot for 4 points, 2 subdivisions around the coordinates
{8.5, 5}. That gap disappears with another increase to
MaxRecursion -> 3. In fact, I think that whatever the number of initial
PlotPoints, the contour will be drawn completely with
MaxRecursion -> 3 for the symmetric domain $[0,4\pi]\times[0,4\pi]$, but this depends precisely on the fact that the special symmetry leads to alignment of the sample points with the zeroes of
f1. For an odd number of initial
PlotPoints, one only needs
MaxRecursion -> 2 to get the whole contour, and for a number of the form $4n+1$, no recursion is necessary.
The dependence on the symmetry can be seen by perturbing the domain slightly:
ContourPlot[
Abs[f1[x, y]] == 0.001, {x, 0, 4 \[Pi] + #}, {y, 0, 4 \[Pi] + #},
MaxRecursion -> 3] & /@ {0., 0.001} // GraphicsRow
When the domain is extended to
16 Pi, the number of plot points needs to be of the form $16n+1$. The following is perhaps the simplest way to get the complete contour (
MaxRecursion -> 1 is needed to overcome the SW-NE bias and connect the NW-SE contours):
ContourPlot[Abs@f1[x, y] == 0.001, {x, 0, 16 Pi}, {y, 0, 16 Pi},
PlotPoints -> 17, MaxRecursion -> 1]
Thus the alignment of the grid with the region in $xy$ plane where
Abs[f1[x, y]] < 0.001 is the key to the somewhat odd dependence on the number of
PlotPoints.
The thinness of the region in which sample points must land
A single contour level divides the plane into two regions, one over which the value of the function is greater than the level and one over which the value of the function is less, in addition to the level set itself. Normally, if the function is not locally constant, the level set will be a curve. In the OP's examples, we have another sort of edge case, in which the desired contour curve lies at a minimum of the function.
When one of the two regions is extremely small, it is unlikely that many sample points will fall into it except "by accident" so to speak. This is what is happening with both of the OP's functions.
To give an illustration we can see, let's use a sequence of values for the level, say,
1.,
0.1,
0.01. Despite the images, the true regions are connected, but the width of the region was too narrow to contain enough (or any) points to define a boundary.
GraphicsRow[
 RegionPlot[Abs[f1[x, y]] <= #, {x, 0, 4 Pi}, {y, 0, 4 Pi},
   PlotPoints -> 40, MaxRecursion -> 3] & /@ {1, 0.1, 0.01}]
With[{a0 = 10, a = 1.4},
 GraphicsRow[
  RegionPlot[Abs[f2[a, a0, y, x]] <= #,
    {x, -2 Pi/a0, 2 Pi/a0}, {y, 0, 2},
    PlotPoints -> 40, MaxRecursion -> 3] & /@ {1, 0.1, 0.01}]]
For the second function, I think it proves to be not as well-behaved as it appears at first glance. It gets quite steep in half of the domain, which accounts for the narrowing of the region as the
y coordinate increases. (Note: My plot turns out to have a somewhat greater
PlotRange than that of the OP.)
With[{a0 = 10, a = 1.4},
Plot3D[Abs[f2[a, a0, y, x]], {x, -2 \[Pi]/a0, 2 \[Pi]/a0}, {y, 0, 2}, PlotPoints -> 100]]
Summary
I think the behavior of
ContourPlot can be accounted for in both cases by the extremely small area in the plot domain for which $f < 0.001$. In the case of the OP's first function, there is accidental and sporadic alignment of the zero set of $f1$ with the sample points, which accounts for why a plot with far fewer
PlotPoints can look better than one with more.
Code dump
For investigating contour plots.
cpGrid[f_, xdom_, ydom_, level_, pp0_List, mr0_: 2] :=
Module[{cp, samp},
cp = samp = Table[{}, {pp, Length@pp0}, {mr, 0, mr0}];
Do[{cp[[pp, 1 + mr]] = First@#,
samp[[pp, 1 + mr]] = Hold @@ Last@#} &@
Reap @ ContourPlot[
f[x, y],
{x, xdom[[-2]], xdom[[-1]]}, {y, ydom[[-2]], ydom[[-1]]},
Contours -> {level}, PlotPoints -> pp0[[pp]],
MaxRecursion -> mr, Mesh -> All, EvaluationMonitor :> Sow[{x, y}],
PlotLabel -> Row[{pp0[[pp]], " points, ", mr, " subdivisions"}]],
{pp, Length@pp0}, {mr, 0, mr0}];
{cp, samp}];
ClearAll[cpShow];
SetAttributes[cpShow, Listable];
cpShow[cp_, Hold[samp_], f_, level_] :=
Show[cp,
Graphics[{Red, PointSize[0.02], Point[Pick[samp, UnitStep[f @@@ samp - level], 0]]}]
]
|
Difference between revisions of "Main Page"
We are also collecting bounds for [[Fujimura's problem]], motivated by a [[hyper-optimistic conjecture]].
Here are some [[unsolved problems]] arising from the above threads.
Revision as of 20:26, 13 February 2009 The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
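For a concrete feel for these definitions, the following brute-force sketch (my addition, not part of the original page) enumerates the combinatorial lines of [math][3]^2[/math] and confirms the small value [math]c_2 = 6[/math]:

from itertools import product, combinations

n = 2
points = list(product((1, 2, 3), repeat=n))

# every template over {1, 2, 3, 'x'} with at least one wildcard gives a combinatorial line
lines = []
for template in product((1, 2, 3, 'x'), repeat=n):
    if 'x' not in template:
        continue
    lines.append({tuple(v if c == 'x' else c for c in template) for v in (1, 2, 3)})

def line_free(subset):
    s = set(subset)
    return not any(line <= s for line in lines)

c2 = max(k for k in range(len(points) + 1)
         if any(line_free(sub) for sub in combinations(points, k)))
print(len(lines), c2)   # 7 lines in [3]^2, and the largest line-free subset has size 6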
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Useful background materials
Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (final call)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
(700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station)
There is also a chance that we will be able to improve the known bounds on Moser's cube problem.
Here are some unsolved problems arising from the above threads.
Here is a tidy problem page.
Bibliography
Density Hales-Jewett
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
Behrend-type constructions
M. Elkin, "An Improved Construction of Progression-Free Sets ", preprint. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.
Triangles and corners
M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299 I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318 J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR 2047239
|
This will be a talk at the Conference in honor of Arthur W. Apter and Moti Gitik at Carnegie Mellon University, May 30-31, 2015. I am pleased to be a part of this conference in honor of the 60th birthdays of two mathematicians whom I admire very much.
Abstract. The weakly compact embedding property for a cardinal $\kappa$ is the assertion that for every transitive set $M$ of size $\kappa$ with $\kappa\in M$, there is a transitive set $N$ and an elementary embedding $j:M\to N$ with critical point $\kappa$. When $\kappa$ is inaccessible, this property is one of many equivalent characterizations of $\kappa$ being weakly compact, along with the weakly compact extension property, the tree property, the weakly compact filter property and many others. When $\kappa$ is not inaccessible, however, these various properties are no longer equivalent to each other, and it is interesting to sort out the relations between them. In particular, I shall consider the embedding property and these other properties in the case when $\kappa$ is not necessarily inaccessible, including interesting instances of the embedding property at cardinals below the continuum, with relations to cardinal characteristics of the continuum.
This is joint work of myself, Brent Cody, Sean Cox, and Thomas Johnstone.
Slides | Article | Conference web site
A. W. Apter, J. Cummings, and J. D. Hamkins, “Singular cardinals and strong extenders,” Central European J.~Math., vol. 11, iss. 9, pp. 1628-1634, 2013.
@article {ApterCummingsHamkins2013:SingularCardinalsAndStrongExtenders,
AUTHOR = {Apter, Arthur W. and Cummings, James and Hamkins, Joel David},
TITLE = {Singular cardinals and strong extenders},
JOURNAL = {Central European J.~Math.},
FJOURNAL = {Central European Journal of Mathematics},
VOLUME = {11},
YEAR = {2013},
NUMBER = {9},
PAGES = {1628--1634},
ISSN = {1895-1074},
MRCLASS = {03E55 (03E35 03E45)},
MRNUMBER = {3071929},
MRREVIEWER = {Samuel Gomes da Silva},
DOI = {10.2478/s11533-013-0265-1},
URL = {http://jdh.hamkins.org/singular-cardinals-strong-extenders/},
eprint = {1206.3703},
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
Brent Cody asked the question whether the situation can arise that one has an elementary embedding $j:V\to M$ witnessing the $\theta$-strongness of a cardinal $\kappa$, but where $\theta$ is regular in $M$ and singular in $V$.
In this article, we investigate the various circumstances in which this does and does not happen, the circumstances under which there exist a singular cardinal $\mu$ and a short $(\kappa, \mu)$-extender $E$ witnessing “$\kappa$ is $\mu$-strong”, such that $\mu$ is singular in $Ult(V, E)$.
A. W.~Apter, V. Gitman, and J. D. Hamkins, “Inner models with large cardinal features usually obtained by forcing,” Archive for Math.~Logic, vol. 51, pp. 257-283, 2012.
@article {ApterGitmanHamkins2012:InnerModelsWithLargeCardinals,
author = {Arthur W.~Apter and Victoria Gitman and Joel David Hamkins},
affiliation = {Mathematics, The Graduate Center of the City University of New York, 365 Fifth Avenue, New York, NY 10016, USA},
title = {Inner models with large cardinal features usually obtained by forcing},
journal = {Archive for Math.~Logic},
publisher = {Springer},
issn = {0933-5846},
keyword = {},
pages = {257--283},
volume = {51},
issue = {3},
url = {http://jdh.hamkins.org/innermodels},
eprint = {1111.0856},
archivePrefix = {arXiv},
primaryClass = {math.LO},
doi = {10.1007/s00153-011-0264-5},
note = {},
year = {2012},
}
We construct a variety of inner models exhibiting features usually obtained by forcing over universes with large cardinals. For example, if there is a supercompact cardinal, then there is an inner model with a Laver indestructible supercompact cardinal. If there is a supercompact cardinal, then there is an inner model with a supercompact cardinal $\kappa$ for which $2^\kappa=\kappa^+$, another for which $2^\kappa=\kappa^{++}$ and another in which the least strongly compact cardinal is supercompact. If there is a strongly compact cardinal, then there is an inner model with a strongly compact cardinal, for which the measurable cardinals are bounded below it and another inner model $W$ with a strongly compact cardinal $\kappa$, such that $H_{\kappa^+}^V\subseteq HOD^W$. Similar facts hold for supercompact, measurable and strongly Ramsey cardinals. If a cardinal is supercompact up to a weakly iterable cardinal, then there is an inner model of the Proper Forcing Axiom and another inner model with a supercompact cardinal in which GCH+V=HOD holds. Under the same hypothesis, there is an inner model with level by level equivalence between strong compactness and supercompactness, and indeed, another in which there is level by level inequivalence between strong compactness and supercompactness. If a cardinal is strongly compact up to a weakly iterable cardinal, then there is an inner model in which the least measurable cardinal is strongly compact. If there is a weakly iterable limit $\delta$ of ${\lt}\delta$-supercompact cardinals, then there is an inner model with a proper class of Laver-indestructible supercompact cardinals. We describe three general proof methods, which can be used to prove many similar results.
A. W.~Apter, J. Cummings, and J. D. Hamkins, “Large cardinals with few measures,” Proc.~Amer.~Math.~Soc., vol. 135, iss. 7, pp. 2291-2300, 2007.
@ARTICLE{ApterCummingsHamkins2006:LargeCardinalsWithFewMeasures,
AUTHOR = {Arthur W.~Apter and James Cummings and Joel David Hamkins},
TITLE = {Large cardinals with few measures},
JOURNAL = {Proc.~Amer.~Math.~Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {135},
YEAR = {2007},
NUMBER = {7},
PAGES = {2291--2300},
ISSN = {0002-9939},
CODEN = {PAMYAR},
MRCLASS = {03E35 (03E55)},
MRNUMBER = {2299507 (2008b:03067)},
MRREVIEWER = {Tetsuya Ishiu},
DOI = {10.1090/S0002-9939-07-08786-2},
URL = {http://jdh.hamkins.org/largecardinalswithfewmeasures/},
eprint = {math/0603260},
archivePrefix = {arXiv},
primaryClass = {math.LO},
file = F,
}
We show, assuming the consistency of one measurable cardinal, that it is consistent for there to be exactly $\kappa^+$ many normal measures on the least measurable cardinal $\kappa$. This answers a question of Stewart Baldwin. The methods generalize to higher cardinals, showing that the number of $\lambda$-strong compactness or $\lambda$-supercompactness measures on $P_\kappa(\lambda)$ can be exactly $\lambda^+$, if $\lambda>\kappa$ is a regular cardinal. We conclude with a list of open questions. Our proofs use a critical observation due to James Cummings.
A. W.~Apter and J. D. Hamkins, “Exactly controlling the non-supercompact strongly compact cardinals,” J.~Symbolic Logic, vol. 68, iss. 2, pp. 669-688, 2003.
@ARTICLE{ApterHamkins2003:ExactlyControlling,
AUTHOR = {Arthur W.~Apter and Joel David Hamkins},
TITLE = {Exactly controlling the non-supercompact strongly compact cardinals},
JOURNAL = {J.~Symbolic Logic},
FJOURNAL = {The Journal of Symbolic Logic},
VOLUME = {68},
YEAR = {2003},
NUMBER = {2},
PAGES = {669--688},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E35 (03E55)},
MRNUMBER = {1976597 (2004b:03075)},
MRREVIEWER = {A.~Kanamori},
doi = {10.2178/jsl/1052669070},
eprint = {math/0301016},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://wp.me/p5M0LV-2x},
}
We summarize the known methods of producing a non-supercompact strongly compact cardinal and describe some new variants. Our Main Theorem shows how to apply these methods to many cardinals simultaneously and exactly control which cardinals are supercompact and which are only strongly compact in a forcing extension. Depending upon the method, the surviving non-supercompact strongly compact cardinals can be strong cardinals, have trivial Mitchell rank or even contain a club disjoint from the set of measurable cardinals. These results improve and unify previous results of the first author.
A. W.~Apter and J. D. Hamkins, “Indestructibility and the level-by-level agreement between strong compactness and supercompactness,” J.~Symbolic Logic, vol. 67, iss. 2, pp. 820-840, 2002.
@ARTICLE{ApterHamkins2002:LevelByLevel,
AUTHOR = {Arthur W.~Apter and Joel David Hamkins},
TITLE = {Indestructibility and the level-by-level agreement between strong compactness and supercompactness},
JOURNAL = {J.~Symbolic Logic},
FJOURNAL = {The Journal of Symbolic Logic},
VOLUME = {67},
YEAR = {2002},
NUMBER = {2},
PAGES = {820--840},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E35 (03E55)},
MRNUMBER = {1905168 (2003e:03095)},
MRREVIEWER = {Carlos A.~Di Prisco},
DOI = {10.2178/jsl/1190150111},
URL = {http://wp.me/p5M0LV-2i},
eprint = {math/0102086},
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
Can a supercompact cardinal $\kappa$ be Laver indestructible when there is a level-by-level agreement between strong compactness and supercompactness? In this article, we show that if there is a sufficiently large cardinal above $\kappa$, then no, it cannot. Conversely, if one weakens the requirement either by demanding less indestructibility, such as requiring only indestructibility by stratified posets, or less level-by-level agreement, such as requiring it only on measure one sets, then yes, it can.
A. W.~Apter and J. D. Hamkins, “Indestructible weakly compact cardinals and the necessity of supercompactness for certain proof schemata,” Math.~Logic Q., vol. 47, iss. 4, pp. 563-571, 2001.
@ARTICLE{ApterHamkins2001:IndestructibleWC,
AUTHOR = {Arthur W.~Apter and Joel David Hamkins},
TITLE = {Indestructible weakly compact cardinals and the necessity of supercompactness for certain proof schemata},
JOURNAL = {Math.~Logic Q.},
FJOURNAL = {Mathematical Logic Quarterly},
VOLUME = {47},
YEAR = {2001},
NUMBER = {4},
PAGES = {563--571},
ISSN = {0942-5616},
MRCLASS = {03E35 (03E55)},
MRNUMBER = {1865776 (2003h:03078)},
DOI = {10.1002/1521-3870(200111)47:4%3C563::AID-MALQ563%3E3.0.CO;2-%23},
URL = {http://jdh.hamkins.org/indestructiblewc/},
eprint = {math/9907046},
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
We show that if the weak compactness of a cardinal is made indestructible by means of any preparatory forcing of a certain general type, including any forcing naively resembling the Laver preparation, then the cardinal was originally supercompact. We then apply this theorem to show that the hypothesis of supercompactness is necessary for certain proof schemata.
A. W.~Apter and J. D. Hamkins, “Universal indestructibility,” Kobe J.~Math, vol. 16, iss. 2, pp. 119-130, 1999.
@article {ApterHamkins99:UniversalIndestructibility,
AUTHOR = {Arthur W.~Apter and Joel David Hamkins},
TITLE = {Universal indestructibility},
JOURNAL = {Kobe J.~Math},
FJOURNAL = {Kobe Journal of Mathematics},
VOLUME = {16},
YEAR = {1999},
NUMBER = {2},
PAGES = {119--130},
ISSN = {0289-9051},
MRCLASS = {03E55 (03E35)},
MRNUMBER = {1745027 (2001k:03112)},
MRNUMBER = {1 745 027},
eprint = {math/9808004},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://wp.me/p5M0LV-12},
}
From a suitable large cardinal hypothesis, we provide a model with a supercompact cardinal in which universal indestructibility holds: every supercompact and partially supercompact cardinal $\kappa$ is fully indestructible by $\kappa$-directed closed forcing. Such a state of affairs is impossible with two supercompact cardinals or even with a cardinal which is supercompact beyond a measurable cardinal.
|
Well this seems like a basic principle, yet I can't seem to get it. (We're expected to "know" this already.)
In a three phase situation I'm given a source voltage of 230V. - So the waveform of each of the phases would be: \$ v_s = \sqrt2 \cdot 230 \cdot \sin(\omega t + \theta_i)\$
Where \$\theta_i\$ is \$0, \tfrac{2}{3} \pi, \tfrac{4}{3} \pi\$ for each phase.
So now I could calculate the line to line voltage by the formula: $$v_{ll} = 2 \cdot \left ( \sqrt2 \cdot 230 \cdot \sin(\tfrac{2}{3} \pi) \right)$$
Is this correct?
|
Many define the likelihood of the data as something like $\prod_{x} p(x|\theta)$, others as $p(x|\theta)$. Is the likelihood defined for one sample point/data element (like one document from a collection of documents or one sentence from a collection of sentences), for the whole collection of data elements, or for both?
Is the likelihood in the Expectation Maximization (EM) algorithm $L(\theta|X)$ https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm#Description and in the Maximum Likelihood Estimation (MLE) algorithm https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#Principles considered to be taken from one data element or from the whole collection of data elements? By the way, in the EM link, why is the likelihood written as $L(\theta|X)=P(X|\theta)$? I know that capital $X$ means a random variable in probability, but the likelihood is defined on the data, so shouldn't they write either $P(x|\theta)$ or $P(X=x|\theta)$?
A short answer that works in general: these terms are sort of loose, and can always mean whatever is the most useful in a particular context. If you have a whole bunch of samples, why not use them all and consider $X$ to be the collection of all samples. If you have only a single sample, then that's all you can use. You are free to do whatever you want.
But to answer your questions more directly:
You give two expressions for likelihood: $\prod_x p(x\mid\theta)$ and $p(x\mid\theta)$. The second expression is just the conditional probability of a point $x$ given a choice of parameters $\theta$, and that's how it should be interpreted. The former expression is the product of many of these conditional probabilities, or in other words it is the conditional probability of many datapoints $X=\{x\}$ assuming that they are independent. In general, if we didn't know anything about independence of the datapoints, we would have to work with the joint distribution $p(X\mid\theta)$, but given that $x$ are independent we can factor the distribution into the product $\prod_x p(x\mid\theta)$. The independence assumption is common so that's probably why you saw it.
I hinted at this in 1, but people often use lowercase $x$ to refer to a single sample, and capital $X$ to refer to the whole collection of data in this context of likelihoods of data. (Often in a more theoretical context a capital $X$ would be a random variable; the capital here comes from the convention that matrices are capital letters.) So as a rule of thumb I'd say $L(\theta\mid X)$ would probably be the likelihood of the parameters given a whole dataset. But again, you are free to do what you want.
Again, it is not necessarily true that capital $X$ means a random variable. It might be true in that context, but you should always pay attention to the context and the usage of the authors you're interested in. Mathematicians are loose with notation and that is what it is. I guess my moral here is not to always assume that the same letters mean the same things in different papers. For instance, on that EM page, they are doing something rather complicated, which is taking an expectation against the probability measure of the entire dataset. This object is rather abstract and hard to conceive of, and I think it's best to understand EM concretely by working through a particular case -- it's really a whole family of algorithms and describing it in general tends to be vague.
Answers to questions in comments:
I can say that for sure, $P(X=x\mid\theta)$ tends to refer to the probability that a random variable $X$ takes on a value $x$ given the parameters $\theta$. But in $P(X\mid\theta)$, $X$ might be a random variable or a dataset. Again, these things should always be clear in a given case, so I think it's not worth worrying about these generalities. In the case where $X$ is a random variable, $P(X\mid\theta)$ is probably referring to the entire distribution rather than a particular probability, so I might be inclined to interpret it as a probability distribution (a function(al)) rather than a probability (a number) if I came across it.
As for the notation on Wikipedia that you asked about -- certainly it can be both, and MLEs will improve in accuracy with more data, so you might as well think about it as the whole dataset. But Wikipedia tends to have really strange and inconsistent notation since it is written collaboratively by random people with different backgrounds, so I would really not stress the notation on that site. In particular, if you're trying to learn these things, don't do it on Wikipedia -- get a textbook. Maybe "Elements of Statistical Learning" or another classic text -- I think that's outside the scope of this question.
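To make the factorization in point 1 concrete, here is a minimal sketch (my addition, with a made-up Gaussian toy dataset) contrasting the single-point likelihood with the dataset log-likelihood under independence:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.0, size=100)   # assumed toy dataset
mu, sigma = 2.0, 1.0                           # the parameters theta

L_single = norm.pdf(X[0], mu, sigma)              # likelihood of one sample x
logL_dataset = np.sum(norm.logpdf(X, mu, sigma))  # log-likelihood of the whole dataset, assuming independence
print(L_single, logL_dataset)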
|
Let $n$ be the original number and $m$ the number after a digit is deleted. Suppose that the deleted digit is $d$, that $r$ is the number represented by the $k$ digits to the right of $d$, and that $\ell$ is the number represented by the digits to the left of $d$, so that $m=10^k\ell+r$, and $n=10^{k+1}\ell+10^kd+r$.
Suppose first that $n=(10-s)m$, where $1\le s\le 8$. Then
$$10^{k+1}\ell+10^kd+r=10^{k+1}\ell-10^ks\ell+(10-s)r\;,$$
so $(9-s)r=10^k(d+s\ell)$. Now $(9-s)r$ has at most $k+1$ digits, and $10^k(d+s\ell)$ has at least $k+1$ digits, so in fact $d+s\ell$ is a single digit, and $(9-s)r$ has $k+1$ digits. Thus, $\ell$ is a single digit, and we must have $k\ge 2$ (since $n$ has at least $4$ digits). If $s=4$, then $r=20\cdot10^{k-2}(d+s\ell)$ ends in $0$, which is impossible. Otherwise $25$ divides $(9-s)r$ and is relatively prime to $9-s$, so $25\mid r$, and since $n$ has no zero digit, $r$ must end in $25$ or $75$. (Of course the last two digits of $r$ are the last two digits of $m$ and $n$ as well.)
Now suppose that $\frac{n}m>10$. If $d$ is not the first digit of $n$, then $11\le\frac{n}m\le 19$. Suppose that $n=(10+s)m$, where $1\le s\le 9$. Then
$$10^{k+1}\ell+10^kd+r=10^{k+1}\ell+10^ks\ell+(10+s)r\;,$$
so $(9+s)r=10^k(d-s\ell)$. Clearly $d>s\ell$, so $\ell$ is a single digit, and $k\ge 2$. Thus, $25\mid(9+s)r$. If $s=6$, then $3r=20\cdot10^{k-2}(d-6\ell)$, so $10\mid r$, which is impossible. Otherwise, $25\mid r$, and we’re done, as before.
Finally, suppose that $d$ is the first digit of $n$, so $n=10^kd+m$. Suppose further that $n=sm$, so that $10^kd+m=sm$, and $10^kd=(s-1)m$. Now $k\ge 3$, so $125\mid 10^k$, and hence $25\mid m$ (and we’re done, as before) unless $25\mid s-1$. Clearly $s<100$, so we need only worry about the cases $s=26$, $s=51$, and $s=76$. In those cases we have $40d=m$, $40d=2m$, and $40d=3m$, respectively, and in each case $10\mid m$, which is impossible.
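Since the problem statement itself is not quoted here, the following brute-force check (my addition) verifies the conclusion under the assumption that the claim being proved is: if a number with at least four digits and no zero digit remains divisible by the result of deleting one of its digits, then it ends in 25 or 75.

def counterexamples(limit=10**6):
    bad = []
    for n in range(1000, limit):
        s = str(n)
        if '0' in s:
            continue
        for i in range(len(s)):
            m = int(s[:i] + s[i + 1:])   # n with the digit at position i deleted
            if m > 0 and n % m == 0 and s[-2:] not in ('25', '75'):
                bad.append((n, m))
    return bad

print(counterexamples())   # prints []: no counterexamples below 10^6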
|
Advances in Differential Equations Adv. Differential Equations Volume 13, Number 7-8 (2008), 753-780. Entire solutions and global bifurcations for a biharmonic equation with singular non-linearity in $\Bbb R^3$ Abstract
We study the structure of solutions of the boundary-value problem \begin{equation} \tag*{(0.1)} \Delta^2 u=\frac{\lambda}{(1-u)^2} \;\; \mbox{in $B$}, \;\;\; u=\Delta u=0 \;\; \mbox{on $\partial B$} , \end{equation} where $\Delta^2$ is the biharmonic operator and $B \subset \mathbb R^3$ is the unit ball. We show that there are infinitely many turning points of the branch of the radial solutions of (0.1). The structure of solutions depends on the classification of the radial solutions of the equation \begin{equation} \tag*{(0.2)} -\Delta^2 u=u^{-2} \;\;\; \mbox{in $\mathbb R^3$}. \;\; \end{equation} This is in sharp contrast with the corresponding result in $\mathbb R^2$.
Article information Source Adv. Differential Equations, Volume 13, Number 7-8 (2008), 753-780. Dates First available in Project Euclid: 18 December 2012 Permanent link to this document https://projecteuclid.org/euclid.ade/1355867335 Mathematical Reviews number (MathSciNet) MR2479029 Zentralblatt MATH identifier 1203.35018 Subjects Primary: 35J40: Boundary value problems for higher-order elliptic equations Secondary: 35B45: A priori estimates 35J60: Nonlinear elliptic equations 47J15: Abstract bifurcation theory [See also 34C23, 37Gxx, 58E07, 58E09] Citation
Guo, Zongming; Wei, Juncheng. Entire solutions and global bifurcations for a biharmonic equation with singular non-linearity in $\Bbb R^3$. Adv. Differential Equations 13 (2008), no. 7-8, 753--780. https://projecteuclid.org/euclid.ade/1355867335
|
I am trying to get the maximum likelihood estimate for the parameter $p$. The distribution is the following:
$$ f(x\mid p) = \begin{cases} \frac{p}{x^2} &\text{for} \ p\leq x < \infty \ \\ 0 &\text{if not} \end{cases} $$
The sample has size $n$.
The problem is, when I try to estimate it by the procedure I know, I would have to estimate the likelihood function and obtain the derivative of the log-likelihood. We'd have:
$$ L(p; x) = \frac{p^n}{\prod_{i=1}^{n} x^2_i}$$ $$ \ln L(p;x) = n \ln(p) - \sum_{i=1}^{n} \ln(x_i^2) $$
For the derivative:
$$ l'(p;x) = \frac{n}{p} = 0$$
And I am stuck because it has no solution for $p$. How do I evaluate this?
Thanks!
EDIT:
So in this case I can use the indicator variable to write:
$$ L(p;x) = \frac{p^n}{\prod_{i=1}^{n}x_i^2} I_{(x_i \geq p)}$$
for $i = 1,2, \ldots, n$ in the indicator variable. So the observed value of $x \in X$ "closest" to $p$ is $\min(X_1, \ldots, X_n)$. Is that the point?
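A quick simulation sketch (my addition) of this distribution, using the sample minimum as the candidate estimator suggested by the indicator constraint:

import numpy as np

rng = np.random.default_rng(42)
p_true, n = 3.0, 1000

# F(x) = 1 - p/x for x >= p, so X = p / (1 - U) with U ~ Uniform(0, 1)
x = p_true / (1.0 - rng.random(n))

p_hat = x.min()   # candidate MLE: the likelihood grows in p subject to p <= min(x_i)
print(p_hat)      # slightly above p_true, approaching it as n grows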
|
Let $f:[0,1]\to[1,3]$ be continuous. Prove $$1 \leq \int_0^1 f(x)\,\mathrm dx \int_0^1 \frac{1}{f(x)}\, \mathrm dx\leq \frac{4}{3}.$$
The left is just Cauchy's inequality with integral form, but what's the right?
Since $(f(x)-1)(f(x)-3)\leq 0$ on $[1,3]$, we have $f(x) + 3/f(x) \leq 4$ for all $x$, so $$ \int_0^1 f(x)dx + 3\int_0^1 \frac{1}{f(x)}dx \leq 4 $$ Apply the AM-GM inequality to the LHS to get the result.
The equality holds when the value of $f$ is $1$ on half of $[0,1]$ and $3$ on the other half, though such an $f$ is not continuous.
Thanks for @JWL's hint. I will write down an entire proof for the problem.
Please correct me if I'm wrong.
As for the left part of the inequality we want to prove, denote $$g(x):=\int_0^x f(t)dt\int_0^x\frac{1}{f(t)}dt-x^2,$$ then it is equivalent to $g(1)\geq 0$. Notice that $g(0)=0$, and $$\begin{aligned} g'(x)&=f(x)\int_0^x\frac{1}{f(t)}dt+\frac{1}{f(x)}\int_0^x f(t)\,dt-2x\\&=\int_0^x \left(\frac{f(x)}{f(t)}+\frac{f(t)}{f(x)}\right) dt-2x\\&\geq\int_0^x 2\,dt-2x\\&=0,\end{aligned}$$ which shows that $g(x)$ is monotonically increasing. Thus $g(1)\geq g(0)=0$, which is just what we want.
Let's turn to tackle the right-hand part. Notice that, under the condition $f(x) \in [1,3]$, $$ f(x)+\frac{3}{f(x)}\leq 4.$$ Therefore $$\int_0^1 \left(f(x)+\frac{3}{f(x)}\right)dx \leq \int_0^1 4\, dx=4,$$ which is $$\int_0^1 f(x)dx+3\int_0^1\frac{1}{f(x)}dx \leq 4.$$ As per the AM-GM inequality, we obtain $$\int_0^1 f(x)dx+3\int_0^1\frac{1}{f(x)}dx\geq 2 \left[\int_0^1 f(x)dx \cdot 3\int_0^1\frac{1}{f(x)}dx\right]^{1/2},$$ which gives $$\int_0^1 f(x)dx \cdot \int_0^1\frac{1}{f(x)}dx\leq \frac{4}{3}.$$
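A small numerical illustration (my addition) of the near-equality case mentioned above: a steep but continuous switch between the values 1 and 3 pushes the product of the two integrals close to $4/3$.

import numpy as np

x = np.linspace(0.0, 1.0, 200001)
f = 2.0 + np.tanh(2000.0 * (x - 0.5))   # roughly 1 on [0, 1/2) and 3 on (1/2, 1], but continuous

I1 = f.mean()            # approximates the integral of f over [0, 1] on the uniform grid
I2 = (1.0 / f).mean()    # approximates the integral of 1/f over [0, 1]
print(I1 * I2)           # about 1.333, just below 4/3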
|
We know (for instance : https://en.wikipedia.org/wiki/Poisson%27s_ratio ) that the Poisson ratio is $\nu=1/2-Y/(6B)$, with $Y$ the Young modulus and $B$ the bulk modulus.
Let's assume we have an energy density $f(r)$ that depends on the distance $r$ between 2 particles of the solid. It can thus also be written as depending on the volume fraction of the solid $\phi$, or on the mass density $\rho$ for a homogeneous solid, with relations $\phi=\phi(r)$, $\rho=\rho(r)$...
The potential we're talking about has a minimum at $r_0$ (or $\phi_0$, $\rho_0$), where $f'(r)=0$.
The energy for a volume $v$ will be $F(v,r)=v f(r)$. For a cubic crystal, $v=r_0^3$.
If we assume free stress boundary condition, the osmotic pressure is zero, and $f=\rho df/d\rho=0$.
The Young modulus
If we deform a small volume $r_0^3$, the Young modulus will be (see http://www.animations.physics.unsw.edu.au/jw/elasticity.htm for instance): $Y=r_0^2\frac{d^2f}{dr^2}\Big|_{r=r_0}$.
The Bulk modulus
Similarly, one can compute the bulk modulus for a simple cubic crystal (see for instance https://eis.hu.edu.jo/ACUploads/10010/Cohesive%20energy-2.pdf page 7): $B=\frac{r_0^2}{9}\frac{d^2f}{dr^2}\Big|_{r=r_0}$
The Poisson's ratio
It comes out $\nu=-1$ !!!! What's the error ???
|
What you are looking for is the correspondence between algebraic Hecke characters over a number field $F$ and compatible families of $l$-adic characters of the absolute Galois group of $F$. This is laid out beautifully in the first section of Laurent Fargues's notes here.
EDIT: In more detail, as Kevin notes in the comments above, an automorphic representation of $GL(1)$ over $F$ is nothing but a Hecke character; that is, a continuous character$$\chi:F^\times\setminus\mathbb{A}_F^\times\to\mathbb{C}^\times$$of the idele class group of $F$. You can associate $L$-functions to these things: they admit analytic continuation and satisfy a functional equation. This is the automorphic side of global Langlands for $GL(1)$.
How to go from here to the Galois side? Well, let's start with the local story. Fix some prime $v$ of $F$; then the automorphic side is concerned with characters$$\chi_v:F_v^\times\to\mathbb{C}^\times$$Local class field theory gives you the reciprocity isomorphism$$rec_v:W_{F_v}\to F_v^\times,$$where $W_{F_v}$ is the Weil group of $F_v$. Then $\chi_v\circ rec_v$ gives you a character of $W_{F_v}$. This is local Langlands for $GL(1)$. The matching up of local $L$-functions and $\epsilon$-factors is basically tautological.
We return to our global Hecke character $\chi$. Recall that global class field theory can be interpreted as giving a map (the Artin reciprocity map)$$Art_F:F^\times\setminus\mathbb{A}_F^\times\to Gal(F^{ab}/F),$$where $F^{ab}$ is the maximal abelian extension of $F$. Local-global compatibility here means that, for each prime $v$ of $F$, the restriction $Art_F\vert_{F_v^\times}$ agrees with the inverse of the local reciprocity map $rec_v$.
Since $Art_F$ is not an isomorphism, we do not expect every Hecke character to be associated with a Galois representation. What is true is that $Art_F$ induces an isomorphism from the group of connected components of the idele class group to $Gal(F^{ab}/F)$. In particular, any Hecke character with
finite image will factor through the reciprocity map, and so will give rise to a character of $Gal(F^{ab}/F)$. This is global Langlands for Dirichlet characters (or abelian Artin motives).
But we can say more, supposing that we have a certain algebraicity (or arithmeticity) condition on our Hecke character $\chi$ at infinity. The notes of Fargues referenced above have a precise definition of this condition; I believe the original idea is due to Weil. The basic idea is that the obstruction to $\chi$ factoring through the group of connected components of the idele class group (and hence through the abelianized Galois group) lies entirely at infinity. The algebraicity condition lets us "move" this persnickety infinite part over to the $l$-primary ideles (for some prime $l$), at the cost of replacing our field of coefficients $\mathbb{C}$ by some finite extension $E_\lambda$ of $\mathbb{Q}_l$. This produces a character
$$\chi_l:F^\times\setminus\mathbb{A}_F^\times\to E_\lambda^\times$$
that shares its local factors away from $l$ and $\infty$ with $\chi$, but now factors through $Art_F$. Varying over $l$ gives us a compatible family of $l$-adic characters associated with our automorphic representation $\chi$ of $GL(1)$. The $L$-functions match up since their local factors do.
|
I'm going to combine two criteria to find the wall thickness (the only unknown here).
The first criterion is based on yielding of the wall material prior to failure by crack formation and propagation, so that the inspection team can observe the plastic deformation and the pressure within the tank can be released before total failure of the tank. So materials with a large critical crack length are suitable; what you can do is rank the materials by higher fracture toughness. How? See further below.
The second criterion is leak before break: the cracks may grow and penetrate through the wall thickness, but they never propagate rapidly; again the inspector sees the leaking of liquid, and catastrophic failure can be prevented.
Here I assume the condition of plane stress, that the pressure vessel is in one whole piece, and I neglect whatever shape the nozzles introduce on the tank. So: a simple cylindrical vessel with spherical heads and no stress risers. The volume of the tank and the pressure are known.
The stresses caused by the internal pressure are: $$\begin{bmatrix} \sigma_{rrp} & 0 & 0 &\\ 0 & \sigma_{\theta\theta p} & 0 & \\ 0 & 0 & 0 & \end{bmatrix}= \frac{pr}{2t}\begin{bmatrix} 1 & 0 & 0 &\\ 0 & 2 & 0 & \\ 0 & 0 & 0 & \end{bmatrix}$$
You can find $\sigma_{rr p}$ and $\sigma_{\theta\theta p}$ by solving the ODE (which you can obtain by writing the equilibrium equations for the free-body diagram) or simply by using the Lamé equations.
The thermal effect is also important. You didn't specify the outside temperature, so I can't go into details now, but it would introduce new stresses $\sigma_{rrt}$ and $\sigma_{\theta\theta t}$; the subscript $t$ stands for thermal.
Now we are dealing with a complex stress state, so I choose to work with the von Mises criterion, and I call the result of the von Mises yield criterion $\sigma_v$.
Let's use our criteria to find the wall thickness. The first design criterion (yielding before failure): I borrow this simple equation from fracture mechanics: $$K_{Ic} = Y\sigma \sqrt{\pi a} \qquad (1)$$
$K_{Ic}$ is called the plane-strain fracture toughness, $Y$ is a dimensionless parameter with $Y \cong 1.1$, $a$ is half of the crack length, and $\pi$ is the mathematical constant. Before we use the expression above, I want to make a small modification: I replace $\sigma$ with $\sigma_y$ (the yield strength) and divide it by a safety factor $N$. Now solving for $a$: $$a_c = \frac{N^2}{\pi Y^2} \left(\frac{K_{Ic}}{\sigma_y}\right)^2.$$ Having found the critical crack length, we can rank the materials using the ratio $\frac{K_{Ic}}{\sigma_y}$ (it is a material property). How to find $a_c$? That's not something whose value you can specify exactly at this moment, but we should estimate it; it is practically the wall thickness.
After choosing a material, we use $\sigma_v$ (it contains the wall thickness), replace $a$ in equation (1) with the wall thickness $t$ and $\sigma$ with $\sigma_v$, and solve for the wall thickness. The wall thickness that we find this way should be equal to or slightly bigger than the one we estimated before. If not, go back, improve the estimate, and repeat the last step until you get it right.
This is the most naive and simplified version of designing the pressure vessel, based on the information provided in the question.
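To show how the two steps fit together numerically, here is a rough sketch (my addition); every material and load value below is an assumed placeholder, not data from the question, and the von Mises stress uses the principal stresses $(pr/2t,\ pr/t,\ 0)$ implied by the thin-wall stress state above.

import math

K_Ic = 98.9e6     # plane-strain fracture toughness [Pa*sqrt(m)] (assumed value)
sigma_y = 910e6   # yield strength [Pa] (assumed value)
N = 2.0           # safety factor (assumed)
Y = 1.1           # dimensionless geometry factor from equation (1)
p = 5.0e6         # internal pressure [Pa] (assumed)
r = 0.5           # vessel radius [m] (assumed)

# Step 1 (yielding before failure): critical crack length
a_c = N**2 / (math.pi * Y**2) * (K_Ic / sigma_y)**2

# Step 2 (leak before break): sigma_v = sqrt(3) * p * r / (2 t); setting
# K_Ic = Y * sigma_v * sqrt(pi * t) and solving for t gives
t = 3.0 * math.pi * (Y * p * r / (2.0 * K_Ic))**2

print(f"a_c = {a_c * 1e3:.1f} mm (critical crack length), t = {t * 1e3:.1f} mm (wall thickness)")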
|
To answer your first question, one way to define (co)homology with local coefficients is the following.
Let $X$ be a space, let $\Pi(X)$ be the fundamental groupoid of $X$, i.e. a category with objects points of $X$ and morphisms $x \rightarrow y$ given by homotopy classes of paths. A
local coefficient system $M$ on $X$ is a functor $M: \Pi(X) \rightarrow \mathcal{A}b$ from the fundamental groupoid to the category of abelian groups. In particular, $M$ associates a "group of coefficients" $M(x)$ to every point of $x \in X$.
Associated to $M$ is the singular complex given by
$C _{n}(X, M)= \bigoplus _{\sigma \in Sing_{n}(X)} M(\sigma(1,0,\ldots,0))$,
where $Sing _{n}(X)$ is the set of all maps $\Delta ^{n} \rightarrow X$. The differential can be defined using the fact that the groups $M(x)$ are functorial with respect to paths in $X$. Observe that this is very similar to the usual definition of the singular complex with coefficients in an abelian group $A$, which would be
$C _{n}(X, A) = \bigoplus _{\sigma \in Sing_{n}(X)} A$,
except in the "non-local" case, we count the occurrences of any $\sigma: \Delta^{n} \rightarrow X$ in a given chain using the same group $A$, and in the local case we use $M(\sigma(1, 0, \ldots, 0))$, which might be different for different $\sigma$.
This is the
locality ( localness?) in the name, which should be contrasted with globality of usual homology with coefficients, where the choice of the group $A$ is global and the same for all points.
The definition I have given above is enlightening but perhaps not suitable for computations. Luckily under rather weak assumptions one can use the definition you allude to. Let me explain. Let $X$ be path-connected, let $x \in X$ and consider $\pi = \pi_{1}(X, x)$ as a category with one object and morphisms the elements of the group.
The obvious inclusion $\pi \hookrightarrow \Pi(X)$ is an equivalence of categories and so the functor categories $[\pi, \mathcal{A}b], [\Pi(X), \mathcal{A}b]$ are equivalent, too. But the left functor category is exactly the category of $\mathbb{Z}[\pi]$-modules! In particular, we have a bijection between isomorphism classes of local coefficient systems on $X$ and $\mathbb{Z}[\pi]$-modules.
If $X$ is nice enough to admit a universal cover $\tilde{X}$, the above allows us to give another definition of homology with local coefficients, the one you know. Let $M$ be a local coefficient system and let $M^\prime$ be the associated $\mathbb{Z}[\pi]$-module under the above equivalence (which is unique up to a unique isomorphism). Since $\pi$ acts on $\tilde{X}$, it also acts on $C _{\bullet}(\tilde{X}, \mathbb{Z})$ and so the latter is a chain complex of $\mathbb{Z}[\pi]$-modules. We can then define homology with local coefficients to be the homology of the complex
$C_{n}(X, M ^\prime) = C_{n}(\tilde{X}, \mathbb{Z}) \otimes _{\mathbb{Z}[\pi]} M^\prime$
One can show that these two definitions agree, that is, for $X$ as above we have an isomorphism $H_{n}(X, M) \simeq H_{n}(X, M^\prime)$.
|
You are blindfolded and disoriented, standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?
Assume the earth is flat and the Great Wall is infinitely long and straight.
$\DeclareMathOperator{\arcsec}{arcsec}$
For each possible orientation of the wall (relative to some arbitrary initial orientation), the point on the wall closest to our starting point is a distance $1$ away. The collection of the closest points for all possible orientations of the wall form a circle of radius $1$ around our starting point.
If we move a distance $r>1$ away from the initial point, we intersect two orientations of the wall that are an angle $\theta$ apart. In order to reach that point we must have crossed all of the orientations in that angle. In the figure below on the left, those "explored" points are marked by a magenta line.
By trigonometry we can show that $\theta = 2\arcsec r$. If we traverse the path shown on the right side of the above figure, we travel a worst-case distance of:
$$ r + r(2\pi - \theta) \\ r + 2r(\pi - \arcsec r) $$
This distance is minimized when $r\approx 1.04356$ for a worst-case distance of $6.99528$, an improvement of about $3.95\%$
However, looking at the figure we can immediately see that the majority of the large circular arc is "wasted" distance. Only the ends contribute to additional "explored" points. If we shrink-wrap the rest of the path around the unit circle, we get the following path:
The worst-case distance of this path is:
$$ r + 2\sqrt{r^2-1} + (2\pi - 2\theta) \\ r + 2\left(\sqrt{r^2-1} + \pi - 2\arcsec r\right) $$
This happens to be minimized for $r = \sqrt{\frac{15-\sqrt{33}}{6}} \approx 1.24200$ (not the distance shown in the figure), for a worst-case distance of:
$$ \sqrt{\frac{9+\sqrt{33}}{2}}+4\arctan \sqrt{\frac{9+\sqrt{33}}{8}} \approx 6.45891 $$
an improvement of $11.32\%$.
Thanks to Michael Seifert for pointing out that we can do better by letting the radii of the start and end be different, in which case we have the distance:
$$ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - \theta_1 - \theta_2 \\ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - 2\arcsec r_1 - 2\arcsec r_2 $$
Which is minimized by $r_1=2/\sqrt{3},\ r_2=\sqrt{2}$ (with $\theta_1=\pi/3,\ \theta_2=\pi/2$):
(Because of the nice angles, this picture is exactly to scale.) The worst-case distance here is simply
$$ \frac{2}{\sqrt{3}} + \frac{1}{\sqrt{3}} + \frac{2\pi}{3} + \frac{\pi}{2} + 1 \\ = 1 + \sqrt{3} + \frac{7\pi}{6} $$
(a $12.16\%$ improvement.)
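A quick numerical check (my addition) of both minimizations, using the worst-case path-length formulas quoted above:

import numpy as np
from scipy.optimize import minimize_scalar, minimize

arcsec = lambda r: np.arccos(1.0 / r)

# symmetric case: r + 2*(sqrt(r^2 - 1) + pi - 2*arcsec(r))
sym = lambda r: r + 2.0 * (np.sqrt(r * r - 1.0) + np.pi - 2.0 * arcsec(r))
res_sym = minimize_scalar(sym, bounds=(1.0001, 2.0), method='bounded')
print(res_sym.x, res_sym.fun)    # about 1.24200 and 6.45891

# asymmetric case: r1 + sqrt(r1^2-1) + sqrt(r2^2-1) + 2*pi - 2*arcsec(r1) - 2*arcsec(r2)
def asym(v):
    r1, r2 = v
    return (r1 + np.sqrt(r1 * r1 - 1.0) + np.sqrt(r2 * r2 - 1.0)
            + 2.0 * np.pi - 2.0 * arcsec(r1) - 2.0 * arcsec(r2))

res_asym = minimize(asym, x0=[1.2, 1.3], bounds=[(1.0001, 2.0), (1.0001, 2.0)])
print(res_asym.x, res_asym.fun)  # about [1.1547, 1.4142] and 6.3972, i.e. 1 + sqrt(3) + 7*pi/6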
If the angle between the possible wall and the initial line is $x$ (the angle between the diagonal line and the bottom line in the diagram below), then the distance travelled is $1+(\pi/2+2x)+1/\tan(x)+1/\sin(x)$.
Gratifyingly this gives a slightly improved answer of $2+3\pi/2\approx6.7124$ for my first attempt (because you can drop down straight rather than complete the circle), where $x=\pi/2$.
It also gives my second attempt for $x=\pi/4$ (answer $2+\sqrt{2}+\pi\approx6.5558$).
Throwing the expression into Wolfram Alpha shows that a minimum occurs at $x=\pi/3$. This gives a value of $1+\sqrt{3}+7\pi/6\approx6.397$
Old new upper bound: $2+\sqrt{2}+\pi$ as per diagram:
(Old upper bound: $2\pi+1$ miles. Walk 1 mile in any direction and then walk is a circle of radius 1, centred at your starting point. )
I would like to present this non-rigorous but hopefully more intuitive explanation for the optimal path. (The technique used here was very helpful for working on Oray's variant with two people.)
The first part of 2012rcampion's answer explains that we should go as far out as some tangent $l$, before going around the circle to get back to $l$ on the other side. Call the starting point $A$ and the circle $O$. Then the problem is this:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again.
It won't change which path is shortest if we turn around at the end and go all the way back:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again, and then goes back around the circle to $l$ and then $A$.
Now, if we reflect the entire diagram over $l$, we get this:
Instead of having our path touch $l$ and go back, we can have it switch sides every time instead, which won't change the length because it's just a reflection. So now the problem is this:
Find the shortest path from point $A$ that goes around circle $O^\prime$, then around circle $O$, then goes to point $A^\prime$.
Anyone should be able to do that (imagine putting a string from $A$ around the circles to $A^\prime$ and pulling it tight):
And now if we only look at the part of the diagram above $l$, there's the answer without any calculations.
"...standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?"
You
must walk 1 mile. If you go the wrong way then you will end up walking further. If you don't walk that far you can't reach it.
@DrXorile is close to the answer. Mine isn't an answer either, but here's some food for thought:
I wanted to picture it. It looks like it.
If we take 360 individuals, all starting at the center of the circle and each at a different angle, only one will find the wall.
That's a 0.27% chance of finding the wall if you walk exactly one mile. If you need to reach the wall for your survival, you're dead.
Also imagine the guy who started just one degree off: he extends his hands, the wall is just 2 inches further, and then he starts over in the wrong direction.
Walking more than one mile means we could increase our chances of reaching the wall at a slightly off angle.
But then again, this could happen:
|
In the second answer of this post, Euler-Lagrange equations and friction forces
I see a normal Lagrangian (T-V) times an exponential function. $${\cal L}=e^{t\gamma/m}\left(\frac{m}{2}\dot{x}^2 -U(t,x)\right)\:.$$
And in post #9 of this thread, https://www.physicsforums.com/threads/lagrangian-of-object-with-air-resistance.693552/ , I also see a similar Lagrangian: $$L = e^{b t}\left(\dfrac{1}{2} m v^2\right)$$
How is this Lagrangian derived?
In textbooks and lecture notes online, I only see people use $$\frac{d}{dt}\left(\frac{\partial{L}}{\partial{\dot{q}_i}}\right)-\frac{\partial{L}}{\partial{q_i}}=Q_i$$ to deal with situations with friction or air resistance (the Lagrangian remains unchanged).
Can I use this Lagrangian instead? Is it correct? $${\cal L}=e^{t\gamma/m}\left(\frac{m}{2}\dot{x}^2 -U(t,x)\right)\:.$$
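For what it's worth, a quick symbolic check can be done with SymPy (a minimal sketch, assuming a time-independent potential $U(x)$ rather than the $U(t,x)$ above): the Euler-Lagrange equation of this weighted Lagrangian should reduce to the damped equation of motion $m\ddot{x}+\gamma\dot{x}+U'(x)=0$.

import sympy as sp

t, m, gamma = sp.symbols('t m gamma', positive=True)
x = sp.Function('x')(t)
U = sp.Function('U')

# exponentially weighted Lagrangian, with U = U(x) only for simplicity
L = sp.exp(gamma*t/m) * (sp.Rational(1, 2)*m*sp.diff(x, t)**2 - U(x))

# Euler-Lagrange expression: d/dt(dL/dxdot) - dL/dx
EL = sp.diff(sp.diff(L, sp.diff(x, t)), t) - sp.diff(L, x)
print(sp.simplify(EL / sp.exp(gamma*t/m)))
# expected: m*x'' + gamma*x' + dU/dx, i.e. the damped equation of motion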
|
Difference between revisions of "Moser-lower.tex"
Revision as of 07:27, 5 June 2009
\section{Lower bounds for the Moser problem}\label{moser-lower-sec}
In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ entries equal to $2$ (see Section \ref{notation-sec} for the definition) is a Moser set, so that $c'_{n,3}\geq \vert S_{i,n}\vert$
holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and
applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq C 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$. Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or
semispheres or applying elementary results from coding theory.
Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and that $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$.
As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound $$ c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}.$$ It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$.
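As a quick numerical check of this maximisation (a short Python sketch that simply evaluates $\binom{n+1}{i} 2^{i-1}$ over all $1 \leq i \leq n$):

from math import comb

def two_sphere_bound(n):
    # max over i of binom(n+1, i) * 2^(i-1), together with a maximising i
    vals = {i: comb(n + 1, i) * 2**(i - 1) for i in range(1, n + 1)}
    i_best = max(vals, key=vals.get)
    return i_best, vals[i_best]

for n in range(2, 7):
    print(n, two_sphere_bound(n))   # (2, 6), (3, 16), (3, 40), (4, 120), (5, 336)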
Chv\'{a}tal \cite{chvatal1} observed that one can iterate this. Let us translate his work into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$
and minimal distance $d$.
Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right). \end{equation}
With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ \end{array} \] }}
Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e)$, and $A(n,d)=2$ if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html \textbf{include to references? or other book with explicit values of $A(n,d)$ }
For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344. \end{array} \] With k=3 \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,3)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,3)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,3)&=7880. \end{array}\] With k=4 $$c'_{10,3}\geq \binom{10}{0}A(9,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,4)+\binom{10}{4}A(6,5)=22232.$$
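As a quick sanity check, these $k=2$ values can be reproduced mechanically from the tabulated $A(n,d)$ above (a short Python sketch):

from math import comb

# A(n, d) values copied from the table above
A = {(4, 3): 2, (3, 2): 4, (2, 1): 4,
     (5, 3): 4, (4, 2): 8, (3, 1): 8,
     (6, 3): 8, (5, 2): 16, (4, 1): 16}

def chvatal_bound(n, k):
    # right-hand side of the bound for given n and k
    return sum(comb(n, j) * A[(n - j, k - j + 1)] for j in range(k + 1))

print(chvatal_bound(4, 2), chvatal_bound(5, 2), chvatal_bound(6, 2))   # 42 124 344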
It should be pointed out that these bounds are even numbers, so that $c'_{4,3}=43$ shows that one cannot generally expect this lower bound to give the optimum.
The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best possible value known to date of the constant $C$ in equation \eqref{cpn3} can be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$, Chv\'{a}tal proved \cite{chvatal1} that the expression in \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$.
For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}. A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following:
Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$:
\begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize}
This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. The lower bound $c'_{6,3} \geq
353$ was located by a genetic algorithm; see Appendix \ref{genetic-alg}. This suggests that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. In any event, bounds such as \eqref{cnchvatal} or \eqref{cnn} only seem to offer a minor improvement over \eqref{binom} at best, and in particular we have been unable to locate a bound which is asymptotically better than \eqref{cpn3}.
|
Consider a regular polygon with $N$ sides (and therefore $N$ vertices) such that the distance between its center $O$ and each of its vertices is $r$. At each vertex of the polygon, a charge $q$ is placed (see figure 1). What is the electric field ${\bf E}$ produced by these charges at the point $O$?
Fig. 1: Regular pentagon with a charge $q$ placed on each of its vertices. In this case, $\theta = 2\pi / 5$. For a general regular polygon with $N$ sides, $\theta = 2\pi / N$.
If your strategy to solve a problem of this kind is to actually compute ${\bf E}$ by hand, you missed an important fact: there is symmetry in this problem. Symmetry is a very important ingredient in physics: it can make many complicated-looking problems become easily solvable in a few lines (or, in our case, in one line).
To see what I mean by that, imagine the electric field produced by the charges at the point $O$ is as shown in figure 2a.
Fig. 2: Rotating a pentagon by $\theta = 2\pi/5$ around its center point in the anti-clockwise (or clockwise) direction produces the same charge configuration (b) as we initially had in (a). In general, this is true if one rotates a polygon with $N$ sides by $\theta = 2\pi/N$ around its center point. Since the final charge configuration is the same as the initial one, the electric field ${\bf E}$ at the point $O$ must be the same before and after the rotation (and therefore it must have zero magnitude at that point).
If we now rotate the polygon by $\theta = 2\pi/N$ clockwise or counterclockwise we will obtain exactly the same charge configuration as we had before (compare figures 2a and 2b). In other words, the electric field ${\bf E}$ at the point $O$ must be the same before and after the rotation is applied. But if the whole polygon is rotated, ${\bf E}$ must also be rotated with it. We will have an absurd situation unless ${\bf E} = {\bf 0}$. In other words, the electric field at $O$ must be zero!
A reader who prefers things "proven mathematically" might be bothered by this type of proof. As a matter of fact, some might not even consider it a proof at all. However, arguments of this kind are ubiquitous in Physics and yield correct results with very little work.
In any case, if you are one of those readers I just mentioned, let's compute ${\bf E}$ by hand to bring peace back to your mind. Numbering each charge in a counterclockwise manner (as in figure 1), the $k$-th charge would then be located at: $$ {\bf x}_k = r(\cos\theta_k, \sin\theta_k) = r(\cos(2\pi k/N), \sin(2\pi k/N)) $$ for $k = 1, 2, \ldots, N$. Since $e^{i\theta} = (\cos\theta,\sin\theta)$ (a complex number is a point in $\mathbb{R}^2$), then we can write: $$ {\bf x}_k = r e^{i 2\pi k / N} $$ This will allow us to compute ${\bf E}$ more easily. If ${\bf E}_k$ is the electric field produced by the $k$-th charge at the point $O$, then: $$ {\bf E}_k = \displaystyle\frac{kq}{\|{\bf x}_k\|^3}(-{\bf x}_k) = -\displaystyle\frac{kq}{r^3} r e^{i 2\pi k / N} = -\displaystyle\frac{kq}{r^2} e^{i 2\pi k / N} $$ and therefore: $$ {\bf E} = \sum_{k=1}^{N} {\bf E}_k = \sum_{k=1}^N \left( -\displaystyle\frac{kq}{r^2} e^{i 2\pi k / N} \right) = -\displaystyle\frac{kq}{r^2} \sum_{k=1}^N (e^{i 2\pi / N})^k \label{post_1d16b3e5666a13fc6cfa572b64f47dd2_E_as_sum} $$ But since for any complex number $z \neq 1$ we have: $$ \sum_{k=0}^N z^k = \displaystyle \frac{1 - z^{N+1}}{1 - z} \Longrightarrow \sum_{k=1}^N z^k = \displaystyle \frac{1 - z^{N+1}}{1 - z} - 1 = \displaystyle \frac{z - z^{N+1}}{1 - z} $$ then, from equation \eqref{post_1d16b3e5666a13fc6cfa572b64f47dd2_E_as_sum} with $z = e^{i 2\pi/N}$ we obtain: $$ {\bf E} = -\displaystyle\frac{kq}{r^2} \displaystyle\frac{e^{i 2\pi / N} - (e^{i 2\pi / N})^{N+1}}{1 - e^{i 2\pi / N}} = -\displaystyle\frac{kq}{r^2}e^{i 2\pi / N}\frac{1 - (e^{i 2\pi/N})^N}{1 - e^{i 2\pi / N}} = {\bf 0} $$ since $(e^{i 2\pi / N})^N =$ $e^{i 2\pi} =$ $(\cos 2\pi, \sin 2\pi) = 1$.
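If you would like a purely numerical confirmation as well, here is a tiny NumPy sketch (with the prefactor $kq/r^2$ set to $1$, which only rescales ${\bf E}$):

import numpy as np

N = 5                                   # any regular polygon works here
theta = 2*np.pi*np.arange(1, N + 1)/N
E = -np.sum(np.exp(1j*theta))           # field at O as a complex number, in units of kq/r^2
print(abs(E))                           # ~1e-15, i.e. zero up to floating-point rounding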
|
Yours is a subtle question with a rather subtle answer. From the way you ask the question, you seem to be thinking of the photon as a little billiard ball. It is not. It is an excitation of a quantum field, which is described very differently from a "particle" in the classical sense. Even so, the short answer, in some very subtle ways, is that indeed there is a sense wherein the photon can be thought of as accelerating over a nonzero time.
You must think of a whole system which makes a measurement to probe your question. So lets think of a system comprising (1) an excited atom, just after photon absorption, enclosed in and at the centre of a (2) detector sphere, which has a huge number of photon detectors on it so that it will "catch" the photon however it might be emitted. You begin timing and wait for detection. The
only meaningful measurable concept of the photon's speed is as $R / \Delta t$, where $R$ is the sphere's radius and $\Delta t$ the time until detection. You would do this experiment many times and with spheres of different radii. If your spheres were very big, you would get the answer $c$ as the photon's speed. You would, however, indeed see some weird subtleties if you could do this experiment with very small spheres.
First of all, there is always a nonzero emission lifetime. My main daytime paying gig is as an optical physicist and I work a great deal with fluorophores. So I work a great deal with fluorescence lifetimes, which are much longer (nanoseconds) than the lifetime for emissions you likely have in mind. They are all the same concept for the purposes of this question. The atom emits the photon at a random time, so if you did the experiment with a sphere of 30cm or so with a fluorophore like FITC ($\tau=4.5{\rm ns}$ at standard pH and temperature), you would see a big spread in the $R / \Delta t$ answer you would get from this experiment - and all the answers would be considerably less than $c$. So in this first sense, you can think of the fluorescence (or other, equivalent) lifetime as the acceleration time for the photon. It is simply not meaningful to ask what really happens to a photon between measurable events. In quantum mechanics and quantum field theory in general, this denial of meaning before measurement is very important for keeping QM and QFT consistent with special relativity. The jargon for this is that
modern physics rejects counterfactual reality. ("counterfactual reality" is the mouthful standing for the notion that an experiment's outcome has a meaning and existence before the experiment is done).
But I'm guessing you might be a little unsatisfied with a denial and stipulation of fluorescence lifetime as an acceleration time for your answer. But we can still squeeze some more out of our thought experiment: it shows that the fluorescence lifetime can indeed be thought of as the acceleration time in a sense that is near to what you are imagining. I want you to imagine now that your sphere is a few atoms in diameter. Things begin to get very weird indeed now: let's explore what theory has to say a bit more.
A lone, first quantised photon propagates following Maxwell's equations. I explain this in my answer here. We can get a pretty good analogy for our excited atom as a tiny, subatomic sized ring of classical current, with stored energy in the magnetic field through the ring. The circulating charge radiates, thus quenching the current, and the classical, Maxwell equation described radiation field can be reinterpreted as a probability amplitude to absorb the photon at any point as follows: you take the system's whole energy (initially stored as a magnetostatic, inductor energy) and you normalise it to unity. The propagating electromagnetic energy density $\frac{1}{2}\epsilon_0 |\vec{E}(\vec{r},\,t)|^2 + \frac{1}{2}\mu_0|\vec{H}(\vec{r},t)|^2$ becomes the probability density to absorb the photon at position $\vec{r}$ at time $t$. The inductor's (current ring's) current decays exponentially with time constant $L/R_{rad}$, where $R_{rad}$ is the radiation resistance of the ring thought of as a magnetic dipole antenna: this $L/R_{rad}$ is the fluorescence (or generalised emission) lifetime.
The radiation's fastest moving wavefront can indeed leave the ring at speed $c$. That is the nature of a nondispersive wave: it is not made of "stuff" with rest mass and there is truly no acceleration time. Moreover, as in any antenna problem, you calculate the electric and vector magnetic potentials as retarded potentials: at any point $\vec{r}$ and time they are given by the currents in the ring at a time $|\vec{r}|/c$ beforehand. The disturbance is a wave, and it always moves at $c$, so in this sense there is no acceleration time. But the wavefront is of a probability amplitude field: this is very different from saying that the delay, calculated from the actual absorption time, is $|\vec{r}|/c$. The absorption will be sometime later: the minimum delay is $|\vec{r}|/c$, but it is most likely to be quite a long time after $|\vec{r}|/c$. The nearer you are to the ring, the bigger the discrepancy. When your separation becomes comparable to the ring's dimensions, the variations can be enormous. If you really could get your detectors this near to the ring, your experimental results really would seem to show a nonzero acceleration time for the photon. However, now the matter of your detectors would begin to disturb the boundary conditions for this Maxwellian problem:
your detectors themselves would influence the fluorescence lifetime!
You may know that any antenna solution to Maxwells equations in the freespace around the antenna comprises two components:
A "farfield": this is a superposition of infinite plane waves, which all move at exactly $c$, without an acceleration time. As I said, that is the nature of a nondispersive wave. Sound doesn't "accelerate", for instance: the disturbance simply passes from one part of the medium to the next as fast as the constituents can play Chinese Whispers; A "nearfield" or "evanescent field": this is the nonpropagating part and is "bound" to the antenna. it changes over lengthscales that are small compared to the radiation wavelength. Its "velocity" is a pretty meaningless concept: the phase and group velocities are all over the place if you try to calculate them. I say more about evanescent waves here, here and here.
So the photon, in leaving the atom, can be thought to partly comprise an analogy of the antenna near, evanescent field. It is this evanescent field which would make our thought experiment seem to say that the photon is accelerating.
(recall $E^2=p^2 c^2 + m^2 c^4$) "But don't things moving at less than $c$ always have a nonzero rest mass?", I hear you ask.
Quite right. The "farfield" part of the probability amplitude field is indeed massless and always moves at exactly $c$. There is no acceleration time for this part of the field. But this is not the whole picture. The evanescent field is bound to the antenna, and confined energy always has a rest mass, as I discussed in my answer yesterday. It is this part of the photon field that begets the weird, "nonzero acceleration time" results to our thought experiment.
In the Feynman diagram world, my "farfield" becomes "photons" and "nearfield" becomes "virtual photons". When physicists say the word "photon", they most often mean "free photon" or "on-shell photon". These
always travel at speed $c$. On the other hand, a virtual photon i.e. one whose whole history begins and ends within a Feynman diagram, as opposed to leaving the diagram, is not observable and is off-shell, which means it does not fulfill the fundamental relationship $E^2=p^2 c^2 + m^2 c^4$. In effect, it can be thought of as moving at any speed, and a proper evaluation of the Feynman diagram coherently sums up all the amplitudes for it to be moving at all speeds.
|
My question - which is likely stupid or appears due to some confusion - stems from the following considerations: when quantizing canonically we are told (see any book on QFT) that a Dirac fermion field can be expressed as
$$ \psi(x) = \int d\tilde{p} (u \cdot\mathbf{a}\,e^{-ipx} + v\cdot\mathbf{b}^\dagger\, e^{ipx}) $$
where $\mathbf{a}$ destroys fermions and $\mathbf{b}$ creates antifermions and $u, v$ are Dirac spinors. A left-handed chirality projector $P_L = \frac{1-\gamma_5}{2}$ selects the part of this field $\psi_L$ that transforms according to the left-handed irreducible Lorentz representation.
So, how can the state created by $P_L (v\cdot \mathbf{b}^\dagger)$ be a
right-handed antifermion? (as seen for instance from the fact that it interacts weakly)
|
I) Wigner's Theorem states that a symmetry operation $S: H \to H$ is a unitary or anti-unitary$^{1}$ operator $U(S)$ up to a phase factor $\varphi(S,x)$,
$$ S(x)~=~ \varphi(S,x)\cdot U(S)(x), \qquad x~\in~H,\qquad \varphi(S,x)~\in~\mathbb{C} ,\qquad |\varphi(S,x)|~=~1 .$$
In this context, a
symmetry operation $S$ is by definition a surjective (not necessarily linear!) map $S: H \to H$ such that
$$|\langle S(x),S(y)\rangle|~=~|\langle x,y\rangle|,\qquad\qquad x,y~\in~H.$$
Let us introduce the terminology that a symmetry operation $S$ is of
unitary (anti-unitary) type if there exists a unitary (an anti-unitary) $U(S)$, respectively.
Moreover, if ${\rm dim}_{\mathbb{C}} H \geq 2$, then one may show that
$U(S)$ is unique up to a constant phase factor, and $S$ cannot have both a unitary and an antiunitary $U(S)$. In other words, $S$ cannot both be of unitary and anti-unitary type.
II) It follows by straightforwardly applying the definitions, that the composition $S \circ T$ of two symmetry operations $S$ and $T$ is again a symmetry operation, and it is even possible to choose
$$ U(S \circ T)~:=~U(S) \circ U(T).$$ Finally, in the case ${\rm dim}_{\mathbb{C}} H \geq 2$,
$S \circ T$ is of anti-unitary type, if precisely one of $S$ and $T$ are of anti-unitary type, and $S \circ T$ is of unitary type, if zero or two of $S$ and $T$ are of anti-unitary type.
Reference:
V. Bargmann, Note on Wigner's Theorem on Symmetry Operations, J. Math. Phys. 5 (1964) 862. Here is a link to the pdf file.
--
$^{1}$ We use for convenience a terminology where linearity (anti-linearity) of $U(S)$ are implicitly implied by the definition of $U(S)$ being unitary (anti-unitary), respectively.
|
$n$-fold Variants of Large Cardinals

This page is a WIP. The $n$-fold variants of large cardinal axioms were created by Sato Kentaro in [1] in order to study and investigate the double helix phenomenon. The double helix phenomenon is the strange pattern in consistency strength between such cardinals, which can be seen below.
This diagram was created by Kentaro. The arrows denote consistency strength, and the double lines denote equivalency. The large cardinals in this diagram will be detailed on this page (unless found elsewhere on this website).
This page will only use facts from [1] unless otherwise stated.
$n$-fold Variants
The $n$-fold variants of large cardinals were given in a very large paper by Sato Kentaro. Most of the definitions involve giving large closure properties to the $M$ used in the original large cardinal in an elementary embedding $j:V\rightarrow M$. They are very large, but rank-into-rank cardinals are stronger than most $n$-fold variants of large cardinals.
Generally, the $n$-fold variant of a large cardinal axiom is similar to the generalization of superstrong cardinals to $n$-superstrong cardinals, huge cardinals to $n$-huge cardinals, etc. More specifically, if the definition of the original axiom is that $j:V\prec M$ has critical point $\kappa$ and $M$ has some closure property which uses $\kappa$, then the definition of the $n$-fold variant of the axiom is that $M$ has that closure property on $j^n(\kappa)$.
$n$-fold Variants Which Are Simply the Original Large Cardinal
There were many $n$-fold variants which were simply different names of the original large cardinal. This was due to the fact that some $n$-fold variants, if only named $n$-variants instead, would be confusing to the reader (for example the $n$-fold extendibles rather than the $n$-extendibles). Here is a list of such cardinals:
The $n$-fold superstrong cardinals are precisely the $n$-superstrong cardinals
The $n$-fold almost huge cardinals are precisely the almost $n$-huge cardinals
The $n$-fold huge cardinals are precisely the $n$-huge cardinals
The $n$-fold superhuge cardinals are precisely the $n$-superhuge cardinals
The $\omega$-fold superstrong and $\omega$-fold Shelah cardinals are precisely the I2 cardinals

$n$-fold supercompact cardinals
A cardinal $\kappa$ is
$n$-fold $\lambda$-supercompact iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\lambda<j(\kappa)$ and $M^{j^{n-1}(\lambda)}\subset M$ (i.e. $M$ is closed under all of its sequences of length $j^{n-1}(\lambda)$). This definition is very similar to that of the $n$-huge cardinals.
A cardinal $\kappa$ is
$n$-fold supercompact iff it is $n$-fold $\lambda$-supercompact for every $\lambda$. Consistency-wise, the $n$-fold supercompact cardinals are stronger than the $n$-superstrong cardinals and weaker than the $(n+1)$-fold strong cardinals. In fact, if an $n$-fold supercompact cardinal exists, then it is consistent for there to be a proper class of $n$-superstrong cardinals.
It is clear that the $n+1$-fold $0$-supercompact cardinals are precisely the $n$-huge cardinals. The $1$-fold supercompact cardinals are precisely the supercompact cardinals. The $0$-fold supercompact cardinals are precisely the measurable cardinals.
$n$-fold strong cardinals
A cardinal $\kappa$ is
$n$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^{n-1}(\kappa+\lambda)}\subset M$.
A cardinal $\kappa$ is
$n$-fold strong iff it is $n$-fold $\lambda$-strong for every $\lambda$. Consistency-wise, the $(n+1)$-fold strong cardinals are stronger than the $n$-fold supercompact cardinals, equivalent to the $n$-fold extendible cardinals, and weaker than the $(n+1)$-fold Woodin cardinals. More specifically, in the rank of an (n+1)-fold Woodin cardinal there is an $(n+1)$-fold strong cardinal.
It is clear that the $(n+1)$-fold $0$-strong cardinals are precisely the $n$-superstrong cardinals. The $1$-fold strong cardinals are precisely the strong cardinals. The $0$-fold strong cardinals are precisely the measurable cardinals.
$n$-fold extendible cardinals
For ordinal $η$, class $F$, positive natural $n$ and $κ+η<κ_1<···<κ_n$:
Cardinal $κ$ is $n$-fold $η$-extendible for $F$ with targets $κ_1,...,κ_n$ iff there are $κ+η=ζ_0<ζ_1<···<ζ_n$ and an iteration sequence $\vec e$ through $〈(V_{ζ_i},F∩V_{ζ_i})|i≤n〉$ with $\mathrm{crit}(\vec e)=κ$, and $e_{0,i}(κ)=κ_i$.
Cardinal $κ$ is $n$-fold extendible for $F$ iff, for every $η$, $κ$ is $n$-fold $η$-extendible for $F$.
Cardinal $κ$ is $n$-fold extendible iff it is $n$-fold extendible for $\varnothing$.
$n$-fold extendible cardinals are precisely the $(n+1)$-fold strong cardinals.
$n$-fold $1$-extendibility is implied by $(n+1)$-fold $1$-strongness and implies $n$-fold superstrongness.
$n$-fold Woodin cardinals
A cardinal $\kappa$ is
$n$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$ and $V_{j^{n}(f)(j^{n-1}(\alpha))}\subset M$. Consistency-wise, the $(n+1)$-fold Woodin cardinals are stronger than the $(n+1)$-fold strong cardinals, and weaker than the $(n+1)$-fold Shelah cardinals. Specifically, in the rank of an $(n+1)$-fold Shelah cardinal there is an $(n+1)$-fold Woodin cardinal, and every $(n+1)$-fold Shelah cardinal is also an $(n+1)$-fold Woodin cardinal.
The $2$-fold Woodin cardinals are precisely the Vopěnka cardinals (therefore precisely the Woodin for supercompactness cardinals). In fact, the $n+1$-fold Woodin cardinals are precisely the $n$-fold Vopěnka cardinals. The $1$-fold Woodin cardinals are precisely the Woodin cardinals.
(More to be added)

$\omega$-fold variants
The $\omega$-fold variant is a very strong version of the $n$-fold variant, to the point where they even beat some of the rank-into-rank axioms in consistency strength. Interestingly, they follow a somewhat backwards pattern of consistency strength relative to the original double helix. For example, $n$-fold strong is much weaker than $n$-fold Vopěnka (the jump is similar to the jump between a strong cardinal and a Vopěnka cardinal), but $\omega$-fold strong is much, much stronger than $\omega$-fold Vopěnka.
$\omega$-fold extendible
For ordinal $η$ and class $F$:
Cardinal $κ$ is $ω$-fold $η$-extendible for $F$ iff there are $κ+η=ζ_0<ζ_1<ζ_2<...$ and an iteration sequence $\vec e$ through $〈(V_{ζ_i},F∩V_{ζ_i})|i∈ω〉$ with $\mathrm{crit}(\vec e)=κ$, and $e^{(1)}(κ)>κ+η$.
Cardinal $κ$ is $ω$-fold extendible for $F$ iff, for every $η$, $κ$ is $ω$-fold $η$-extendible for $F$.
Cardinal $κ$ is $ω$-fold extendible iff it is $ω$-fold extendible for $\varnothing$.
Results:
An $ω$-fold extendible cardinal $κ$ is the $κ$-th element of the class of the critical points of all witnesses of I3.
If $κ$ is a regular cardinal and $F⊂V_κ$, we have $\{α<κ|(V_κ,F)\models \text{“$α$ is $ω$-fold extendible for $F$”}\}∈F^{(ω)}_{Vop,κ}$.
If there is an $ω$-fold Vopěnka cardinal, then the existence of a proper class of $ω$-fold extendible cardinals is consistent.

$\omega$-fold Vopěnka
Definition:
A set $X$ is $ω$-fold Vopěnka for a cardinal $κ$ iff, for every $κ$-natural sequence $〈\mathcal{M}_α|α<κ〉$, there are an increasing sequence $〈α_n|n∈ω〉$ with $α_n<κ$ and an iteration sequence $\vec e$ through $〈\mathcal{M}_{α_n}|n∈ω〉$ such that $\mathrm{crit}(\vec e)∈X$.
A cardinal $κ$ is $ω$-fold Vopěnka iff $κ$ is regular and $κ$ is $ω$-fold Vopěnka for $κ$.
$F^{(ω)}_{Vop,κ}=\{X∈\mathcal{P}(κ) \mid κ \setminus X \text{ is not } ω\text{-fold Vopěnka for } κ\}$.
Results:
An $ω$-fold superstrong cardinal $κ$ is the $κ$-th $ω$-fold Vopěnka cardinal.
The critical point $κ$ of a witness of $IE^ω$ is the $κ$-th $ω$-fold Vopěnka cardinal.

$\omega$-fold Woodin
A cardinal $\kappa$ is
$\omega$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$ and $V_{j^{\omega}(f)(\alpha)}\subset M$.
Consistency-wise, the existence of an $\omega$-fold Woodin cardinal is stronger than the I2 axiom, but weaker than the existence of an $\omega$-fold strong cardinal. In particular, if there is an $\omega$-fold strong cardinal $\kappa$ then $\kappa$ is $\omega$-fold Woodin and has $\kappa$-many $\omega$-fold Woodin cardinals below it, and $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold Woodin cardinals.
$\omega$-fold strong
A cardinal $\kappa$ is
$\omega$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^\omega(\kappa+\lambda)}\subset M$.
$\kappa$ is
$\omega$-fold strong iff it is $\omega$-fold $\lambda$-strong for every $\lambda$.
Consistency-wise, the existence of an $\omega$-fold strong cardinal is stronger than the existence of an $\omega$-fold Woodin cardinal and weaker than the assertion that there is a $\Sigma_4^1$-elementary embedding $j:V_\lambda\prec V_\lambda$ with an uncountable critical point $\kappa<\lambda$ (this is a weakening of the I1 axiom known as $E_2$). In particular, if there is a cardinal $\kappa$ which is the critical point of some elementary embedding witnessing the $E_2$ axiom, then there is a nonprincipal $\kappa$-complete ultrafilter over $\kappa$ which contains the set of all cardinals which are $\omega$-fold strong in $V_\kappa$ and therefore $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold strong cardinals.
References

Kentaro, Sato. Double helix in large large cardinals and iteration of elementary embeddings, 2007.
|
Is there a physical limit to data transfer rate (e.g. for USB $3.0$, this rate can be a few Gbit per second)? I am wondering if there is a physical law giving a fundamental limit to data transfer rate, similar to how the second law of thermodynamics tells us perpetual motion cannot happen and relativity tells us going faster than light is impossible.
tl;dr – The maximum data rate you're looking for would be called the maximum entropy flux. Realistically speaking, we don't know nearly enough about physics yet to meaningfully predict such a thing.
But since it's fun to talk about a data transfer cord that's basically a $1\mathrm{mm}$-tube containing a stream of black holes being fired near the speed of light, the below answer shows an estimate of $1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}}$, which is about $6.5{\cdot}{10}^{64}$ faster than the current upper specification for USB, $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$.
Intro
You're basically looking for an upper bound on entropy flux:
entropy: the number of potential states which could, in theory, codify information;
flux: the rate at which something moves through a given area.
So,$$\left[\text{entropy flux}\right]~=~\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}\,.$$
Note: If you search for this some more, watch out for "maximum entropy thermodynamics"; " maximum" means something else in that context.
In principle, we can't put an upper bound on stuff like entropy flux because we can't claim to know how physics really works. But, we can speculate at the limits allowed by our current models.
Speculative physical limitations
Wikipedia has a partial list of computational limits that might be estimated given our current models.
In this case, we can consider the limit on maximum data density, e.g. as discussed in this answer. Then, naively, let's assume that we basically have a pipeline shipping data at maximum density arbitrarily close to the speed of light.
The maximum data density was limited by the Bekenstein bound:
In physics, the
Bekenstein boundis an upper limit on the entropy $S$, or information $I$, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level.
–"Bekenstein bound", Wikipedia [references omitted]
Wikipedia lists it as allowing up to$$ I ~ \leq ~ {\frac {2\pi cRm}{\hbar \ln 2}} ~ \approx ~ 2.5769082\times {10}^{43}mR \,,$$where $R$ is the radius of the system containing the information and $m$ is the mass.
Then for a black hole, apparently this reduces to$$ I ~ \leq ~ \frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}} \,,$$where
${\ell}_{\text{Planck}}$ is the Planck length;
$A_{\text{horizon}}$ is the area of the black hole's event horizon.
This is inconvenient, because we wanted to calculate $\left[\text{entropy flux}\right]$ in terms of how fast information could be passed through something like a wire or pipe, i.e. in terms of $\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}.$ But, the units here are messed up because this line of reasoning leads to the holographic principle which basically asserts that we can't look at maximum information of space in terms of per-unit-of-volume, but rather per-unit-of-area.
So, instead of having a continuous stream of information, let's go with a stream of discrete black holes inside of a data pipe of radius $r_{\text{pipe}}$. The black holes' event horizons have the same radius as the pipe, and they travel at $v_{\text{pipe}} \, {\approx} \, c$ back-to-back.
So, information flux might be bound by$$ \frac{\mathrm{d}I}{\mathrm{d}t} ~ \leq ~ \frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}} {\times} \frac{v_{\text{pipe}}}{2r_{\text{horizon}}} ~{\approx}~ \frac{\pi \, c }{2\ln{\left(2\right)}\,{\ell}_{\text{Planck}}^2} r_{\text{pipe}} \,,$$where the observation that $ \frac{\mathrm{d}I}{\mathrm{d}t}~{\propto}~r_{\text{pipe}} $ is basically what the holographic principle refers to.
Relatively thick wires are about $1\,\mathrm{mm}$ in diameter, so let's go with $r_{\text{pipe}}=5{\cdot}{10}^{-4}\mathrm{m}$ to mirror that to estimate (WolframAlpha):$$ \frac{\mathrm{d}I}{\mathrm{d}t} ~ \lesssim ~ 1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}} \,.$$
Wikipedia claims that the maximum USB bitrate is currently $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$, so this'd be about $6.5{\cdot}{10}^{64}$ times faster than USB's maximum rate.
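Plugging the numbers in (a rough sketch with rounded constants, and $r_{\text{pipe}}=5{\cdot}{10}^{-4}\,\mathrm{m}$ as above):

import numpy as np

c = 2.998e8          # speed of light, m/s
l_P = 1.616e-35      # Planck length, m
r_pipe = 5e-4        # pipe radius, m (1 mm diameter)

dI_dt = np.pi * c * r_pipe / (2*np.log(2)*l_P**2)
print(f"{dI_dt:.1e} bit/s")                            # ~1.3e75 bit/s
print(f"{dI_dt/2e10:.1e} x the 20 Gbit/s USB rate")    # ~6.5e64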
However, to be very clear, the above was a quick back-of-the-envelope calculation based on the Bekenstein bound and a hypothetical tube that fires black holes near the speed of light back-to-back; it's not a fundamental limitation to regard too seriously yet.
The Shannon-Hartley theorem tells you what the maximum data rate of a communications channel is, given the bandwidth.
$$ C = B \log_2\left(1+\frac{S}{N}\right) $$
where $C$ is the data rate in bits per second, $B$ is the bandwidth in hertz, $S$ is the signal power and $N$ is the noise power.
Pure thermal noise power in a given bandwidth at temperature $T$ is given by:
$$ N = k_BTB $$
So for example, if we take the bandwidth of WiFi (40MHz) at room temperature (298K) using 1W the theoretical maximum data rate for a single channel is:
$$ 40 \times 10^6 \times \log_2\left(1 + \frac{1}{1.38\times 10^{-23} \times 298 \times 40 \times 10^6}\right) = 1.7 \times 10^9 = 1.7 \mathrm{\;Gbs^{-1}} $$
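The same arithmetic as a short Python sketch (same numbers as above):

import numpy as np

kB, T, B, S = 1.38e-23, 298, 40e6, 1.0    # J/K, K, Hz, W
N = kB*T*B                                # thermal noise power
C = B*np.log2(1 + S/N)
print(f"{C:.2e} bit/s")                   # ~1.7e9 bit/s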
In a practical system, the bandwidth is limited by the cable or antenna and the speed of the electronics at each end. Cables tend to filter out high frequencies, which limits the bandwidth. Antennas will normally only work efficiently across a narrow bandwidth. There will be significantly larger sources of noise from the electronics, and interference from other electronic devices which increases $N$. Signal power is limited by the desire to save power and to prevent causing interference to other devices, and is also affected by the loss from the transmitter to the receiver.
A system like USB uses simple on-off electronic signal operating at one frequency, because that's easy to detect and process. This does not fill the bandwidth of the cable, so USB is operating a long way from the Shannon-Hartley limit (The limiting factors are more to do with the transceivers, i.e. semiconductors). On the other hand, 4G (and soon 5G) mobile phone technology does fill its bandwidth efficiently, because everyone has to share the airwaves and they want to pack as many people in as possible, and those systems are rapidly approaching the limit.
No, there is no fundamental limit on overall transfer rate. Any process that can transfer data at a given rate can be done twice in parallel to transfer data at twice that given rate.
|
1. Homework Statement
Determine the period of oscillations of a simple pendulum ( a particle of mass m suspended by a string of length l in a gravitational field) as a function of the amplitude of oscillations.
2. Homework Equations
[tex] T(E) = \sqrt{2m} \int^{x_2(E)}_{x_1(E)}\frac{dx}{\sqrt{E-U(x)}} [/tex]
where T is the period of oscillations
3. The Attempt at a Solution
I only need the expression of E(x) and the problem is pretty much solved, but I can't figure out why (which is rather embarrassing) the energy of the pendulum
[tex] E = \frac{1}{2}ml^2\dot{\phi}^2 - mgl\cos{\phi} = -mgl\cos{\phi_0} [/tex]
where [tex] \phi [/tex] is the angle between the string and the vertical and [tex] \phi_0 [/tex] the maximum value of [tex] \phi [/tex]
Any hints?
thanks
|
I need to solve a problem that tells me to find out the motion of both the pendulums that appear in the first 45 seconds of this video
I think this kind of motion is described by a system of differential equation of the form:
$$\ddot{x} + \omega^2x = \epsilon y$$ $$\ddot{y} + \omega^2 y = \epsilon x$$
where all constants like the mass of the pendulum and so on are missing and $x$ describes the motion of the first pendulum and $y$ the motion of the second one.
To solve the problem one needs to assume $\epsilon$ very small and $\epsilon \lt \omega^2$.
I've tried to solve this problem analytically, but it was a little too complicated, so I tried the physical approach by following the example given here under the section coupled oscillator.
I think that I've understood almost everything except the fact that when we evaluate the normal modes we add the constants $\psi_1$ and $\psi_2$ and get the solutions
$$\vec{\nu}_1 = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix}\cos(\omega_1 t + \psi_1)$$ $$\vec{\nu}_2 = c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix}\cos(\omega_2 t + \psi_2)$$
My question is: why do we have those two constants $\psi_1, \psi_2$? I understand the fact that we are interested only in the real-valued part of the solution $Ae^{i\omega t}$, but I don't understand where these constants come from. Is there a physical or, better, mathematical explanation for this?
|
When modeling an ideal transformer, why do we not consider the
inductance of the primary and secondary winding of the transformer.
For an ideal transformer, the primary and secondary inductances are arbitrarily large ('infinite'). This must be so since, for an ideal transformer, there is no frequency dependence.
To see this, consider the equations (in the phasor domain) for ideally coupled ideal inductors:
$$V_1 = j\omega L_1I_1 - j\omega M I_2$$
$$V_2 = j \omega M I_1 - j \omega L_2 I_2$$
where
$$M = \sqrt{L_1L_2}$$
Solving for \$V_2\$ yields
$$V_2 = \left(\sqrt{\frac{L_2}{L_1}}\right)V_1 = \frac{N_2}{N_1}V_1$$
Now, assume the primary is driven by a voltage source and that there is an impedance \$Z_2\$ connected to the secondary such that
$$V_2 = I_2 Z_2$$
It follows that
$$I_2 = \frac{j \omega M}{Z_2 + j \omega L_2}I_1 = \left(\sqrt{\frac{L_1}{L_2}}\cdot\frac{1}{1 + \frac{Z_2}{j\omega L_2}}\right)I_1 = \left(\frac{N_1}{N_2}\cdot\frac{1}{1 + \frac{Z_2}{j\omega L_2}}\right)I_1$$
This is certainly
not the behaviour of an ideal transformer where we expect
$$I_2 = \frac{N_1}{N_2}I_1$$
But notice that in the case that \$j\omega L_2 \gg Z_2\$ we have
$$I_2 \approx \frac{N_1}{N_2}I_1$$
which is
exact in the limit that \$\frac{Z_2}{j \omega L_2} \rightarrow 0\$
Thus, we recover the ideal transformer equations from the ideally coupled ideal inductors
in the limit that \$L_1, L_2\$ go to infinity (keeping their ratio constant).
In summary, we don't consider the inductances for the ideal transformer since, as shown above, the ideal transformer equations hold only in the limit of arbitrarily large primary and secondary inductances.
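As a quick numerical illustration (hypothetical values: a 2:1 turns ratio, an 8 Ω resistive load and 50 Hz — none of these numbers come from the question), \$|I_2/I_1|\$ approaches \$N_1/N_2\$ as \$\omega L_2\$ grows large compared with \$Z_2\$:

import numpy as np

N1_over_N2 = 2.0
Z2 = 8.0                      # ohms, resistive load
omega = 2*np.pi*50            # rad/s

for L2 in [0.01, 0.1, 1.0, 10.0]:            # secondary inductance in henries
    L1 = N1_over_N2**2 * L2                  # ideal coupling keeps L1/L2 = (N1/N2)^2
    M = np.sqrt(L1*L2)
    ratio = 1j*omega*M/(Z2 + 1j*omega*L2)    # I2/I1 from the equations above
    print(f"L2 = {L2:5.2f} H   |I2/I1| = {abs(ratio):.3f}")   # -> 2.000 as omega*L2 >> Z2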
|
(MWE at the end of the post)
I need to solve a non-linear equation $f(y;x_1,x_2,..,x_5)$ in one variable $y$ and then compute 4 new output expressions, for over 60 different initial parameter inputs of $x_i$. The 4 output variables are as follows:
$g_m(y,x_1,..,x_5) \; \forall m$
The main equation will solve for the variable $y(i)$:
$f(y(i),x_1(i),x_2(i),...,x_5(i))=0 \; \forall \; i \in \{1,60\}$
Now my
$f(y(i),x_1(i),x_2(i),...,x_5(i))=A1(y,x_1,..,x_5)*y^{k_1} + B1(y,x_1,..,x_5)*y^{k_2}-c1(x_1,..,x_5)$
I define variables
A1=A1(y,x1,..,x5);
B1=B1(y,x1,..,x5);
c1=c1(x1,..,x5);
g1=g1(y,x1,..,x5);
g2=g2(y,x1,..,x5);
g3=g3(y,x1,..,x5);
g4=g4(x1,..,x5);
I have data for $x_1(i),..,x_5(i)$ in a CSV file that I Import and Table straight into the variable names making them into lists.
datatemp = Import["C:\\Documents\\2012U26G0.csv"];
j = Dimensions[datatemp][[1]]
kk = 2
x1 = Table[datatemp[[i, 3]], {i, kk, j}]
x2 = Table[datatemp[[i, 4]], {i, kk, j}]
x3 = Table[datatemp[[i, 5]], {i, kk, j}]
x4 = Table[datatemp[[i, 6]], {i, kk, j}]
x5 = Table[datatemp[[i, 7]], {i, kk, j}]
I think this automatically makes my earlier defined formulae for $A1$,$B1$ and $g_m$ into a list of formulae with the only unknown being $y$ and it makes $c1$ into a list of constants since $c1$ was only dependent on $x_i$.
Now, what I would like to be able to do is the following. Give some initial search point for my FindRoot for $i=1$
sol={1,1,1,1,120}
As you will see in a second, I only care about
sol[[5]]
Due to continuity, the roots move monotonically with $i$ so once I find one root, I can get a sense of where to look for the next one so I substitute the previous solution into the search for the next one. Also, in one shot I compute the 4 output variables I need. So when the Table is run, in one shot I have all the output data I want.
outputdata=Table[sol={g1,g2,g3,g4,y} /. FindRoot[ A1[[i]]*(y[[i]])^(k1) + B1[[i]]*(y[[i]])^(k2)==c[[i]] , {y, sol[[5]]+10, sol[[5]], sol[[5]]+20}], {i,1, 60}]
This process worked like a charm for a little while, but for a certain parameter space (by parameter I don't mean the $x_i$ I used earlier but a host of $\gamma$s and $\beta$s in my equations that I have suppressed so far), it has started giving me errors, 1/0 infinity type stuff, because of some assignment issues. Is there a clean and correct/good way to do this? I wanna be able to import a ton of data, Table my FindRoot to compute a whole bunch of data and Export it real fast. Please please please help!
MWE
f = y^(3.1276)*(A1) + y^(-0.5875)*(B1) + (c1)^2;
A1 = x1/y + x2*y + 3*x3;
B1 = x1*x3 + 1/(y*x2);
c1 = x1^3 + x2^5 - x3;
g1 = y^(x1) - x3*x2;
g2 = x1/y;
Imported Data Below:
x1 = {89, 88, 87}
x2 = {0.048334203`, 0.048515211`, 0.048707816`}
x3 = {-19486.2273`, -19742.04035`, -20016.22863`}
When I do this, I can see what the curves look like:
Plot[Table[y^(3.1276)*(A1[[i]]) + y^(-0.5875)*(B1[[i]]) + (c1[[i]])^2, {i, 1, 3}], {y, 150, 180}]
This is what I want to do. To be able to Table a whole bunch of output in one shot:
dataoutput=Table[{g1[[i]], g2[[i]], y} /. FindRoot[y^(3.1276)*(A1[[i]]) + y^(-0.5875)*(B1[[i]]) + (c1[[i]])^2, {y, 165, 150, 170}], {i, 1, 3}]
This is the result that I get, with some errors about accuracygoal and precisiongoal.
{{1623.03, -6.88842*10^19, 167.181}, {1530.37, -5.38632*10^19, 163.049}, {1431., -4.18019*10^19, 158.952}}
My MWE is working, just like my actual problem worked for a certain parameter space but now I am running into trouble. Is there a good way to Import a ton of Data, Table my FindRoot and generate a ton of output using Table and then Export my results? Thanks.
|
Regularization using methods such as Ridge, Lasso, ElasticNet is quite common for linear regression. I wanted to know the following: Are these methods applicable for logistic regression? If so, are there any differences in the way they need to be used for logistic regression? If these methods are not applicable, how does one regularize a logistic regression?
Yes, Regularization can be used in all linear methods, including both regression and classification. I would like to show you that
there is not too much difference between regression and classification: the only difference is the loss function.
Specifically, there are three major components of a linear method:
Loss Function, Regularization, Algorithms. The loss function plus regularization is the objective function of the problem in optimization form, and the algorithm is the way to solve it (the objective function is convex, which we will not discuss in this post).
In the loss function setting, we can have different losses in both the regression and classification cases. For example, least squares and least absolute deviation loss can be used for regression. Their math representations are $L(\hat y,y)=(\hat y -y)^2$ and $L(\hat y,y)=|\hat y -y|$. (The function $L( \cdot )$ is defined on two scalars, $y$ is the ground truth value and $\hat y$ is the predicted value.)
On the other hand, logistic loss and hinge loss can be used for classification. Their math representations are $L(\hat y, y)=\log (1+ \exp(-\hat y y))$ and $L(\hat y, y)= (1- \hat y y)_+$. (Here, $y$ is the ground truth label in $\{-1,1\}$ and $\hat y$ is predicted "score". The definition of $\hat y$ is a little bit unusual, please see the comment section.)
In regularization setting, you mentioned about the L1 and L2 regularization, there are also other forms, which will not be discussed in this post.
Therefore, at a high level a linear method is
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} L(w^{\top} x,y)+\lambda h(w)$$
If you replace the loss function from the regression setting with the logistic loss, you get logistic regression with regularization.
For example, in ridge regression, the optimization problem is
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} (w^{\top} x-y)^2+\lambda w^\top w$$
If you replace the loss function with logistic loss, the problem becomes
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} \log(1+\exp{(-w^{\top}x \cdot y)})+\lambda w^\top w$$
Here you have the logistic regression with L2 regularization.
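For concreteness, here is a minimal NumPy/SciPy sketch of minimizing exactly this objective on synthetic data (the data and the $\lambda$ values are arbitrary, and this is not the code used for the figures below); the fitted coefficients shrink toward $0$ as $\lambda$ grows:

import numpy as np
from scipy.optimize import minimize

def objective(w, X, y, lam):
    # logistic loss with labels y in {-1, +1}, plus an L2 penalty
    scores = X @ w
    return np.sum(np.log1p(np.exp(-scores*y))) + lam*(w @ w)

# toy data: two Gaussian blobs, no intercept term
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]

for lam in [0.0, 1.0, 100.0]:
    w = minimize(objective, x0=np.zeros(2), args=(X, y, lam)).x
    print(lam, w)    # coefficients move toward 0 as lambda grows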
This is how it looks in a toy synthesized binary data set. The left figure shows the data with the linear model (decision boundary). The right figure shows the objective function contour (the x and y axes represent the values of the 2 parameters). The data set was generated from two Gaussians, and we fit the logistic regression model without intercept, so there are only two parameters we can visualize in the right sub-figure.
The blue lines are the logistic regression without regularization and the black lines are logistic regression with L2 regularization. The blue and black points in the right figure are the optimal parameters for the objective function.
In this experiment, we set a large $\lambda$, so you can see the two coefficients are close to $0$. In addition, from the contour, we can observe that the regularization term dominates and the whole function is like a quadratic bowl.
Here is another example with L1 regularization.
Note that the purpose of this experiment is to show how the regularization works in logistic regression, not to argue that a regularized model is better.
Here are some animations about L1 and L2 regularization and how they affect the logistic loss objective. In each frame, the title gives the regularization type and $\lambda$, and the plot is the objective function (logistic loss + regularization) contour. We increase the regularization parameter $\lambda$ in each frame, and the optimal solution shrinks to $0$ frame by frame.
Some notation comments. $w$ and $x$ are column vectors,$y$ is a scalar. So the linear model $\hat y = f(x)=w^\top x$. If we want to include the intercept term, we can append $1$ as a column to the data.
In regression setting, $y$ is a real number and in classification setting $y \in \{-1,1\}$.
Note that the definition of $\hat y=w^{\top} x$ in the classification setting is a little bit strange, since most people use $\hat y$ to represent a predicted value of $y$. In our case, $\hat y = w^{\top} x$ is a real number, but not in $\{-1,1\}$. We use this definition of $\hat y$ because it simplifies the notation for the logistic loss and hinge loss.
Also note that, in some other notation system, $y \in \{0,1\}$, the form of the logistic loss function would be different.
The code can be found in my other answer here.
A shrinkage/regularization method that was originally proposed for logistic regression based on considerations of higher-order asymptotics was Firth logistic regression... some while before all of these talks about lasso and what not started, although after ridge regression had risen and subsided in popularity through the 1970s. It amounted to adding a penalty term to the likelihood,$$l^*(\beta) = l(\beta) + \frac12 \ln |i(\beta)|$$where $i(\beta) = \frac1n \sum_i p_i (1-p_i) x_i x_i'$ is the information matrix normalized per observation. Firth demonstrated that this correction has a Bayesian interpretation in that it corresponds to Jeffreys prior shrinking towards zero. The excitement it generated was due to it helping fix the problem of perfect separation: say a dataset $\{(y_i,x_i)\} = \{(1,1),(0,0)\}$ would nominally produce infinite ML estimates, and
glm in
R is still susceptible to the problem, I believe.
Yes, it is applicable to logistic regression. In R, using glmnet, you simply specify the appropriate family which is "binomial" for logistic regression. There are a couple of others (Poisson, multinomial, etc.) that you can specify depending on your data and the problem you are addressing.
|
Introduction to sharpening
Sharpening is an important part of digital image processing. It restores some of the sharpness lost in the lens and image sensor.
Every digital image benefits from sharpening at some point in its workflow— in the camera, the RAW conversion software, and/or image editor. Sharpening has a bad name with some photographers because it’s overdone in some cameras (mostly low-end compacts and camera phones), resulting in ugly “halo” effects near edges. But it’s entirely beneficial when done properly.
Almost every digital camera sharpens images to some degree. Some models sharpen images far more than others— often excessively for large prints. This makes it difficult to compare cameras and determine their intrinsic sharpness unless RAW images are available. [Imatest has developed an approach to solving the problem— standardized sharpening, described below, which is useful for comparing “black box” cameras, but is not recommended for camera engineering or development work.]
The sharpening process

(Figure: sharpening applied to a line and an edge.)
A simple sharpening algorithm subtracts a fraction of the neighboring pixels from each pixel, as illustrated on the right. The thin black curve in the lower part of the image is the input to the sharpening function: it is the camera's response to a point or a sharp line (called the point or line spread function). The two thin dashed blue curves are replicas of the input, reduced in amplitude (multiplied by \(-k_{sharp}/2\)) and shifted by distances of ±2 pixels (typical of the sharpening applied to compact digital cameras). This distance is called the sharpening radius \(R_S\). The thin red curve is the impulse response after sharpening— the sum of the black curve and the two blue curves. The thick black and red curves (shown above the thin curves) are the corresponding edge responses, unsharpened and sharpened.
Sharpening increases image contrast at boundaries by reducing the rise distance. It can cause an edge overshoot. (The upper red curve has a small overshoot.) Small overshoots enhance the perception of sharpness, but large overshoots cause “halos” near boundaries that may look good on small displays such as camera phones, but can become glaringly obvious at high magnifications, detracting from image quality.
Sharpening also boosts MTF50 and MTF50P (the frequencies where MTF drops to 50% of its low frequency and peak values, respectively), which are indicators of perceived sharpness. (MTF50P is often preferred because it's less sensitive to strong sharpening.) Sharpening also boosts noise, which can be a problem with noisy systems (small pixels or high ISO speeds).
\(\displaystyle L_{sharp} (x) = \frac{L(x)\: -\: 0.5k_{sharp} (L(x-V) + L(x+V))}{1-k_{sharp}}\)
\(V =R_S / d_{scan}\)
where \(L(x)\) is the input pixel level, \(L_{sharp}(x)\) is the sharpened pixel level, \(k_{sharp}\) is the sharpening strength, \(R_S\) is the sharpening radius (in pixels), and \(d_{scan}\) is the scan rate in pixels per unit length. The frequency response (MTF) of this sharpening operation is
\(\displaystyle MTF_{sharp}(f) = \frac{1-k_{sharp}\cos(2\pi f V)}{1-k_{sharp}}\)
This equation boosts response at high spatial frequencies with a maximum where
\(\cos(2\pi f V) = \cos (\pi) = -1\) or \( f = \frac{1}{2V} = \frac{d_{scan}}{2R_S}\).
This is equal to the Nyquist frequency, \( f_{Nyq} = d_{scan}/2\), for \(R_S = 1\).
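A small numpy sketch of the 1-D sharpening kernel and its MTF as given above (my own illustration, not Imatest code; it assumes \(d_{scan} = 1\) pixel per unit length so that \(V = R_S\) and frequencies are in cycles/pixel, and it replicates edge pixels at the boundaries):

```python
import numpy as np

k_sharp, R_S = 0.15, 2           # sharpening strength and radius (example values from the text)
V = R_S                          # V = R_S / d_scan with d_scan = 1 assumed

def sharpen_1d(L, k=k_sharp, V=V):
    """L_sharp(x) = (L(x) - 0.5*k*(L(x-V) + L(x+V))) / (1 - k) on a 1-D line of pixels."""
    Lp = np.pad(L, V, mode="edge")
    return (L - 0.5 * k * (Lp[:-2 * V] + Lp[2 * V:])) / (1 - k)

def mtf_sharp(f, k=k_sharp, V=V):
    """Frequency response of the sharpening, f in cycles/pixel."""
    return (1 - k * np.cos(2 * np.pi * f * V)) / (1 - k)

f = np.linspace(0, 0.5, 101)           # 0 up to the Nyquist frequency
f_peak = f[np.argmax(mtf_sharp(f))]    # = 1/(2V) = 0.25 cycles/pixel for R_S = 2
```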
Sharpening examples
The plot on the right shows the transfer function (MTF) for sharpening with strength \(k_{sharp}\) = 0.15 and sharpening radius \(R_S\) = 1, 2, and 3. Note that the bottom of the plot is MTF = 1 (not 0).
At the widely-used sharpening radius of 2, MTF reaches its maximum value at half the Nyquist frequency (\(f = f_{Nyq}/2\) = 0.25 cycles/pixel), drops back to 1 at the Nyquist frequency (\(f_{Nyq}\) = 0.5 C/P), then bounces back to the maximum at 1.5× \(f_{Nyq}\) = 0.75 C/P.
Note that sharpening MTF is cyclic, i.e., it oscillates. This can have serious consequences for some MTF measurements on sharpened images.
The MTF plots below are for two images sharpened with radius ≅ 3. This camera is quite sharp prior to sharpening— there is significant energy above the Nyquist frequency. There is a dip in the response around MTF ≅ 400-500 LW/PH that causes MTF50 (the spatial frequency where MTF is 50% of the low frequency value) to become extremely unstable—799 and 507 LW/PH for the two similar images. This is a fairly rare (extreme) case, but it's something to watch for. MTF at lower levels— MTF20, MTF10, etc.— can be even more unstable.
This camera would have performed better with \(R_S\) ≤ 2.
Be cautious about using strong sharpening with large sharpening radii (\(R_S\) > 3). Cyclic response can lead to unexpected bumps in the MTF response curve that can severely distort summary metrics such as MTF50, MTF20, etc. Large sharpening radii have characteristic signatures— thick “halos” and low frequency peaks in the MTF response. We recommend keeping \(R_S\) ≤ 2 unless there is a compelling reason to make it larger.
Here is an example from the Canon EOS-6D full-frame DSLR. A JPEG image (blue curve) with sharpening slightly reduced is compared with a TIFF image converted from raw by dcraw (with no sharpening or noise reduction). The exact same exposure and regions were used for each curve (both JPG and raw files were saved). The peak at 0.25 cycles/pixel indicates that sharpening with radius = 2 was used. The plot deviates from the ideal plot (for a very simple sharpening algorithm) for spatial frequencies above 0.35 C/P. The EOS-6D allows you to change the amount of sharpening but not the radius for camera JPEGs. (I'm not happy with this limitation.)
Corresponding edges for the Canon EOS-6D: unsharpened (left), sharpened (right)
These curves (and the curves below for the Panasonic Lumix LX7) show how MTF curves (in the MTF Compare plots) correlate to edge response. The modest amount of overshoot (“halo”) on this edge would not be objectionable at any viewing magnification. Better overall performance would be achieved with a sharpening radius of \(R_S\) = 1.
Here is another example from the Panasonic Lumix LX7. As with the Canon, the same exposure and regions are used: one from a JPEG image (default sharpening, which is very strong), and one from a raw image, converted with no sharpening or noise reduction. The sharpening radius of 1 makes for a sharper image at the pixel level than JPEGs straight out of the EOS-6D, above. Of course the EOS-6D has twice as many pixels (20 vs. 10 megapixels), but the difference in sharpening accounts for the relatively close sharpness noted in the post Sharpness and Texture from Imaging-Resource.com.
Corresponding edges for the Panasonic Lumix LX7: unsharpened (left), sharpened (right)
These curves show how MTF curves (in the MTF Compare plots) correlate to edge response.
Oversharpening or undersharpening is the degree to which an image is sharpened relative to the standard sharpening value. If it is strongly oversharpened (oversharpening > about 30%), “halos” might be visible near the edges of highly enlarged images. The human eye can tolerate considerable oversharpening because displays and the eye itself tend to blur (lowpass filter) the image. (Machine vision systems are not tolerant of oversharpening.) There are cases where highly enlarged, oversharpened images might look better with less sharpening. If an image is undersharpened (oversharpening < 0; “undersharpening” is displayed), the image will look better with more sharpening. Basic definition: Oversharpening = 100% × (MTF(\(f_{eql}\)) – 1)
where
\(f_{eql}\) = 0.15 cycles/pixel = 0.3 × Nyquist frequency for reasonably sharp edges (MTF50 ≥ 0.2 C/P); \(f_{eql}\) = 0.6 × MTF50 for MTF50 < 0.2 C/P (relatively blurred edges).
When oversharpening < 0 (i.e., when MTF is lower at \(f_{eql}\) than at \(f\) = 0), the image is undersharpened, and Undersharpening = –Oversharpening is displayed.
If the image is undersharpened (the case for the EOS-1Ds shown below), sharpening is applied to the original response to obtain Standardized sharpening; if it is positive (i.e., MTF is higher at \(f_{eql}\) than at \(f\) = 0), de-sharpening is applied. (We use “de-sharpening” instead of “blurring” because the inverse of sharpening, which is applied here, is slightly different from conventional blurring.) Note that these numbers are not related to the actual sharpening applied by the camera and software.
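A short sketch (my own, not Imatest code) of how the oversharpening number defined above could be computed from a measured MTF curve, assuming the curve has been normalized so that MTF = 1 at \(f = 0\):

```python
import numpy as np

def oversharpening(freqs, mtf, mtf50):
    """Oversharpening in percent: 100 * (MTF(f_eql) - 1).
    freqs in cycles/pixel, mtf normalized to 1 at f = 0, mtf50 in cycles/pixel."""
    f_eql = 0.15 if mtf50 >= 0.2 else 0.6 * mtf50
    return 100.0 * (np.interp(f_eql, freqs, mtf) - 1.0)
```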
Undersharpened image
Oversharpened image
The 11 megapixel Canon EOS-1Ds DSLR is unusual in that it has very little built-in sharpening (at least in this particular sample).
The average edge (with no overshoot) is shown on top; the MTF response is shown on the bottom. The standardized sharpening results in a small overshoot in the spatial domain edge response, about what would be expected in a properly (rather conservatively) sharpened image. It is relatively consistent for all cameras.
The image above is for the 5 megapixel Canon G5, which strongly oversharpens the image— typical for a compact digital camera. A key measurement of rendered detail is the inverse of the 10-90% edge rise distance, which has units of (rises) per PH (Picture Height). The uncorrected value for the G5 is considerably better than the 11 megapixel EOS-1Ds (1929 vs. 1505 rises per PH), but the corrected value (with standardized sharpening) is 0.73× that of the EOS-1Ds. Based on vertical pixels alone, the expected ratio would be 1944/2704 = 0.72×. MTF50P is not shown. It is displayed when Standardized sharpening is turned off; it can also be selected as a Secondary readout. For this camera MTF50P is 0.346 cycles/pixel or 1344 LW/PH, 9% lower than MTF50. It is a better sharpness indicator for strongly oversharpened cameras, especially for cameras where the image will not be altered in post-processing.
Here is an example of an insanely oversharpened image.
When customers send us problem images we do our best to educate them about how it can be improved.
To begin with it’s perfectly vertical, not slanted as we recommend (a slant of ≥ 2 degrees is usually sufficient). This means the result will be overly sensitive to sampling (subpixel positioning) and won’t be consistent from image to image.
It’s also extremely oversharpened, and both the negative and positive peaks are strongly clipped (flattened), which makes it difficult to determine the severity of the oversharpening. Clipping invalidates the assumption of linearity used for MTF calculations, and makes the result completely meaningless— MTF curves can be much more extended than reasonable. In this case, MTF never drops below 1 at high frequencies, so MTF50 and MTF20 are not even defined.
These results illustrate how uncorrected rise distance and MTF50 can be misleading when comparing cameras with different pixel sizes and degrees of sharpening. MTF50P is slightly better for comparing cameras when strong sharpening is involved.
Uncorrected MTF50 or MTF50P are, however, appropriate for designing imaging systems or comparing lens performance (different focal lengths, apertures, etc.) on a single camera.
“Unsharp masking” (USM) and “sharpening” are often used interchangeably, even though their mathematical algorithms are different. The confusion is understandable but rarely serious because the end results are visually similar. But when sharpening is analyzed in the frequency domain the differences become significant.
“Unsharp masking” derives from the old days of film when a mask for a slide, i.e., positive transparency, was created by exposing the image on negative film slightly out of focus. (Here is a great example from a PBS broadcast. (Alternate Youtube page.)) The next generation of slide or print was made from a sandwich of the original transparency and the fuzzy mask. This mask served two purposes.
It reduced the overall image contrast of the image, which was often necessary to get a good print. It sharpened the image by increasing contrast near edges relative to contrast at a distance from edges.
Unsharp masking was an exacting and tedious procedure which required precise processing and registration. But now USM can be accomplished easily in most image editors, where it's used for sharpening. You can observe the effects of USM (using the Matlab imsharpen routine) in the Imatest Image Processing module, where you can adjust the blur radius, amount, and threshold settings.
Most cameras perform regular sharpening rather than USM because it’s faster— USM requires a lot more processing power.
$$\displaystyle \text{Blur} = \frac{e^{-x^2 / 2\sigma_x^2}}{\sqrt{2\pi \sigma_x^2}}$$
\(\sigma_x\) corresponds to the sharpening radius, \(R\). The unsharp-masked image can be expressed as the original image summed with a constant times the convolution of the original image and the blur function, where convolution is denoted by *.
$$\displaystyle L_{USM}(x) = L(x) – k_{USM} \times \text{Blur} = L(x) \times \frac{\delta(x) – k_{USM} e^{-x^2/2\sigma_x^2} / \sqrt{2\pi \sigma_x^2}}{1 – k_{USM} / \sqrt{2\pi}}$$
\(L(x)\) is the input pixel level and \(L_{USM}(x)\) is the USM-sharpened pixel level. \(k_{USM}\) is the USM sharpening constant (related to the slider setting in the scanning or editing program). (\(L(x) = L(x) * \delta(x)\), where \(\delta(x)\) is a delta function.)
The USM algorithm has its own MTF. Using $$ F(e^{-px^2}) = \frac{e^{-a^2/4p}}{\sqrt{2p}}, $$ where \(F\) is the Fourier transform,
$$MTF_{USM} (f) = \frac{1 – k_{USM} e^{-f^2 \sigma_x^2 /2} / \sqrt{2\pi}}{1 – k_{USM} / \sqrt{2 \pi}} = \frac{ 1 – k_{USM} e^{-f^2 / 2 f_{USM}^2} / \sqrt{2\pi}}{1 – k_{USM}/ \sqrt{2\pi}}$$
where \(f_{USM} = 1/\sigma_x\). This equation boosts response at high spatial frequencies, but unlike sharpening, the response doesn't reach a peak and then drop— it's not cyclic. Actual sharpening is a two-dimensional operation.
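A rough 1-D sketch of unsharp masking along these lines (my own illustration, not the Imatest or Matlab implementation; the discrete kernel here is normalized to unit sum, so the DC gain is restored by dividing by \(1-k_{USM}\), which differs slightly from the continuous-domain normalization used above):

```python
import numpy as np

def unsharp_mask_1d(L, k_usm=0.5, sigma=2.0):
    """Subtract a Gaussian-blurred copy of the line L, then renormalize the DC gain.
    k_usm (< 1) and sigma (the blur radius in pixels) are illustrative values."""
    half = 4 * int(np.ceil(sigma))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                        # unit-sum Gaussian blur kernel
    blurred = np.convolve(L, kernel, mode="same")
    return (L - k_usm * blurred) / (1.0 - k_usm)
```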
|
Home
Integration by PartsIntegration by Parts
Examples
Integration by Parts with a definite integral
Going in Circles
Tricks of the Trade
Integrals of Trig FunctionsAntiderivatives of Basic Trigonometric Functions
Product of Sines and Cosines (mixed even and odd powers or only odd powers)
Product of Sines and Cosines (only even powers)
Product of Secants and Tangents
Other Cases
Trig SubstitutionsHow Trig Substitution Works
Summary of trig substitution options
Examples
Completing the Square
Partial FractionsIntroduction to Partial Fractions
Linear Factors
Irreducible Quadratic Factors
Improper Rational Functions and Long Division
Summary
Strategies of IntegrationSubstitution
Integration by Parts
Trig Integrals
Trig Substitutions
Partial Fractions
Improper IntegralsType 1 - Improper Integrals with Infinite Intervals of Integration
Type 2 - Improper Integrals with Discontinuous Integrands
Comparison Tests for Convergence
Modeling with Differential EquationsIntroduction
Separable Equations
A Second Order Problem
Euler's Method and Direction FieldsEuler's Method (follow your nose)
Direction Fields
Euler's method revisited
Separable EquationsThe Simplest Differential Equations
Separable differential equations
Mixing and Dilution
Models of GrowthExponential Growth and Decay
The Zombie Apocalypse (Logistic Growth)
Linear EquationsLinear ODEs: Working an Example
The Solution in General
Saving for Retirement
Parametrized CurvesThree kinds of functions, three kinds of curves
The Cycloid
Visualizing Parametrized Curves
Tracing Circles and Ellipses
Lissajous Figures
Calculus with Parametrized CurvesVideo: Slope and Area
Video: Arclength and Surface Area
Summary and Simplifications
Higher Derivatives
Polar CoordinatesDefinitions of Polar Coordinates
Graphing polar functions
Video: Computing Slopes of Tangent Lines
Areas and Lengths of Polar CurvesArea Inside a Polar Curve
Area Between Polar Curves
Arc Length of Polar Curves
Conic sectionsSlicing a Cone
Ellipses
Hyperbolas
Parabolas and Directrices
Shifting the Center by Completing the Square
Conic Sections in Polar CoordinatesFoci and Directrices
Visualizing Eccentricity
Astronomy and Equations in Polar Coordinates
Infinite SequencesApproximate Versus Exact Answers
Examples of Infinite Sequences
Limit Laws for Sequences
Theorems for and Examples of Computing Limits of Sequences
Monotonic Convergence
Infinite SeriesIntroduction
Geometric Series
Limit Laws for Series
Test for Divergence and Other Theorems
Telescoping Sums
Integral TestPreview of Coming Attractions
The Integral Test
Estimates for the Value of the Series
Comparison TestsThe Basic Comparison Test
The Limit Comparison Test
Convergence of Series with Negative TermsIntroduction, Alternating Series,and the AS Test
Absolute Convergence
Rearrangements
The Ratio and Root TestsThe Ratio Test
The Root Test
Examples
Strategies for testing SeriesStrategy to Test Series and a Review of Tests
Examples, Part 1
Examples, Part 2
Power SeriesRadius and Interval of Convergence
Finding the Interval of Convergence
Power Series Centered at $x=a$
Representing Functions as Power SeriesFunctions as Power Series
Derivatives and Integrals of Power Series
Applications and Examples
Taylor and Maclaurin SeriesThe Formula for Taylor Series
Taylor Series for Common Functions
Adding, Multiplying, and Dividing Power Series
Miscellaneous Useful Facts
Applications of Taylor PolynomialsTaylor Polynomials
When Functions Are Equal to Their Taylor Series
When a Function Does Not Equal Its Taylor Series
Other Uses of Taylor Polynomials
Functions of 2 and 3 variablesFunctions of several variables
Limits and continuity
Partial DerivativesOne variable at a time (yet again)
Definitions and Examples
An Example from DNA
Geometry of partial derivatives
Higher Derivatives
Differentials and Taylor Expansions
Differentiability and the Chain RuleDifferentiability
The First Case of the Chain Rule
Chain Rule, General Case
Video: Worked problems
Multiple IntegralsGeneral Setup and Review of 1D Integrals
What is a Double Integral?
Volumes as Double Integrals
Iterated Integrals over RectanglesHow To Compute Iterated Integrals
Examples of Iterated Integrals
Fubini's Theorem
Summary and an Important Example
Double Integrals over General RegionsType I and Type II regions
Examples 1-4
Examples 5-7
Swapping the Order of Integration
Area and Volume Revisited
Double integrals in polar coordinatesdA = r dr (d theta)
Examples
Multiple integrals in physicsDouble integrals in physics
Triple integrals in physics
Integrals in Probability and StatisticsSingle integrals in probability
Double integrals in probability
Change of VariablesReview: Change of variables in 1 dimension
Mappings in 2 dimensions
Jacobians
Examples
Bonus: Cylindrical and spherical coordinates
Changing variables is a very useful technique for simplifying many types of math problems. You can use a change of variables to convert a difficult integral into a simpler one.
Transformations in higher dimensions, called mappings, play the same role for double integrals.
The reason mappings like these are so useful in double integrals comes from their action on particular sets in the plane.
A mapping ${\bf \Phi} : {\mathbb R}^2\,\to\, {\mathbb R}^2$ and a rectangle $D^*$ with sides parallel to the axes in the $uv$-plane such that: $${\bf \Phi}(u,\, v) \ = \ (x(u,\,v),\, y(u,\,v))\,, \qquad {\bf \Phi}\big(D^*\big) \ = \ D\,;$$
A 'distortion' function $\displaystyle{\frac{\partial(x,\,y)}{\partial(u,\,v)}}$ to replace $g'(u)$ so that $$ \int\int_D \, f(x,\, y)\, dxdy \ = \ \int\int_{D^*}\, f\big({\bf \Phi}(u,\,v)\big)\, \left|\frac{\partial(x,\,y)}{\partial(u,\,v)}\right|\,dudv\,.$$
In this case, if $D^* = [a,\,b]\times[c,\,d]$, then $$ \int\int_D \, f(x,\, y)\, dxdy \ = \ \int_a^b \left(\int_c^d \, f\big({\bf \Phi}(u,\,v)\big)\, \left|\frac{\partial(x,\,y)}{\partial(u,\,v)}\right|\,dv\right)du\,.$$
When the region of integration $D$ in the $xy$ plane has rotational symmetry, polar coordinates often send a rectangle $D^*$ in the $r\,\theta$ plane to a more complicated region $D$.
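As a concrete check (a standard worked example, added here for illustration), take polar coordinates, ${\bf \Phi}(r,\,\theta) = (r\cos\theta,\, r\sin\theta)$: $$ \frac{\partial(x,\,y)}{\partial(r,\,\theta)} \ = \ \det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} \ = \ r\,, \qquad\text{so}\qquad \int\int_D \, f(x,\,y)\, dxdy \ = \ \int\int_{D^*} \, f(r\cos\theta,\, r\sin\theta)\, r\, dr\,d\theta\,. $$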
|
It is strange to me that for a symmetry which involves $\dot{x}$, there always seems to appear a term with $\dddot{x}$ in the variation of the equations of motion, which doesn't make much sense. I think that probably the procedure I am following is wrong.
I will show you an example: Consider the simple case of a free particle in one dimension; its Lagrangian is:
$$L=\frac{1}{2}\dot{x}^2$$
It is obvious that the system conserves energy, so the symmetry that must be valid is $(\delta x=0 ,\delta t=\epsilon)$. I may rewrite this as a symmetry that doesn't involve time variations (as the paper by E. L. Hill does):
$$(\delta_{*}x=\epsilon\dot{x},\delta_{*}t=0)$$
Now, I calculate the variation of the equations of motion, with the hope of finding that $\delta(\text{e.o.m})=\text{e.o.m}.$ Such result would mean that the equations of motion are invariant under the symmetry in consideration. So:
$$\delta{(\text{e.o.m})}=\delta(\ddot{x})=\ddot{\eta}$$
In this case $\eta = \dot{x}$ (remember that a variation is of the form $\delta{x}=\epsilon\eta$). So $\ddot{\eta}=\dddot{x}$. Hence:
$$\delta(\text{e.o.m})=\dddot{x}\neq\text{e.o.m}$$
This doesn't make sense because, for the system in consideration, the time translation is a Noetherian symmetry that gives conservation of Energy.
My question is: What is failing in this procedure? Is there a general way of showing that some Symmetry is indeed Noetherian?
|
Modes
The term mode has varying meanings, according to the context, but the most common are permitted modes in amateur licensing.
Waves have three characteristics that can be changed: amplitude, frequency and phase. A mode is a way of changing (modulating) electromagnetic waves so that transmission of information is possible. Modulating signals can be either analogue (for example, sound) or digital (for example, simple binary on-off).

Analogue Modulation methods
There are two main analogue modes, or methods of modulation: Amplitude Modulation (AM), in which the phasor amplitude changes, and Angle Modulation, in which the phasor angle changes.
Double Sideband (DSB), Single Sideband (SSB) and Vestigial Sideband (VSB) are all forms of AM. Frequency Modulation (FM) and Phase Modulation (PM) are both forms of Angle Modulation.
Amplitude Modulation (AM)
The transceiver produces a carrier wave at the frequency of transmission. Voice is superimposed on the carrier wave and alters its shape by changing the amplitude, or height, of the wave. Hence the frequency and wavelength of the carrier do not change with this form of modulation.
See Amplitude Modulation on Wikipedia for more information.
Double-Sideband Modulation (DSB)
Double Sideband is what's usually meant when people talk about AM. In DSB transmissions, the message signal is transmitted in two sidebands, one being the mirror image of the other. The carrier may be either transmitted at full power (DSB-FC), at reduced power (DSB-RC), or completely eliminated (DSB-SC).
Conceptually, the power level at the carrier frequency, equates to the DC bias in the input signal. Mathematically, it looks something like this:
<math>x(t) = (A + m(t))\cos( 2 \pi f_c t + \phi )</math>
Single-Sideband Modulation (SSB)
Single sideband is what you get if you take a DSB-SC signal and pass it through a sharp high-pass or low-pass filter to reject the offending sideband. It may be generated with a high-pass/low-pass filter, or it may be done using a Hartley modulator, which cancels out the unwanted sideband through the use of a Hilbert transform.
Frequency Modulation (FM)
The transceiver produces a carrier wave, in the same way as for Amplitude Modulation. In this case, however, voice is added to the carrier so that its frequency changes. This in turn affects the wavelength of the carrier, but the amplitude remains constant.
See Frequency Modulation on Wikipedia for more information.
<math>x(t) = \cos( 2 \pi ( f_c + \Delta f m(t) ) t + \phi )</math>
Phase Modulation (PM)
This mode is seldom used in amateur radio. It's very similar to FM, but rather than the frequency changing, it's the phase of the signal that changes according to the modulating signal.
<math>x(t) = \cos( 2 \pi ( f_c t + \Delta \phi m(t) ) )</math>
The well-known PSK31 digital mode is a form of phase modulation.
In analogue radio, phase modulation differs only very slightly from frequency modulation, as mathematically frequency is the rate of change of phase. In FM, output frequency deviation is directly proportional to input signal amplitude only. A PM signal looks much like an FM signal except that output frequency deviation is directly proportional to both input signal amplitude and input frequency. An FM receiver would therefore receive a PM signal, but high frequencies in the audio would be pre-emphasised by 3dB/octave.
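For illustration (not part of the original wiki page), a short Python sketch synthesizing AM, FM and PM signals from a single modulating tone; note that for FM the modulating signal is integrated to obtain the phase, a step the simplified formula above glosses over, and all numeric values here are arbitrary examples:

```python
import numpy as np

fs, fc = 48_000, 1_000            # sample rate and carrier frequency in Hz (example values)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
m = np.sin(2 * np.pi * 50 * t)    # 50 Hz modulating tone

A, df, dphi = 2.0, 200.0, 0.5     # carrier bias, peak frequency deviation (Hz), peak phase deviation (rad)

am = (A + m) * np.cos(2 * np.pi * fc * t)                             # DSB-FC amplitude modulation
fm = np.cos(2 * np.pi * fc * t + 2 * np.pi * df * np.cumsum(m) / fs)  # frequency modulation
pm = np.cos(2 * np.pi * fc * t + dphi * m)                            # phase modulation
```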
Lesser known modes

Quadrature amplitude modulation (QAM). In this mode, two carrier waves, 90° out of phase with each other, are produced. QAM is a variant of AM, in which each of these two carriers is modulated by a separate audio signal. QAM or a variant thereof has been used in many AM Stereo broadcast radio systems as well as for the colour subcarrier on analogue fast-scan TV transmissions.

Digital modulation
Technically, whenever a signal is turned on and off to enable transmission of information, it can be considered to be a digital mode. Under this definition, CW is certainly a digital mode. This section refers to methods of transmitting and receiving (rather than modulating) that are digital, or that require digital processing in part of the transmission or receiving process.
Amplitude Shift keying (ASK)
Also known as On-Off Keying (OOK).
Amplitude-shift keying (ASK) is a form of modulation in which digital data is sent as variations in the amplitude of a carrier wave. In this mode there are two states, carrier on and carrier off, hence the name off/on keying.
ASK is sensitive to atmospheric noise, distortions and propagation conditions. Because light can be controlled to have two states, on and off, ASK is also commonly used to transmit digital data over optical fiber.
See Wikipedia Amplitude Shift Keying for more information.
Continuous Wave (CW)
A continuous wave is an electromagnetic wave of constant amplitude and frequency, a pure carrier, and information is carried by turning the wave on and off, and measuring the interval. Morse code is often transmitted using CW.
QRSS - Slow Morse
The term QRSS comes from the Morse Code Q-Code QRS which means either "Shall I send more slowly?" or "Please send more slowly"
In practice QRSS has a dot time-length of 3 seconds or more, and occupies a very narrow bandwidth - as low as 1Hz.
QRSS signals are monitored by "grabbers" such as those listed on this site from I2NDT.
This clipboard has probably the most up to date information about qrss beacons and grabbers.
Kits can be obtained here.
See Wikipedia Continuous Wave for more information.
Frequency Shift Keying (FSK)
The frequency of the carrier is varied according to a digital signal.
See Wikipedia Frequency Shift Keying for more information.
MFSK - Multiple Frequency Shift Keying
In MFSK data is sent using many different tones. MFSK is used by several digital modes, including MFSK16, Throb, Olivia, ALE and Domino. The advantages of MFSK compared to other FSK modes are:
good noise rejection
low propagation distortion
less effect from multi-pathing
low error rates
Some limitations of MFSK are:
high stability transceivers are required for effective transmission and reception (exception: Olivia and Domino, which are very drift tolerant)
some interference effects from ionospheric multipathing
some interference from constant carrier signals
External Links to MFSK sites:
ALE - Automatic Link Establishment. Information and downloads.
Domino EX - has plenty of info and download links.
Olivia - includes information, download links and frequencies used.
ROS digital - download the latest version and contribute to the blog.
Throb - screenshot and download link.

AMTOR (Amateur Teleprinting Over Radio)
Also known as SITOR in its commercial form.
AMTOR comes in two types, Type A and Type B.
Type A: information is repeated if requested by the receiving station. This is known as ARQ (Automatic ReQuest).
Type B: AMTOR B uses FEC (Forward Error Correction) to ensure data is transmitted with as little loss as possible. This is accomplished by sending each character twice, with three seconds between each send.
Wikipedia article [1]
CLOVER
CLOVER is a PSK mode which provides full duplex operation. There are two CLOVER variants, CLOVER I and CLOVER II. Perhaps the most interesting characteristic of CLOVER is that it adapts to conditions by constantly monitoring the received signal and changing the modulation scheme in response.
G-TOR
Invented by M. Golay, G-TOR (Golay-TOR) was used to transmit the early Jupiter and Saturn space mission pictures back to Earth. It is an FSK mode with a faster transfer rate than PACTOR. To minimize the effects of atmospheric noise, a data interleaving process is employed. This has the added advantage that garbled data can be decoded. G-TOR is a proprietary mode developed by Kantronics and is rarely used by radio amateurs. Some features:
a 16 bit CRC (Cyclic Redundancy Check). This process involves the transmitting station sending a checksum with the data. The receiving station compares the checksum with the received data, and can request either new data, a resend of data or a change of baud rate. Baud rates of 100, 200 or 300 to suit varying conditions.

PACTOR
PACTOR is a hybrid of Packet and AMTOR
Designed by Peter, DL6MAA, and Ulrich, DF4KV, as an alternative to both AMTOR and packet. It has three incarnations: PACTOR I, PACTOR II and PACTOR III. These are effective under weak signal and high noise conditions. PACTOR is not commonly used by amateurs.
Radio Teletype (RTTY)
RTTY or "Radio Teletype" is an FSK mode that has been in use longer than any other digital mode except for morse code.
In its original form, RTTY was a very simple technique which used a five-bit Baudot code to represent all the letters of the alphabet, the numbers, some punctuation and some control characters. Transmissions were at approximately 60 wpm (words per minute). More recent implementations operate at higher bitrates using the same ASCII code used for standard computer data.
Because there is no error correction provided in RTTY, noise and interference can have a serious impact on transmissions. RTTY is still popular with many radio amateurs.
RTTY frequencies
As a general rule of thumb, RTTY is usually found between 80kHz and 100kHz up from the lower edge of each band, except for 160M and 80M.
160M - 1800 to 1820 (RTTY is rare on this band)
80M - 3580 to 3650
40M - 7080 to 7100 (differs from region to region)*
30M - 10110 to 10150
20M - 14080 to 14099
17M - 18095 to 18109
15M - 21080 to 21100
12m - 24915 to 24929
10M - 28080 to 28100
6m - 50300 AFSK and 50600 FSK
A listing of RTTY frequencies used for weather, ham radio bulletins and for other purposes can be found here
Packet

Data is transferred from transmitter to receiver in packets, or groups of data bits. Typically, this involves connection of the transceiver audio to a terminal node controller, which handles demodulated modem signals and provides some level of automated error correction. APRS (automated packet reporting system / automated position reporting system) is one example of a packet radio system; it is commonly used to provide weather beacons and GPS tracking for unmanned craft such as weather balloons.
This mode was popular in the late 1970's and early 1980's, but has decreased in use since then.
See main amateur-radio-wiki article: Packet
Phase Shift Keying (PSK)
The phase of the carrier is modulated by a digital signal. In its simplest terms, this could mean, for example, that the phase of the carrier is turned through 180° with each change in the digital signal. In practical terms, PSK allows long distance communication even when noise levels are high.
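A tiny Python sketch (my own, for illustration; all values are arbitrary examples) of the simplest case described above, BPSK, where each bit flips the carrier phase by 180°:

```python
import numpy as np

fs, fc, baud = 8_000, 1_000, 100               # sample rate, carrier and symbol rate in Hz (example values)
bits = np.array([1, 0, 1, 1, 0])
symbols = np.repeat(2 * bits - 1, fs // baud)  # map {0,1} -> {-1,+1} and hold for one symbol period
t = np.arange(symbols.size) / fs
bpsk = symbols * np.cos(2 * np.pi * fc * t)    # multiplying the carrier by -1 is a 180-degree phase shift
```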
PSK Reporter reports which stations have been heard using PSK.
Common PSK frequencies
(subject to propagation characteristics.)
BAND - FREQ (MHz):
160M - 1.838
80M - 3.580
40M - 7.035 (DX)
40M - 7.070 (US)
30M - 10.140 (DX)
30M - 10.142 (US)
20M - 14.070
17M - 18.100
15M - 21.070
15M - 21.080
12M - 24.920
10M - 28.120
6M - 50.290
2M - 144.150
1.25M - 222.070
70cm - 432.200
33cm - 909.000

Digital modes in practice
The licensing regime defines digital modes as those modulation techniques that require digital data processing. In Australia refer to the ACMA LCD ( Licence Conditions Determination) for exact details. You will need to scroll down the page to find the link.
To get on air in digital modes normally an SSB transceiver is used which is coupled to a computer via a so called interface.
As a minimum the interface requires 4 signals from the transceiver:
Audio in - where you would connect the microphone for SSB.
Audio out - for the loudspeaker.
PTT.
Ground.
On the computer side you need corresponding signals from the computers sound-card or integrated sound system:
Audio out - the audio generated by a digital modes program on transmit.
Audio in - either the "microphone-in" or the "line-in" connector of the sound-card.
PTT - often the RTS or DTR signal of a serial port is used for this. PTT can also be generated within the "interface" by a VOX-like circuit.
Ground.
The computer has to run a digital modes program - see the digimodes software page.
What do digital modes sound like? Click here to find out. G4UCJ has a useful site with screenshots of digital modes at G4UCJ's Radio Website.

VOIP (Voice Over IP) Modes
VOIP is not considered by some hams as being a "true" ham mode. These modes are reliant to some extent on using the internet to transfer voice from one station to another. For example, IRLP users transmit from radio into a "node" - their voice is transferred via the internet to another node, from where it is transmitted from another radio to the receiving station. The link between radios is therefore via the internet.
One advantage of these modes is that hams in restricted communities can have contacts with hams in far distant lands with basic equipment.
Some VOIP modes, for example CQ100 do not require a radio at all as the "rig" is software created and driven.
All VOIP modes require some ham radio certification
See also: Repeater listings, APRS, D-Star, Emission Classification, Packet, Slow-Scan Television (SSTV), Fast-Scan Television (ATV), Optical communications, WSPR, WSJT, Software, QRP
|
Suppose $C$ is a small category and $X_{\bullet}$ is a simplicial object in $C$. In particular, by composing with Yoneda $$y:C \to Set^{C^{op}}$$ $y(X)_{\bullet}$ is a simplicial presheaf. I believe it is true that in (either the injective or projective) model structure on simplicial presheaves, that $y(X)_{\bullet}$ "is the homotopy colimit of itself", that is, the homotopy colimit of the functor $$\Delta^{op} \stackrel{X}{\longrightarrow} C \stackrel{y}{\longrightarrow} Set^{C^{op}} \to \left(Set^{\Delta^{op}}\right)^{C^{op}} $$ is the simplicial presheaf $y(X)_{\bullet}$ itself. Does someone know a quick proof. I think it should be simple, I just cannot think very well at the moment. I'm also fine with an infinity category answer to this.
Take the diagonal of $\Delta^{op} \to \left(Set^{\Delta^{op}}\right)^{C^{op}}$. If I understand the rightmost arrow in your question correctly, then this is just $y(X_\bullet)$.
Yes, that's right. I once wrote out some details behind this (of course it's standard) in the nLab at
and at
|
Limits of iterates of spherical Aluthge transforms
Speaker:
R.E. Curto
Affiliation:
Iowa
Dates:
Friday, 7 June 2019 - 14:00 - 15:00
Abstract:
Let $\mathbf{T} \equiv (T_1,T_2)$ be a commuting pair of Hilbert space operators, and let $P:=\sqrt{T_1^{\ast}T_1+T_2^{\ast}T_2}$ be the positive factor in the
(joint) polar decomposition of $\mathbf{T}$, i.e., $T_i=V_iP \; (i=1,2)$. The spherical Aluthge transform of $\mathbf{T}$ is the (necessarily commuting) pair $\widehat{\mathbf{T}}:=(\sqrt{P}V_1\sqrt{P},\sqrt{P}V_2\sqrt{P})$. We study the iterates of the spherical Aluthge transform, that is, the commuting pairs given by $\widehat{\mathbf{T}}^{(1)}:=\widehat{\mathbf{T}}$ and $\widehat{\mathbf{T}}^{(n)}:=\widehat{\widehat{\mathbf{T}}^{(n-1)}} \; (n \ge 2)$. In this talk, we will focus on the asymptotic behavior of the sequence $\{\widehat{\mathbf{T}}^{(n)}\}_{n \ge 1}$ as $n \rightarrow \infty$. In those cases when the limit exists, the limit pair is a fixed point for the spherical Aluthge transform, that is, a spherically quasinormal pair. For large suitable classes of $2$-variable weighted shifts we will establish the convergence of the sequence of iterates in the weak operator topology. The talk is based on joint work with Chafiq Benhida (Université de Lille, Lille, France), and with Jasang Yoon (The University of Texas Rio Grande Valley, Edinburg, Texas, USA).
|
The film Circular Reuleaux triangle is about figures of constant width. The Reuleaux triangle, the simplest figure of constant width, will help us drill square holes. If one moves the center of this «triangle» along a suitable trajectory, its vertices trace out almost a square, and the triangle itself sweeps the whole area inside this figure.
The border of the resulting figure, except for small corner pieces, consists of straight segments! And if one extends the segments to add the corners, we get an exact square.
To achieve what we described above, the center of the Reuleaux triangle should move along the trajectory that consists of four equal patched arcs of ellipses. The centers of the ellipses are placed in the vertices of the square, the semi-axes forming an angle of $45^\circ$ with the sides of the square and equal $k\cdot(1+1/\sqrt3)/2$ and $k\cdot(1-1/\sqrt3)/2$ where $k$ is the side of the square.
The curves rounding the corners are also arcs of ellipses centered in the vertices of the square, semi-axes forming an angle of $45^\circ$ with the sides of the square and equal $k\cdot(\sqrt3+1)/2$ and $k\cdot(\sqrt3-1)/2$.
The area of the non-covered corners forms only around 2 percent of the area of the square!
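A tiny numerical evaluation of the formulas above for a unit square, $k = 1$ (my own check, just to make the numbers concrete):

```python
import math

k = 1.0  # side of the square
centre_axes = (k * (1 + 1 / math.sqrt(3)) / 2,   # ~0.789: semi-axes of the ellipse arcs
               k * (1 - 1 / math.sqrt(3)) / 2)   # ~0.211  traced by the triangle's centre
corner_axes = (k * (math.sqrt(3) + 1) / 2,       # ~1.366: semi-axes of the ellipses that
               k * (math.sqrt(3) - 1) / 2)       # ~0.366  round off the corners
print(centre_axes, corner_axes)
```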
Now if one makes a Reuleaux triangle shaped drill, one will be able to drill square holes with slightly rounded corners but absolutely straight sides!
Gerolamo Cardano (1501 — 1576).
When in 1541 the emperor Charles V triumphantly entered conquered Milan, the rector of the doctors' college, Cardano, was walking near the canopy.
In response to this honor he suggested to supply the royal carriage with a suspension with two shafts, the rolling of which would keep the carriage horizontal […]
To be fair, we have to mention that the idea of such a system goes back to ancient times; at least the «Atlantic Codex» by Leonardo da Vinci has a drawing of a ship compass with a gimbal suspension.
Such compasses became widespread in the first half of the XVI century, apparently, without Cardano's influence.
You should have already seen, under passing trucks, the thing that helps us: the drive shaft. This transmission is named after Gerolamo Cardano.
Now we have everything to start drilling. Take a sheet of veneer and… drill a square hole! As we've said before, the sides will be strictly straight and only the corners will be slightly rounded. If needed, one may correct them with a broach file.
What is left is to make such a drill… Well, it's not hard to make the drill: its cross-section just has to be a Reuleaux triangle, and its cutting edges should coincide with the triangle's vertices!
The point is that its center's trajectory should consist of four arcs of ellipses, as we mentioned before. Visually this curve is very close to a circle, and it is even mathematically close to one, but it is still not a circle. In addition, all the eccentrics (a circle placed on a circle of different radius with a shifted center) used in engineering produce circular motion.
In 1914 the English engineer Garry James Watts invented a way to organize such drilling. On the surface he placed a guiding template with a square cutout in which the drill, «put freely into a chuck», moves. A patent for such a chuck was granted to a company that started manufacturing Watts drills in 1916.
We'll use another known construction. Fix the drill to the Reuleaux triangle placed into a square directing frame. The frame is attached to the drill. It's left to transmit the drill chuck rotation to the Reuleaux triangle.
|
My model is the following: I have a 1D lattice with L sites. Each site can be occupied by either one or zero atoms. The Hamiltonian looks as follows: $$H = T\sum_i(b^\dagger_i b_{i+1} + \text{h.c.}) + V\sum_i n_i n_{i+1}$$
My question is now of a rather basic quantum mechanical nature. I want to make sure what I'm doing is right and that I really understand it. (I'm programming all of this so I want to make sure the reason why my model isn't working doesn't lie on the physical side of things.)
First I've set up the Hamiltonian's matrix in the basis of Fock states (for example $|010011101001\rangle$): I've created a list of all possible states (there are $2^L$ states since every position can be either zero or one). Say state $|a\rangle$ results in two states, $|a\rangle$ itself and $|b\rangle$: $$H|a\rangle = x|a\rangle + y |b\rangle$$ Then I looked at the column belonging to $|a\rangle$ and wrote $x$ and $y$ in the rows belonging to $|a\rangle$ and $|b\rangle$ respectively. That way I got my Hamiltonian matrix. (It has block-diagonal form since the total number of atoms is conserved, which means there's a block for each possible total atom number from $0$ to $L$.) Up to this point I have no difficulties.
Now I diagonalize the Hamiltonian. I'm doing this by calculating its eigenvalues and eigenvectors (since it has block-diagonal form I can do this for each block separately, minimizing the calculation time). Next I plot the expectation value of some $\hat n _i$ operator against energy (using a scatter plot), with $i$ lying somewhere between $1$ and $L$. (The $\hat n _i$ operator acting on a Fock state returns the state itself with an eigenvalue equal to the occupation number of site $i$, so $\hat n _2 |011\rangle = 1\cdot|011\rangle$ and $\hat n _2 |001\rangle = 0\cdot|001\rangle$.)
Now comes the part where I'm not sure how to proceed. How do I calculate the expectation value, i.e. $ \langle \hat n _i \rangle$, and how do I calculate the energy? I've got the eigenstates represented as $2^L$-dimensional vectors, where each entry corresponds to a specific Fock state (the first entry is all lattice sites empty, the last is all filled, and those in between have some other occupation pattern). That means I have the eigenvectors in the basis of my possible Fock states. How do I now calculate the expectation value?
I know that the eigenvalues are the energies corresponding to my eigenvectors, but I don't know how the operator affects the eigenvectors. My idea was to decompose the eigenvectors in my Fock basis: if I for now just label my different Fock states by the number of their corresponding row in the Hamiltonian, and my eigenstate looks for example like $(x,y,z)$, then I decompose it into $$(x,y,z) = x|1\rangle + y|2\rangle + z |3\rangle .$$ Since I know how my operator acts on the Fock states, I could now calculate the expectation value as just the dot product between the old $(x,y,z)$ eigenvector and the new, changed vector. I could also just calculate the Fock states' energies, but since this problem scales up very fast this wouldn't be computationally feasible, I think, and one also wouldn't make use of the diagonalization.
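A minimal numpy sketch of the procedure just described (my own illustration; it brute-forces all $2^L$ Fock states rather than working block by block, assumes hard-core bosons with no sign factors and open boundary conditions), including the expectation-value step: since $\hat n_i$ is diagonal in the Fock basis, $\langle\psi|\hat n_i|\psi\rangle$ is simply the occupation numbers weighted by the squared amplitudes of the eigenvector.

```python
import numpy as np
from itertools import product

L, T, V = 6, 1.0, 2.0                                      # small example: 6 sites, arbitrary couplings

basis = [np.array(s) for s in product((0, 1), repeat=L)]   # all 2^L occupation patterns
index = {tuple(s): k for k, s in enumerate(basis)}
dim = len(basis)

H = np.zeros((dim, dim))
for k, s in enumerate(basis):
    H[k, k] += V * np.sum(s[:-1] * s[1:])    # interaction  V * sum_i n_i n_{i+1}
    for i in range(L - 1):                   # hopping  T * (b†_i b_{i+1} + h.c.)
        if s[i] == 0 and s[i + 1] == 1:      # move a particle from site i+1 to site i
            t = s.copy(); t[i], t[i + 1] = 1, 0
            j = index[tuple(t)]
            H[j, k] += T
            H[k, j] += T                     # hermitian conjugate

E, U = np.linalg.eigh(H)                     # eigenvalues E, eigenvectors as columns of U

i = L // 2
n_i = np.array([s[i] for s in basis])        # diagonal of the number operator on site i
n_expect = (np.abs(U) ** 2).T @ n_i          # <n_i> for every eigenstate, to be scattered against E
```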
I'm really not quite sure if this is correct, and if this is really how one makes use of the diagonalized Hamilton, maybe someone can shed some light on my problem and explain if my approach is correct or where my faults lie.
|
Suppose you are given a nowhere-vanishing exact 2-form $B=dA$ on an open, connected domain $D\subset\mathbb{R}^3$. I'd like to think of $B$ as a magnetic field.
Consider the product $H(A)=A\wedge dA$. At least in the plasma physics literature, $H(A)$ is known as the magnetic helicity density.
How can one determine if there is a closed one-form $\mathbf{s}$ such that $H(A+\mathbf{s})$ is non-zero at all points in $D$?
The reason I am interested in this question is that if you can find such an $\mathbf{s}$, then $A+\mathbf{s}$ will define a contact structure on $D$ whose Reeb vector field gives the magnetic field lines. Thus, the question is closely related to the Hamiltonian structure of magnetic field line dynamics.
I'll elaborate on this last point a bit. If there is a vector potential $A$ such that $A∧dA$ is non-zero everywhere, then the distribution $ξ=\text{ker}(A)$ is nowhere integrable, meaning $ξ$ defines a contact structure on $D$ with a global contact 1-form $A$. The Reeb vector field of this contact structure relative to the contact form $A$ is the unique vector field $X$ that satisfies $A(X)=1$ and $i_XdA=0$. Using the standard volume form $μ_o$, $dA$ can be expressed as $i_B\mu_o$ for a unique divergence-free vector field $\mathbf{B}$ (I'm having trouble typing $\mathbf{B}$ as a subscript). Thus, the second condition on the Reeb vector field can be expressed as $\mathbf{B}×X=0$, which implies the integral curves of X coincide with the magnetic field lines.
More generally, suppose $M$ is an orientable odd-dimensional manifold equipped with an exact 2-form $\omega$ of maximal rank. Also assume that the characteristic line bundle associated with $\omega$ admits a non-vanishing section $b:M\rightarrow \text{ker}(\omega)$. What is the obstruction to the existence of a 1-form $\vartheta$ with $d\vartheta=\omega$ and $\vartheta(b)>0$?
Some observations/comments:
1) If $A(\mathbf{B})$ is bounded above and below on $D$, then a sufficient condition for there to be an $\mathbf{s}$ that gives a nowhere-vanishing helicity density is the existence of a closed one-form $\alpha$ with $\alpha(\mathbf{B})$ nowhere vanishing. In that case, $\mathbf{s}=\lambda \alpha$, where $\lambda$ is some large real number (with appropriate sign), would work.
If there is such an $\alpha$, then, being closed, it defines a foliation whose leaves are transverse to the divergence-free field $\mathbf{B}$. I suspect the question that asks whether a given non-vanishing divergence-free vector field admits a transverse co-dimension one foliation has been studied before, but I am not familiar with any work of this type.
An example where $D=$3-ball and helicity density must have a zero:
Let $D$ consist of those points in $\mathbb{R}^3$ with $x^2+y^2 < a^2$ for a real number $a>1$. Note that all closed 1-forms are exact in this case. Let $f:[0,\infty)\rightarrow\mathbb{R}$ be a smooth, non-decreasing function such that $f(r)=0$ for $r<1/10$ and $f(r)=1$ for $r\ge1/2$. Let $g:\mathbb{R}\rightarrow \mathbb{R}$ be the polynomial $g(r)=1-3r+2r^2$. Define the 2-form $B$ using the divergence free vector field $\mathbf{B}(x,y,z)=f(\sqrt{x^2+y^2})e_\phi(x,y,z)+g(\sqrt{x^2+y^2})e_z$. Here $e_\phi$ is the azimuthal unit vector and $e_z$ is the $z$-directed unit vector. It is easy to verify that $B$, thus defined, is an exact 2-form that is nowhere vanishing.
Because $g(1)=0$ and $f(1)=1$, the circle, $C$, in the $z=0$-plane, $x^2+y^2=1$, is an integral curve for the vector field $\mathbf{B}$. I will use this fact to prove that the helicity density must have a zero for any choice of gauge. Let $A$ satisfy $dA=B$ and suppose $A\wedge B$ is non-zero at all points in $D$. Note that $A\wedge B=A(\mathbf{B})\mu_o$, meaning $h=A(\mathbf{B})$ is a nowhere vanishing function. Without loss of generality, I will assume $h>0$. Thus, the line integral $I=\oint_C h\frac{dl}{|\mathbf{B}|}$ satisfies $I>0$. But, by Stokes' theorem, $I=2\pi\int_0^1g(r)rdr=0$, as is readily verified by directly evaluating the integral. Thus, there can be no such $A$.
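For completeness, the evaluation referred to above is just $$\int_0^1 g(r)\,r\,dr \ = \ \int_0^1 \bigl(r - 3r^2 + 2r^3\bigr)\,dr \ = \ \tfrac12 - 1 + \tfrac12 \ = \ 0.$$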
An example where $D=T^2\times (0,2\pi)$ and helicity density must have a zero:
Set $D=S^1\times S^1\times(0,2\pi)$ and let $(\theta,\zeta,r)$ be the obvious coordinate system. Set $B=f(r) dr\wedge d\theta+g(r) dr\wedge d\zeta$ where $$f(r)=\cos(2r),$$ and $$g(r)=\sin(r). $$ Clearly, $A=\frac{1}{2}\sin(2r)d\theta-\cos(r)d\zeta$ satisfies $B=dA$ and $B$ is nowhere vanishing. A quick calculation shows that $\int_D A\wedge B=0$.
Now suppose that $\mathbf{s}$ is an arbitrary closed 1-form. Either by using Stoke's theorem or by direct calculation, the fact that the total toroidal and poloidal fluxes, $2\pi\int_0^{2\pi}f(r)dr$ and $2\pi\int_0^{2\pi}g(r)dr$, are zero implies that $\int_D(A+\mathbf{s})\wedge B=0$. Thus, the helicity density must always have a zero.
|
Definition of quasi toric manifolds :
The action of $(S^1)^n$ on $\Bbb C^n$ by pointwise multiplication is called the standard representation. Given a manifold $M^{2n}$ with an $(S^1)^n$-action, a
local isomorphism of $M^{2n}$ with the standard representation consists of: an automorphism $\theta:(S^1)^n\to (S^1)^n$; $(S^1)^n$-stable open sets $V$ in $M^{2n}$ and $W$ in $\Bbb C^n$; and a $\theta$-equivariant homeomorphism $f:V\to W$.
One says that $M^{2n}$ is locally isomorphic to standard representation if each point of $M^{2n}$ is in the domain of some local isomorphism.
Let $P^n$ be a simple convex polytope. A quasitoric manifold over $P^n$ is a manifold $M^{2n}$ with an $(S^1)^n$ action that is locally isomorphic to the standard representation with a projection map $\pi:M^{2n}\to P^n$ such that the fibres are the $(S^1)^n$ orbits.
Definition of small covers :
By replacing $\Bbb C^n$ by $\Bbb R^n$, $(S^1)^n$ by $\Bbb Z_2^n$ (where $\Bbb Z_2=\{1,-1\}$) and $M^{2n}$ by $M^n$ in the above definition we get a small cover over $P^n$.
In the Wikipedia article on Quasitoric manifolds in the section "comparison with toric manifolds" they state the following :
Any projective toric manifold is a quasitoric manifold, and in some cases non-projective toric manifolds are also quasitoric manifolds.
Not all quasitoric manifolds are toric manifolds. For example, the connected sum $\mathbb {C} P^{2}\sharp \mathbb {C} P^{2}$ can be constructed as a quasitoric manifold, but it is not a toric manifold.
Question : I was wondering if the same statements can be made for small covers (real analogues of quasitoric manifolds) and the real valued points of a toric manifold?
For example, whereas $\mathbb {C} P^{2}\sharp \mathbb {C} P^{2}$ is a quasitoric manifold but not a toric manifold, we have that $\mathbb {R} P^{2}\sharp \mathbb {R} P^{2}$, which is the Klein bottle, is a small cover as well as a "real" toric manifold.
Thanks.
|
Let $p_n$ be the $n$-th prime number, as usual: $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, $p_4 = 7$, etc.
For $k=1,2,3,\ldots$, define $$ g_k = \liminf_{n \rightarrow \infty} (p_{n+k} - p_n). $$ Thus the twin prime conjecture asserts $g_1 = 2$.
Zhang's theorem (= weak twin prime conjecture) asserts $g_1 < \infty$.
The prime $m$-tuple conjecture asserts $g_2 = 6$ (infinitely many prime triplets), $g_3 = 8$ (infinitely many prime quadruplets), "etcetera" (with $m=k+1$).
Can Zhang's method be adapted or extended to prove $g_k < \infty$ for any (all) $k>1$?
Added a day later: Thanks for all the informative comments and answers! To summarize and update (I hope I'm attributing correctly):
0) [Eric Naslund] The question was already raised in the Goldston-Pintz-Yıldırım paper. See Question 3 on page 3:
Assuming the Elliott-Halberstam conjecture, can it be proved that there are three or more primes in admissible $k$-tuples with large enough $k$? Even under the strongest assumptions, our method fails to prove anything about more than two primes in a given tuple.
1) [several respondents] As things stand now, it does not seem that Zhang's technique or any other known method can prove finiteness of $g_k$ for $k > 1$. The main novelty of Zhang's proof is a cleverly weakened estimate a la Elliott-Halberstam, which is well short of "the strongest assumptions" mentioned by G-P-Y.
2) [GH] For $k>1$, the state of the art remains for now as it was pre-Zhang, giving nontrivial bounds not on $g_k$ but on $$ \Delta_k := \liminf_{n \rightarrow \infty} \frac{p_{n+k} - p_n}{\log n}. $$ The Prime Number Theorem (or even Čebyšev's technique) trivially yields $\Delta_k \leq k$ for all $k$; anything less than that is nontrivial. Bombieri and Davenport obtained $\Delta_k \leq k - \frac12$; the current record is $\Delta_k \leq e^{-\gamma} (k^{1/2}-1)^2$. This is positive for $k>1$ (though quite small for $k=2$ and $k=3$, at about $0.1$ and $0.3$), and for $k \rightarrow \infty$ is asymptotic to $e^{-\gamma} k$ with $e^{-\gamma} \approx 0.56146$.
3) [Nick Gill, David Roberts] Some other relevant links:
Terry Tao's June 3 exposition of Zhang's result and the work leading up to it;
The "Secret Blogging Seminar" entry and thread that has already brought the bound on $g_1$ from Zhang's original $7 \cdot 10^7$ down to below $5 \cdot 10^6$;
A PolyMath page that's keeping track of these improvements with links to the original arguments, supporting computer code, etc.;
A Polymath proposal that includes the sub-project of achieving further such improvements.
4) [Johan Andersson] A warning: phrases such as "large prime tuples in a given [length] interval" (from the Polymath proposal) refer not to configurations that we can prove arise in the primes but to admissible configurations, i.e. patterns of integers that could all be prime (and should all be prime infinitely often, according to the generalized prime $m$-tuple [a.k.a. weak Hardy-Littlewood] conjecture, which we don't seem to be close to proving yet). Despite appearances, such phrasings do not bear on a proof of $g_k < \infty$ for $k>1$, at least not yet.
|
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering, A334, 173-178, 2002

The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNT at 150 GPa and between 3.5% and 5% strain. During the measurements, 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science,
304, 276-278, 2004

The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows the yarn to be drawn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science,
306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where alpha is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant k=sqrt(dQ/mu)/3L is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs mu=0.13 (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length L=30 microns. A critical review of this formula is given here.
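As a rough numerical illustration of this formula (a sketch only: d, mu and L are the values quoted above, but the migration length Q is not given numerically here, so the 500 microns used below is just an assumed placeholder):

```python
import math

def yarn_strength_ratio(alpha_deg, d_nm=1.0, Q_um=500.0, mu=0.13, L_um=30.0):
    """sigma_yarn / sigma_fiber from the formula quoted above.
    Q_um is an assumed placeholder; the other defaults are the quoted values."""
    d = d_nm * 1e-9                      # fiber diameter [m]
    Q = Q_um * 1e-6                      # fiber migration length [m] (assumed)
    L = L_um * 1e-6                      # fiber length [m]
    k = math.sqrt(d * Q / mu) / (3 * L)
    a = math.radians(alpha_deg)
    return math.cos(a) ** 2 * (1 - k / math.sin(a))

for alpha in (10, 20, 30):
    print(alpha, round(yarn_strength_ratio(alpha), 3))
```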
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. the lack of embedded amorphous carbon and of imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so that we could grow infinitely long CNTs.
This article can be found in our archive.
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to its inherent shape, as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and thereby drives the growth process mechanically.
Of course for us in the SE community the most interesting question raised by this paper is: can we grow CNTs that are long enough that we can spin them into a yarn that would reach the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science,
318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNT and MWNT that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values, as the limit of extraction speed from the aerogel was reached and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at their weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is of the order of the typical spacing between weak points. This can be seen from the fact that for the 20 mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20 mm long fiber is 20 times higher than on a 1 mm long fiber.
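To make the weakest-link reasoning concrete, here is a small illustrative calculation. It assumes, purely for the sake of the example, that weak points occur independently along the fiber at some rate lambda per mm (the rate is invented, not a number from the paper), so a fiber of length L is flaw-free with probability exp(-lambda*L):

```python
import math

lam = 0.5                                  # assumed flaw rate per mm (illustrative only)
for L_mm in (1, 2, 20):
    p_flawless = math.exp(-lam * L_mm)     # probability of containing no weak point
    print(f"L = {L_mm:2d} mm: P(no weak point) ~ {p_flawless:.5f}")
# ~0.61 for 1 mm, ~0.37 for 2 mm, ~0.00005 for 20 mm: short fibers include a sizeable
# flaw-free ("high-performance") class, while 20 mm fibers essentially never do.
```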
In conclusion, the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000 km and a tensile strength of better than 3 GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
|
As far as I understand it, the first principle of thermodynamics is a mere definition of the quantity “Heat”: $$\text{d} Q := \text{d} L + \text{d} U.$$ This is somewhat the point of view taken in Fermi's introductory book "Thermodynamics":
[...] $$\Delta U + L=0$$ If the system is not thermally isolated, the first member of [eqn.] will generally not be equal to zero [...] Substitute the [eqn.] with the more general: $$\Delta U + L = Q.$$ [...] Now we will call $Q$, by definition, the quantity of heat received by the system during the transformation.
(if you want to read the full text you might want to google “Fermi Thermodynamics”... page 17).
I think that this point is logically sound and I have a quite good understanding of some of the above structure starting from here (e.g. the second principle). On the other hand I feel as I'm missing something.
To give an example, from mechanics, this is how I understand Newton's equation:
It is a matter of fact that the positions and the velocities of a mechanical system fully determine the accelerations of the system. Hence, the dynamics of each system follow second-order differential equations: $$\ddot x = F(x,\dot x, t).$$
Another example might be the second law of thermodynamics, which (in Clausius' form) is simply the statement of the fact that heat doesn't flow spontaneously from a cold body to a hotter one.
Since I find it strange that something that is called a “principle” is a mere definition (after all, there's no assumption involved in making a definition), I ask: what are the experimental facts behind the first principle of thermodynamics?
Note: I understand that this is really about my personal understanding, however I think that this question can be useful to others. Furthermore, if something isn't clear and if I can improve my question, let me know.
|
I’d like to share a simple proof I’ve discovered recently of a surprising fact: there is a universal algorithm, capable of computing any given function!
Wait, what? What on earth do I mean? Can’t we prove that some functions are not computable? Yes, of course.
What I mean is that there is a universal algorithm, a Turing machine program capable of computing any desired function, if only one should run the program in the right universe. There is a Turing machine program $p$ with the property that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$ on the natural numbers, including non-computable functions, there is a model of arithmetic or set theory inside of which the function computed by $p$ agrees exactly with $f$ on all standard finite input. You have to run the program in a different universe in order that it will compute your desired function $f$.
$\newcommand\ZFC{\text{ZFC}} \newcommand\PA{\text{PA}} \newcommand\Con{\mathop{\text{Con}}} \newcommand\proves{\vdash} \newcommand{\concat}{\mathbin{{}^\smallfrown}} \newcommand\restrict{\upharpoonright} $ Theorem There is a Turing machine program $p$, carrying out the specific algorithm described in the proof, such that for any function $f:\N\to\N$, there is a model of arithmetic $M\models\PA$, or indeed a model of set theory $M\models\ZFC$ or more (if consistent), such that the function computed by program $p$ inside $M$ agrees exactly with $f$ on all standard finite input.
The proof is elementary, relying essentially only on the ideas of the classical proof of the Gödel-Rosser theorem. To briefly review, for any computably axiomatized theory $T$ extending $\PA$, there is a corresponding sentence $\rho$, called the
Rosser sentence, which asserts, “for any proof of $\rho$ in $T$, there is a smaller proof of $\neg\rho$.” That is, by smaller, I mean that the Gödel-code of the proof is smaller. One constructs the sentence $\rho$ by a simple application of the Gödel fixed-point lemma, just as one constructs the usual Gödel sentence that asserts its own non-provability. The basic classical facts concerning the Rosser sentence include the following:
1. If $T$ is consistent, then so are both $T+\rho$ and $T+\neg\rho$.
2. $\PA+\Con(T)$ proves $\rho$.
3. The theories $T$, $T+\rho$ and $T+\neg\rho$ are equiconsistent.
4. If $T$ is consistent, then $T+\rho$ does not prove $\Con(T)$.
The first statement is the essential assertion of the Gödel-Rosser theorem, and it is easy to prove: if $T$ is consistent and $T\proves\rho$, then the proof would be finite in the meta-theory, and so since $T$ would have to prove that there is a smaller proof of $\neg\rho$, that proof would also be finite in the meta-theory and hence an actual proof, contradicting the consistency of $T$. Similarly, if $T\proves\neg\rho$, then the proof would be finite in the meta-theory, and so $T$ would be able to verify that $\rho$ is true, and so $T\proves\rho$, again contradicting consistency. By internalizing the previous arguments to PA, we see that $\PA+\Con(T)$ will prove that neither $\rho$ nor $\neg\rho$ are provable in $T$, making $\rho$ vacuously true in this case and also establishing $\Con(T+\rho)$ and $\Con(T+\neg\rho)$, for the second and third statements. In particular, $T+\Con(T)\proves\Con(T+\rho)$, which implies that $T+\rho$ does not prove $\Con(T)$ by the incompleteness theorem applied to the theory $T+\rho$, for the fourth statement.
Let’s now proceed to the proof of the theorem. To begin, we construct what I call the
Rosser tree over a c.e. theory $T$. Namely, we recursively define theories $R_s$ for each finite binary string $s\in 2^{{<}\omega}$, placing the initial theory $R_{\emptyset}=T$ at the root, and then recursively adding either the Rosser sentence $\rho_s$ for the theory $R_s$ or its negation $\neg\rho_s$ at each stage to form the theories at the next level of the tree. $$R_{s\concat 1}=R_s+\rho_s$$ $$R_{s\concat 0}=R_s+\neg\rho_s$$ Each theory $R_s$ is therefore a finite extension of $T$ by successively adding the appropriate Rosser sentences or their negations in the pattern described by $s$. If the initial theory $T$ is consistent, then it follows by induction using the Gödel-Rosser theorem that all the theories $R_s$ in the Rosser tree are consistent. Extending our notation to the branches through the tree, if $f\in{}^\omega 2$ is an infinite binary sequence, we let $R_f=\bigcup_n R_{f\upharpoonright n}$ be the union of the theories arising along that branch of the Rosser tree. In this way, we have constructed a perfect set of continuum many distinct consistent theories.
I shall now describe a universal algorithm for the case of computing binary functions. Consider the Rosser tree over the theory $T=\PA+\neg\Con(\PA)$. This is a consistent theory that happens to prove its own inconsistency. By considering the Gödel-codes in order, the algorithm should begin by searching for a proof of the Rosser sentence $\rho_{\emptyset}$ or its negation in the initial theory $R_{\emptyset}$. If such a proof is ever found, then the algorithm outputs $0$ or $1$ on input $0$, respectively, depending on whether it was the Rosser sentence or its negation that was found first, and moves to the next theory in the Rosser tree by adding the opposite statement to the current theory. Then, it starts searching for a proof of the Rosser sentence of
that theory or its negation. At each stage in the algorithm, there is a current theory $R_s$, depending on which prior proofs have been found, and the algorithm searches for a proof of $\rho_s$ or $\neg\rho_s$. If found, it outputs $0$ or $1$ accordingly (on input $n=|s|$), and moves to the next theory in the Rosser tree by adding the opposite statement to the current theory.
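Purely to make the control flow explicit, here is a schematic sketch of that procedure. It is not something that can actually be executed effectively: the two helper functions are hypothetical stand-ins for proof searches over the current theory.

```python
def rosser_sentence(theory):
    """Hypothetical: return the Rosser sentence rho for the given c.e. theory."""
    raise NotImplementedError

def proof_of_rho_found_first(theory, rho):
    """Hypothetical: search proofs of rho and of (not rho) in order of Goedel code;
    return True if a proof of rho turns up first (the search may never halt)."""
    raise NotImplementedError

def universal_binary_program(n, initial_theory):
    """Produce the n-th output bit by walking down the Rosser tree, as described above."""
    theory = list(initial_theory)               # current theory R_s
    for _ in range(n + 1):
        rho = rosser_sentence(theory)
        if proof_of_rho_found_first(theory, rho):
            bit = 0                              # proof of rho found first: output 0 ...
            theory = theory + [("not", rho)]     # ... and move to R_{s^0}
        else:
            bit = 1                              # proof of (not rho) found first: output 1 ...
            theory = theory + [rho]              # ... and move to R_{s^1}
    return bit
```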
If $f:\N\to 2=\{0,1\}$ is any binary function on the natural numbers, then let $R_f$ be the theory arising from the corresponding path through the Rosser tree, and let $M\models R_f$ be a model of this theory. I claim that the universal algorithm I just described will compute exactly $f(n)$ on input $n$ inside this model. The thing to notice is that because $\neg\Con(\PA)$ was part of the initial theory, the model $M$ will think that all the theories in the Rosser tree are inconsistent. So the model will have plenty of proofs of every statement and its negation for any theory in the Rosser tree, and so in particular, the function computed by $p$ in $M$ will be a total function. The question is which proofs will come first at each stage, affecting the values of the function. Let $s=f\restrict n$ and notice that $R_s$ is true in $M$. Suppose inductively that the function computed by $p$ has worked correctly below $n$ in $M$, and consider stage $n$ of the procedure. By induction, the current theory will be exactly $R_s$, and the algorithm will be searching for a proof of $\rho_s$ or its negation in $R_s$. Notice that $f(n)=1$ just in case $\rho_s$ is true in $M$, and because of what $\rho_s$ asserts and the fact that $M$ thinks it is provable in $R_s$, it must be that there is a smaller proof of $\neg\rho_s$. So in this case, the algorithm will find the proof of $\neg\rho_s$ first, and therefore, according to the precise instructions of the algorithm, it will output $1$ on input $n$ and add $\rho_s$ (the opposite statement) to the current theory, moving to the theory $R_{s\concat 1}$ in the Rosser tree. Similarly, if $f(n)=0$, then $\neg\rho_s$ will be true in $M$, and the algorithm will therefore first find a proof of $\rho_s$, give output $0$ and add $\neg\rho_s$ to the current theory, moving to $R_{s\concat 0}$. In this way, the algorithm finds the proofs in exactly the right way so as to have $R_{f\restrict n}$ as the current theory at stage $n$ and thereby compute exactly the function $f$, as desired.
Basically, the theory $R_f$ asserts exactly that the proofs will be found in the right order in such a way that program $p$ will exactly compute $f$ on all standard finite input. So every binary function $f$ is computed by the algorithm in any model of the theory $R_f$.
Let me now explain how to extend the result to handle all functions $g:\N\to\N$, rather than only the binary functions as above. The idea is simply to modify the binary universal algorithm in a simple way. Any function $g:\N\to \N$ can be coded with a binary function $f:\N\to 2$ in a canonical way, for example, by having successive blocks of $1$s in $f$, separated by $0$s, with the $n^{\rm th}$ block of size $g(n)$. Let $q$ be the algorithm that runs the binary universal algorithm described above, thereby computing a binary sequence, and then extract from that binary sequence a corresponding function from $\N$ to $\N$ (this may fail, if for example, the binary sequence is finite or if it has only finitely many $0$s). Nevertheless, for any function $g:\N\to \N$ there is a binary function $f:\N\to 2$ coding it in the way we have described, and in any model $M\models R_f$, the binary universal algorithm will compute $f$, causing this adapted algorithm to compute exactly $g$ on all standard finite input, as desired.
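For concreteness, the block coding just described can be sketched as follows (the helper names are my own):

```python
def encode_as_bits(values):
    """Code g(0), g(1), ... as blocks of 1s of lengths g(n), separated by single 0s."""
    bits = []
    for v in values:
        bits.extend([1] * v)
        bits.append(0)
    return bits

def decode_bits(bits):
    """Recover the coded values; assumes every block is terminated by a 0."""
    values, run = [], 0
    for b in bits:
        if b == 1:
            run += 1
        else:
            values.append(run)
            run = 0
    return values

assert encode_as_bits([3, 0, 2]) == [1, 1, 1, 0, 0, 1, 1, 0]
assert decode_bits(encode_as_bits([3, 0, 2])) == [3, 0, 2]
```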
Finally, let me describe how to extend the result to work with models of set theory, rather than models of arithmetic. Suppose that $\ZFC^+$ is a consistent c.e. extension of ZFC; perhaps it is ZFC itself, or ZFC plus some large cardinal axioms. Let $T=\ZFC^++\neg\Con(\ZFC^+)$ be a slightly stronger theory, which is also consistent, by the incompleteness theorem. Since $T$ interprets arithmetic, the theory of Rosser sentences applies, and so we may build the corresponding Rosser tree over $T$, and also we may undertake the binary universal algorithm using $T$ as the initial theory. If $f:\N\to 2$ is any binary function, then let $R_f$ be the theory arising on the corresponding branch through the Rosser tree, and suppose $M\models R_f$. This is a model of $\ZFC^+$, which also thinks that $\ZFC^+$ is inconsistent. So again, the universal algorithm will find plenty of proofs in this model, and as before, it will find the proofs in just the right order that the binary universal algorithm will compute exactly the function $f$. From this binary universal algorithm, one may again design an algorithm universal for all functions $g:\N\to\N$, as desired.
One can also get another kind of universality. Namely, there is a program $r$, such that for any finite $s\subset\N$, there is a model $M$ of $\PA$ (or $\ZFC$, etc.) such that inside the model $M$, the program $r$ will enumerate the set $s$ and nothing more. One can obtain such a program $r$ from the program $p$ of the theorem: just let $r$ run the universal binary program $p$ until a double $0$ is produced, and then interpret the finite binary string up to that point as the set $s$ to output.
Let me now also discuss another form of universality.
Corollary There is a program $p$, such that for any model $M\models\PA+\Con(\PA)$ and any function $f:M\to M$ that is definable in $M$, there is an end-extension of $M$ to a taller model $N\models\PA$ such that in $N$, the function computed by program $p$ agrees exactly with $f$ on input in $M$.
Proof We simply apply the main theorem inside $M$. The point is that if $M$ thinks $\Con(\PA)$, then it can build what it thinks is the tree of Rosser extensions, and it will think that each step maintains consistency. So the theory $R_f$ that it constructs will be consistent in $M$ and therefore have a model (the Henkin model) definable in $M$, which will therefore be an end-extension of $M$. QED
This last application has a clear affinity with a theorem of Woodin’s, recently extended by Rasmus Blanck and Ali Enayat. See Victoria Gitman’s posts about her seminar talk on those theorems: Computable processes can produce arbitrary outputs in nonstandard models, continuation.
Alternative proof. Here is an alternative elegant proof of the theorem based on the comments below of Vadim Kosoy. Let $T$ be any consistent computably axiomatizable theory interpreting PA, such as PA itself or ZFC or what have you. For any Turing machine program $e$, let $q(e)$ be a program carrying out the following procedure: on input $n$, search systematically for a finite function $h:X\to\mathbb{N}$, with $X$ finite and $n\in X$, and for a proof of the statement “program $e$ does not agree with $h$ on all inputs in $X$,” using the function $h$ simply as a list of values for this assertion. For the first such function and proof that is found, if any, give as output the value $h(n)$.
Since the function $e\mapsto q(e)$ is computable, there is by Kleene’s recursion theorem a program $p$ for which $p$ and $q(p)$ compute the same function, and furthermore, $T$ proves this. So the program $p$ is searching for proofs that $p$ itself does not behave in a certain way, and then it is behaving in that way when such a proof is found.
I claim that the theory $T$ does not actually prove any of those statements, “program $p$ does not agree with $h$ on inputs in $X$,” for any particular finite function $h:X\to\mathbb{N}$. If it did prove such a statement, then for the smallest such function and proof, the output of $p$ would indeed be $h$ on all inputs in $X$, by design. Thus, there would also be a proof that the program
did agree with this particular $h$, and so $T$ would prove a contradiction, contrary to our assumption that it was consistent. So $T$ actually proves none of those statements. In particular, the program $p$ computes the empty function in the standard model of arithmetic. But also, for any particular finite function $h:X\to\mathbb{N}$, we may consistently add the assertion “program $p$ agrees with $h$ on inputs in $X$” to $T$, since $T$ did not refute this assertion.
For any function $f:\mathbb{N}\to\mathbb{N}$, let $T_f$ be the theory $T$ together with all assertions of the form “program $p$ halts on input $n$ with value $k$”, for the particular value $k=f(n)$. I claim that this theory is consistent, for if it is not, then by compactness there would be finitely many of the assertions that enable the inconsistency, and so there would be a finite function $h:X\to\mathbb{N}$, with $h=f\upharpoonright X$, such that $T$ proved the program $p$ does not agree with $h$ on inputs in $X$. But in the previous paragraph, we proved that this doesn’t happen. And so the theory $T_f$ is consistent.
Finally, note that in any model of $T_f$, the program $p$ computes the function $f$ on standard input, because these assertions are exactly made in the theory.
QED
|
I was given the following formula for the Fourier series of a function with period $2\pi$:
$$ \begin{align*} \hat f(x) = \frac {a_0} 2 + \sum_{n=1}^\infty a_n \cos (nx) + b_n \sin (nx) \end{align*} $$
This formula is awkward because the coefficient $a_0$ is halved, whereas the coefficients $a_n$ and $b_n$ aren't, for $n > 0$. Then I realized that I could rewrite this as:
$$ \begin{align*} \hat f(x) = \sum_{n \in \mathbb Z} c_n \cos (nx) + d_n \sin (nx) \end{align*} $$
Where:
$$ \begin{align*} a_n & = c_n + c_{-n} && \mbox{for } n \ge 0 \\ b_n & = d_n - d_{-n} && \mbox{for } n \ge 0 \\ \end{align*} $$
This is IMO much easier on the eyes. Then you can add a constraint that $c_n, d_n = 0$ when $n < 0$, if you want to. Is there a good reason not to work this way?
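As a quick numerical sanity check of the claimed equivalence between the two forms (a throwaway sketch; the random coefficients and the truncation at N = 5 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
# arbitrary two-sided coefficients c_n, d_n for n = -N..N
c = {n: rng.normal() for n in range(-N, N + 1)}
d = {n: rng.normal() for n in range(-N, N + 1)}
# fold them into the usual one-sided coefficients
a = {n: c[n] + c[-n] for n in range(0, N + 1)}
b = {n: d[n] - d[-n] for n in range(0, N + 1)}

x = np.linspace(-np.pi, np.pi, 7)
two_sided = sum(c[n] * np.cos(n * x) + d[n] * np.sin(n * x) for n in range(-N, N + 1))
one_sided = a[0] / 2 + sum(a[n] * np.cos(n * x) + b[n] * np.sin(n * x)
                           for n in range(1, N + 1))
print(np.allclose(two_sided, one_sided))   # True
```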
|
Consider the system:
$\begin{cases}x' = e^{ay} - e^x \\ y' = a x^2 + (a-a^2) x + ay^2 e^{-y}\end{cases}$
Using Lyapunov's first method we find that, for $a \in ]-\infty,0[ \cup ]0,1[$, $p = (0,0)$ is unstable, and for $a \in ]1,+\infty[$ it is asymptotically stable.
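For what it's worth, the linearization behind that claim can be checked symbolically (a throwaway sketch):

```python
import sympy as sp

x, y, a, t = sp.symbols('x y a t')
f1 = sp.exp(a*y) - sp.exp(x)
f2 = a*x**2 + (a - a**2)*x + a*y**2*sp.exp(-y)
J = sp.Matrix([[f1.diff(x), f1.diff(y)],
               [f2.diff(x), f2.diff(y)]]).subs({x: 0, y: 0})
print(J)                                   # [[-1, a], [a - a**2, 0]]
print(sp.expand(J.charpoly(t).as_expr()))  # t**2 + t + a**3 - a**2
# The constant term a**2*(a - 1) is negative for a < 1 (a != 0), giving a saddle
# (unstable), and positive for a > 1, giving two roots with negative real part.
```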
For $a = 1$ I need to use Chetaev's theorem to show that it is unstable, and without further indication I have to determine the stability when $a = 0$.
$a = 1$
In this case I looked for a Lyapunov function with separated variables but I get:
$\dot V(x,y) = V_1'(x) e^y - V_1'(x) e^x + x^2 V_2'(y) + y^2 e^{-y}V_2'(y)$
I don't see how to choose $V_1,V_2$ to make $\dot V(x,y) > 0$ in a neighbourhood of $p = (0,0)$
$a = 0$
This time we have the system $\begin{cases}x' = 1 - e^x \\ y' = 0\end{cases}$ and one solution is $(0,C)$ with $C \in \mathbb{R}$, so $p = (0,0)$ is not asymptotically stable. I tried to use Lyapunov's second theorem to have $\dot V = 0$ and show stability, but it didn't work either.
|
There are proofs that treat the cases of real and non-real $\chi$ on an equal footing. One proof is in Serre's Course in Arithmetic, which the answers by Pete and David are basically about. That method uses the (hidden) fact that the zeta-function of the $m$-th cyclotomic field has a simple pole at $s = 1$, just like the Riemann zeta-function. Here is another proof which focuses only on the $L$-function of the character $\chi$ under discussion, the $L$-function of the conjugate character, and the Riemann zeta-function.
Consider the product$$H(s) = \zeta(s)^2L(s,\chi)L(s,\overline{\chi}).$$This function is analytic for $\sigma > 0$, with the possible exception of a pole at $s = 1$. (As usual I write $s = \sigma + it$.)
Assume $L(1,\chi) = 0$. Then also $L(1,\overline{\chi}) = 0$. So in the product defining $H(s)$, the double pole of $\zeta(s)^2$ at $s = 1$ is cancelled and $H(s)$ is therefore analytic throughout the half-plane $\sigma > 0$.
For $\sigma > 1$, we have the exponential representation $$H(s) = \exp\left(\sum_{p, k} \frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{kp^{ks}}\right),$$where the sum is over $k \geq 1$ and primes $p$. If $p$ does not divide $m$, then we write $\chi(p) = e^{i\theta_p}$ and find
$$\frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{k} = \frac{2(1 + \cos(k\theta_p))}{k} \geq 0.$$ If $p$ divides $m$ then this sum is $2/k > 0$. Either way, inside that exponential is a Dirichlet series with nonnegative coefficients, so when we exponentiate and rearrange terms (on the half-plane of abs. convergence, namely where $\sigma > 1$), we see that $H(s)$ is a Dirichlet series with nonnegative coefficients. A lemma of Landau on Dirichlet series with nonnegative coefficients then assures us that the Dirichlet series representation of $H(s)$ is valid on any half-plane where $H(s)$ can be analytically continued.
To get a contradiction at this point, here are several methods.
[Edit: The slickest argument I have seen, due to Bateman, is in the answer by J.H.S., so let me put it here. The idea is to look at the coefficient of $1/p^{2s}$ in the Dirichlet series for $H(s)$. By multiplying out the $p$-part of the Euler product, the coefficient of $1/p^s$ is $2 + \chi(p) + \overline{\chi}(p)$, which is nonnegative, but the coefficient of $1/p^{2s}$ is $(\chi(p) + \overline{\chi}(p) + 1)^2 + 1$, which is not only nonnegative but in fact is greater than or equal to 1. Therefore if $H(s)$ has an analytic continuation along the real line out to the number $\sigma$, then for real $s \geq \sigma$ we have $H(s) \geq \sum_{p} 1/p^{2s}$. The hypothesis that $L(1,\chi) = 0$ makes $H(s)$ analytic for all complex numbers with positive real part, so we can take $s = 1/2$ and get $H(1/2) \geq \sum_{p} 1/p$, which is absurd since that series over the primes diverges. QED!]
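As a quick sanity check of that coefficient computation (a throwaway sketch: the symbol a plays the role of $\chi(p)$ for $p \nmid m$, so $|\chi(p)| = 1$ and $\overline{\chi}(p) = 1/a$, while x stands for $p^{-s}$):

```python
import sympy as sp

x, a = sp.symbols('x a')
abar = 1 / a                              # conj(chi(p)) when |chi(p)| = 1
euler_factor = 1 / ((1 - x)**2 * (1 - a*x) * (1 - abar*x))
ser = sp.expand(sp.series(euler_factor, x, 0, 3).removeO())
print(sp.simplify(ser.coeff(x, 1)))                              # 2 + a + 1/a
print(sp.simplify(ser.coeff(x, 2) - ((a + abar + 1)**2 + 1)))    # 0
```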
If you are willing to accept that $L(s,\chi)$ (and therefore $L(s,\overline{\chi})$) has an analytic continuation to the whole plane, or at least out to the point $s = -2$, then $H(s)$ extends to $s = -2$. The Dirichlet series representation of $H(s)$ is convergent at $s = -2$ by our analytic continuation hypothesis and it shows $H(-2) > 1$, or the exponential representation implies that at least $H(-2) \not= 0$. But $\zeta(-2) = 0$, so $H(-2) = 0$. Either way, we have a contradiction.
There is a similar argument, pointed out to me by Adrian Barbu, that does not require analytic continuation of $L(s,\chi)$ beyond the half-plane $\sigma > 0$. If you are willing to accept that $\zeta(s)$ has zeros in the critical strip $0 < \sigma < 1$ (which is a region that the Dirichlet series and exponential representations of $H(s)$ are both valid since $H(s)$ is analytic on $\sigma > 0$), we can evaluate the exponential representation of $H(s)$ at such a zero to get a contradiction. Of course the amount of analysis that lies behind this is more substantial than what is used to continue $L(s,\chi)$ out to $s = -2$.
We consider $H(s)$ as $s \rightarrow 0^{+}$. We need to accept that $H$ is bounded as $s \rightarrow 0^{+}$. (It's even holomorphic there, but we don't quite need that.) For real $s > 0$ and a fixed prime $p_0$ (not dividing $m$, say), we can bound $H(s)$ from below by the sum of the $p_0$-power terms in its Dirichlet series. The sum of these terms is exactly the $p_0$-Euler factor of $H(s)$, so we have the lower bound $$H(s) > \frac{1}{(1 - p_0^{-s})^2(1 - \chi(p_0)p_0^{-s})(1 - \overline{\chi}(p_0)p_0^{-s})} = \frac{1}{(1 - p_0^{-s})^2(1 - (\chi(p_0)+ \overline{\chi}(p_0))p_{0}^{-s} + p_0^{-2s})}$$ for real $s > 0$. The right side tends to $\infty$ as $s \rightarrow 0^{+}$. We have a contradiction. QED
These three arguments at some point use knowledge beyond the half-plane $\sigma > 0$ or a nontrivial zero of the zeta-function. Granting any of those lets you see easily that $H(s)$ can't vanish at $s = 1$, but that "granting" may seem overly technical. If you want a proof for the real and complex cases uniformly which does not go outside the region $\sigma > 0$, use the method in the answer by Pete or David [edit: or use the method I edited in as the first one in this answer].
|
This question is about Feynman diagram and vertices in a hadron decay, which comes from Problem 89.5 in Srednicki's textbook Quantum Field Theory. The reaction involved is \begin{equation} d\rightarrow u + e + \overline{\nu}_{e} \tag{1} \end{equation}
After integrating out the $W^{\pm}$ fields, we get an effective interaction between the hadron and lepton currents that includes \begin{equation} \mathcal{L}_{eff}=2\sqrt{2}Z_{C}C(\mathcal{\overline{E}}_{L}\gamma^{\mu}\mathcal{N}_{eL})(\mathcal{\overline{U}}_{L}\gamma_{\mu}\mathcal{D}_{L}), \tag{89.35} \end{equation} where $Z_{C}$ is a renormalizing factor, $C$ is a constant, and \begin{equation} \mathcal{E} \equiv \left( \begin{array}{c} e \\ \overline{e}^{\dagger} \end{array} \right), \hspace{0.2in} \mathcal{N}_{e} \equiv \left( \begin{array}{c} \nu_{e} \\ \nu_{e}^{\dagger} \end{array} \right), \hspace{0.2in} \mathcal{U} \equiv \left( \begin{array}{c} u \\ \overline{u}^{\dagger} \end{array} \right), \hspace{0.2in} \mathcal{D} \equiv \left( \begin{array}{c} d \\ \overline{d}^{\dagger} \end{array} \right) \end{equation} are Dirac fields for the electron, electron neutrino, up quark, and down quark respectively. Using the Fierz identity, eq. (89.35) can be written as \begin{equation} \mathcal{L}_{eff}=2\sqrt{2}Z_{C}C(\mathcal{\overline{E}}_{L}\gamma^{\mu}\mathcal{D}_{L})(\mathcal{\overline{U}}_{L}\gamma_{\mu}\mathcal{N}_{eL}), \tag{89.36} \end{equation}
In the Feynman diagram for the interaction eq. (89.35), the two vertices are $d-u-W^{-}$ and $e-\overline{\nu}_{e}-W^{-}$. Does eq. (89.36) imply a different interaction that requires a different diagram? If so, what are the two vertices in this diagram? (they seem to be $e-d-X$ and $u-\overline{\nu}_{e}-X$; what is $X$?) Furthermore, does eq. (89.36) imply a quark process different from (1)?
If someone says that eq. (89.36) is just a mathematical transformation using the Fierz identity and does not correspond to a physical interaction, this seems not to be the case, because by making an analogy of the diagram of eq. (89.36) to the one-loop correction to the photon-fermion-fermion vertex (Fig. 62.3) in spinor electrodynamics (Section 62 in Srednicki's textbook), Problem 89.5 part b) asks us to prove the same result as Problem 62.2. This suggests that eq. (89.36) implies a diagram different from that of eq. (89.35) but the same as that of Fig. 62.3 in Problem 62.2.
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message zedler Joined: 03 Mar 2006 Posts: 15
Posted: Mon Mar 27, 2006 11:53 am Post subject: spacing of mathrm Hello,
\documentclass{book}
\usepackage{times,mtpro2}
\begin{document}
$(\mathrm j$
\end{document}
gives touching glyphs.
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Tue Mar 28, 2006 12:29 pm Post subject: Re: spacing of mathrm
zedler wrote: Hello,
\documentclass{book}
\usepackage{times,mtpro2}
\begin{document}
$(\mathrm j$
\end{document}
gives touching glyphs.
Michael
There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better. zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 5:23 am Post subject: Re: spacing of mathrm
Quote: There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better.
Yes, I really want to typeset $\exp(\mathrm j\omega\tau)=$ ;-)
I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm...
The tight bracket side bearings were also an issue in my previous example, $\[\]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem...
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 6:39 am Post subject: Re: spacing of mathrm
zedler wrote:
Quote: There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better.
Yes, I really want to typeset $\exp(\mathrm j\omega\tau=$ ;-)
I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm...
The tight bracket side bearings were also an issue in my previous example, $\[\]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem...
Michael
Actually, I didn't, and one can't, adjust the spacing after a left parenthesis or before a right parenthesis
using kerns [I mentioned this on some posting somewhere once before];
even if you put kerns into the tfm file, they are ignored because the left parenthesis is an "opening", which determines its own spacing, and similarly the right parenthesis is a "closing". I chose side bearings for
the parenthesis that worked well with the italic letters on the math italic font.
Even if that were not the case, the real problem is that
in the expression \exp(\mathrm j the ( comes from the math italic font,
while the j is coming from an entirely different font, the Times-Roman font, and TeX has no way of kerning characters in different fonts. If you
were to use some other roman font as your text font, then the problem might very well be less or much more---it would depend entirely on the left side bearing of j in that particular font.
I suspect that j is being used here as some special character (perhaps in electrical engineering, although I thought they preferred bold j); in that case, I would just define something like \myj to give a small kern followed by j---in fact, it's easier to type \myj than to type \mathrm j.
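A minimal macro along the lines suggested above might look like this (a sketch only; \myj is not predefined anywhere, and the kern amount is a guess that would need tuning to the fonts in use):

```latex
% hypothetical helper: a small kern before an upright j, as suggested above
\newcommand{\myj}{\mkern1.5mu\mathrm{j}}
% usage: $\exp(\myj\omega\tau)$
```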
Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be
overridden with kerns). zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 10:00 am Post subject: Re: spacing of mathrm
Quote: Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be
overridden with kerns).
I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once. Perhaps you're interested, I've put together a collection showing how different math font setups behave in the above mentioned cases: http://www.hft.ei.tum.de/mz/mtpro2_sidebearings.pdf
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 1:24 pm Post subject: Re: spacing of mathrm
Quote: ="zedler
I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once. Perhaps you're interested, I've put together a collection showing how different math font setups behave in the above mentioned cases: http://www.hft.ei.tum.de/mz/mtpro2_sidebearings.pdf
Michael
Interesting.
I'd say that CM looks the worst (especially the \omega and
\tau, as well as being so thin).
Lucida is somewhat "klunky", though definitely easy to read!
(Is this Lucida or Lucida-Bright?) Someone mentioned that section headings are sometimes printed in sans-serif,
so that a sans-serif math might be nice to have; I suspect that the Lucida
Greek letters would work well for that.
If \mathrm j is in a macro, then probably there should also be some space on the right; certainly needed for CM, not really needed for Lucida
or Minion, useful for Fourier and MTPro2.
By the way, what is []_{\langle6\times6\rangle} ? zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 3:25 pm Post subject: Re: spacing of mathrm
Quote: (Is this Lucida or Lucida-Bright?)
Pctex's Lucida fonts.
Quote: By the way, what is []_{\langle6\times6\rangle} ?
Excerpt from a paper (Deadline tomorrow Mar 30, Hawai time ;-))
Code:
\begin{document}\let\mathbf\mbf
...
Next, the impedance matrix of the outer 12-port is obtained by
inverting $\mathbf{Y}^{\langle 16\times16\rangle}$ and taking the
upper left $\langle 12\times12\rangle$ submatrix
\begin{equation}
\mathbf{Z}^{\langle 12\times12\rangle}=\left[{\mathbf{Y}^{\langle 16\times16\rangle}}^{-1}\right]_{\langle 12\times12\rangle}
\end{equation}
where the operator $[\,]_{\langle 12\times12\rangle}$ denotes taking
the submatrix. The $\mathbf Z^{\langle 6\times 6\rangle}=\mathbf Z$
matrix of the outer six-port is obtained by
Perhaps not the best notation, do you have a better idea? BTW, quite funny that both you and my boss are aficionados of differential forms ;-)
Wish you wedge and hodge,
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 3:39 pm Post subject: Re: spacing of mathrm
zedler wrote:
Excerpt from a paper (Deadline tomorrow Mar 30, Hawai time ;-))
Code:
\begin{document}\let\mathbf\mbf
...
Next, the impedance matrix of the outer 12-port is obtained by
inverting $\mathbf{Y}^{\langle 16\times16\rangle}$ and taking the
upper left $\langle 12\times12\rangle$ submatrix
\begin{equation}
\mathbf{Z}^{\langle 12\times12\rangle}=\left[{\mathbf{Y}^{\langle 16\times16\rangle}}^{-1}\right]_{\langle 12\times12\rangle}
\end{equation}
where the operator $[\,]_{\langle 12\times12\rangle}$ denotes taking
the submatrix. The $\mathbf Z^{\langle 6\times 6\rangle}=\mathbf Z$
matrix of the outer six-port is obtained by
Perhaps not the best notation, do you have a better idea? BTW, quite funny that both you and my boss are aficionados of differential forms ;-)
Wish you wedge and hodge,
Michael
Not really, but I would probably have used something like UL_{\langle
12\times12\rangle}(...) with U and L roman (or perhaps bold). And I
probably would actually have used something like UL_{[12]}, with
the idea that for square matrices [12] would mean \langle12\times12\rangle.
|
In the following video we go over the definitions of limit and continuity. The definitions are repeated in text form below the video.
For functions of one variable, the (rough) definition of a limit was: $\lim_{x \to a} f(x) = L$ means that $f(x)$ gets as close as we like to $L$ whenever $x$ is close enough to $a$.
We made that precise by saying exactly what `close to' means. We used the letter $\epsilon$ for how close $f(x)$ has to get to $L$, and $\delta$ for how close $x$ is to $a$: for every $\epsilon > 0$ there is a $\delta > 0$ such that $|f(x) - L| < \epsilon$ whenever $0 < |x - a| < \delta$.
The rough and precise definitions of limits of functions of two (or more) variables work the same way: $\lim_{(x,y) \to (a,b)} f(x,y) = L$ means that $f(x,y)$ gets as close as we like to $L$ whenever $(x,y)$ is close enough to $(a,b)$; precisely, for every $\epsilon > 0$ there is a $\delta > 0$ such that $|f(x,y) - L| < \epsilon$ whenever $(x,y)$ is within distance $\delta$ of $(a,b)$ (and $(x,y) \neq (a,b)$).
The only difference is what we mean by `close to $(a,b)$'. We mean the distance in the plane: $$\|\langle x-a, y-b \rangle\| = \sqrt{(x-a)^2 + (y-b)^2}.$$This is small whenever $x$ is close to $a$ and $y$ is close to $b$. Just one of them being close isn't good enough.
Just as with one variable, we say a function is continuous at $(a,b)$ if $\lim_{(x,y) \to (a,b)} f(x,y) = f(a,b)$, and continuous on a region if it is continuous at every point of that region.
Most of the rules for continuous functions carry over unchanged from single variable calculus. For instance, sums, products, and compositions of continuous functions are continuous, and quotients are continuous wherever the denominator is nonzero.
|
Q1. Find the largest $y$ such that
$$\frac{1}{1+x^{2}} \ge \frac{y-x}{y+x} \quad \text{for all } x>0.$$
Q2. Find the minimum and maximum values of
$$\frac{x+1}{xy+x+1} +\frac{y+1}{yz+y+1} +\frac{z+1}{zx+z+1}.$$
Note by Satyajit Ghosh 3 years, 11 months ago
@Dev Sharma @Adarsh Kumar @Surya Prakash @Svatejas Shivakumar @Kushagra Sahni
@Satyajit Ghosh Thank you for mentioning me!
Applying componendo and dividendo we get $$\frac{2+x^{2}}{-x^{2}}\geq\frac{2y}{-2x}$$
$$\frac{2}{x}+x \geq y$$
Applying A.M.-G.M.,
$$\frac{2}{x}+x\geq 2\sqrt{2}$$
Therefore $y=2\sqrt{2}$.
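A quick numerical check of that last step (just a brute-force scan over an arbitrary grid):

```python
import numpy as np

x = np.linspace(0.01, 10, 200001)
vals = 2 / x + x                      # the bound on y obtained above
i = vals.argmin()
print(x[i], vals[i], 2 * np.sqrt(2))  # minimum near x = sqrt(2) ~ 1.414, value ~ 2.828
```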
Thanks. How could I forget to tag you! Btw, can you have a look at Olympiad corner #1 Q1?
Answered
@Shivam Jadhav – He is asking for question 1 solution. 2nd was easy.
@Shivam Jadhav – What about 2nd question?
Lol both questions are Q1?
I've fixed it.
Do u think the 2nd question is correct? Check the expression again from your book.
Sorry the denominator of 1st term had xy+y+1 but now I have updated it it to the correct question
If you have got the answer, please give a hint at least
Therefore, the maximum value of $y$ is $\sqrt{8}$.
Thanks! Can you tell the answer for Q2? Do check out my Olympiad corner #1, which has Q1 unanswered.
|
Difference between revisions of "Abstract.tex"
Line 1: Line 1:
\begin{abstract}
\begin{abstract}
−
The Hales--Jewett theorem asserts that for every $r$ and every $k$ there exists $n$ such that every $r$-colouring of the $n$-dimensional grid $\{1, \dotsc, k\}^n$ contains a combinatorial line. This result is a generalization of van der Waerden's theorem, and it is one of the fundamental results of Ramsey theory. The van der Waerden
+
The Hales--Jewett theorem asserts that for every $r$ and every $k$ there exists $n$ such that every $r$-colouring of the $n$-dimensional grid $\{1, \dotsc, k\}^n$ contains a combinatorial line. This result is a generalization of van der Waerden's theorem, and it is one of the fundamental results of Ramsey theory. The van der Waerden has a famous density version, conjectured by Erd\H os and Tur\'an in 1936, proved by Szemer\'edi in 1975 and given a different proof by Furstenberg in 1977. The Hales--Jewett theorem has a density version as well, proved by Furstenberg and Katznelson in 1991 by means of a significant extension of the ergodic techniques that had been pioneered by Furstenberg in his proof of Szemer\'edi's theorem. In this paper, we give the first elementary proof of the theorem of Furstenberg and Katznelson, and the first to provide a quantitative bound on how large $n$ needs to be. In particular, we show that a subset of $[3]^n$ of density $\delta$ contains a combinatorial line if $n \geq 2 \upuparrows O(1/\delta^3)$. Our proof is surprisingly\noteryan{``reasonably'', maybe} simple: indeed, it gives what is probably the simplest known proof of Szemer\'edi's theorem.
\end{abstract}
\end{abstract}
Latest revision as of 13:23, 8 July 2009
\begin{abstract}The Hales--Jewett theorem asserts that for every $r$ and every $k$ there exists $n$ such that every $r$-colouring of the $n$-dimensional grid $\{1, \dotsc, k\}^n$ contains a combinatorial line. This result is a generalization of van der Waerden's theorem, and it is one of the fundamental results of Ramsey theory. The theorem of van der Waerden has a famous density version, conjectured by Erd\H os and Tur\'an in 1936, proved by Szemer\'edi in 1975 and given a different proof by Furstenberg in 1977. The Hales--Jewett theorem has a density version as well, proved by Furstenberg and Katznelson in 1991 by means of a significant extension of the ergodic techniques that had been pioneered by Furstenberg in his proof of Szemer\'edi's theorem. In this paper, we give the first elementary proof of the theorem of Furstenberg and Katznelson, and the first to provide a quantitative bound on how large $n$ needs to be. In particular, we show that a subset of $[3]^n$ of density $\delta$ contains a combinatorial line if $n \geq 2 \upuparrows O(1/\delta^3)$. Our proof is surprisingly\noteryan{``reasonably'', maybe} simple: indeed, it gives what is probably the simplest known proof of Szemer\'edi's theorem. \end{abstract}
|
Determine the Taylor series for the function $f(x) = \sin(x)\cos(x)$ centered at $x = \pi$.
At $a = 0$, what is the Taylor series expansion of
$\ln(1+x)$?
$\frac{1}{(1-x)^2}$?
Given the Maclaurin series expansion of $\exp(x^2)$ as $a_0 + a_1 x + a_2 x^2 + \cdots$, what is the value of $a_0 + a_1 + a_2$?
Determine the first three non-zero terms of the Taylor series for $f(x) = \tan(x)$ centered at $x = \frac{\pi}{4}$.
|
Define the quadratic form $$Q(z_1,z_2,z_3,z_4) = 13 + \sum_{i=1}^4 (10+i)z_i +5 \sum_{1 \le i \le j \le 4} z_iz_j.$$ Then, $r_Q(n) := \left|\{(z_1,z_2,z_3,z_4) \in \mathbb{Z}^4 : Q(z_1,z_2,z_3,z_4) = n \}\right|$ is weakly multiplicative. I can prove this by using the generating function $\sum r_Q(n) q^n$ which is in the Eisenstein subspace of $\mathcal{M}_2(\chi_5)$, with $\chi_5$ the Legendre symbol.
Because $r_Q(n)$ counts something, a likely alternative explanation is that there exists some product on solutions to $Q(\vec{z})=n$ that would explain this multiplicativity directly, hence the question:
Does there exist a product $\times$ on solutions to $Q(\vec{z})=n$ such that, whenever $(m,n)=1$,
$Q(\vec{x})=m$ and $Q(\vec{y})=n$ imply $Q(\vec{x} \times \vec{y})=mn$; $Q(\vec{z})=mn$ implies we can find unique $\vec{x}, \vec{y}$ with $Q(\vec{x})=m$, $Q(\vec{y})=n$ and $\vec{x} \times \vec{y} = \vec{z}$?
I am interested in this because results of Garvan, Kim and Stanton give a bijection between the representations of $n$ by $Q$ and the number of $5$-core partitions of size $n-1$, which would lead to a product on 5-cores that I would like to understand. This multiplicativity has been used at 5 only by GKS to show combinatorially that $p(5n+4) \equiv 0 \mod 5$.
Note 1: After the change of variables $v_i := 5z_i+i$ and the introduction of the fifth variable $v_0 := -v_1-v_2-v_3-v_4$, one can also define $Q$ by the more symmetric and homogeneous $$Q(v_0,v_1,v_2,v_3,v_4) = \frac{1}{10} \sum_{i=0}^4 v_i^2,$$ but we are not looking at all solutions then: we need $\sum_{i=0}^4 v_i = 0$ and $v_i \equiv i \mod 5$.
Note 2: For the multiplicativity, 5 is special at the moment, but could conceivably be replaced by 7 and 11 later, judging from the theory of Garvan, Kim and Stanton. However I am hoping that a combinatorial construction for the product on 5-cores could be generalized more widely.
UPDATE: I am sure that $r_Q(n)$ is not completely multiplicative. In fact, here is a list of the first 50 values, starting at 1: 1, 1, 2, 3, 5, 2, 6, 5, 7, 5, 12, 6, 12, 6, 10, 11, 16, 7, 20, 15, 12, 12, 22, 10, 25, 12, 20, 18, 30, 10, 32, 21, 24, 16, 30, 21, 36, 20, 24, 25, 42, 12, 42, 36, 35, 22, 46, 22, 43, 25, ... (it's at http://oeis.org/A053723). There are actually closed forms there for $r_Q(p^e)$, but I don't see how they help for my question.
In particular, we have $r_Q(2) = 1$ and $r_Q(4) = 3$, corresponding respectively to the solution $(0, -1, 0, -1)$ and the solutions $(-1, 0, -1, 0), (0, -1, -1, -1), (0, 0, 0, -1)$.
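For readers who want to reproduce the listed values, a brute-force count is easy to write. This is a minimal sketch (Python, not part of the original question); the search box $|z_i| \le 12$ is an assumption, but it is generous for $n \le 50$ by the symmetric form in Note 1.

from itertools import product

def Q(z1, z2, z3, z4):
    z = (z1, z2, z3, z4)
    return (13 + sum((10 + i) * z[i - 1] for i in range(1, 5))
            + 5 * sum(z[i] * z[j] for i in range(4) for j in range(i, 4)))

counts = {}
for z in product(range(-12, 13), repeat=4):   # generous box for n <= 50
    n = Q(*z)
    if 1 <= n <= 50:
        counts[n] = counts.get(n, 0) + 1

print([counts.get(n, 0) for n in range(1, 11)])   # expected: [1, 1, 2, 3, 5, 2, 6, 5, 7, 5]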
|
Let $(e_n)$ be an orthonormal sequence in an inner product space $X$. Then, for every $x \in X$, we have $$ \sum_{n=1}^\infty \vert \langle x, e_n \rangle \vert^2 \ \leq \ \Vert x \Vert^2.$$
Now $\ell^2$ is the inner product space of all sequences $x\colon=(\xi_n)$ of complex numbers such that $$\sum_{n=1}^\infty \vert \xi_n \vert^2 < +\infty,$$ with the inner product defined by $$ \langle x, y \rangle \colon= \sum_{n=1}^\infty \xi_n \overline{\eta_n} \ \ \ \mbox{ for all } \ x \colon= (\xi_n), \ y \colon= (\eta_n) \in \ell^2. $$ So the norm on $\ell^2$ is given by $$\Vert x \Vert \colon= \sqrt{ \langle x, x \rangle} = \sqrt{ \sum_{n=1}^\infty \vert \xi_n \vert^2 }. $$
Now can we give an example of an orthonormal sequence $(e_n)$ in $\ell^2$ and an element $x \in \ell^2$ such that $$\sum_{n=1}^\infty \vert \langle x, e_n \rangle \vert^2 \ < \ \Vert x \Vert^2? $$
I've been trying different $x$ with the following orthonormal sequence: $$e_n \colon= (\delta_{nj}) \ \ \ \mbox{ for each } \ n= 1, 2, 3, \ldots, $$ where $$ \delta_{nj} \colon= \begin{cases} 1 \ & \mbox{ if } \ j =n ; \\ 0 \ & \mbox{ if } \ j \neq n. \end{cases} $$ But I've had little success so far.
However, if we take a proper subsequence of this sequence, say the sequence $(e_{2n})$, then $x = (1/n)$ would do the job. Am I right?
Is there any $x$ that would work (i.e. give the strict inequality) even with the full sequence $(e_n)$?
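A quick numerical check of the subsequence example (a throwaway sketch, with truncated sums standing in for the infinite series): with $(e_{2n})$ and $x=(1/n)$ one gets $\sum_n 1/(2n)^2 = \pi^2/24$, strictly below $\Vert x\Vert^2 = \pi^2/6$.

import numpy as np

N = 200_000                                  # truncation length for the series
x = 1.0 / np.arange(1, N + 1)                # x = (1/n), an element of l^2
norm_sq = np.sum(x ** 2)                     # -> pi^2/6 ≈ 1.6449

# inner products with the subsequence e_{2n}: <x, e_{2n}> = 1/(2n)
coeffs = 1.0 / (2 * np.arange(1, N // 2 + 1))
bessel_sum = np.sum(coeffs ** 2)             # -> pi^2/24 ≈ 0.4112

print(bessel_sum < norm_sq)                  # True: strict inequality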
|
Difference between revisions of "Sperner's theorem"
Revision as of 10:08, 12 March 2009
Statement of the theorem
Sperner's theorem as originally stated is a result about set systems. Suppose that you want to find the largest collection [math]\mathcal{A}[/math] of subsets of [math][n][/math] such that no set in [math]\mathcal{A}[/math] is a proper subset of any other. Then the best you can do is choose all the sets of some fixed size---and of course the best size to pick is [math]\lfloor n/2\rfloor[/math], since the binomial coefficient [math]\binom nm[/math] is maximized when [math]m=\lfloor n/2\rfloor.[/math]
Sperner's theorem is closely related to the density Hales-Jewett theorem: in fact, it is nothing other than DHJ(2) with the best possible bound. To see this, we associate each set [math]A\subset[n][/math] with its characteristic function (that is, the sequence that is 0 outside A and 1 in A). If we have a pair of sets [math]A\subset B,[/math] then the two sequences form a combinatorial line in [math][2]^n.[/math] For example, if n=6 and A and B are the sets [math]\{2,3\}[/math] and [math]\{2,3,4,6\}[/math], then we get the combinatorial line that consists of the two points 011000 and 011101, which we can denote by 011*0* (so the wildcard set is [math]\{4,6\}[/math]).
Proof of the theorem
There are several proofs, but perhaps the most enlightening is a very simple averaging argument that proves a stronger result. Let [math]\mathcal{A}[/math] be a collection of subsets of [n]. For each k, let [math]\delta_k[/math] denote the density of [math]\mathcal{A}[/math] in the kth layer of the cube: that is, it is the number of sets in [math]\mathcal{A}[/math] of size k, divided by [math]\binom nk.[/math] The equal-slices measure of [math]\mathcal{A}[/math] is defined to be [math]\delta_0+\dots+\delta_n.[/math]
Now the equal-slices measure of [math]\mathcal{A}[/math] is easily seen to be equal to the following quantity. Let [math]\pi[/math] be a random permutation of [n], let [math]U_0,U_1,U_2\dots,U_n[/math] be the sets [math]\emptyset, \{\pi(1)\},\{\pi(1),\pi(2)\},\dots,[n],[/math] and let [math]\mu(\mathcal{A})[/math] be the expected number of the sets [math]U_i[/math] that belong to [math]\mathcal{A}.[/math] This is the same by linearity of expectation and the fact that the probability that [math]U_k[/math] belongs to [math]\mathcal{A}[/math] is [math]\delta_k.[/math]
Therefore, if the equal-slices measure of [math]\mathcal{A}[/math] is greater than 1, then the expected number of sets [math]U_k[/math] in [math]\mathcal{A}[/math] is greater than 1, so there must exist a permutation for which it is at least 2, and that gives us a pair of sets with one contained in the other.
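The identity between the equal-slices measure and the expected number of chain elements landing in [math]\mathcal{A}[/math] is easy to test by simulation. Here is a minimal sketch (Python, not part of the wiki page), using as a toy example the family of all sets of size 2 or 3 in [math][6][/math], whose equal-slices measure is exactly 2.

import itertools, random
from math import comb

n = 6
A = {frozenset(c) for k in (2, 3) for c in itertools.combinations(range(n), k)}

# equal-slices measure: sum over k of the density of A in layer k
measure = sum(sum(1 for S in A if len(S) == k) / comb(n, k) for k in range(n + 1))

def chain_hits():
    """Number of sets U_0 ⊂ U_1 ⊂ ... ⊂ U_n (from a random permutation) lying in A."""
    pi = random.sample(range(n), n)
    U, hits = set(), int(frozenset() in A)
    for p in pi:
        U.add(p)
        hits += frozenset(U) in A
    return hits

est = sum(chain_hits() for _ in range(20000)) / 20000
print(measure, est)   # both ≈ 2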
To see that this implies Sperner's theorem, one just has to make the simple observation that a set with equal-slices measure at most 1 must have cardinality at most [math]\binom n{\lfloor n/2\rfloor}.[/math] (If n is odd, so that there are two middle layers, then it is not quite so obvious that to have an extremal set you must pick one or other of the layers, but this is the case.) This stronger version of the statement is called the LYM inequality.
Multidimensional version
The following proof is a variant of the Gunderson-Rodl-Sidorenko result. Its parameters are a little worse, but the proof is a little simpler.
Proposition 1: Let [math]A \subseteq \{0,1\}^n[/math] have density [math]\delta[/math]. Let [math]Y_1, \dots, Y_d[/math] be a partition of [math][n][/math] with [math]|Y_i| \geq r[/math] for each [math]i[/math]. If
[math]\delta^{2^d} - \frac{d}{\sqrt{\pi r}} \gt 0, [/math] (1)
then [math]A[/math] contains a nondegenerate combinatorial subspace of dimension [math]d[/math], with its [math]i[/math]th wildcard set a subset of [math]Y_i[/math].
Proof: Let [math]C_i[/math] denote a random chain from [math]0^{|Y_i|}[/math] up to [math]1^{|Y_i|}[/math], thought of as residing in the coordinates [math]Y_i[/math], with the [math]d[/math] chains chosen independently. Also, let [math]s_i, t_i[/math] denote independent Binomial[math](|Y_i|, 1/2)[/math] random variables, [math]i \in [d][/math]. Note that [math]C_i(s_i)[/math] and [math]C_i(t_i)[/math] are (dependent) uniform random strings in [math]\{0,1\}^{Y_i}[/math]. We write, say,
[math](C_1(s_1), C_2(t_2), C_3(t_3), \dots, C_d(s_d)) [/math] (2)
for the string in [math]\{0,1\}^n[/math] formed by putting [math]C_1(s_1)[/math] into the [math]Y_1[/math] coordinates, [math]C_2(t_2)[/math] into the [math]Y_2[/math] coordinates, etc. Note that each string of this form is also uniformly random, since the chains are independent.
If all [math]2^d[/math] strings of the form in (2) are simultaneously in [math]A[/math] then we have a [math]d[/math]-dimensional subspace inside [math]A[/math] with wildcard sets that are \emph{subsets} of [math]Y_1, \dots, Y_d[/math]. All [math]d[/math] dimensions are nondegenerate iff [math]s_i \neq t_i[/math] for all [math]i[/math]. Since [math]s_i[/math] and [math]t_i[/math] are independent Binomial[math](|Y_i|, 1/2)[/math]'s with [math]|Y_i| \geq r[/math], we have
[math]\Pr[s_i = t_i] \leq \frac{1}{\sqrt{\pi r}}.[/math]
Thus to complete the proof, it suffices to show that with probability at least [math]\delta^{2^d}[/math], all [math]2^d[/math] strings of the form in (2) are in [math]A[/math].
This is easy: writing [math]f[/math] for the indicator of [math]A[/math], the probability is
[math]\mathbf{E}_{C_1, \dots, C_d} \left[\mathbf{E}_{s_1, \dots, t_d}[f(C_1(s_1), \dots, C_d(s_d)) \cdots f(C_1(t_1), \dots, C_d(t_d))]\right].[/math]
Since [math]s_1, \dots, t_d[/math] are independent, the inside expectation-of-a-product can be changed to a product of expectations.
[THIS STEP IS WRONG, I THINK -- Ryan] But for fixed [math]C_1, \dots, C_d[/math], each string of the form in (2) has the same distribution. Hence the above equals
[math]\mathbf{E}_{C_1, \dots, C_d} \left[\mathbf{E}_{s_1, \dots, s_d}[f(C_1(s_1), \dots, C_d(s_d))]^{2^d}\right].[/math]
By Jensen (or repeated Cauchy-Schwarz), this is at least
[math]\left(\mathbf{E}_{C_1, \dots, C_d} \mathbf{E}_{s_1, \dots, s_d}[f(C_1(s_1), \dots, C_d(s_d))]\right)^{2^d}.[/math]
But this is just [math]\delta^{2^d}[/math], since [math](C_1(s_1), \dots, C_d(s_d))[/math] is uniformly distributed. []
As an aside:
Corollary 2: If [math]A \subseteq \{0,1\}^n[/math] has density [math]\Omega(1)[/math], then [math]A[/math] contains a nondegenerate combinatorial subspace of dimension at least [math]\log_2 \log n - O(1)[/math].
If we are willing to sacrifice significantly more probability, we can find a [math]d[/math]-dimensional subspace randomly.
Corollary 3: In the setting of Proposition 1, assume [math]\delta \lt 2/3[/math] and
[math] r \geq \exp(4 \ln(1/\delta) 2^d). [/math] (3)
Suppose we choose a random nondegenerate [math]d[/math]-dimensional subspace of [math][n][/math] with wildcard sets [math]Z_i \subseteq Y_i[/math]. By this we mean choosing, independently for each [math]i[/math], a random combinatorial line within [math]\{0,1\}^{Y_i}[/math], uniformly from the [math]3^r - 1[/math] possibilities. Then this subspace is entirely contained within [math]A[/math] with probability at least [math]3^{-dr}[/math].
This follows immediately from Proposition 1: having [math]r[/math] as in (3) achieves (1), hence the desired nondegenerate combinatorial subspace exists and we pick it with probability [math]1/(3^r-1)^d[/math]. We can further conclude:
Corollary 4: Let [math]A \subseteq \{0,1\}^n[/math] have density [math]\delta \lt 2/3[/math] and let [math]Y_1, \dots, Y_d[/math] be disjoint subsets of [math][n][/math] with each [math]|Y_i| \geq r[/math],
[math] r \geq \exp(4 \ln(1/\delta) 2^d). [/math]
Choose a nondegenerate combinatorial subspace at random by picking uniformly nondegenerate combinatorial lines in each of [math]Y_1, \dots, Y_d[/math], and filling in the remaining coordinates outside of the [math]Y_i[/math]'s uniformly at random. Then with probability at least [math]\exp(-r^{O(1)})[/math], this combinatorial subspace is entirely contained within [math]A[/math].
This follows because for a random choice of the coordinates outside the [math]Y_i[/math]'s, there is a [math]\delta/2[/math] chance that [math]A[/math] has density at least [math]\delta/2[/math] over the [math]Y[/math] coordinates. We then apply the previous corollary, noting that [math]\exp(-r^{O(1)}) \ll (\delta/2)3^{-dr}[/math], even with [math]\delta[/math] replaced by [math]\delta/2[/math] in the lower bound demanded of [math]r[/math].
Strong version
An alternative argument deduces the multidimensional Sperner theorem from the density Hales-Jewett theorem. We can think of [math][2]^n[/math] as [math][2^k]^{n/k}.[/math] If we do so and apply DHJ(2^k) and translate back to [math][2]^n,[/math] then we find that we have produced a k-dimensional combinatorial subspace. This is obviously a much more sophisticated proof, since DHJ(2^k) is a very hard result, but it gives more information, since the wildcard sets turn out to have the same size. A sign that this strong version is genuinely strong is that it implies Szemerédi's theorem. For instance, suppose you take as your set [math]\mathcal{A}[/math] the set of all sequences such that the number of 0s plus the number of 1s in even places plus twice the number of 1s in odd places belongs to some dense set in [math][3n].[/math] Then if you have a 2D subspace with both wildcard sets of size d, one wildcard set consisting of odd numbers and the other of even numbers (which this proof gives), then this implies that in your dense set of integers you can find four integers of the form a, a+d, a+2d, a+d+2d, which is an arithmetic progression of length 4.
One can also prove the above strong form of Sperner's theorem by using the multidimensional Szemerédi theorem, which has combinatorial proofs. (reference!) It states that large dense high-dimensional grids contain corners. Let [math]\mathcal{A}[/math] be a dense subset of [math][2]^n.[/math] We can suppose that the elements of [math]\mathcal{A}[/math] are of size about [math]\frac{n}{2}\pm C\sqrt{n}.[/math] Take a random permutation of [n]. An element of [math]\mathcal{A}[/math] is “[math]d-[/math]nice” after the permutation if it consists of [math]d[/math] intervals, each of length between [math]\frac{n}{2d}\pm C\sqrt{n}/2,[/math] and each interval begins at position [math]id[/math] for some [math]0\leq i\lt \frac{n}{d}.[/math] (Suppose that [math]d[/math] divides [math]n[/math].) Any [math]d-[/math]nice set can be represented as a point in a [math]d-[/math]dimensional [math][C\sqrt{n}]^d[/math] cube. The sets represented by the vertices of an axis-parallel [math]d-[/math]dimensional cube in [math][C\sqrt{n}]^d[/math] form a subspace with equal-sized wildcard sets. Finding a cube is clearly more difficult than finding a corner, but its existence in dense sets also follows from the multidimensional Szemerédi theorem. All we need is to show that the expected number of the [math]d-[/math]nice elements is [math]c\sqrt{n}^d[/math], where c only depends on the density of [math]\mathcal{A}[/math]. For a typical [math]m-[/math]element set in [math]\mathcal{A}[/math] the probability that it is [math]d-[/math]nice after the permutation is about [math]\binom{n}{m}^{-1}\sqrt{n}^{d-1}.[/math] The sum over elements of [math]\mathcal{A}[/math] with size between [math]\frac{n}{2}\pm C\sqrt{n}[/math] gives that the expected number of the [math]d-[/math]nice elements is [math]c\sqrt{n}^d,[/math] so there is a cube if n is large enough.
Further remarks
The k=3 generalisation of the LYM inequality is the hyper-optimistic conjecture.
Sperner's theorem is also related to the Kruskal-Katona theorem.
|
ALLOSPHERE
The Allosphere is a 30ft diameter spherical screen enclosing a wide walkway which can accommodate 30+ visitors for simultaneous fully immersive virtual reality experiences. Designed and operated by Jo Ann Kuchera-Morin of UCSB's Media Arts and Technology (MAT) department (site), it has been used for both artistic and data-visualization purposes. In collaboration with MAT graduate students Kenny Kim and Dennis Adderton, I have worked with the Allosphere to simulate the interior of geometric 3-manifolds and give visual lectures on various topics in low dimensional topology.
THE THREE SPHERE & HYPERBOLIC SPACE
The projective models of hyperbolic and spherical geometry can be utilized to produce perspectivally-correct views of these spaces, showing the viewer accurately what such a world would look like from the inside. From the convergence/divergence of geodesics to parallax in curved space, this allows one to experience some of the fundamental concepts of Riemannian geometry.
In the 3-sphere we have worked to produce accurate visualizations of the six regular polytopes in $\mathbb{R}^4$ as well as the Hopf and Seifert fibrations. These have been used to give introductory lectures on low dimensional topology, as well as talks on quaternions, complex projective space and the trefoil knot complement.
Similar work in hyperbolic space allows the visualization of the interior of hyperbolic 3-manifolds. The first image above shows a Cayley graph of the figure-eight knot group embedded in $\mathbb{H}^3$, and the second shows what one would see if there were a single rubber duck in the figure-eight knot complement (the multiple images arise from light traveling around nontrivial loops in the space). The final image is the Cayley graph of a group whose ideal boundary is the Apollonian gasket on $\mathbb{H}^2_\infty$.
FUTURE WORK
Ongoing work in the Allosphere is building the necessary foundation to produce perspectivally-correct visualizations of all eight Thurston geometries in dimension 3. This will allow us to produce fully immersive visualizations of the interior of the geometric components of 3-manifolds arising from geometrization.
I do most of my visual work in Mathematica. For people interested in playing around with some of this stuff themselves, below are some mathematica files together with short explanations of the mathematics and implementation.
HOPF & SEIFERT FIBRATIONS
The Hopf fibration $\mathbb{S}^3\to\mathbb{S}^2$ is a filling of the three sphere by pairwise-linked circles with parameter space $\mathbb{S}^2$. The three sphere admits infinitely many twisted versions of this, called Seifert fibrations, where the complement of a Hopf link is filled with (p,q) torus knots. The mathematica notebooks below render fibers of the Hopf and Seifert fibrations, both in stereographic projection and in a perspectivally correct view (what you would actually see if you were inside of the round three sphere).
The 5 platonic solids are the five maximally symmetric tilings of $\mathbb{S}^2$. The maximally symmetric tilings of $\mathbb{S}^3$ are the 4-dimensional analogs of these familiar shapes - of which there are six. The mathematica notebooks below contain code to render the 5 cell, hypercube, 16 cell and 24 cell in stereographic projection.
Reflecting in the sides of certain hyperbolic polygons leads to beautiful tilings of the hyperbolic plane. The mathematica file here produces pictures of tilings based on the $(p,q,r)$ triangle tilings in the Klein model; examples are shown below. The second notebook above renders these pictures in the upper half plane and Poincaré disk models (though as these models show more area of $\mathbb{H}^2$ away from the edges, it takes much longer to draw complete pictures).
The hyperbolic plane can be realized as a subset of $\mathbb{R}\mathsf{P}^2$ (the Klein model) and in some cases tilings of $\mathbb{H}^2$ can be deformed into tilings of other properly convex subsets of projective space. The mathematica documents below contain code for producing visualizations of the deformations of $(p,q,r)$ triangle tilings. The calculation of this one parameter family of deformations for the reflection groups was completed using the description in the Master's thesis of Anton Valerievich Lukyanenko, available here.
The graph of a single variable complex function is a subset of $\mathbb{C}\times\mathbb{C}\cong\mathbb{R}^4$, making direct visualization impossible. Domain coloring is a useful technique that allows us to understand complex functions using color and saturation on the domain to represent the values of the output. The mathematica files below compute domain colorings of complex functions, using code inspired by an excellent post by Simon Wood on stack exchange (View Here). The images below are the example outputs given in the file.
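For readers without Mathematica, the same domain-coloring idea is a few lines of NumPy/Matplotlib. This is a rough sketch (not the author's notebook), with a hypothetical example function $f(z)=(z^2-1)/(z^2+1)$: hue encodes the argument and brightness the modulus.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x = np.linspace(-2, 2, 600)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
with np.errstate(divide="ignore", invalid="ignore"):
    W = (Z**2 - 1) / (Z**2 + 1)              # hypothetical example function

H = (np.angle(W) + np.pi) / (2 * np.pi)      # hue from the argument of f(z)
V = np.abs(W) / (1 + np.abs(W))              # brightness from the modulus, mapped into [0, 1)
rgb = hsv_to_rgb(np.dstack([H, np.ones_like(H), V]))

plt.imshow(np.nan_to_num(rgb), origin="lower", extent=[-2, 2, -2, 2])
plt.show()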
|
$\def\Spec{\mathop{\rm Spec}}\def\R{{\bf R}}\def\Ep{{\rm E}^+}\def\L{{\rm L}}\def\EpL{\Ep\L}$One can argue that an object of the right category of spaces in measure theory is not a set equipped with a σ-algebra of measurable sets, but rather a set $S$ equipped with a σ-algebra $M$ of measurable sets and a σ-ideal $N$ of $M$ consisting of sets of measure $0$. The reason for this is that you can hardly state any theorem of measure theory or probability theory without referring to sets of measure $0$. However, objects of this category contain less data than the usual measured spaces, because they are not equipped with a measure. Therefore I prefer to call them measurable spaces. A morphism of measurable spaces $(S,M,N)\to(T,P,Q)$ is a map $S\to T$ such that the preimage of every element of $P$ is a union of an element of $M$ and a subset of an element of $N$, and the preimage of every element of $Q$ is a subset of an element of $N$.
Irving Segal proved in “Equivalences of measure spaces” (see also Kelley's “Decomposition and representation theorems in measure theory”) that for a measurable space $(S,M,N)$ the following properties are equivalent.
1. The Boolean algebra $M/N$ of equivalence classes of measurable sets is complete;
2. The space of equivalence classes of all bounded (or unbounded) real-valued functions on $S$ is Dedekind-complete;
3. The Radon-Nikodym theorem is true for $(S,M,N)$;
4. The Riesz representation theorem is true for $(S,M,N)$ (the dual of $\L^1$ is isomorphic to $\L^\infty$);
5. Equivalence classes of bounded functions on $S$ form a von Neumann algebra (alias W*-algebra);
6. The measurable space $(S,M,N)$ is a coproduct (disjoint union) of measurable spaces of the form $2^m$, where $2=\{0,1\}$ is a two-point space and $m$ is an infinite cardinal or $0$. (For $m=0$ and $m=\aleph_0$ we get points and real lines respectively.)
A measurable space that satisfies these conditions is called localizable.
This theorem tells us that if we want to prove anything nontrivial about measurable spaces, we better restrict ourselves to localizable measurable spaces. We also have a nice illustration of the claim I made in the first paragraph: none of these statements would be true without identifying objects that differ on a set of measure $0$. For example, take a non-measurable set $G$ and a family of one-element subsets of $G$ indexed by themselves. This family of measurable sets does not have a supremum in the Boolean algebra of measurable sets, thus disproving a naïve version of (1).
Another argument for restricting to localizable spaces is the following version of the Gelfand duality theorem, already known to Murray and von Neumann; an exposition of the most difficult parts is found in Theorem III.1.18 of Takesaki's “Theory of operator algebras” and §343 of Fremlin's “Measure theory”.
The category of localizable measurable spaces is equivalent to the category of commutative von Neumann algebras (alias W*-algebras) and their morphisms (normal unital homomorphisms of *-algebras).
I actually prefer to define the category of localizable measurable spaces as the opposite category of the category of commutative W*-algebras. The reason for this is that the classical definition of measurable space exhibits immediate connections only to descriptive set theory (and with additional effort to Boolean algebras), which are mostly irrelevant for the central core of mathematics, whereas the description in terms of operator algebras immediately connects measure theory to other areas of the central core (noncommutative geometry, algebraic geometry, complex geometry, differential geometry etc.). Also it is easier to use in practice. Let me illustrate this statement with just one example: when we try to define measurable bundles of Hilbert spaces on a localizable measurable space set-theoretically, we run into all sorts of problems if the fibers can be non-separable, and I do not know how to fix this problem in the set-theoretic framework. On the other hand, in the algebraic framework we can simply say that a bundle of Hilbert spaces is a Hilbert module over the corresponding W*-algebra.
Categorical properties of W*-algebras (hence of localizable measurable spaces) were investigated by Guichardet, see also an electronic version of his paper. Let me mention some of his results. The category of localizable measurable spaces admits equalizers and coequalizers, arbitrary coproducts, hence also arbitrary colimits. It also admits products, although they are quite different from what one might think. For example, the product of two real lines is not $\R^2$ with the two obvious projections. The product contains $\R^2$, but it also has a lot of other stuff, for example, the diagonal of $\R^2$, which is needed to satisfy the universal property for the two identity maps on $\R$. The more intuitive product of measurable spaces ($\R\times\R=\R^2$) corresponds to the spatial tensor product of von Neumann algebras and forms a part of a symmetric monoidal structure on the category of measurable spaces. See Guichardet's paper for other categorical properties (monoidal structures on measurable spaces, flatness, existence of filtered limits, etc.).
Finally let me mention pushforward and pullback properties of measures on measurable spaces. I will talk about the more general case of $\L^p$-spaces instead of just measures (i.e., $\L^1$-spaces). For the sake of convenience, denote $\L_p(M)=\L^{1/p}(M)$, where $M$ is a measurable space. Here $p$ can be an arbitrary complex number with a nonnegative real part. We do not need a measure on $M$ to define $\L_p(M)$. For instance, $\L_0$ is the space of all bounded functions (i.e., the W*-algebra itself), $\L_1$ is the space of finite complex-valued measures (the dual of $\L_0$ in the $\sigma$-weak topology), and $\L_{1/2}$ is the Hilbert space of half-densities. I will also talk about the extended positive part $\EpL_p$ of $\L_p$ for real $p$. In particular, $\EpL_1$ is the space of all (not necessarily finite) positive measures.
Pushforward for $\L_p$-spaces. Suppose we have a morphism of measurable spaces $M\to N$. If $p=1$, then we have a canonical map $\L_1(M)\to\L_1(N)$, which is just the dual of $\L_0(N)\to\L_0(M)$ in the σ-weak topology. Geometrically, this is the fiberwise integration map. If $p\ne 1$, then we only have a pushforward map of the extended positive parts, namely, $\EpL_p(M)\to\EpL_p(N)$, which is non-additive unless $p=1$. Geometrically, this is the fiberwise $\L_p$-norm. Thus $\L_1$ is a functor from the category of measurable spaces to the category of Banach spaces and $\EpL_p$ is a functor to the category of “positive homogeneous $p$-cones”. The pushforward map preserves the trace on $\L_1$ and hence sends a probability measure to a probability measure.
To define pullback of $\L_p$-spaces (in particular, $\L_1$-spaces) one needs to pass to a different category of measurable spaces. In algebraic language, if we have two W*-algebras $A$ and $B$, then a morphism from $A$ to $B$ is a usual morphism of W*-algebras $f\colon A\to B$ together with an operator valued weight $T\colon\Ep(B)\to\Ep(A)$ associated to $f$. Here $\Ep(A)$ denotes the extended positive part of $A$. (Think of positive functions on $\Spec A$ that can take infinite values.) Geometrically, this is a morphism $\Spec f\colon\Spec B\to\Spec A$ between the corresponding measurable spaces and a choice of measure on each fiber of $\Spec f$. Now we have a canonical additive map $\EpL_p(\Spec A)\to\EpL_p(\Spec B)$, which makes $\EpL_p$ into a contravariant functor from the category of measurable spaces equipped with a fiberwise measure to the category of “positive homogeneous additive cones”.
If we want to have a pullback of $\L_p$-spaces themselves and not just their extended positive parts, we need to replace operator valued weights in the above definition by finite complex-valued operator valued weights $T\colon B\to A$ (think of fiberwise complex-valued measures). Then $\L_p$ becomes a functor from the category of measurable spaces to the category of Banach spaces (if the real part of $p$ is at most $1$) or quasi-Banach spaces (if the real part of $p$ is greater than $1$). Here $p$ is an arbitrary complex number with a nonnegative real part. Notice that for $p=0$ we get the original map $f\colon A\to B$ and in this (and only this) case we do not need $T$.
Finally, if we restrict ourselves to an even smaller subcategory of measurable spaces equipped with a finite operator valued weight $T$ such that $T(1)=1$ (i.e., $T$ is a conditional expectation; think of fiberwise probability measures), then the pullback map preserves the trace on $\L_1$ and in this case the pullback of a probability measure is a probability measure.
There is also a smooth analog of the theory described above. The category of measurable spaces and their morphisms is replaced by the category of smooth manifolds and submersions, $\L_p$-spaces are replaced by bundles of $p$-densities, operator valued weights are replaced by sections of the bundle of relative $1$-densities, the integration map on $1$-densities is defined via Poincaré duality (to avoid circular dependence on measure theory), etc. The forgetful functor that sends a smooth manifold to its underlying measurable space commutes with everything and preserves everything.
Of course, the story doesn't end here, there are many other interesting topics to consider: products of measurable spaces, the difference between Borel and Lebesgue measurability, conditional expectations, etc. See the answer to “Is there a category structure one can place on measure spaces so that category-theoretic products exist?” for an index of my writings on this topic.
|
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death.
In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid.
In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid.
Question:
In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
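Since the grid itself lives in the (missing) figure, here is only a sketch of how one could compute the answer: the least number of pushes is a shortest path in a graph whose moves slide until a wall, so breadth-first search works. The `blocked` predicate encoding the thick walls is hypothetical and would have to be filled in from the actual grid.

from collections import deque

def glide(cell, d, rows, cols, blocked):
    """Slide from `cell` in direction d until a solid wall stops Astrid.
    Returns the stopping cell, or None if she floats out of the grid."""
    r, c = cell
    while True:
        nr, nc = r + d[0], c + d[1]
        if blocked((r, c), (nr, nc)):      # a thick wall on this side stops her here
            return (r, c)
        if not (0 <= nr < rows and 0 <= nc < cols):
            return None                    # nothing to stop her: she floats out
        r, c = nr, nc

def fewest_pushes(start, goal, rows, cols, blocked):
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            return dist[cell]
        for d in dirs:
            nxt = glide(cell, d, rows, cols, blocked)
            if nxt is not None and nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return None                            # Y not reachable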
|
Soliton solutions for quasilinear Schrödinger equations involving supercritical exponent in $\mathbb R^N$
1. Department of Mathematics, University of British Columbia, Vancouver, BC, Canada
$-\epsilon \Delta u+V(x)u-\epsilon k(\Delta(|u|^2))u=g(u), \quad u>0,x \in \mathbb R^N,$
where g has superlinear growth at infinity without any restriction from above on its growth. A mountain pass argument in a suitable Orlicz space is employed to establish this result. These equations contain strongly singular nonlinearities which include derivatives of the second order, which makes the situation more complicated. Such equations arise when one seeks standing wave solutions for the corresponding quasilinear Schrödinger equations. Schrödinger equations of this type have been studied as models of several physical phenomena. The nonlinearity here corresponds to the superfluid film equation in plasma physics.
Mathematics Subject Classification: Primary: 35J10, 35J20; Secondary: 35J2. Citation: Abbas Moameni. Soliton solutions for quasilinear Schrödinger equations involving supercritical exponent in $\mathbb R^N$. Communications on Pure & Applied Analysis, 2008, 7 (1) : 89-105. doi: 10.3934/cpaa.2008.7.89
|
Defining differential operator that acts like curl
Hello,
As a newbie in using SAGE (experience with Python and Numpy), I was wondering how to define a differential operator that acts like curl (or $\nabla \times \vec F$). The curl for a vector field $\vec F=(F_x,F_y, F_z)$ is defined as a determinant $\mathrm{det} (\nabla,\vec F)$. I want to define such operators that act on $F_x, F_y, F_z$
without calculating the determinant a priori. In other words, if I were to multiply $\frac {\partial }{\partial x}$ with $F_y$, I would expect to get $\frac{\partial F_y}{\partial x}$.
EDIT:
class DiffOpp(SageObject):
    def __init__(self, dep_var):
        self.dep_var = dep_var

    def __mul__(self, f):
        return diff(f, self.dep_var)
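A hedged sketch of how this class could be used inside Sage (the function names Fx, Fy, Fz are made up for illustration): once the three operators exist, the components of the curl are just the usual combinations, and no determinant has to be expanded beforehand.

# assumed to run inside Sage, with DiffOpp defined as above
var('x y z')
Fx = function('Fx')(x, y, z)
Fy = function('Fy')(x, y, z)
Fz = function('Fz')(x, y, z)

Dx, Dy, Dz = DiffOpp(x), DiffOpp(y), DiffOpp(z)

curl_F = vector([Dy * Fz - Dz * Fy,    # (curl F)_x
                 Dz * Fx - Dx * Fz,    # (curl F)_y
                 Dx * Fy - Dy * Fx])   # (curl F)_z

print(Dx * Fy)                         # diff(Fy(x, y, z), x)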
|
I am trying to attain an ARIMA model for the following Time Series Data:
There is quite obviously a seasonal component - as the plot seems to oscillate between smaller and larger peaks, it seems to suggest that the pattern repeats every 48 timesteps.
I then took the seasonal difference and plotted the ACF & PACF:
From the ACF & PACF plots, I concluded that the model will most likely fit an $\text{ARIMA}(0,0,1)(0,1,1)_{48}$ model due to the shape of the PACF and the peaks in the ACF plot.
However, when trying to fit the data to such a model, the estimated coefficient for the regular MA term fails the significance test (roughly, a coefficient $\theta$ is accepted if $\mid\frac{\hat{\theta}}{\hat{\sigma}_\theta}\mid > 1.96$).
I have tried other variations, but they consistently fail. I have come to the conclusion that I've made a grave error or I am missing a step. Note that when using regular differencing, the model does seem to fit better - but I don't know how to justify using regular differencing.
I have come to one of two conclusions: either taking the seasonal difference of 48 is too high, or the PACF plot shows that the data may be non-stationary - therefore needing some regular differencing.
Thank you
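Not an answer, but for concreteness, this is roughly how the candidate model could be fit and its coefficient z-statistics inspected with statsmodels (a sketch; `y` is assumed to hold the observed series):

from statsmodels.tsa.statespace.sarimax import SARIMAX

# y: the observed series (e.g. a pandas Series or 1-D numpy array)
res = SARIMAX(y, order=(0, 0, 1), seasonal_order=(0, 1, 1, 48)).fit(disp=False)
print(res.summary())            # the z column is coef/std err; |z| > 1.96 ~ significant at 5%

# compare against a regular-differencing alternative by information criteria
alt = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 48)).fit(disp=False)
print(res.aic, alt.aic)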
|
Do you know how a fireman and the direction of a financial time series are related? If your answer is no, you’re reading the right post.
An introduction to k-Means: Voronoi diagram
Suppose that you work at an emergency center, and your job is to tell the pilots of firefighter helicopters to take off. You receive an emergency call because there's a point of the city on fire and a helicopter is necessary to put it out. You need to choose which pilot has to do this work. It's obvious that the farther helicopters (red helicopters) will arrive later than the closer helicopters (grey helicopters), but you don't know exactly which is the closest.
Georgy Voronoi (a mathematician born in the Russian Empire in 1868) defined the Voronoi diagram (also called Thiessen polygons or Dirichlet Tessellation in honor of Alfred Thiessen and Gustav Lejeune Dirichlet) to find the answer to this kind of problem. It associates each helicopter with a polygonal cell (called a Voronoi cell or Voronoi region) containing all the points that are closer to that helicopter than to any other. Using the Euclidean distance, if you keep your eyes on a particular helicopter (\(h_{i}\)), for each pair of helicopters (\(h_{i}\), \(h_{j}\)), \(i \neq j\), the set of points which is closer to \(h_{i}\) than to \(h_{j}\) is defined as:
\(H_{i}^{j} = \{x \in X \; | \; d\left(x, h_{i}\right) \leq d\left(x, h_{j}\right), \; i \neq j \}\)
\(h_{i} \in H_{i}^{j}; \; h_{j} \notin H_{i}^{j}\)
The Voronoi cell of \(h_{i}\) is the intersection of all the half-planes that contain \(h_{i}\):
\(vor\left(h_{i}\right) = \bigcap\limits_{j=1;j \neq i}^{n} H_{i}^{j}\)
If you calculate all Voronoi cells (one for each helicopter) you could discover which helicopter is the closest:
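As a tiny illustration (not from the original post), finding the closest helicopter to a single fire is just a nearest-neighbour query, and asking that question for every point of the plane at once is exactly what the Voronoi diagram encodes (scipy.spatial.Voronoi would draw the cells):

import numpy as np

helicopters = np.array([[2.0, 3.0], [8.0, 1.5], [5.0, 7.0]])   # hypothetical positions
fire = np.array([6.0, 4.0])                                    # hypothetical emergency point

distances = np.linalg.norm(helicopters - fire, axis=1)
print(int(np.argmin(distances)))   # index of the closest helicopter (its Voronoi cell contains the fire)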
You can find a lot of Voronoi diagrams in nature, like giraffe skin, wings of a dragonfly, dry deserts and more.
k-Means algorithm in finance
The Voronoi diagram is used in some machine learning techniques like clustering. In this post, you can learn how the k-means algorithm (a clustering algorithm) works.
Given an n-dimensional point set (a two-dimensional point set in our example), you need to define the number of classes (k classes) that you want to get. Using both, the algorithm performs the following steps (a minimal code sketch follows the list):
1. At first, there are two options to set initial points:
1.1. Choose \(k\) (two-dimensional) points.
1.2. Generate \(k\) random points.
2. Calculate Voronoi cells for the initial points.
3. Calculate the midpoint of all points included in each region.
4. Calculate Voronoi cells for the midpoints.
5. Repeat steps 3 and 4 while midpoints are changing.
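A minimal sketch of these five steps (plain NumPy, not the blog's original code); the assignment step is exactly the Voronoi partition of the data with respect to the current centres:

import numpy as np

def kmeans(points, centers, iters=100):
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float).copy()    # step 1: initial points
    for _ in range(iters):
        # steps 2/4: Voronoi assignment - each point goes to its nearest centre
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: each centre becomes the midpoint (mean) of the points in its cell
        new_centers = np.array([points[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(len(centers))])
        if np.allclose(new_centers, centers):             # step 5: stop when nothing moves
            break
        centers = new_centers
    return centers, labels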
The k-means algorithm can be applied to a financial time series. In finance it's really important to identify which kinds of returns and trends are present in a time series. As a first approach, using weekly percentage returns of a financial time series (like the Standard & Poor's 500 Index), you could show a two-dimensional scatter plot (weekly returns vs. weekly returns shifted by one week) using the following matrix (the x axis is the first column and the y axis is the second column):
\( \left( \begin{array}{cc} Week_{2} & Week_{1} \\ Week_{3} & Week_{2} \\ \cdots & \cdots \\ Week_{i} & Week_{i-1} \\ \cdots & \cdots \\ Week_{n} & Week_{n-1} \end{array} \right) \)
At this point you need to decide how many classes you must get. My objective is to cluster weekly returns into three kinds of return movements, called Upward, No trend and Downward. If I only select these 3 classes, the results are not coherent. So I try it by selecting more classes: Upward, Low Upward, No trend, Low Downward and Downward. I test the k-means algorithm using both initial options and 5 classes:
Option 1: Predefined initial points. I test the k-means function using as the five initial points the middle points of the four quadrants and the axis intersection, for example:
\( \begin{array}{cccc} + & + & \longrightarrow & \text{Upward} \\ – & + & \longrightarrow & \text{Low upward} \\ + & – & \longrightarrow & \text{Low downward} \\ – & – & \longrightarrow & \text{Downward} \\ 0 & 0 & \longrightarrow & \text{No trend} \\\end{array}\)
Option 2: Random initial points. Using 5 random points, the centers set by the k-means algorithm seem to be worse:
The trend separation produced by predefined initial points is better than the separation produced by random initial points, because the final Voronoi cells produced in option 1 are more coherent than in option 2, and it's not necessary to look at the subsequent points to associate a point to a movement. I associate the classes (produced by the k-means algorithm using the week i return and the week i-1 return) to week i because I don't want to use future information.
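A possible implementation of option 1 with scikit-learn (a sketch; the return series `r` and the five initial centres are placeholders, not the values used in the post):

import numpy as np
from sklearn.cluster import KMeans

# r: 1-D array of weekly percentage returns of the index
X = np.column_stack([r[1:], r[:-1]])        # rows are (week_i, week_{i-1}) pairs

init = np.array([[ 0.02,  0.02],            # Upward
                 [-0.01,  0.01],            # Low upward
                 [ 0.01, -0.01],            # Low downward
                 [-0.02, -0.02],            # Downward
                 [ 0.00,  0.00]])           # No trend
labels = KMeans(n_clusters=5, init=init, n_init=1).fit_predict(X)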
To finish the clustering, I only need to rename the classes and colour them on the S&P 500 Index spot:
1. Downward = Downward + Low downward
2. Upward = Upward + Low Upward
Trends separation doesn’t look perfect, and the trends themselves are short. I need to keep in mind that I’m using only two consecutive returns in clustering. It’s therefore neccesary to leave you with this next question:
|
Epsilon naught, $\epsilon_0$
The ordinal $\epsilon_0$, commonly given the British pronunciation "epsilon naught," is the smallest ordinal $\alpha$ for which $\alpha=\omega^\alpha$ and can be equivalently characterized as the supremum
$$\epsilon_0=\sup\{\omega,\omega^\omega,\omega^{\omega^\omega},\ldots\}$$
The ordinals below $\epsilon_0$ exhibit an attractive finitistic normal form of representation, arising from an iterated Cantor normal form involving only finite numbers and the expression $\omega$ in finitely iterated exponentials, products and sums.
The ordinal $\epsilon_0$ arises in diverse proof-theoretic contexts. For example, it is the proof-theoretic ordinal of the first-order Peano axioms.
Epsilon numbers
The ordinal $\epsilon_0$ is the first ordinal in the hierarchy of $\epsilon$-numbers, where $\epsilon_\alpha$ is the $\alpha^{\rm th}$ fixed point of the exponential function $\beta\mapsto\omega^\beta$. These can also be defined inductively, as $\epsilon_{\alpha+1}=\sup\{\epsilon_\alpha+1,\omega^{\epsilon_\alpha+1},\omega^{\omega^{\epsilon_\alpha+1}},\ldots\}$, and $\epsilon_\lambda=\sup_{\alpha\lt\lambda}\epsilon_\alpha$ for limit ordinals $\lambda$. The epsilon numbers therefore form an increasing continuous sequence of ordinals. Every uncountable infinite cardinal $\kappa$ is an epsilon number fixed point $\kappa=\epsilon_\kappa$.
|
Difference between revisions of "Kakeya problem"
Revision as of 01:39, 19 March 2009
Define a Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
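The small values are easy to confirm by exhaustive search; here is a brute-force sketch (Python, not part of the wiki page) that recovers [math]k_2=7[/math] in a fraction of a second (the same code with DIM = 1 gives [math]k_1=3[/math]; [math]n=3[/math] is already too large for this naive search):

from itertools import combinations, product

DIM = 2
pts = list(product(range(3), repeat=DIM))                 # the points of F_3^DIM

reps, seen = [], set()                                    # one representative per direction (d ~ -d)
for d in pts:
    if d != (0,) * DIM and d not in seen:
        reps.append(d)
        seen.add(d)
        seen.add(tuple((-c) % 3 for c in d))

def line(a, d):
    return {tuple((a[i] + t * d[i]) % 3 for i in range(DIM)) for t in range(3)}

def is_kakeya(S):
    return all(any(line(a, d) <= S for a in pts) for d in reps)

k = next(r for r in range(1, len(pts) + 1)
         if any(is_kakeya(set(A)) for A in combinations(pts, r)))
print(k)                                                  # 7 for DIM = 2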
General lower bounds
Trivially,
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, we have
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below.
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\gtrsim 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Minimizing [math]\max\{3N/\mu, 2\mu+1\}[/math] over all possible values of [math]\mu[/math] one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
General upper bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
This estimate can be improved using an idea due to Ruzsa. Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula) contains lines in positive proportion of directions. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207\ldots+o(1))^n \le k_n \le (1.88988+o(1))^n.[/math]
|
1. How can we "interact" with a photon with a purely mechanical device such as a shutter?
Here is the culprit - the QED vertex:
The mechanical shutter, whether made from metal or plastic, contains charged particles (electrons and atomic nuclei). Charged particles interact with photons via the QED vertex. On the most fundamental (and most pedantic) level, this is the mechanism at work.
Thought experiment: try shifting the photon frequency with a shutter made of neutrinos (which do not interact via the QED vertex). It won't work.
2. Draw me a QED diagram responsible for the interaction
The answer may depend on the nature of the shutter. So let's choose the simplest possible "shutter" - a dilute gas of free electrons, whose density is somehow being varied with frequency $\omega_2$. Note that there's no need to block the light completely - any change to the total scattering amplitude will still cause the frequency-shifting effect. Here is the relevant QED diagram:
A photon enters with wavevector $(\vec{k_1}, \omega_1)$ and exits with the same wavevector $(\vec{k_1}, \omega_1)$. You can think of this as elastic scattering by angle zero.
3. How can scattering with $\theta = 0$ possibly have any effect?
The zero-angle scattering amplitude interferes quantum-mechanically with the no-scattering amplitude to form the outgoing amplitude at angle zero - see the optical theorem.
4. How can scattering with $\Delta \omega = 0$ possibly have any effect?
Because the scattering amplitude of the photon at frequency $\omega_1$ is being modulated in time with frequency $\omega_2$, which leads to an EM field excitation at $\omega_1 + \omega_2$ - see the convolution theorem.
|
Is there a way to find $\sqrt[n]{x}$ with Mathematica besides x^(1/n)? This is something different, because the two are not always the same: $$(-1)^{\frac{2}{4}}=i \neq 1= \sqrt[4]{(-1)^2}$$ In the help I only found Sqrt[x], which is the square root, and CubeRoot[x] for the cubic root.
Is there a reason that there aren't $n$-th roots implemented? (Assuming they really don't exist and I am not too stupid to find them.)
I am using Mathematica 9.0.1 Student Edition.
|
I'm trying to find the Fourier Transform of the following rectangular pulse:
$$ x(t) = rect(t - 1/2) $$
This is simply a rectangular pulse stretching from 0 to 1 with an amplitude of 1. It is 0 elsewhere. I tried using the definition of the Fourier Transform:
$$ X(\omega) = \int_0^1 1 \cdot e^{-j\omega t}\,dt $$
However carrying out the relatively simple integration and subbing in the bounds results for me in this:
$$ X(\omega) = \frac{1}{j\omega}[e^{-j\omega} - 1] $$
Unfortunately, Wolfram Alpha has a different answer when I use it to compute this Fourier transform: it's got the sinc function. I'd appreciate any help on this, in case I've got some giant conceptual error. I have an exam on this stuff in a bit less than a week :/
Edit: also realized I used j; it's the same with i (the imaginary #)
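For what it's worth, one way to reconcile the two forms: evaluating the same integral and factoring out a half-sample delay gives the (unnormalized) sinc that Wolfram Alpha reports,
$$ X(\omega) = \int_0^1 e^{-j\omega t}\,dt = \frac{1 - e^{-j\omega}}{j\omega} = e^{-j\omega/2}\,\frac{e^{j\omega/2} - e^{-j\omega/2}}{j\omega} = e^{-j\omega/2}\,\frac{2\sin(\omega/2)}{\omega}, $$
so the magnitude is a sinc in $\omega/2$ and the $e^{-j\omega/2}$ phase factor reflects the pulse being centred at $t = 1/2$ (note the sign of the bracket relative to the expression above).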
|
Take your favourite newspaper and write down all the numbers you can find in it. Then take the first, leftmost digit of each number. This is called the most significant digit (MSD). What frequency do you expect to observe for these digits? If your answer is that all digits from 1 to 9 have the same probability, please think again!
In order to understand why the distribution is not uniform, consider the following news borrowed from The Guardian, The Times and NY Times and the numbers highlighted:
If we put all these numbers in an axis, we get the following:
Ok, this representation in a natural (linear) scale is not useful. The reason is that, since one of the numbers is huge, the rest are squeezed towards zero. What if we use a logarithmic scale instead?
Much better. The reason why this representation is more appropriate is that the numbers we come across in our daily life have a huge dynamic range (they can refer to goals, GDP of countries or distances among galaxies) so that we can assume they are uniformly distributed across a logarithmic axis. The American physicist Frank Benford realized this fact and stated his famous law.
The Benford’s law
Coming back to our original question, what is the frequency of numbers starting with “1” compared to those starting with, for example, “4”? According to our previous assumption, we can answer the question by measuring the area corresponding to the numbers starting with “1” and “4”, respectively, in the logarithmic axis.
In the figure, we have painted in blue the area corresponding to numbers starting with 1 (from 1 to 2, from 10 to 20, etc) and, for comparison, the numbers starting with 4 (4 to 5, 40, to 50, etc) are painted in red. We can see that the blue area is wider than the red one.
Formally, the probability mass function (PMF) that corresponds to this distribution is:
$$ p(n) = \log_{10}\frac{n+1}{n}, \quad n=1,\dots,9 \tag{1}$$
Proving that this is indeed a valid PMF, i.e. that \(\sum_{n=1}^9 p(n)=1\), is easy and fun (and left as an exercise). We plot the values provided by Eq. (1) in the following figure:
As shown in the figure, numbers starting with 1 are more than six times as frequent as those starting with 9, so you will find that around 30% of numbers in the newspaper start with 1, and less than 5% start with a 9.
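A three-line check of Eq. (1) (a quick sketch, not part of the original post):

import math

p = [math.log10((n + 1) / n) for n in range(1, 10)]   # Eq. (1) for n = 1..9
print(sum(p))            # 1.0 -> a valid PMF
print(p[0], p[8])        # ≈ 0.301 and ≈ 0.046: a leading 1 is about 6.5x as likely as a leading 9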
Ok, the Benford’s law is cool but, is it also useful? Indeed it is. Keep reading!
The Enron scandal
In 2001, the American energy and commodities company Enron went bankrupt after the revelation of systematic accounting fraud. In other words, its books had been “cooked” for quite a while. It wasn't the first company to “make up” its numbers, but it's been the biggest case so far. The case raised concerns about the need for more powerful accounting techniques. The Benford's law could have helped to detect the fraud because “fake” numbers (those made up by humans) tend not to follow the Benford distribution. Because of that, Deutsche Bank has applied the Law to the accounting books of a number of companies, with the surprising conclusion that companies that do not adhere to the Law underperform the market!
Also, the U.S. Internal Revenue Service is using the Benford’s law to detect tax fraud – another kind of fraud that also involves substituting natural number by made-up ones. Unfortunately, a sophisticated fraudster can take countermeasures and make up numbers that follow the Benford law.
So far we have considered a nice statistical property of “natural” numbers, i.e. those that we come across through our daily life. What about language? Does it hold any interesting statistical properties?
The Zipf's law
We have the intuition that, regardless of the language, some words are more frequent than others. According to the Brown Corpus of American English text (a repository of text for research purposes with one million words), the word “the” is the most frequent one, accounting for nearly 7% of all word occurrences. The second one is “of” and accounts for slightly over 3.5% of words. The third one is “and”, present 2.8% of the time. Far earlier than the Brown Corpus was built, the American linguist George K. Zipf had found this pattern and formulated his law as follows: the frequency of any word is inversely proportional to its rank in the frequency table. In other words: the second most frequent word is half as frequent as the first one. The third word's frequency is a third of the first one's, and so on. Mathematically:
$$ P(w) \propto \frac{1}{R(w)} $$
where \(P(w)\) is the probability (frequency) of the word \(w\) and \(R(w)\) is its rank in the frequency table.
As you can see, the law describes very well the distribution of “the”, “of”, and “and” mentioned before. Actually, most studies have found that an exponent of 1.07, rather than 1, fits the data better:
$$ P(w) \propto \frac{1}{R(w)^{1.07}} $$
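As a rough illustration (my own sketch; the file name "corpus.txt" is a placeholder, and any large plain-text file will do), one can check the rank-frequency relation directly:

```python
import re
from collections import Counter

# "corpus.txt" is a placeholder: any large plain-text file will do.
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
total = len(words)

# Under Zipf's law, frequency * rank**1.07 should be roughly constant.
for rank, (word, c) in enumerate(counts.most_common(10), start=1):
    freq = c / total
    print(f"{rank:2d} {word:>10s}  freq={freq:.4f}  freq*rank^1.07={freq * rank**1.07:.4f}")
```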
It is difficult to think of an application of Zipf’s law similar to the fraud detection based on Benford’s. Fake numbers might not follow Benford’s law if they are not carefully chosen, but “fake news” still follows the rules of grammar and fulfils Zipf’s law, so we cannot use it to identify them. Now, imagine a text written in an unknown language, so that we have to use Zipf’s law to determine whether it is a real language or just a joke. Does such a text even exist?
The Voynich manuscript
The Voynich manuscript is probably the most enigmatic, mysterious book ever written. It dates back to the beginning of the 15th century according to carbon-14 dating, and it is thought to have been written in the north of Italy. The book is full of illustrations of unknown plants – remarkably, one of them is very similar to an American variety of sunflower (remember: it was written around 1400). It looks like an ancient hoax, something similar to Borges’ Encyclopaedia of Tlön or the plot described in Umberto Eco’s Foucault’s Pendulum. But the most intriguing thing about the codex is the fact that it is written in a completely unknown language and writing system.
The text has been intensively studied by cryptographers, including WWII code-breakers and experts at the NSA, in order to decipher its contents. None of those efforts has succeeded. However, the entropy of the text is similar to that of real languages, and the words seem to follow Zipf’s law. If this medieval book is a joke, it is an astonishingly sophisticated one. The mystery is still open!
|
Let me introduce to you the topic of modal model theory, injecting some ideas from modal logic into the traditional subject of model theory in mathematical logic.
For example, we may consider the class of all models of some first-order theory, such as the class of all graphs, or the class of all groups, or all fields or what have you. In general, we have $\newcommand\Mod{\text{Mod}}\Mod(T)$, where $T$ is a first-order theory in some language $L$.
We may consider $\Mod(T)$ as a potentialist system, a Kripke model of possible worlds, where each model accesses the larger models, of which it is a submodel. So $\newcommand\possible{\Diamond}\possible\varphi$ is true at a model $M$, if there is a larger model $N$ in which $\varphi$ holds, and $\newcommand\necessary{\Box}\necessary\varphi$ is true at $M$, if $\varphi$ holds in all larger models.
In this way, we enlarge the language $L$ to include these modal operators. Let $\possible(L)$ be the language obtained by closing $L$ under the modal operators and Boolean connectives; and let $L^\possible$ also close under quantification. The difference is whether a modal operator falls under the scope of a quantifier.
Recently, in a collaborative project with Wojciech Aleksander Wołoszyn, we made some progress, which I’d like to explain. (We also have many further results, concerning the potentialist validities of various natural instances of $\Mod(T)$, but those will wait for another post.)
Theorem. If models $M$ and $N$ are elementarily equivalent, that is, if they have the same theory in the language of $L$, then they also have the same theory in the modal language $\possible(L)$. Proof. We show that whenever $M\equiv N$ in the language of $L$, then $M\models\varphi\iff N\models\varphi$ for sentences $\varphi$ in the modal language $\possible(L)$, by induction on $\varphi$.
Of course, by assumption the statement is true for sentences $\varphi$ in the base language $L$. And the property is clearly preserved by Boolean combinations. What remains is the modal case. Suppose that $M\equiv N$ and $M\models\possible\varphi$. So there is some extension model $M\subset W\models\varphi$.
Since $M\equiv N$, it follows by the Keisler-Shelah theorem that $M$ and $N$ have isomorphic ultrapowers $\prod_\mu M\cong\prod_\mu N$, for some ultrafilter $\mu$. It is easy to see that isomorphic structures satisfy exactly the same modal assertions in the class of all models of a theory. Since $M\subset W$, it follows that the ultrapower of $M$ is extended to (a copy of) the ultrapower of $W$, and so $\prod_\mu M\models\possible\varphi$, and therefore also $\prod_\mu N\models\possible\varphi$. From this, since $N$ embeds into its ultrapower $\prod_\mu N$, it follows also that $N\models\possible\varphi$, as desired. $\Box$
Corollary. If one model elementarily embeds into another $M\prec N$, in the language $L$ of these structures, then this embedding is also elementary in the language $\possible(L)$. Proof. To say $M\prec N$ in language $L$ is the same as saying that $M\equiv N$ in the language $L_M$, where we have added constants for every element of $M$, and interpreted these constants in $N$ via the embedding. Thus, by the theorem, it follows that $M\equiv N$ in the language $\possible(L_M)$, as desired. $\Box$
For example, every model $M$ is elementarily embedded into its ultrapowers $\prod_\mu M$, in the language $\possible(L)$.
We’d like to point out next that these results do not extend to elementary equivalence in the full modal language $L^\possible$.
For a counterexample, let’s work in the class of all simple graphs, in the language with a binary predicate for the edge relation. (We’ll have no parallel edges, and no self-edges.) So the accessibility relation here is the induced subgraph relation.
Lemma. The 2-colorability of a graph is expressible in $\possible(L)$. Similarly for $k$-colorability for any finite $k$. Proof. A graph is 2-colorable if we can partition its vertices into two sets, such that a vertex is in one set if and only if all its neighbors are in the other set. This can be effectively coded by adding two new vertices, call them red and blue, such that every node (other than red and blue) is connected to exactly one of these two points, and a vertex is connected to red if and only if all its neighbors are connected to blue, and vice versa. If the graph is $2$-colorable, then there is an extension realizing this statement, and if there is an extension realizing the statement, then (even if more than two points were added) the original graph must be $2$-colorable. $\Box$
A slightly more refined observation is that for any vertex $x$ in a graph, we can express the assertion, “the component of $x$ is $2$-colorable” by a formula in the language $\possible(L)$. We simply make the same kind of assertion, but drop the requirement that every node gets a color, and insist only that $x$ gets a color and the coloring extends from a node to any neighbor of the node, thereby ensuring the full connected component will be colored.
Theorem. There are two graphs that are elementarily equivalent in the language $L$ of graph theory, and hence also in the language $\possible(L)$, but they are not elementarily equivalent in the full modal language $L^\possible$. Proof. Let $M$ be a graph consisting of disjoint copies of a 3-cycle, a 5-cycle, a 7-cycle, and so on, with one copy of every odd-length cycle. Let $M^*$ be an ultrapower of $M$ by a nonprincipal ultrafilter.
Thus, $M^*$ will continue to have one 3-cycle, one 5-cycle, one 7-cycle and so on, for all the finite odd-length cycles, but then $M^*$ will have what it thinks are non-standard odd-length cycles, except that it cannot formulate the concept of “odd”. What it actually has are a bunch of $\mathbb{Z}$-chains.
In particular, $M^*$ thinks that there is an $x$ whose component is $2$-colorable, since a $\mathbb{Z}$-chain is $2$-colorable.
But $M$ does not think that there is an $x$ whose component is $2$-colorable, because an odd-length finite cycle is not $2$-colorable. $\Box$.
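As a quick computational sanity check of the graph-theoretic fact used in this proof (my own illustration, using the networkx library): odd cycles are not 2-colorable, while finite pieces of a $\mathbb{Z}$-chain are.

```python
import networkx as nx

# Odd cycles (3-cycle, 5-cycle, ...) are not bipartite, i.e. not 2-colorable.
for k in [3, 5, 7, 9]:
    print(f"C_{k} is 2-colorable:", nx.is_bipartite(nx.cycle_graph(k)))

# A finite segment of a Z-chain (a path) is bipartite, hence 2-colorable.
print("10-vertex path is 2-colorable:", nx.is_bipartite(nx.path_graph(10)))
```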
Since we used an ultrapower, the same example also shows that the corollary above does not generalize to the full modal language. That is, we have $M$ embedding elementarily into its ultrapower $M^*$, but it is not elementary in the language $L^\possible$.
Let us finally notice that the Łoś theorem for ultraproducts fails even in the weaker modal language $\possible(L)$.
Theorem. There are models $M_i$ for $i\in\mathbb{N}$ and a sentence $\varphi$ in the language of these models, such that every nonprincipal ultraproduct $\prod_\mu M_i$ satisfies $\possible\varphi$, but no $M_i$ satisfies $\possible\varphi$. Proof. In the class of all graphs, using the language of graph theory, let the $M_i$ be all the odd-length cycles. The ultraproduct $\prod_\mu M_i$ consists entirely of $\mathbb{Z}$-chains. In particular, the ultraproduct graph is $2$-colorable, but none of the $M_i$ are $2$-colorable. $\Box$
|
Norman Lewis Perlmutter successfully defended his dissertation under my supervision and will earn his Ph.D. at the CUNY Graduate Center in May, 2013. His dissertation consists of two parts. The first chapter arose from the observation that while direct limits of large cardinal embeddings and other embeddings between models of set theory are pervasive in the subject, there is comparatively little study of inverse limits of systems of such embeddings. After such an inverse system had arisen in Norman’s joint work on Generalizations of the Kunen inconsistency, he mounted a thorough investigation of the fundamental theory of these inverse limits. In chapter two, he investigated the large cardinal hierarchy in the vicinity of the high-jump cardinals. During this investigation, he ended up refuting the existence of what are now called the
excessively hypercompact cardinals, which had appeared in several published articles. Previous applications of that notion can be made with a weaker notion, what is now called a hypercompact cardinal.
Norman Lewis Perlmutter, “Inverse limits of models of set theory and the large cardinal hierarchy near a high-jump cardinal” Ph.D. dissertation for The Graduate Center of the City University of New York, May, 2013.
Abstract. This dissertation consists of two chapters, each of which investigates a topic in set theory, more specifically in the research area of forcing and large cardinals. The two chapters are independent of each other.
The first chapter analyzes the existence, structure, and preservation by forcing of inverse limits of inverse-directed systems in the category of elementary embeddings and models of set theory. Although direct limits of directed systems in this category are pervasive in the set-theoretic literature, the inverse limits in this same category have seen less study. I have made progress towards fully characterizing the existence and structure of these inverse limits. Some of the most important results are as follows. If the inverse limit exists, then it is given by either the entire thread class or a rank-initial segment of the thread class. Given sufficient large cardinal hypotheses, there are systems with no inverse limit, systems with inverse limit given by the entire thread class, and systems with inverse limit given by a proper subset of the thread class. Inverse limits are preserved in both directions by forcing under fairly general assumptions. Prikry forcing and iterated Prikry forcing are important techniques for constructing some of the examples in this chapter.
The second chapter analyzes the hierarchy of the large cardinals between a supercompact cardinal and an almost-huge cardinal, including in particular high-jump cardinals. I organize the large cardinals in this region by consistency strength and implicational strength. I also prove some results relating high-jump cardinals to forcing. A high-jump cardinal is the critical point of an elementary embedding $j: V \to M$ such that $M$ is closed under sequences of length $\sup\{\ j(f)(\kappa) \mid f: \kappa \to \kappa\ \}$. Two of the most important results in the chapter are as follows. A Vopenka cardinal is equivalent to a Woodin-for-supercompactness cardinal. The existence of an excessively hypercompact cardinal is inconsistent.
|
$A \subset \mathbb{R}^n$ is open $\iff$ $A \cap \overline{X} \subset \overline{A \cap X}$ for every $X \subset \mathbb{R}^n$.
The $ \overline{X} $ represents the closure of a set.
If I assume that $A$ is open, we have several cases for the left-hand side $A \cap \overline{X}$:
1. $A \cap \overline{X} = \emptyset$;
2. $A \cap \overline{X} = A$, since $A \subset \overline{X}$;
3. $A \cap \overline{X} = \overline{X}$, since $\overline{X} \subset A$;
4. $A \cap \overline{X} = C$, another set, neither open nor closed.
In the first case the statement holds even if $A$ is a closed set. Since $A$ and $\overline{X}$ are disjoint, the intersection is the empty set and we don't even need to evaluate the right-hand side, since $\emptyset \subset Z$ where $Z$ is ANY set, even $\emptyset$. So it holds true in the first case.
In the second case, since $A \subset \overline{X}$, the result is the smaller set $A$. If $A$ is open this is open; if $A$ is closed this is closed. Let's assume $A$ is open (by hypothesis); therefore on the right-hand side we have $A \subset X = A$ (open), and the closure of $A$ is $\overline{A} = \partial A \cup \mathring{A}$, which is closed.
On the left-hand side $A$ is open, so we have $A = \mathring{A}$; on the right-hand side $A$ is closed, so we have $A = \overline{A}$.
Since $\mathring{A} \subset A \subset \overline{A}$, the statement holds in the second case.
...
Is starting this way the best approach? Should I split into 4 cases? I think the proof will become too big, so this is probably not the right approach.
(PS: I want to apologize deeply; yesterday I was VERY tired and had no clue how to follow up. This morning, this start seems almost as easy as writing a letter to Santa.)
|
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering A, 334, 173-178, 2002
The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% strain. During the measurements a 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.
Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004
The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows the yarn to be drawn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004
In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the ratio of the maximum along-fiber force to the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here.
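A small numerical illustration of this formula (my own sketch; the migration length Q is not quoted above, so its value here is an assumption purely for illustration):

```python
import math

d  = 1e-9    # fiber diameter, m (quoted above)
mu = 0.13    # friction coefficient of CNTs (quoted above)
L  = 30e-6   # fiber length, m (quoted above)
Q  = 1e-6    # fiber migration length, m -- assumed value, not from the paper summary

k = math.sqrt(d * Q / mu) / (3 * L)

for alpha_deg in (10, 20, 30):
    a = math.radians(alpha_deg)
    ratio = math.cos(a) ** 2 * (1 - k / math.sin(a))
    print(f"helix angle {alpha_deg:2d} deg: sigma_yarn / sigma_fiber = {ratio:.3f}")
```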
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can control the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs.
This article can be found in our archive.
Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The longer the fibers, the better: we hope that fibers with a couple of hundred GPa of tensile strength can be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7mm! Also, in a range from 0.5 to 1.5mm fiber length the forests grown with this method can be spun into yarns.
The growth rate with this method was initially <math>60\ {\rm \mu m\ min.}^{-1}</math> and could be sustained for 90 minutes. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7mm after 2h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This made it possible to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007).
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to its inherent shape, as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle is not enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically.
Of course, for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough to be spun into a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear out, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science, 318, 1892, 2007
The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNTs and MWNTs that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values, as the limit of extraction speed from the aerogel was reached and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is comparable to the typical spacing between weak points. This can be seen from the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm long fiber is 20 times higher than on a 1mm long fiber.
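The weakest-link interpretation is easy to illustrate with a toy simulation (my own sketch; the flaw rate and strength values are made up and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def fiber_strengths(length_mm, n_fibers=10_000, flaws_per_mm=0.2,
                    strong=6.5, weak=2.0):
    """Toy weakest-link model: a fiber reaches the 'strong' class only if it
    happens to contain no flaw; otherwise it breaks at the 'weak' value."""
    n_flaws = rng.poisson(flaws_per_mm * length_mm, size=n_fibers)
    return np.where(n_flaws == 0, strong, weak)

for length in (1, 2, 20):
    s = fiber_strengths(length)
    print(f"{length:2d} mm fibers: fraction in the high-performance class = "
          f"{np.mean(s > 5.0):.2f}")
```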
As a conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
|
Let $\gamma : [0,1] \to \mathbb R^2$ be a finite $C^2$-curve in the plane which does not intersect itself. Let $p(z)$ be a second-degree polynomial in $z \in \mathbb R^2$. Can we construct a Riemannian metric $g$ along $\gamma$ such that
$\gamma$ is a geodesic of $g$, $\gamma$ has length 1, and $p(z)$ is the 2-jet of $g$ at $\gamma(0)$? (i.e. this prescribes $g$ and its first and second derivatives at $\gamma(0)$) Edit: As per Sergei's comment, assume that $p$ is chosen so that $\gamma$ does in fact solve the geodesic equation.
I think so, and my sketch of an argument follows the proof of existence of Fermi coordinates in reverse. I haven't worked through this in detail yet, though, because I'm more concerned about the next question:
Let $\gamma$ and $\eta$ both be finite curves $[0,1] \to \mathbb R^2$ which do not intersect themselves, and such that $\gamma(0) = \eta(0)$ and $\gamma(1) = \eta(1)$ with no other intersections (i.e. $\gamma \cup \eta$ is a piecewise, simple, closed $C^2$-curve in the plane). Can we construct a metric $g$ as above? Note that if so, $\gamma(0)$ and $\gamma(1)$ will be conjugate points along $\gamma$.
|
If your system is damped, then after some periods resonance will occur regardless of the point (phase) at which you apply the harmonic force on the swing; only the forcing frequency matters. There is a steady-state solution: $ x(t)= X \sin{(2 \pi f t +\phi)} $. So if you apply a force at the resonant frequency, it will vibrate at resonance as well, and your initial conditions (e.g. starting phase) can be ignored.
If your system is undamped, there will be two terms determined by initial conditions (see solution below), but their frequency is identical to the resonant frequency. So the resonance still occurs.
An undamped mass-spring system with harmonic force input is $m\ddot{x}(t) + kx(t) = F\sin(2\pi f t)$,
which can be solved by Wolfram. For a drive $F\sin(\omega_0 t)$ exactly at the natural frequency $\omega_0=\sqrt{k/m}$, the general solution is $x(t)=c_1\cos(\omega_0 t)+c_2\sin(\omega_0 t)-\frac{F}{2m\omega_0}\,t\cos(\omega_0 t)$.
In conclusion, if you apply a harmonic force on a linear mass-spring system, the resonance occurs regardless of damping and the initial conditions (e.g. different phase of a swing vibration).
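A quick numerical illustration (my own sketch, with arbitrarily chosen parameters) of the undamped case driven exactly at its natural frequency, showing that the growing resonant response is insensitive to the phase of the drive:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Undamped mass-spring system m*x'' + k*x = F*sin(w0*t + phi),
# driven exactly at its natural frequency w0 = sqrt(k/m).
m, k, F = 1.0, 4.0, 1.0
w0 = np.sqrt(k / m)

def rhs(t, y, phi):
    x, v = y
    return [v, (F * np.sin(w0 * t + phi) - k * x) / m]

t_span = (0.0, 100.0)
t_eval = np.linspace(*t_span, 2001)
for phi in (0.0, np.pi / 2, np.pi):
    sol = solve_ivp(rhs, t_span, [0.0, 0.0], args=(phi,),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    # The amplitude grows roughly linearly in time for every phase phi.
    print(f"phi = {phi:4.2f}: max |x| over the last 10 s = "
          f"{np.max(np.abs(sol.y[0][t_eval > 90])):.1f}")
```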
|
The theta function is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property.
Theta reciprocity: $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$.
This theorem, while fundamentally analytic—the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform—has serious arithmetic significance.
It is the key ingredient in the proof of the functional equation of the Riemann zeta function.
It expresses the automorphy of the theta function.
Theta reciprocity also provides an analytic proof (actually, the only proof, as far as I know) of the Landsberg-Schaar relation
$$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$
where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon>0$, and then let $\epsilon\to 0$.
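As a numerical sanity check (my own illustration), the relation can be verified directly for small $p$ and $q$:

```python
import numpy as np

def lhs(p, q):
    n = np.arange(p)
    return np.sum(np.exp(2j * np.pi * n**2 * q / p)) / np.sqrt(p)

def rhs(p, q):
    n = np.arange(2 * q)
    return (np.exp(1j * np.pi / 4) / np.sqrt(2 * q)
            * np.sum(np.exp(-1j * np.pi * n**2 * p / (2 * q))))

for p, q in [(3, 1), (5, 2), (7, 3), (12, 5)]:
    print(p, q, np.round(lhs(p, q), 6), np.round(rhs(p, q), 6))
```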
This reduces to the formula for the quadratic Gauss sum when $q=1$:
$$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} = \begin{cases} \sqrt{p} & \textrm{if } \; p\equiv 1\mod 4 \\\ i\sqrt{p} & \textrm{if } \; p\equiv 3\mod 4 \end{cases}$$
(where $p$ is an odd prime). From this, it's not hard to deduce Gauss's "golden theorem".
Quadratic reciprocity: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$.
For reference, this is worked out in detail in the paper "Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals" by Anders Karlsson.
I feel like there is some deep mathematics going on behind the scenes here, but I don't know what.
Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)?
Hopefully some wise number theorist can shed some light on this!
|
Suppose I am given a morphism $f:BG\to BGL_1(R)$ for $R$ some at least $E_1$-ring spectrum and $G$ a loop space. This corresponds, I believe, to an action of $G$ on $R$, coming from a morphism $G\to GL_1(R)$. The first part of my question is: does this imply a map of spectra $R[G]\wedge R\to R$ or something like this? The second part of my question regards the Thom spectrum $Mf$ associated to $f$. There are several standard constructions of $Mf$ from the given data, but I'm particularly interested in the interpretation of $Mf$ as $R/G$, the "quotient" of $R$ by the $G$ action (the first place I saw this discussed was in the preprint of Ando, Blumberg, Gepner, Hopkins and Rezk on units of ring spectra). Specifically, can $R/G$ be constructed by some specific bar construction, or something along these lines, in the $\infty$-category of spectra? ABGHR seem to indicate that the construction of the Thom spectrum as the colimit of the $BG$-shaped diagram inside of $R$-modules makes it obvious that we should call it $R/G$, but it's not as clear to me.
For your first question, the answer is yes. A map $BG\to BGL_1(R)$ gives you a map $G\to GL_1(R)$. Since $GL_1(R)$ is a collection of components of $\Omega^{\infty}(R)$, this gives you a map $G\to \Omega^{\infty}(R)$ or equivalently a map $\mathbb{S}[G]=\Sigma_+^\infty G\to R$. The multiplication map $R\wedge R\to R$ gives you by adjunction a map $R\to F(R,R)$. Postcomposing with that map gives you a map $\mathbb{S}[G]\to F(R,R)$. The target has an $R$-module structure given by the map:
$$F(R,R)\wedge R\cong F(R,R)\wedge F(\mathbb{S},R)\to F(R,R\wedge R)\to F(R,R)$$ where the second map is just smashing the two function spectra. We have an adjunction $-\wedge R:Spec\leftrightarrows Mod_R:forget$. Thus the map $\mathbb{S}[G]\to F(R,R)$ is the same data as a map of $R$-module spectra $R[G]:=R\wedge\Sigma_+^\infty G\to F(R,R)$. Again by adjunction, you get from this a map $R[G]\wedge R\to R$.
Regarding your second question. A map $BG$ to an $\infty$-category $C$ is exactly the data of an object $X$ of $C$ together with a map $G\to Map_{C}(X,X)$. Now by definition the colimit of this diagram in $C$ is an object $Y$ of $C$ with a map $X\to Y$ which is $G$-equivariant when you give $Y$ the trivial $G$ action and which is initial with that property. In other words, the data of another object $Z$ with a $G$-equivariant map $X\to Z$ (where $Z$ is equipped with the trivial $G$-action) should be the same as the data of a map $Y\to Z$. I think it is clear that $Y$ has to be $X/G$.
One explicit point set level method for constructing $R/G$ is the following. Pick a model for $G$ that is a strictly associative simplicial group and a map $G\to Map(R,R)$ (say you work in symmetric spectra). Then you can do a Bar construction $B(*,G,R)$. This is a simplicial object in the category of symmetric spectra whose $n$-simplices are the spectrum $\Sigma_+^{\infty}G^n\wedge R$. Then if $R$ is a cofibrant spectrum, the resulting simplicial object is Reedy cofibrant and computes $R/G$.
|
We owe Paul Dirac two excellent mathematical jokes. I have amended them with a few lesser known variations.
A.
Square root of the Laplacian: we want $\Delta$ to be $D^2$ for some first order differential operator (for example, because it is easier to solve first order partial differential equations than second order PDEs). Writing it out,
$$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}=\left(\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^n \gamma_j \frac{\partial}{\partial x_j}\right) = \sum_{i,j}\gamma_i\gamma_j \frac{\partial^2}{\partial x_i x_j},$$
and equating the coefficients, we get that this is indeed true if
$$D=\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\quad\text{and}\quad \gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}.$$
It remains to come up with the right $\gamma_i$'s. Dirac realized how to accomplish it with $4\times 4$ matrices when $n=4$; but a neat follow-up joke is to simply define them to be the elements $\gamma_1,\ldots,\gamma_n$ of
$$\mathbb{R}\langle\gamma_1,\ldots,\gamma_n\rangle/(\gamma_i\gamma_j+\gamma_j\gamma_i - 2\delta_{ij}).$$
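As a quick check (my own illustration), for $n=3$ the Pauli matrices already realize these relations:

```python
import numpy as np

# For n = 3 the Pauli matrices serve as gamma_1, gamma_2, gamma_3.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gammas = [sx, sy, sz]

for i, gi in enumerate(gammas):
    for j, gj in enumerate(gammas):
        anticommutator = gi @ gj + gj @ gi
        assert np.allclose(anticommutator, 2 * (i == j) * np.eye(2))
print("gamma_i gamma_j + gamma_j gamma_i = 2 delta_ij holds for the Pauli matrices")
```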
Using symmetry considerations, it is easy to conclude that the commutator of the $n$-dimensional Laplace operator $\Delta$ and the multiplication by $r^2=x_1^2+\cdots+x_n^2$ is equal to $aE+b$, where $$E=x_1\frac{\partial}{\partial x_1}+\cdots+x_n\frac{\partial}{\partial x_n}$$ is the Euler vector field. A boring way to confirm this and to determine the coefficients $a$ and $b$ is to expand $[\Delta,r^2]$ and simplify using the commutation relations between $x$'s and $\partial$'s. A more exciting way is to act on $x_1^\lambda$, where $\lambda$ is a formal variable:
$$[\Delta,r^2]x_1^{\lambda}=((\lambda+2)(\lambda+1)+2(n-1)-\lambda(\lambda-1))x_1^{\lambda}=(4\lambda+2n)x_1^{\lambda}.$$
Since $x_1^{\lambda}$ is an eigenvector of the Euler operator $E$ with eigenvalue $\lambda$, we conclude that
$$[\Delta,r^2]=4E+2n.$$
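This identity is easy to confirm symbolically (my own check, applying both sides to an arbitrary polynomial test function in three variables):

```python
import sympy as sp

n = 3
xs = sp.symbols("x1:4")                  # x1, x2, x3
r2 = sum(x**2 for x in xs)

lap   = lambda g: sum(sp.diff(g, x, 2) for x in xs)
euler = lambda g: sum(x * sp.diff(g, x) for x in xs)

# An arbitrary polynomial test function.
x1, x2, x3 = xs
f = x1**3 * x2 + 5 * x2 * x3**2 + x1 * x2 * x3 + 7

commutator = lap(r2 * f) - r2 * lap(f)   # [Laplacian, r^2] applied to f
print(sp.simplify(commutator - (4 * euler(f) + 2 * n * f)))   # 0
```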
B.
Dirac delta function: if we can write
$$g(x)=\int g(y)\delta(x-y)dy$$
then instead of solving an inhomogeneous linear differential equation $Lf=g$ for each $g$, we can solve the equations $Lf=\delta(x-y)$ for each real $y$, where a linear differential operator $L$ acts on the variable $x,$ and combine the answers with different $y$ weighted by $g(y)$. Clearly, there are fewer real numbers than functions, and if $L$ has constant coefficients, using translation invariance the set of right hand sides is further reduced to just one, $\delta(x)$. In this form, the joke goes back to Laplace and Poisson.
What happens if instead of the ordinary geometric series we consider a doubly infinite one? Since
$$z(\cdots + z^{-n-1} + z^{-n} + \cdots + 1 + \cdots + z^n + \cdots)= \cdots + z^{-n} + z^{-n+1} + \cdots + z + \cdots + z^{n+1} + \cdots,$$
the expression in the parenthesis is annihilated by the multiplication by $z-1$, hence it is equal to $\delta(z-1)$. Homogenizing, we get
$$\sum_{n\in\mathbb{Z}}\left(\frac{z}{w}\right)^n=\delta(z-w)$$
This identity plays an important role in conformal field theory and the theory of vertex operator algebras.
Pushing infinite geometric series in a different direction,
$$\cdots + z^{-n-1} + z^{-n} + \cdots + 1=-\frac{z}{1-z} \quad\text{and}\quad 1 + z + \cdots + z^n + \cdots = \frac{1}{1-z},$$
which add up to $1$. Since the two halves share the term $z^0=1$, the doubly infinite geometric series itself sums to $1-1=0$: this time, the sum of the doubly infinite geometric series is zero! Thus the point $0\in\mathbb{Z}$ is the sum of all lattice points on the non-positive half-line and all lattice points on the non-negative half-line:
$$0=[\ldots,-2,-1,0] + [0,1,2,\ldots] $$
A vast generalization is given by Brion's formula for the generating function for the lattice points in a convex lattice polytope $\Delta\subset\mathbb{R}^N$ with vertices $v\in{\mathbb{Z}}^N$ and closed inner vertex cones $C_v\subset\mathbb{R}^N$:
$$\sum_{P\in \Delta\cap{\mathbb{Z}}^N} z^P = \sum_v\left(\sum_{Q\in C_v\cap{\mathbb{Z}}^N} z^Q\right),$$
where the inner sums in the right hand side need to be interpreted as rational functions in $z_1,\ldots,z_N$.
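In the one-dimensional case of a lattice segment $[0,m]$, Brion's formula is easy to verify symbolically (my own sketch):

```python
import sympy as sp

z = sp.symbols("z")
m = 5  # the lattice segment [0, m] in Z

# Generating function of the lattice points of the segment.
lhs = sum(z**j for j in range(m + 1))

# One geometric series per vertex cone, read as a rational function.
cone_at_0 = 1 / (1 - z)          # 1 + z + z^2 + ...
cone_at_m = z**m / (1 - 1 / z)   # z^m + z^(m-1) + z^(m-2) + ...
rhs = cone_at_0 + cone_at_m

print(sp.simplify(lhs - rhs))    # 0
```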
Another great joke based on infinite series is the Eilenberg swindle, but I am too exhausted by fighting the math preview to do it justice.
|
Bear with me while I try to explain exactly what the question is. The question “Can a curvature in time (and not space) cause acceleration?” is imagining a coordinate system in which the curvature is only in the time coordinate. I want to be as precise as possible about what we mean by curvature in the time coordinate.
It seems to me that a good starting point is the geodesic equation:
$$ {d^2 x^\mu \over d\tau^2} + \Gamma^\mu_{\alpha\beta} {dx^\alpha \over d\tau} {dx^\beta \over d\tau} = 0 $$
because if we stick to Cartesian coordinates then in flat space all the Christoffel symbols vanish and we're left with:
$$ {d^2 x^\mu \over d\tau^2} = 0 $$
So a coordinate system in which spacetime is only curved in the time coordinate, $x^0$, would be one in which:
$$\begin{align} {d^2 x^0 \over d\tau^2} &\ne 0 \\ {d^2 x^{\mu\ne 0} \over d\tau^2} &= 0 \end{align}$$
So my question is whether this is a sensible perspective.
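For concreteness, here is a small symbolic sketch (my own illustration, with an assumed static weak-field metric whose only non-trivial component is $g_{tt}=-(1+2\phi(x))$) of how the spatial geodesic equation picks up a nonzero term purely from the time-time metric component:

```python
import sympy as sp

t, x, y, z = sp.symbols("t x y z")
coords = [t, x, y, z]
phi = sp.Function("phi")(x)          # assumed weak Newtonian potential

# Static metric whose only non-trivial component is g_tt = -(1 + 2*phi(x)).
g = sp.diag(-(1 + 2 * phi), 1, 1, 1)
ginv = g.inv()

def christoffel(mu, a, b):
    return sp.Rational(1, 2) * sum(
        ginv[mu, nu] * (sp.diff(g[nu, a], coords[b])
                        + sp.diff(g[nu, b], coords[a])
                        - sp.diff(g[a, b], coords[nu]))
        for nu in range(4))

print("Gamma^x_tt =", sp.simplify(christoffel(1, 0, 0)))   # phi'(x)
print("Gamma^t_tx =", sp.simplify(christoffel(0, 0, 1)))   # phi'(x) / (1 + 2*phi)
# Hence d^2x/dtau^2 = -Gamma^x_tt (dt/dtau)^2 is nonzero even though
# only the time-time component of the metric varies.
```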
|
I'm a 2nd year physics undergraduate and recently I've volunteered to give a short presentation on the derivation of the Sackur-Tetrode equation and its use in resolving the Gibbs paradox. I've looked on the Internet and in some books and found many methods for deriving this result. The one I'm going with is the derivation from the book
Thermodynamics and Statistical Mechanics by W. Greiner. However, there are some things I'm still not certain about (I dared to expand the argument a little) and in order to seem like I know what I'm talking about, I need some clarification. Here's more or less how I see it going:
In order to obtain an expression for the entropy of an ideal gas with $N$ (identical) particles of mass $m$ occupying a volume $V$ and with internal energy $E$, we use Boltzmann's entropy formula: $$S(N,V,E)=k\ln \Omega(N,V,E).$$
This reduces the problem to finding $\Omega(N,V,E)$, the number of microstates of our system which realise the macrostate $(N,V,E)$.
A microstate of the system gives information about the entire system at a microscopic level, that is, it specifies the exact momentum and position of each constituent of the system. There are $N$ particles and each one has three position coordinates and three momentum coordinates, therefore our system has $6N$ degrees of freedom.
We imagine a $6N$ dimensional space, called the phase space, where each axis corresponds to a degree of freedom or the system. Each point in this space corresponds to a specific microstate: $(r_x^{(1)},r_y^{(1)},r_z^{(1)}, p_x^{(1)}, p_y^{(1)}, p_z^{(1)},...,r_x^{(N)},r_y^{(N)},r_z^{(N)}, p_x^{(N)}, p_y^{(N)}, p_z^{(N)})$.
The microstates which correspond to our particular macrostate $(N,V,E)$ form some sort of high-dimensional hypersurface. The "volume" of this surface, $\phi(N,V,E)$, would give us a quantity related to the number of microstates. Of course in phase space, "volumes" of surfaces have dimension of action raised to some power, and since we're interested in a number, the idea is to introduce some sort of elementary bit of "volume", $\phi_0$ and then divide to produce a number $\frac{\phi(N,V,E)}{\phi_0}$ which we claim to be the number of relevant microstates.
Fortunately, it physically makes sense for such an elementary "volume" to exist, since from the uncertainty principle we know that a point in phase space really cannot be specified with infinite precision, and that there is a quantum of action, which is nice because our "volumes" have dimension of action to some power.
We already used the fact that we have $N$ particles. Now it's time to take our other givens into account. Let's start with $E$. Classically, for an isolated system of noninteracting particles, the total energy is purely kinetic and given by the sum of the kinetic energies of each particle:
$$E=\sum_{k=1}^{N} \frac{ [p_{x}^{(k)} ]^2 +[p_{y}^{(k)} ]^2 +[p_{z}^{(k)} ]^2 }{2m}$$
That gives a bound for our momentum coordinates. The position coordinates can be likewise bounded using our last given: the volume $V$. We may assume that the system occupies a cube of side length $l$; then for every $k=1,2,...,N$ we have:
$$0\le r_{x}^{(k)} \le l ; \quad 0\le r_{y}^{(k)} \le l ; \quad 0\le r_{z}^{(k)} \le l ; $$
This seems like cheating a bit (why assume a particular shape of the container), but the assumption can be later dropped because in further calculation we don't really care about the bounds on the position coordinates, thanks to the fact that the total energy is independent of them. So this works for arbitrary $V$.
The two previous equations specify the desired "volume" in our phase space. The dimension of this object is $6N-1$ because we only have one constraint, namely the energy being constant. To calculate its "volume" we need to consider the wonderful looking integral:
$$\phi(N,V,E)=|A|=\int_{A} dr_{x}^{(1) }dr_{y}^{(1)} dr_{z}^{(1)} dp_{x}^{(1) }dp_{y}^{(1)} dp_{z}^{(1)} ... dr_{x}^{(N) }dr_{y}^{(N)} dr_{z}^{(N)} dp_{x}^{(N) }dp_{y}^{(N)} dp_{z}^{(N)}$$
Where $A$ is the set given by the previous equations. We can simplify this notation by writing
$$\phi(N,V,E)=|A|=\int_{A} d^{3N}r \ d^{3N}p$$
Notice that the position coordinates are independent of momentum coordinates, so really, the integral factors:
$$\phi(N,V,E)=|A|=\int_{A_r} d^{3N}r \int_{A_p} d^{3N}p$$
Where $A_r$ is the set given by the position conditions and $A_p$ by the momentum conditions. The first factor is simply $V^N$, or $l^{3N}$. The second factor is slightly trickier to calculate, but it's just the surface of a 3N dimensional sphere with radius $\sqrt{2mE}$. This already implies that the units will be $\mbox{[momentum]}^{3N-1}$, and so the units of $|A|$ are $\mbox{[momentum]}^{3N-1} \mbox{[length]}^{3N}$.
The actual derivation of the surface of an $n$-dimensional sphere of radius $r$ will be left out of this, and we just quote the result: $S_n = \frac{2 \pi ^{n/2}}{\Gamma (n/2)}r^{n-1}$
Anyway, we have obtained a solution for our "volume" $|A|$:
$$\phi(N,V,E)=|A|=2 V^N \frac{\pi ^{3N/2}}{\Gamma(3N/2)}\sqrt{2mE}^{3N-1} \quad \mbox{[momentum]}^{3N-1} \mbox{[length]}^{3N}$$
Here I have a big problem. As said before, the number of states is $\frac{\phi(N,V,E)}{\phi_0}$. I would
like for $\phi_0$ to be something like $h^{3N}$ or $h^{3N-1}$ but this gives wrong units! Ignoring this for the moment and just taking $\phi_0 = h^{3N-1}$ here's the rest of the derivation:
We thus obtain an expression for the number of microstates:
$$\Omega(N,V,E) = 2 V^N \frac{\pi^{3N/2}}{\Gamma(3N/2)} \left( \frac{\sqrt{2mE}}{h} \right) ^{3N-1}$$
Taking the log of this and using appropriate approximations etc. (I skip this here but won't during the presentation, of course) we arrive at:
$$ S(N,V,E) = kN \left( \frac{3}{2} + \ln\!\left[ V \left( \frac{4 \pi m E}{3 N h^2} \right)^{\frac{3}{2}} \right] \right)$$
This unfortunately leads to the Gibbs paradox. The reason for this is that we've assumed the particles to be distinguishable when in reality they are not. In order to account for this, we need to introduce the Gibbs correction factor into the total amount of states $\Omega$:
$$\Omega_C(N,V,E) = \frac{1}{N!} \Omega(N,V,E) $$
Dividing by the amount of permutations accounts for indistinguishability. Going through the same derivation leads to the proper formula now. Thus the paradox is fixed.
I would love to hear some feedback about this presentation and particularly about the problem I had with $\phi_0$. Additionally, the introduction of the Gibbs correction factor seems kind of shady to me; I read somewhere that it's again just an approximation and that proper counting would produce a more complicated result than just dividing by the factorial, but once again the error is negligible.
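To make the effect of the $\frac{1}{N!}$ factor concrete, here is a small numerical sketch I could show (my own illustration, not part of Greiner's derivation). It evaluates $\ln \Omega$ with $\phi_0 = h^{3N}$ (the $h^{3N}$ versus $h^{3N-1}$ ambiguity only shifts the result by one power of $h$ and does not affect the comparison) and checks extensivity, i.e. whether $S(2N,2V,2E) = 2S(N,V,E)$, with and without the Gibbs correction:

import math

# Constants in SI units; the particle mass is that of helium-4, purely for illustration.
k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J*s
m = 6.6464731e-27       # kg

def ln_Omega(N, V, E, gibbs=True):
    # ln of the microstate count, using phi_0 = h^(3N) and the n-sphere surface formula quoted above.
    ln_phi = (math.log(2.0)
              + N * math.log(V)
              + 1.5 * N * math.log(math.pi)
              - math.lgamma(1.5 * N)
              + 0.5 * (3 * N - 1) * math.log(2.0 * m * E)
              - 3 * N * math.log(h))
    if gibbs:
        ln_phi -= math.lgamma(N + 1.0)   # the 1/N! Gibbs correction
    return ln_phi

# Extensivity test: double (N, V, E) and compare S(2N,2V,2E) with 2*S(N,V,E) in units of k_B.
N, V = 10_000, 1.0e-3
E = 1.5 * N * k_B * 300.0                # roughly a gas at 300 K
for gibbs in (False, True):
    diff = ln_Omega(2 * N, 2 * V, 2 * E, gibbs) - 2.0 * ln_Omega(N, V, E, gibbs)
    print(f"Gibbs correction={gibbs}: S(2N,2V,2E) - 2*S(N,V,E) = {diff:.1f} k_B, "
          f"i.e. {diff / (2 * N):.4f} k_B per particle")
# Without the 1/N! factor the mismatch is of order N*ln(2) (the Gibbs paradox);
# with it, the mismatch per particle is negligible.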
|
In this paper, we present quantum algorithms to solve the linear structures of Boolean functions. "Suppose a Boolean function $f : \{0, 1\}^{n}\rightarrow \{0, 1\}$ is given as a black box. There exists an unknown $n$-bit string $\alpha$ such that $f(x)=f(x\oplus \alpha)$. We do not know the $n$-bit string $\alpha$, except its Hamming weight $W(\alpha)=m$, $1\le m\le n$. Find the string $\alpha$." In the case $W(\alpha)=1$, we present an efficient quantum algorithm to solve this linear construction for general $n$. In the case $W(\alpha)>1$, we present an efficient quantum algorithm to solve it in most cases. So, we show that the problem can be "solved nearly" in quantum polynomial time $O(n^{2})$. From this point of view, the quantum algorithm is more efficient than any classical algorithm.
Quantum Information Processing – Springer Journals
Published: Feb 4, 2015
|
I'm working on a 4-layer PCB with a U-Blox module and I'm trying to calculate the space between the fencing vias next to the Antenna trace and for the stitching vias.
According to the datasheet we have the following possible frequencies:
The case which would cause the closest vias is the last one.
According to the calculations suggested here https://electronics.stackexchange.com/a/42028 I should put them ideally with the following spacing: $$ \frac{\lambda}{20} = \frac{c/2170\ \text{MHz}}{20} = \frac{138.25\ \text{mm}}{20} = 6.9\ \text{mm} $$ but if we look at this image from the datasheet it seems way too much compared to what they did; consider that the side of the module (the white outline) shown in the above figure is 16mm, so according to the calculation there should be a dot at the beginning and at the end of the track and that's it.
My best guess is they are basing the fence spacing on the Greatest Common Divisor for the mean value of each frequency reported in the table to cover each operational mode.
Stitching wise I report what found in this page: https://www.edn.com/electronics-blogs/the-practicing-instrumentation-engineer/4406491/Via-spacing-on-high-performance-PCBs
$$ \lambda = \frac{300}{F\,[\text{MHz}]\times\sqrt{\varepsilon_{R}}} = \frac{300}{2170\times 2.097} \approx 66\ \text{mm} $$ now \$\lambda/8\$ is about 8.2mm and that should be the necessary spacing for the ground stitching (Er = 4.4 for typical FR-4 PCB material).
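As a quick sanity check on these numbers (a sketch using only the assumptions already stated above: f = 2170 MHz, FR-4 with Er = 4.4, the λ/20 rule for fencing and the λ/8 rule for stitching), the two spacings can be computed directly:

# Rough via-spacing estimate for the worst-case (highest) frequency band.
c = 299_792_458.0          # speed of light, m/s
f = 2170e6                 # Hz
eps_r = 4.4                # typical FR-4

lam_air = c / f                       # free-space wavelength
lam_pcb = lam_air / eps_r ** 0.5      # wavelength inside the dielectric

print(f"free-space wavelength   : {lam_air*1e3:.1f} mm -> lambda/20 = {lam_air/20*1e3:.1f} mm (fencing)")
print(f"in-dielectric wavelength: {lam_pcb*1e3:.1f} mm -> lambda/8  = {lam_pcb/8*1e3:.1f} mm (stitching)")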
To sum up:
1) How much space there should be between each fencing via in my case?
2) How much space there should be between each stitching via in my case?
3) Does the frequency actually relates to the fencing/stitching space, or placing the vias closer than the smallest \$\lambda/20\$ (for fencing) and \$\lambda/8\$ (for stitching) is all that matters?
4) Is all that stitching shown in figure really necessary from an RF point of view?
|
Is there a positive integer $n$ such that $2^n \equiv 1 \pmod{7}$?
$\frac{1}{15}, \frac{2}{15}, \frac{3}{15}, \ldots, \frac{14}{15}, \frac{15}{15}$ — How many of these fractions cannot be reduced?
Is 999999 divisible by 7?
Hint: $999999 = 10^6 - 1$.
What is the last digit of $3^{100}$?
Which of these is congruent to $10^{100} \pmod{11}$?
|
What is meant by a "system curve" that is used to determine the operating point of a pump. I know what a "pump curve" is, but I often hear about a "system curve" being generated so that the pump operating point can be determined by where the pump curve and system curve intersect.
Imagine you were pumping through a single pipe: all the pump does is take in water from a source and push it along a very long length of pipe. Since this is just plain piping, the friction loss isn't difficult to determine, but you need to know the velocity to calculate the Reynolds number; without it, you can't use a Moody chart to find a friction factor and hence the loss. However, with a good estimate of the range of flow rates, a friction factor can be estimated. With a friction factor estimate, but the flow rate still unknown, you finally settle on the pressure loss across this long length of pipe:
$$\Delta P =f_D \frac{\rho V^2}{2}\frac{L}{D} = f_D \frac{8\rho Q^2}{\pi^2}\frac{L}{D^5}$$
Where P is pressure, Q is the flow rate, $\rho$ is the density, L is pipe length, D is diameter, and $f_D$ is the friction factor. Let's take cast iron 100mm pipe, highly turbulent regime, running for 10km. Plugging all the numbers in this theoretical situation, we come to (hypothetically):
$$\Delta P = 0.304 Q^2 \frac{kPa}{(\frac{L}{s})^2} $$
Now we have a formula to give an idea for pump sizing! More importantly, we have a curve. Plugging in any arbitrary value of Q (in L/s) would give you the pressure loss across the pipe in kPa. This curve is the system curve, and it naturally looks like a parabola. This is, in general, true for all fluid systems without any type of controls response (acting under natural behavior). You can plot this curve on top of the pump curve, and find out where the system will reach equilibrium.
Note that with such a simple system it isn't hard to get a lot of liquid out. More complex systems have far more intricate mechanics, but the general parabola rule still applies. Thus, most people work through the complex mechanics of their system to reduce it to a single point. In our case, operating at 25 L/s would mean a pump that has to supply 190 kPa. This is the single operating point. Typically, many engineers will slightly increase these values, so they always end up with an operating point that's safe. In this case, going to 30 L/s would mean 275 kPa. Thus, the only parabola that goes through the point (30 L/s, 275 kPa) and the origin (there is only one parabola that does this) would be the system curve.
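To make the intersection idea concrete, here is a minimal sketch; the 0.304·Q² system curve is the one from the example above, while the pump-curve coefficients are made up purely for illustration:

import numpy as np

# System curve from the example above: dP_system = 0.304 * Q^2  (kPa, with Q in L/s).
def system_head(q):
    return 0.304 * q ** 2

# A made-up pump curve for illustration: available head falls off as flow increases.
def pump_head(q):
    return 330.0 - 1.5 * q - 0.05 * q ** 2   # kPa

# Operating point: where the pump curve and the system curve intersect.
q = np.linspace(0.0, 60.0, 6001)
i = np.argmin(np.abs(pump_head(q) - system_head(q)))
print(f"operating point: Q ~ {q[i]:.1f} L/s at ~ {system_head(q[i]):.0f} kPa")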
Usually a pressure / flow characteristic curve for a given power.
They are usually provided by the manufacturer.
Unless you do an engineering degree where you get to measure the flow rate, pressure, power, calculate the efficiency etc and then have to plot all the dimensionless groups etc to show what is happening.
The system curve is simply a plot of the required head against the system flow rate; the required head (mainly major and minor losses) increases as the flow rate increases.
On the other hand, for a pump curve, you can see that the available head by the pump decreases with the increase in flow rate. So, if you put the two curves on a single plot (and if the pump is properly selected), you will have an intersection point between the two curves, and that's your operating point.
|
We have an elementary sharp lower bound for the regulator of a real quadratic field as a function of the discriminant
$$R\geq \tfrac{1}{2}(\sqrt{d-4}+\sqrt{d})$$
It is sharp because the equality holds infinitely often for $d=x^2+4$.
The problem of finding a good upper bound seems much more complicated, but there still is a very nice (and relatively easy) bound for $\mathbb{Q}(\sqrt{D})$ depending only on $D$.
Loo-Keng Hua proved that $L(1,\chi)<1+\tfrac{1}{2} \log D$, so using the trivial estimate $h\geq 1$ and substituting on Dirichlet's class number formula we get
$$R<\sqrt{D}(\tfrac{1}{2}\log D+1)$$
The way this bound is presented (indirectly) in this survey suggests that it might be the best one currently known for all real quadratic fields (well, for $D>5$). Given how old Hua result is (1942), this seems unlikely, but I haven't been able to find a better one so far.
I am aware of much better estimates which work
for sufficiently large $D$. For example it follows from work of Lavrik that
$$R<(0.263+o(1))\sqrt{D}\log D$$
What is the best known bound for $R$ which holds
for all real quadratic fields and depends only on $D$? (or for $D>k$ with the $k$ explicit and "small")
I'm also interested in what the true bound is expected to be.
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message stubner Joined: 14 Mar 2006 Posts: 7
Posted: Wed Apr 19, 2006 2:19 pm Post subject: absolute values Hi everybody,
it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off balance to me. The space to the right of the latter looks larger than the space to the left of the letter.
In order to test this systematically, I have taken some code from testfont.tex (without fully understanding it ;-):
\documentclass{article}
\renewcommand{\rmdefault}{ptm}
\usepackage[slantedGreek]{mtpro2}
\def\math{\def\ii{i} \def\jj{j}
\def\\##1{|##1|+}\mathtrial
\def\\##1{##1_2+}\mathtrial
\def\\##1{##1^2+}\mathtrial
\def\\##1{##1/2+}\mathtrial
\def\\##1{2/##1+}\mathtrial
\def\\##1{##1,{}+}\mathtrial
\def\\##1{d##1+}\mathtrial
\let\ii=\imath \let\jj=\jmath \def\\##1{\hat##1+}\mathtrial}
\newcount\skewtrial \skewtrial='177
\def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O
\\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z \\a \\b \\c \\d \\e \\f \\g
\\h \\\ii \\\jj \\k \\l \\m \\n \\o \\p \\q \\r \\s \\t \\u \\v \\w \\x \\y
\\z \\\alpha \\\beta \\\gamma \\\delta \\\epsilon \\\zeta \\\eta \\\theta
\\\iota \\\kappa \\\lambda \\\mu \\\nu \\\xi \\\pi \\\rho \\\sigma \\\tau
\\\upsilon \\\phi \\\chi \\\psi \\\omega \\\vartheta \\\varpi \\\varphi
\\\Gamma \\\Delta \\\Theta \\\Lambda \\\Xi \\\Pi \\\Sigma \\\Upsilon
\\\Phi \\\Psi \\\Omega \\\partial \\\ell \\\wp$\par}
\def\mathsy{\begingroup\skewtrial='060 % for math symbol font tests
\def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L
\\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z$\par}
\math\endgroup}
\begin{document}
\math
\end{document}
IMO most lowercase letters look off-center to me, with too much space to the right of the letter (f, v, w, e are exceptions). The Greeks are fine, while the uppercase letters are mixed (R has too much space to the left, U and M on the right). The other tests (besides absolute values) look fine.
Other opinions?
cheerio
ralf jautschbach Joined: 17 Mar 2006 Posts: 11
Posted: Wed Apr 19, 2006 3:52 pm Post subject: Re: absolute values
stubner wrote: Hi everybody,
it seems I ahven't used much absolute values lately since only yesterday I found that things like $|x|$ or $|o|$ look offbalance to me. The space to the right of the latter looks larger than the space to the left of the letter.
ralf
|i| and |\pi|, for example, seem to have too much space on the right. |\eta| looks like there is not enough space on the right.
Jochen Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Thu Apr 20, 2006 4:04 pm Post subject: Basically, I want to reiterate the remark I made in the last post to the "firstimpressions" posting by stubner. If you start looking carefully
at any mathematical typesetting (as opposed to just reading it) you will find thousands of non-optimal things. Some of these are actually due
to the design of TeX (see some remarks of mine in the "spacing" posting by zeller), and some to the varying circumstances of individual characters.
All sorts of things that one would never even notice while reading a mathematics paper can stand out when one looks at things a character at a time, and sometimes one becomes overly concerned.
(The link
http://support.pctex.com/files/JWPXMWRZTYLV/abs.pdf
shows Computer Modern and MTPro2 characters inside absolute values
and parentheses, and I think that you will find cases where CM is spaced better than MTPro2, but also cases where the opposite is true.)
For example, although I agree that |M| and |U| have too much space to the right of the letters, I wouldn't agree that |R| has too much space to the left of the R, or at most just a tiny extra bit of space. By contrast, in Computer Modern, the |R| definitely has this problem to a much greater degree.
Notice, moreover, that in MTPro2, (M) and (U) and (R) look nicely balanced. Of course, that's partly because of the character of the right parenthesis---it has a top piece that extends backwards, unlike almost
all characters! In Computer Modern this doesn't pose as great a problem
mainly because the ) is much thinner and unshaped.
The case of |i|, where there is certainly more space on the right, is also instructive. Notice that the dot on the Times-Italic i is very close to being the rightmost part of the character, while in CM it is nowhere near the right, because of the curlicue at the bottom. For this reason, I had to make the italic correction of the i rather big; otherwise, superscripts would
be very close to the dot, making reading very unpleasant. Since the italic correction is always added to the i, this gives the extra space before the |
or the ). Naturally, I had to compensate for this by adding more negative
kerning between the i and all other characters, but you can't kern with the ), as I've mentioned before, in one of the two postings I mentioned.
Similarly, if you compare x^i in CM and MTPro2, you'll see that the superscript i in CM has a curlicue to the left, which keeps it separated from the x, while in MTPro2, I needed to make a greater italic correction to the x in order to get superscripts adequately far away.
TeX has \scriptspace to determine extra space after a subscript or superscript; alas, that it does not also have a \prescriptspace, to determine some extra space _before_ superscripts! (And similarly, see
one of the previously mentioned postings, the spacing in scriptstyle and scriptscriptstyle should be more flexible.)
At any rate, for now, I'll leave things as they are. Possibly in a future release I'll try to address some of these questions, though it simply isn't possible to optimize all spacing.
|
Essentially, I want to find a single matrix $X$ such that conjugation by $X$ sends:
$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \mapsto \begin{bmatrix} -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}$$ and
$$\frac{1}{2} \begin{bmatrix} -\frac{1}{2} & -1 & -\sqrt{\frac{3}{2}} & -1 & -\frac{1}{2} \\ -i & -i & 0 & i & i \\ \sqrt{\frac{3}{2}} & 0 & -1 & 0 & \sqrt{\frac{3}{2}} \\ i & -i & 0 & i & -i \\ -\frac{1}{2} & 1 & -\sqrt{\frac{3}{2}} & 1 & -\frac{1}{2} \\ \end{bmatrix} \mapsto \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{\frac{2 i \pi }{3}} & 0 \\ 0 & 0 & 0 & 0 & e^{-\frac{1}{3} (2 i \pi )} \\ \end{bmatrix}$$
The matrices are unitary and I have a mathematical guarantee this is possible.
Some math details: The two matrices on the left generate the tetrahedral group, and the two matrices on the right also generate the tetrahedral group. The matrices on the left are in the Spin(2) representation of the rotation group (rotation by 180 degrees about the z axis and rotation by 120 degrees about the $(1,1,1)/\sqrt{3}$ axis). Through some fancy representation theory (looking at the traces of the matrices on the left), you can show that you have to be able to decompose it into the direct sum of irreducible representations, and those give the matrices on the right.
I was able to get close to solving the problem. If I simply worry about the more complicated 2nd matrix, I can use the eigensystem to ensure the 2nd map is satisfied. The first map is then a mess with the top-left 3x3 block not how I want it. Also, this generalizes poorly to higher dimensions (2 sets of (2n+1)x(2n+1) matrices that I'm guaranteed are similar to other (2n+1)x(2n+1) matrices).
Is there any magic command or simple solution to this problem? I can probably do some sort of iterative procedure that keeps detailed track of all the eigenvectors/values, first finding the correct X for the first matrix, and then the correct X for the second matrix compatible with the first one. But that seems like it would be very involved!
This question is similar to Is there a clean way to extract the subspaces invariant under a list of matrices?, but I think my problem is much simpler because I know the invariant subspaces and in detail what I want the output to look like.
{s1, s2} = {
  {{1, 0, 0, 0, 0}, {0, -1, 0, 0, 0}, {0, 0, 1, 0, 0}, {0, 0, 0, -1, 0}, {0, 0, 0, 0, 1}},
  {{-(1/4), -(1/2), -(Sqrt[(3/2)]/2), -(1/2), -(1/4)},
   {-(I/2), -(I/2), 0, I/2, I/2},
   {Sqrt[3/2]/2, 0, -(1/2), 0, Sqrt[3/2]/2},
   {I/2, -(I/2), 0, I/2, -(I/2)},
   {-(1/4), 1/2, -(Sqrt[(3/2)]/2), 1/2, -(1/4)}}};
{t1, t2} = {
  {{-1, 0, 0, 0, 0}, {0, -1, 0, 0, 0}, {0, 0, 1, 0, 0}, {0, 0, 0, 1, 0}, {0, 0, 0, 0, 1}},
  {{0, 1, 0, 0, 0}, {0, 0, 1, 0, 0}, {1, 0, 0, 0, 0},
   {0, 0, 0, E^(2 Pi I/3), 0}, {0, 0, 0, 0, E^(4 Pi I/3)}}};
ct = ConjugateTranspose;
(* "Arg", "SortBy" and "Transpose" are used to make sure that the eigenvalues
   and eigenvectors of s2 and t2 are in the same order. *)
{t2d, t2x} = N@Transpose[SortBy[Transpose[Eigensystem[t2]], Arg[N[#[[1]]]] &]];
(* the spectrum is degenerate, so we have to manually orthonormalize.
   This also ensures t2x and s2x are both unitary matrices *)
t2x = Transpose[Chop[Orthogonalize[t2x]]];
{s2d, s2x} = N@Transpose[SortBy[Transpose[Eigensystem[s2]], Arg[N[#[[1]]]] &]];
s2x = Transpose[Chop[Orthogonalize[s2x]]];
X = ct[s2x.ct[t2x]];
MatrixForm[Chop[X.s1.ct[X]]]
MatrixForm[Chop[X.s2.ct[X]]]
|
Let $V,H$ be separable Hilbert spaces such that there are dense injections $V \hookrightarrow H \hookrightarrow V^*$. (For example, $H = L^2(\mathbb{R}^n)$, $V = H^1(\mathbb{R}^n)$, $V^* = H^{-1}(\mathbb{R}^n)$.) We can then define the vector-valued Sobolev space $W^{1,2}([0,1]; V, V^*)$ of functions $u \in L^2([0,1]; V)$ which have one weak derivative $u' \in L^2([0,1], V^*)$. Such spaces arise often in the study of PDE involving time.
I would like a reference for some simple facts about $W^{1,2}$. For example:
Basic calculus, like integration by parts, etc.
The "Sobolev embedding" result $W^{1,2} \subset C([0,1]; H)$;
The "product rule" $\frac{d}{dt} \|u(t)\|_{H^2} = (u'(t), u(t))_{V^*, V}$
$C^\infty([0,1]; V)$ is dense in $W^{1,2}$.
These are pretty easy to prove, but they should be standard and I don't want to waste space in a paper with proofs.
Some of these results, in the special case where $V$ is Sobolev space, are in L. C. Evans,
Partial Differential Equations, section 5.9, but I'd rather not cite special cases. Also, in current editions of this book, there's a small but significant gap in one of the proofs (it is addressed briefly in the latest errata). So I'd prefer something else.
Thanks!
|
Complex number
A complex number is an ordered pair of real numbers. (A real number may take any value from -infinity to +infinity. Real numbers are commonly represented as points on the "real number line", i.e., a straight line of infinite length.)
The two components of a complex number (a,b) are the real part (a) and the imaginary part (b). Complex numbers may be represented as points on an infinite two-dimensional plane surface, with the real part as the "X" coordinate and the imaginary part as the "Y" coordinate.
The operations of addition and multiplication are defined for complex numbers:
(a,b) + (c,d) = (a+c, b+d), and
(a,b) x (c,d) = (ac-bd, ad+bc)
Complex numbers may also be represented using "i" (or "j" in engineering contexts). The symbol "i" refers to the complex number (0,1). If "i" is interpreted as the square root of -1, we can write complex numbers in the form
(a,b) = a + ib
The addition and multiplication operators work out in a simple way, if we remember to collect real and imaginary terms and remember that i x i = -1. Thus,
(a+ib) x (c+id) = ac+aid+ibc+iibd = ac+i(ad+bc)+(-1)bd = ac-bd + i(ad+bc)
Complex numbers are often used in scientific and engineering applications to describe systems where amplitude and phase of a narrow band signal are important. If V = (re, im) is a complex value (say voltage), the amplitude and phase of V are
<math>Amp = \sqrt{re^2 + im^2}\,</math>
and
<math>Phase = \arctan \big(\frac{im}{re}\big)</math>
A sinusoidal voltage with frequency <math>\omega = 2 \pi F</math> may be considered to be the real part of a complex voltage
<math>V(t) = V_0\, \exp(j \omega t+ j\phi) = V_0\, ( \cos(\omega t+\phi) + j \sin(\omega t+\phi)\,)</math>
with amplitude <math>V_0</math> and phase <math>\phi</math>.
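As a quick illustration of the definitions above (a sketch; the language's built-in complex type is only used to cross-check the ordered-pair arithmetic, and atan2 is used instead of a bare arctan so the phase lands in the correct quadrant):

import cmath
import math

def add(z, w):
    # (a,b) + (c,d) = (a+c, b+d)
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    # (a,b) x (c,d) = (ac - bd, ad + bc)
    return (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])

def amp_phase(z):
    # Amplitude and phase of the complex value z = (re, im).
    re, im = z
    return math.hypot(re, im), math.atan2(im, re)

z, w = (1.0, 2.0), (3.0, -1.0)
print(add(z, w))               # (4.0, 1.0)
print(mul(z, w))               # (1*3 - 2*(-1), 1*(-1) + 2*3) = (5.0, 5.0)
print(amp_phase((3.0, 4.0)))   # (5.0, 0.927...)

# Cross-check against the built-in complex arithmetic.
assert complex(*mul(z, w)) == complex(*z) * complex(*w)

# The sinusoid V(t) = V0 * exp(j*(omega*t + phi)); its real part is V0*cos(omega*t + phi).
V0, omega, phi, t = 2.0, 2 * math.pi * 50.0, 0.3, 0.001
print(V0 * cmath.exp(1j * (omega * t + phi)).real, V0 * math.cos(omega * t + phi))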
|
ABSTRACT: This preprint proposes a general framework for studying transitional behavior in geometric topology. A theory of families of geometries is developed, robust enough to contain all currently known transitional geometries arising from conjugacy limits. This is used to construct a theory of Klein geometries over $\mathbb{R}$-algebras and the relationship between transitioning algebraic structure and transitions of the corresponding geometries is investigated in detail. This generalizes the construction in the paper below.
ABSTRACT: By degenerating the algebraic structure of $\mathbb{C}$, we construct a transition of geometries from complex hyperbolic space to a new geometry built out of $\mathbb{R}\mathsf{P}^n$ and its dual. This transition provides a geometric context for considering the flexing of hyperbolic orbifolds, as defined by Cooper, Long & Thistlethwaite. As an application, we connect the convex projective and complex hyperbolic deformations of triangle groups via this transition.
ABSTRACT:
This paper studies the geometry given by the projective action of the Heisenberg group on the plane. The closed orbifolds admitting Heisenberg structures are those with vanishing Euler characteristic and singularities of order at most two, and the corresponding deformation spaces are computed. Heisenberg geometry is of interest as a transitional geometry between any two of the constant-curvature geometries $\mathbb{S}^2,\mathbb{E}^2,\mathbb{H}^2$, and regenerations of Heisenberg tori into these geometries are completely described.
ARXIV: https://arxiv.org/abs/1805.04256
ABSTRACT: Here we classify the conjugacy limits of the isometry groups of the constant curvature geometries as subgroups of $PGL(3,\mathbb{R})$. I wrote this to teach myself the material, hopefully it proves useful to other learners.
ABSTRACT: A group $\Gamma<\mathsf{PSL}(2,\mathbb{Q})$ is pseudomodular if $\Gamma$ is not commensurable with $\mathsf{PSL}(2,\mathbb{Z})$ but the cusp set of $\Gamma$ is still the extended rationals $\mathbb{Q}\cup\{\infty\}$. Here we extend a technique of Lu, Tan and Vo to construct infinite families of pseudomodular groups. (TO BE POSTED SOON)
|
The Annals of Statistics Ann. Statist. Volume 6, Number 4 (1978), 701-726. Nonparametric Inference for a Family of Counting Processes Abstract
Let $\mathbf{B} = (N_1, \cdots, N_k)$ be a multivariate counting process and let $\mathscr{F}_t$ be the collection of all events observed on the time interval $\lbrack 0, t\rbrack.$ The intensity process is given by $\Lambda_i(t) = \lim_{h \downarrow 0} \frac{1}{h}E(N_i(t + h) - N_i(t) \mid \mathscr{F}_t)\quad i = 1, \cdots, k.$ We give an application of the recently developed martingale-based approach to the study of $\mathbf{N}$ via $\mathbf{\Lambda}.$ A statistical model is defined by letting $\Lambda_i(t) = \alpha_i(t)Y_i(t), i = 1, \cdots, k,$ where $\mathbf{\alpha} = (\alpha_1, \cdots, \alpha_k)$ is an unknown nonnegative function while $\mathbf{Y} = (Y_1, \cdots, Y_k),$ together with $\mathbf{N},$ is a process observable over a certain time interval. Special cases are time-continuous Markov chains on finite state spaces, birth and death processes and models for survival analysis with censored data. The model is termed nonparametric when $\mathbf{\alpha}$ is allowed to vary arbitrarily except for regularity conditions. The existence of complete and sufficient statistics for this model is studied. An empirical process estimating $\beta_i(t) = \int^t_0 \alpha_i(s) ds$ is given and studied by means of the theory of stochastic integrals. This empirical process is intended for plotting purposes and it generalizes the empirical cumulative hazard rate from survival analysis and is related to the product limit estimator. Consistency and weak convergence results are given. Tests for comparison of two counting processes, generalizing the two sample rank tests, are defined and studied. Finally, an application to a set of biological data is given.
Article information Source Ann. Statist., Volume 6, Number 4 (1978), 701-726. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176344247 Digital Object Identifier doi:10.1214/aos/1176344247 Mathematical Reviews number (MathSciNet) MR491547 Zentralblatt MATH identifier 0389.62025 JSTOR links.jstor.org Subjects Primary: 62G05: Estimation Secondary: 62G10: Hypothesis testing 62M99: None of the above, but in this section 62N05: Reliability and life testing [See also 90B25] 60G45 60H05: Stochastic integrals 62M05: Markov processes: estimation Citation
Aalen, Odd. Nonparametric Inference for a Family of Counting Processes. Ann. Statist. 6 (1978), no. 4, 701--726. doi:10.1214/aos/1176344247. https://projecteuclid.org/euclid.aos/1176344247
|
Research, Open Access. Blow-up phenomena and lifespan for a quasi-linear pseudo-parabolic equation at arbitrary initial energy level. Boundary Value Problems, volume 2018, Article number: 159 (2018)
Abstract
In this paper, we continue to study the initial boundary value problem of the quasi-linear pseudo-parabolic equation
which was studied by Peng et al. (Appl. Math. Lett. 56:17–22, 2016), where the blow-up phenomena and the lifespan for the initial energy \(J(u_{0})<0\) were obtained. We establish the finite time blow-up of the solution for the initial data at arbitrary energy level and the lifespan of the blow-up solution. Furthermore, as a product, we obtain the blow-up rate and refine the lifespan when \(J(u_{0})<0\).
Introduction
In this paper, we investigate the initial boundary value problem of the following quasi-linear pseudo-parabolic equation:
where \(\Omega\subset\mathbb{R}^{n}\) (\(n\geq3\)) is a bounded domain with sufficiently smooth boundary
∂Ω, \(p>1\) and \(0\leq 2q< p-1\). \(T\in(0, \infty]\) denotes the maximal existence time of the solution.
Problem (1.1) describes a variety of important physical and biological phenomena such as the aggregation of population [1], the unidirectional propagation of nonlinear, dispersive, long waves [2], and the nonstationary processes in semiconductors [3]. In the absence of the term \(\operatorname{div}(|\nabla u|^{2q}\nabla u)\), Eq. (1.1) reduces to the following semilinear pseudo-parabolic equation:
There are many results for Eq. (1.2) such as the existence and uniqueness in [4], blow-up in [5–8], asymptotic behavior in [6, 9], and so on. Using the integral representation and the semigroup, Cao et al.[10] obtained the critical global existence exponent and the critical Fujita exponent for Eq. (1.2). Chen et al. [11] considered Eq. (1.2) with the logarithmic nonlinearity source term by the potential well methods.
Recently, Peng et al. [12] considered the blow-up phenomena on problem (1.1). By the way, Payne et al. [13] considered the blow-up phenomena of solutions on the initial boundary problem of the nonlinear parabolic equation
In addition, Long et al. [14] investigated the blow-up phenomena for a nonlinear pseudo-parabolic equation with nonlocal source
Finally, we mention some interesting works concerning quasi-linear or degenerate parabolic equations. For example, Winkert and Zacher [15] considered a generate class of quasi-linear parabolic problems and established global a priori bounds for the weak solutions of such problems; Fragnelli and Mugnai [16] established Carleman estimates for degenerate parabolic equations with interior degeneracy and non-smooth coefficients.
Throughout this paper, we use \(\|\cdot\|_{p}= (\int_{\Omega}|\cdot |^{p}\, dx )^{\frac{1}{p}}\) and \(\|\cdot\|_{W_{0}^{1,p}}= (\int_{\Omega }(|\cdot|^{p}+|\nabla\cdot|^{p})\,dx )^{\frac{1}{p}}\) as the norms on the Banach spaces \(L^{p}=L^{p}(\Omega)\) and \(W_{0}^{1, p}=W_{0}^{1, p}(\Omega)\), respectively. As in [12], we define the energy functional and the Nehari functional of (1.1), respectively, by
Let \(\lambda_{1}\) be the first nontrivial eigenvalue of −△ operator in Ω with homogeneous Dirichlet condition, then we have
In order to compare with our work, in this paper, we summarize the blow-up results obtained in [12] as follows.
(RES1) If \(0\leq2q< p-1\), \(J(u_{0})<0\), and
u is a nonnegative solution of (1.1), then u blows up at some finite time T, where T is bounded by
From the above (RES1), we notice that (1) the blow-up rate is not given when \(J(u_{0})<0\); (2) the blow-up phenomena and the lifespan are still unsolved when \(J(u_{0})\geq0\).
Motivated by the above-mentioned facts, we investigate these two problems in this paper. Firstly, we state the local existence theorem of problem (1.1) by Faedo–Galerkin method (see Theorem 2.1 in [12]).
(RES2) For any \(u_{0}\in W_{0}^{1,2q+2}(\Omega)\), there exists \(T>0\) such that problem (1.1) has a unique local weak solution \(u\in L^{\infty}(0,T;W_{0}^{1, 2q+2}(\Omega))\) with \(u_{t}\in L^{2}(0,T; H_{0}^{1}(\Omega))\) which satisfies
for all \(v\in W_{0}^{1, 2q+2}(\Omega)\).
Our main result of this paper can be stated as the following theorem.
Theorem 1.1

For all \(0\leq2q< p-1\), the nonnegative solution u of problem (1.1) blows up at finite time in the \(H_{0}^{1}\)-norm provided that the initial data satisfies (1.7). Furthermore, the lifespan T can be estimated explicitly.

Remark 1.1

Lemma 2.1

Suppose that a nonnegative, twice-differentiable function \(\theta(t)\) satisfies the inequality, where \(r>0\) is some constant. If \(\theta(0) > 0\) and \(\theta'(0)>0\), then there exists \(0 < t_{1}\leq\frac{\theta(0)}{r\theta'(0)}\) such that \(\theta(t)\rightarrow+\infty\) as \(t\rightarrow t_{1}^{-}\).

Proof of Theorem 1.1
We give the proof in the following two steps.
Step 1: Blow-up. Let \(u(t)\) be the solution of problem (1.1) with the initial data satisfying (1.7). We may assume \(J(u(t))\geq0\); otherwise, there exists some \(t_{0}\geq0\) such that \(J(u(t_{0}))<0\), then \(u(t)\) will blow up in finite time by (RES1), the proof of this step is complete. So, in the following, we give our proof by contradiction and assume that \(u(t)\) exists globally and \(J(u(t))\geq0\) for all \(t\geq0\).
Since
by Hölder’s inequality, (2.1), and \(J(u_{0})\geq J(u(t))\geq0\), we obtain that
Combining (1.5) and Hölder’s inequality, we deduce that
Since \(\frac{d}{dt}(J(u(t)))\leq0\), it follows from the above inequality that
Let
then
for all \(t\geq0\). By using Gronwall’s inequality, we get
Noticing that \(H(0)>0\) via (1.7) and the assumption \(J(u(t))\geq 0\) for \(t\geq0\), we deduce
which is a contradiction with (2.3) for
t sufficiently large. Hence, \(u(t)\) blows up at some finite time, i.e., \(T<\infty\). Step 2: Lifespan. We will find an upper bound for T. Firstly, we claim that
where we also use \(0\leq2q< p-1\), which implies \(I(u_{0})<0\). Hence, if (2.4) does not hold, there must exist \(t_{0}\in(0,T)\) such that \(I(u(t_{0}))=0\), \(I(u(t))<0\) for \(t\in[0,t_{0})\). Then, by (2.2), we obtain that \(\|u(t)\|_{H_{0}^{1}}^{2}\) is strictly increasing on \([0, t_{0})\). Then it follows from (1.7) that
which is a contradiction with (2.6). Hence, \(I(u(t))<0\) and \(\| u(t)\|_{H_{0}^{1}}^{2}\) is strictly increasing on \([0, T)\).
We define the functional
with two positive constants
β, γ to be chosen later. Since \(\|u(t)\|_{H_{0}^{1}}^{2}\) is strictly increasing, we get
and
Noticing that
and
by using Young’s inequality, Hölder’s inequality, and the elementary algebraic inequality
we can deduce
Hence, it follows from the above inequality and (2.7) that
By the above equality, (2.8), and the fact that \(\|u(t)\| _{H_{0}^{1}}^{2}\) is strictly increasing, we have
From (1.7), we can choose
β sufficiently small such that
Then the conditions of Lemma 2.1 are satisfied with \(r=\frac{p-1}{2}\), so we have
Fixing arbitrary
β satisfying (2.9), then let γ be sufficiently large such that
then it follows from (2.10) that
Define a function \(T_{\beta}(\gamma)\) by
It is easy to prove that the function \(T_{\beta}(\gamma)\) has a unique minimum at
Then it follows from (2.11) that
for any
β satisfying (2.9). Finally, we obtain
This completes the proof of Theorem 1.1. □
Corollary 2.1 For all \(0\leq2q< p-1\) and any \(M>0\), there exists initial \(u_{0M}\in W_{0}^{1, 2q+2}(\Omega)\) such that the weak solution for corresponding problem (1.1) will blow up in finite time. Proof
Let \(M>0\), and \(\Omega_{1}\) and \(\Omega_{2}\) be two arbitrary disjoint open subdomains of Ω. We assume that \(v\in W_{0}^{1, 2q+2}(\Omega _{1})\subset W_{0}^{1, 2q+2}(\Omega)\subset H_{0}^{1}(\Omega)\) is an arbitrary nonzero function, then we can take \(\alpha_{1}>0\) sufficiently large such that
We claim that there exist \(w\in W_{0}^{1, 2q+2}(\Omega_{2})\subset W_{0}^{1, 2q+2}(\Omega) \) and \(\alpha>\alpha_{1}\) such that \(J(w)=M-J(\alpha v)\).
In fact, we choose a function \(w_{k}\in C_{0}^{1}(\Omega_{2})\) such that \(\| \nabla w_{k}\|_{2}\geq k\) and \(\|w_{k}\|_{\infty}\leq c_{0}\). Hence,
On the other hand, since \(0\leq2q< p-1\), it holds that
Hence, there exist \(k>0\) and \(\alpha>\alpha_{1}\) both sufficiently large such that
Then we choose \(w=w_{k}\) and denote \(u_{0M}:=\alpha v+w\). Hence, we have
and
The proof is complete. □
Remark 2.1
In this remark, we establish the blow-up rate for \(J(u_{0})<0\). We define the functionals \(\varphi(t)=\|u(t)\|_{H_{0}^{1}}^{2}\) and \(\psi(t)=-2(p+1)J(u(t))\) as those in [12]. It was shown in (4.8) of [12] that
Now, we integrate the inequality from
t to T, noticing \(\lim_{t\rightarrow T^{-}}\varphi(t)=+\infty\) (by (RES1)), we obtain
Then it follows from the definitions of \(\varphi(t)\) and \(\psi(t)\) that
References

1. Padrón, V.: Effect of aggregation on population recovery modeled by a forward–backward pseudoparabolic equation. Trans. Am. Math. Soc. 356(7), 2739–2756 (2004)
2. Brill, H.: A semilinear Sobolev evolution equation in a Banach space. J. Differ. Equ. 24, 412–425 (1977)
3. Korpusov, M.O., Sveshnikov, A.G.: Three-dimensional nonlinear evolution equations of pseudoparabolic type in problems of mathematical physics. Zh. Vychisl. Mat. Mat. Fiz. 43(12), 1835–1869 (2003)
4. Showalter, R.E., Ting, T.W.: Pseudoparabolic partial differential equations. SIAM J. Math. Anal. 88(1), 1–26 (1970)
5. Luo, P.: Blow-up phenomena for a pseudo-parabolic equation. Math. Methods Appl. Sci. 38(12), 2636–2641 (2015)
6. Xu, R.Z., Su, J.: Global existence and finite time blow-up for a class of semilinear pseudo-parabolic equations. J. Funct. Anal. 264(12), 2732–2763 (2013)
7. Xu, G.Y., Zhou, J.: Lifespan for a semilinear pseudo-parabolic equation. Math. Methods Appl. Sci. 41(2), 705–713 (2018)
8. Xu, R.Z., Wang, X.C., Yang, Y.B.: Blowup and blowup time for a class of semilinear pseudo-parabolic equations with high initial energy. Appl. Math. Lett. 83, 176–181 (2018)
9. Liu, Y., Jiang, W.S., Huang, F.L.: Asymptotic behaviour of solutions to some pseudo-parabolic equations. Appl. Math. Lett. 25, 111–114 (2012)
10. Cao, Y., Yin, J., Wang, C.P.: Cauchy problems of semilinear pseudo-parabolic equations. J. Differ. Equ. 246(12), 4568–4590 (2009)
11. Chen, H., Tian, S.Y.: Initial boundary value problem for a class of semilinear pseudo-parabolic equations with logarithmic nonlinearity. J. Differ. Equ. 258(12), 4424–4442 (2015)
12. Peng, X.M., Shang, Y.D., Zheng, X.X.: Blow-up phenomena for some nonlinear pseudo-parabolic equations. Appl. Math. Lett. 56, 17–22 (2016)
13. Payne, L.E., Philippin, G.A., Schaefer, P.W.: Blow-up phenomena for some nonlinear parabolic problems. Nonlinear Anal. TMA 69, 3495–3502 (2008)
14. Long, Q.F., Chen, J.Q.: Blow-up phenomena for a nonlinear pseudo-parabolic equation with nonlocal source. Appl. Math. Lett. 74, 181–186 (2017)
15. Winkert, P., Zacher, R.: Global a priori bounds for weak solutions to quasilinear parabolic equations with nonstandard growth. Nonlinear Anal. TMA 145(52), 1–23 (2016)
16. Fragnelli, G., Mugnai, D.: Carleman estimates for singular parabolic equations with interior degeneracy and non-smooth coefficients. Adv. Nonlinear Anal. 6(1), 61–84 (2017)
17. Levine, H.A.: Instability and nonexistence of global solutions of nonlinear wave equation of the form \(Pu_{tt} = Au + \mathfrak{F}(u)\). Trans. Am. Math. Soc. 192, 1–21 (1974)

Acknowledgements
The authors would like to thank the referees for the careful reading of this paper and for the valuable suggestions to improve the presentation and the style of the paper.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Funding
This work was supported by Key Scientific Research Foundation of the Higher Education Institutions of Henan Province, China (Grant No. 19A110004).
Ethics declarations Competing interests
The authors declare that they have no competing interests.
Additional information Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
In a comment elsewhere you write that you're interested in understanding how quantum-mechanical theory describes the radiation that a hydrogen atom does and does not emit. In your question you ask about another answer that suggests some significance to the electron having zero total momentum; I think that's a feature of the coordinate system choice rather than something physically interesting. Here's a second answer to hopefully address that concern.
In Schrödinger's quantum mechanics the probability density $\psi$ for finding the electron in some small volume near the nucleus (charge $Z$, mass $m_\text{nuc}^{-1} = \mu^{-1} - m_e^{-1}$), obeys the differential equation$$\left( \frac{\hbar^2}{2\mu} \vec\nabla^2 - \frac{Z\alpha\hbar c}r\right) \psi = E\psi.\tag 1$$It turns out that this equation has bound solutions with $E<0$ if, and only if, you introduce some
integer parameters $n,\ell,m$ subject to some constraints: $1\leq n$, $\ell<n$, and $|m|\leq \ell$. The energies associated with these quantum numbers are$$E_{n\ell m} = -\frac{\mu c^2\alpha^2Z^2}{2n^2} = Z^2 \cdot \frac{-13.6\rm\,eV}{n^2}.\tag 2$$Critically for our discussion, this means that there is a state with $n=1$ that has the minimum possible energy for an electron interacting with a proton. This is totally different from the unbound case, or the interaction between two like-charged particles, in which you can give your mobile particle any (positive) total energy that you like and inquire about its motion. If the total energy doesn't satisfy (2), it's simply impossible for the system to obey the equation of motion (1).
You compute transition rates in quantum mechanics using Fermi's Golden Rule: a transition between an initial state $i$ and a final state $f$ occurs in some time interval $\tau_{if} = 1/\lambda_{if}$ with probability $1/e$, where the decay constant $\lambda_{if}$ is$$\lambda_{if} = \frac{2\pi}\hbar\left| M_{if} \right|^2\rho_f.$$The density of final states $\rho_f$ is interesting if there are multiple final states with the same energy. (For instance, in hydrogen there are generally several degenerate final states with given $n,\ell$ but varying $m$.) The matrix element measures the overlap of the initial and final state given some interaction operator $U$:$$M_{if} = \int d^3x\ \psi_f^* U \psi_i$$For electric dipole radiation the operator is $U_{E1} = e\vec r$; for magnetic dipole radiation, $U_{M1} = {e}\vec L/{2\mu}$; for quadrupole etc. radiation there are other operators. You could also couple to multiple photons: for instance, the $n=2,\ell=0$ state cannot decay to the ground state by emitting a single photon, since the photon carries angular momentum, but can decay by emitting two dipole photons at the same time. This forbidden transition has lifetime $\sim 0.1\rm\,s$, compared with nanoseconds for the $n=2,\ell=1$ states at the same energy. Computing matrix elements gives you some hairy integrals, so generally you let someone else do them.
You can in principle use these arguments and the Golden Rule to calculate the radiation emitted in three cases:
From a free electron with $E_i>0$ to a free electron traveling in a different direction with a different energy $E_f>0$. This should give a result most similar to the classical case, where you can get continuous radiation from an accelerating charge.
From a free electron with $E_i>0$ transitioning to a bound electron with $E_f<0$.
From one bound electron state to another.
It's this final option, transitions between bound states, that interests you. The salient feature, unique to quantum mechanics, is that the energies of the bound states are quantized. Unlike in classical mechanics, in quantum theory the equation of motion has
no solutions with $E<E_1$. Even if you made up some trial sub-ground-state wavefunction to compute the matrix element for the transition (which can't be done, since the existing wavefunctions form a complete set), you'd find that the density of states at your hypothetical lower energy is $\rho_f=0$, so the time before the transition occurs is, on average, infinitely long.
The classical theory predicts radiation when a charge accelerates from one continuum momentum to another. So does the quantum theory. But the quantum theory also predicts bound states with quantized energies. Non-transitions from a state to itself have zero matrix element, therefore never occur; transitions from one state to another can only occur if there's a final state available.
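To put a number on the quantization statement in (2), a small sketch (it only evaluates the quantized energies and the wavelength of the emitted photon; the Golden-Rule matrix elements are a separate job):

# Bohr energies E_n = -13.6 eV * Z^2 / n^2 from Eq. (2), and the photon emitted in an n_i -> n_f transition.
RYDBERG_EV = 13.605693      # eV
HC_EV_NM = 1239.841984      # h*c in eV*nm

def E_n(n, Z=1):
    return -RYDBERG_EV * Z ** 2 / n ** 2

def photon_wavelength_nm(n_i, n_f, Z=1):
    dE = E_n(n_i, Z) - E_n(n_f, Z)     # energy carried away by the photon
    return HC_EV_NM / dE

for n in (1, 2, 3):
    print(f"E_{n} = {E_n(n):+.2f} eV")
print(f"2 -> 1 transition: {photon_wavelength_nm(2, 1):.1f} nm")   # Lyman-alpha, ~121.5 nm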
|
In a previous post, we took a look at the computation of a portfolio’s exposure to its allocations. Then, to show the effects of active management, we compared the return made by two portfolios. But there is so much more to look at inside the financial time series.
Since we left a couple of cliffhangers, let’s jump into them now.
Risk metrics
First of all, let’s begin with those hidden dangers we mentioned in the impressive performance accomplished by
Free. Recall that it generated way more profit than our Rebalanced portfolio. That could lead you to decide that it’s better to leave your money to itself rather than to pay someone to manage it. However, when we take a glance at their associated risk metrics, we find out why taking some care of where your money goes is worth it.
Our two indicators are going to be the
annual volatility of the returns and the drawdown.
Volatility
Volatility is an indicator of the expected displacement of the return around its current value. The classic formula to compute it for a period T is
\[Vol_t = \sqrt{T}\, \sigma\left(\ln(1 + r_j) \right), \quad j = t-(T+1), \ldots, t-1\]
Putting together both portfolios’ annual volatility, we see that our simple, yet effective, monthly rebalance was able to reduce the metric’s value by 21 % on average. And if you observe the plots carefully, you will see how, when peaks of volatility take place on
Free, on the Rebalanced curve they are damped or do not exist at all.
What are the implications of this reduction? When you are investing for the long term, you naturally want to create value, but mostly to preserve capital. Thus, what you want is not necessarily the highest return, but the most stable one. Indeed, a famous financial empirical fact is the low volatility anomaly.
Drawdown
It’s time now for our
fright measure. The drawdown reflects how much the accumulated return has dropped, over a given period, since the last highest high. The more negative it is, the more the value of your portfolio dropped.
This could induce a feeling of insecurity, and perhaps make the investor think about moving his/her positions when the value of the portfolio drops too much (and, as a consequence, overreact to the market; remember that over the long term both portfolios fared really well).
Again, we see that the rebalancing has rewarding effects for most of the investing period. The
Rebalanced portfolio has had less steep drops during stress periods, or sometimes none at all, pumping up the feeling of calm for such volatile assets; we are dealing with a very volatile sector after all!
It is inevitable to fall when the market does; you just want to fall as little as possible, so that you start from higher up when the rebound takes place.
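For reference, both metrics are only a few lines of code. A minimal sketch, assuming a series of periodic simple returns and roughly 252 trading periods per year:

import numpy as np
import pandas as pd

def annual_volatility(returns: pd.Series, periods_per_year: int = 252) -> float:
    # Annualized volatility of the log-returns, as in the formula above.
    return np.log1p(returns).std() * np.sqrt(periods_per_year)

def drawdown(returns: pd.Series) -> pd.Series:
    # Drop of the cumulative return from its running maximum (always <= 0).
    wealth = (1.0 + returns).cumprod()
    running_max = wealth.cummax()
    return wealth / running_max - 1.0

# Tiny example with made-up returns.
r = pd.Series([0.01, -0.02, 0.005, -0.03, 0.04, 0.0, -0.01])
print(f"annual volatility: {annual_volatility(r):.2%}")
print(f"max drawdown     : {drawdown(r).min():.2%}")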
So, with these two simple metrics, we have seen the effect of a very simple risk control strategy: to keep your portfolio equally weighted in as much as possible. Could you guess why I did not suggest to rebalance every day or every week?
The number of shares
The problem with the weight formulation is that it leads to the computation of shares with decimal points. Naturally, the maths do not suffer from this problem, but you cannot go to your broker and ask for 17.345 shares of Apple on your next move.
To compute the initial number of shares of a certain stock you need to buy to reach a determined exposure level, you take the ratio between the Assets Under Management you want to expose and the stock price,
\[S_0 = \frac{I_0 w_0}{P_0}\]
and round it down to the nearest integer (known as
the floor operation in mathematics).
Since now you work with an integer number, you will always have some leftover cash that is not invested. You want that number to be as small as possible, but at the same time have enough left so that you can rebalance your exposure without problems.
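In code, the share count and the leftover cash look like this (a sketch with made-up tickers, weights and prices):

import math

def initial_shares(aum: float, weight: float, price: float) -> int:
    # Number of whole shares to buy: floor(I_0 * w_0 / P_0).
    return math.floor(aum * weight / price)

aum = 100_000.0
positions = {"AAA": (0.25, 172.34), "BBB": (0.25, 41.07), "CCC": (0.50, 803.90)}

leftover = aum
for ticker, (w, price) in positions.items():
    shares = initial_shares(aum, w, price)
    cost = shares * price
    leftover -= cost
    print(f"{ticker}: {shares} shares, invested {cost:,.2f}")
print(f"cash left uninvested: {leftover:,.2f}")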
Summing up, there is more to portfolio valuation than simply the return. Certainly, you are not going far if you do not capture the market growth when it takes place, but you want to make sure you are not boarding an airplane without a pilot, no matter how high it goes.
Once again, thank you for reading.
|
I'm reading through "Algorithms for Stochastic Mixed-Integer Programming Models", Elsevier Handbooks in OR&MS Volume 12 (Discrete Optimization), Chapter 9 and I'm stuck. For those without access to the Handbooks, I found a preprint version at: http://server1.tepper.cmu.edu/Seminars/docs/Sen_paper1-2.pdf.
On page 531, Sen describes an algorithm from Laporte and Loveaux's 1993 paper "The integer L-shaped method for stochastic integer programs with complete recourse". In step 2b, he says:
Define \(\eta_k(x) = \max \left\{\eta_{k-1}(x), ~\alpha + \beta x \right\}\)
In the first iteration, \(\eta_{k-1}(x)\) is the constant lower bound on the expected recourse, and \(\alpha + \beta x \), the right-hand side of equation 3.4b on page 530, is a function of the binary first-stage variable \(x\). Given that the first term of \(\max\) is a constant and the second term is a function of \(x\), I'm not clear on what the \(\max\) function means. In the example instance on the next page, the \(\beta x\) term in \(\alpha + \beta x \) cancels out, but this isn't always the case.
What does step 2b mean?
This means that you keep all the cuts that you have added so far. Here $\eta_k(x)$ is the point-wise maximum of all (affine) cuts and is the current (piece-wise linear) convex under-approximation of the expected recourse function.
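To make the point-wise maximum concrete, a tiny sketch (the cut coefficients below are made up, not taken from the example in the paper):

# eta_k(x) = max over all cuts added so far of (alpha_j + beta_j * x).
# Each iteration appends a new cut; earlier cuts are never discarded.
cuts = []   # list of (alpha, beta) pairs

def add_cut(alpha, beta):
    cuts.append((alpha, beta))

def eta(x):
    # Current piecewise-linear convex under-approximation of the expected recourse.
    return max(alpha + beta * x for alpha, beta in cuts)

add_cut(-10.0, 0.0)      # eta_0: the constant lower bound
add_cut(4.0, -6.0)       # cut generated at one iteration
add_cut(-2.0, 3.0)       # cut generated at a later iteration

for x in (0, 1):         # x is binary in the integer L-shaped method
    print(x, eta(x))     # x=0 -> max(-10, 4, -2) = 4 ;  x=1 -> max(-10, -2, 1) = 1

In the master problem this maximum is of course not evaluated directly; it is imposed by keeping one constraint eta >= alpha_j + beta_j * x per cut.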
answered
Shabbir Ahmed
@OPer: were you able to implement the integer L-shaped method?? I am currently stuck on it while trying to adapt the non-integer, single-cut version of the L-shaped method to an integer version...
answered
ghjk
|
$\newcommand\Q{\mathbf{Q}}\newcommand\OL{\mathcal{O}}\newcommand\I{\mathcal{I}}\newcommand\Z{\mathbf{Z}}\newcommand\eps{\epsilon}\newcommand\p{\mathfrak{p}}\newcommand\PP{\mathfrak{P}}\newcommand\Hom{\mathrm{Hom}}\newcommand\R{\mathbf{R}}\newcommand\q{\mathfrak{q}}\newcommand\Gal{\mathrm{Gal}}$
Summary: I think that it's a difficult problem and that there won't be a "nice" answer in general.
Let $\I_K$ denote the group of invertible fractional ideals. There is a tautological exact sequence$$1 \rightarrow K^{\times}/\OL^{\times} \rightarrow \I_K\rightarrow C_K \rightarrow 0.$$ Taking cohomology gives the following sequence:$$0 \rightarrow (K^{\times}/\OL^{\times})^{G}\rightarrow \I^{G}_K\rightarrow C^{G}_K \rightarrow H^1(G,K^{\times}/\OL^{\times}),$$which can naturally be modified to yield the sequence:$$0 \rightarrow (K^{\times}/\OL^{\times})^{G}/\Q^{ > 0}\rightarrow \I^{G}_K/\Q^{ > 0}\rightarrow C^{G}_K \rightarrow H^1(G,K^{\times}/\OL^{\times}).$$You are interested in the order of $ (K^{\times}/\OL^{\times})^{G}/\Q^{ > 0}$. As Will Sawin noted, the second term is simply $\prod \Z/e_p$. Thus one wants to understand the classes $I \in C^G_K$ (that is, the so-called
ambiguous classes) which are actually strongly ambiguous, that is, $\sigma I = I$ as ideals, not just ideal classes. If one defines $S_K \subset C^G_K \subset C_K$ to denote the subset of strongly ambiguous classes, then one "formally" has an answer to your question, namely,$$\frac{1}{|S_K|} \cdot \prod e_p.$$OTOH, this is really a proof by definition, so not much content so far, although it gives a "name" to some of the objects to connect you with the literature.
The strongly ambiguous classes can also be described (given the exact sequence above) as the kernel of the map$$C^G_K \rightarrow H^1(G,K^{\times}/\OL^{\times})\hookrightarrow H^2(G,\OL^{\times})$$
What makes things
much easier when $K$ is cyclic is that one can essentially determine $C^G_K$ (by the ambiguous class number formula, which only exists for cyclic extensions), and also $S_K$. In fact, for a cyclic extension $S_K$ and $C^G_K$ are (almost) the same group. Let $C^{+}_K$ denote the group of invertible fractional ideals $I_K$ modulo the following relation: $[I] \sim [J]$ if and only if $I = (\alpha) J$ for some $\alpha$ with $N(\alpha) > 0$. In particular, $C^{+}_K$ is a quotient of the narrow class group, and surjects onto the class group. For example, $C^{+}_K = C_K$ if there exist units of norm $-1$. Suppose that $K/\Q$ is cyclic. I claim that the image of $(C^{+}_K)^G$ in $C^{G}_K$ lands in $S_K$. By assumption, $[\sigma I] \sim [I]$ in $C^{+}_K$, so $\sigma I = (\alpha) I$ for some $\alpha$ with positive norm. Clearly $N(\alpha)$ is a unit, so $N(\alpha) = 1$. Since $K/\Q$ is cyclic, by Hilbert 90 there exists a $\beta \in K^{\times}$ such that $\alpha = \beta/\sigma \beta$. (This version of Hilbert 90 only makes sense for cyclic extensions, which is one of the difficulties in the general case.) Replacing $I$ by $J = (\beta) I$ we deduce that $\sigma J = J$ and $[I] = [J]$ in $C_K$. When $K/\Q$ is not cyclic, however, then it's much trickier to get a handle on the ambiguous classes --- I don't think that there will be a nice answer in general.
The simplest possible non-cyclic case is the case of biquadratic extensions. Suppose that $K$ is totally real. Then $K$ contains three subfields $K_1$, $K_2$, and $K_3$. The unit group of $K$ is, up to finite index, generated by $U:=\{\eps_1, \eps_2, \eps_3\}$. Kubota (Über den bizyklischen biquadratischen Zahlkörper) classified the possible $\Gal(K/\Q)$-module structure of $\OL^{\times}_K$ (there are $8$ or so different types, of indices ranging from $2^0$ to $2^3$). One could simply compute the corresponding cohomology groups of each type, and see what one gets. It's not clear that the answer will be any more precise than a list of cases. (Even in the case of real quadratic fields the answer depends on the existence of a unit of norm $-1$, which is itself a notoriously fickle condition.)
Yet another way to explain why the cyclic case is not typical is that, since (at least for $p$ odd) the units tensor $\Z_p$ are annihilated by the norm map, they form a module under $$\Z_p[x]/(x^n-1,1+x+x^2+ \ldots + x^{n-1}) = \Z_p[\zeta_n],$$ which is essentially a direct sum of PIDs.
I guess it depends on exactly what you are interested in doing, but it might be useful to consider the following approach. Assuming $K/\Q$ is abelian, one can easily compute the genus class field $L/K$. The field $L$ is the largest field such that $L/\Q$ is totally real, abelian, and $L/K$ is unramified everywhere. The point of working with $L$ is that one knows that the strongly ambiguous classes in $\OL_K$ (and in $\OL_L$) become (are) principal in $\OL_L$, so the transgression map is injective, and$$(L^{\times}/\OL^{\times}_{L})^{G_L}/\Q^{ > 0} \simeq\I^{G}_L/\Q^{ > 0} \simeq \prod \Z/e_p.$$ (The $e_p$ are the same for $L$ and $K$ since $L/K$ is everywhere unramified.) If $L = \Q(\zeta_N)$, then one has explicit generators for this group, namely, $1 - \zeta$ for each $p^n$th root of unity for $p^n \| N$. Note that, in some sense, this gives a "complete" description of the $d$-integers in $\Q^{\mathrm{ab}}$: The $d$-numbers in $K$ are the $G_K$-invariants of the finitely generated group$$\Z[\zeta_N]^{\times} \times \prod_{p^n \| N} (1 - \zeta)^{\Z}.$$
Concerning the image of the norm map, I think you are again out of luck. To explain why, consider what is close to the simplest possible non-trivial example, namely $K = \Q(\sqrt{6p})$, where $p \equiv 1 \mod 4$ is prime. Write $(2) = \p^2_2$, $(3) = \p^2_3$, and $(p) = \p^2_p$ respectively. The field $K$ does not contain a unit of norm $-1$, so we know that the $d$-numbers generate the group $(\Z/2\Z)^2$. Since $\p_2 \p_3 \p_p$ is principal, it follows that exactly one of $\p_2$, $\p_3$, and $\p_p$ will be principal.
The genus class field of $K$ is $$F = \Q(\sqrt{6},\sqrt{p}).$$ It follows that if $C_K$ is the class group of $K$, then $C_K/2C_K$ is cyclic, and hence the $2$-part of the class group is cyclic. Via the Artin map, we can identify the prime ideals which lie in $C_K[2]$ as exactly the primes which split completely in the genus field. Since the genus field is given explicitly, we may compute that:
$\p_2$ splits in $F/K$ if and only if $p \equiv 1, 17 \mod 24$;
$\p_3$ splits in $F/K$ if and only if $p \equiv 1, 13 \mod 24$;
$\p_p$ splits in $F/K$ if and only if $p \equiv 1, 5 \mod 24$.
In particular, if $p$ lies outside one of these equivalence classes, then the image of the corresponding $\p$ is non-trivial in $C_K/2 C_K$, and hence $\p$ is not principal. It follows that:
If $p \equiv 17 \mod 24$, then the $d$-numbers are generated by $2$ and $6p$;
If $p \equiv 13 \mod 24$, then the $d$-numbers are generated by $3$ and $6p$;
If $p \equiv 5 \mod 24$, then the $d$-numbers are generated by $6$ and $6p$.
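These cases are easy to probe numerically (a sketch of my own, not part of the argument above): $\p_q$ is principal precisely when the norm equation $x^2 - 6py^2 = \pm q$ has an integer solution, so a naive search over small $y$ will exhibit the expected principal prime in each congruence class; failing to find a solution within the search bound would of course prove nothing. In Mathematica, with a hypothetical helper normSolQ:
(* naive search for x + y Sqrt[6p] of norm ±q, i.e. x^2 - 6 p y^2 == ±q *)
normSolQ[p_, q_, bound_: 200] := Catch[
  Do[
    If[IntegerQ[Sqrt[6 p y^2 + q]], Throw[True]];
    If[6 p y^2 - q >= 0 && IntegerQ[Sqrt[6 p y^2 - q]], Throw[True]],
    {y, 0, bound}];
  False]
(* p = 17, 13, 5 represent the classes 17, 13, 5 mod 24 *)
{normSolQ[17, 2], normSolQ[13, 3], normSolQ[5, 5]}
(* {True, True, True}: the prime above 2, 3, and p respectively is principal *)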
This leaves open the case when $p \equiv 1 \mod 24$. The computation above merely shows that $\p_2$, $\p_3$, and $\p_p$ all become trivial in $C_K/2C_K$. Since they cannot all be principal, it follows that when $p \equiv 1 \mod 24$, there must be a surjection $C_K \rightarrow \Z/4\Z$. By class field theory, this corresponds to the existence of an unramified extension $E/K$ with $\Gal(E/K) = \Z/4\Z$. We may construct $E$ explicitly as follows. Since $\Z[\sqrt{6}]$ has class number one, and $p \equiv 1 \mod 24$ splits in this field, there exists a $\pi \in \Z[\sqrt{6}]$ with $N(\pi) = p$. (For local reasons the sign will be positive.) The choice of $\pi$ will be unique up to a sign and the fundamental unit $\eps = 5 + 2 \sqrt{6}$. If we explicitly write $\pi = A + B \sqrt{6}$, then we have $$A^2 - 6 B^2 = p.$$ The congruence on $p$ forces $B$ to be even and $A$ to be odd. After possibly multiplying by a unit and by $-1$, we may assume that $A \equiv 1 \mod 4$ and $B \equiv 0 \mod 4$. This determines $\pi$ up to squares, and $E$ is identified with the Galois closure of $\Q(\sqrt{\pi}) = \Q(\sqrt{A + B \sqrt{6}})$. In particular, $E$ is the splitting field of $$X^4 - 2 A X^2 + p.$$ A necessary condition (and sufficient if the $2$-part of the class group has order $4$) for $\p_p$ to be principal is that the residue degree of $p$ in $E$ is $1$, or equivalently that $(2 A/p) = 1$. Since $p \equiv 1 \mod 24$, this is the same as saying that $(A/p) = 1$. In fact (proof omitted, because this answer is already too long and the argument is a somewhat tedious calculation of ring class fields and Kummer extensions), this is equivalent to $(6/p)_4 = 1$. So this leads to the following criterion:
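Under the same caveat, the quartic condition itself is cheap to test numerically (again my own sketch, not part of the omitted argument): for $p \equiv 1 \mod 24$ we already have $(6/p) = 1$, so $6^{(p-1)/4} \equiv \pm 1 \mod p$, and $(6/p)_4 = 1$ exactly when this power is $+1$. In Mathematica, with a hypothetical helper quarticSymbol6:
(* quartic residue symbol (6/p)_4 for primes p ≡ 1 mod 24 *)
quarticSymbol6[p_] := With[{t = PowerMod[6, (p - 1)/4, p]},
  Which[t == 1, 1, t == p - 1, -1, True, Indeterminate]]
(* evaluate it on the first few primes p ≡ 1 mod 24 *)
quarticSymbol6 /@ Select[Prime[Range[200]], Mod[#, 24] == 1 &]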
If $p \equiv 1 \mod 24$, and the quartic residue $(6/p)_4 = -1$, then there are no $d$-numbers of norm $p$.
If $p \equiv 1 \mod 24$, the quartic residue $(6/p)_4 = 1$, and $8 \nmid h_K$, then there do exist $d$-numbers of norm $p$.
One could go on, giving similar criteria for $\p_2$ and $\p_3$, but it will just get worse. (At least for $4 \| h_K$ one can give some sort of classical criterion due to the existence of governing fields; probably for $8 \| h_K$, and certainly for $16 \| h_K$, there won't be any non-tautological criterion.)
All in all, the "best" cases are when $K$ is its own genus field, or at least, for all $p$ dividing $[K:\Q]$, the $p$-class field of $K$ is the $p$-genus field of $K$.
If this doesn't help, perhaps you could say more precisely what you want to prove about dimensions (or otherwise) of fusion categories?
|
NTS ABSTRACTS Spring 2019
Jan 23
Yunqing Tang Reductions of abelian surfaces over global function fields Abstract: For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.
Jan 24
Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.
Jan 31
Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.
Feb 7
Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
Feb 14
Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with $2$-torsion ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu.
Feb 28
Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.
|
We study a collection of $^4$He atoms confined to strictly one dimension at zero temperature. We use the exact Path Integral Ground State method to evaluate the equation of state and the radial distribution function, and we find that the system behaves as a Luttinger liquid with parameter $K_L = \hbar \pi \rho / m v$ that takes all the possible values $0<K_L <+\infty$, depending on the density $\rho$, where $m$ is the $^4$He mass and $v$ the sound velocity \cite{uno}. Actually the system goes from $K_L \ll 1$ in the high density quasi--solid regime to $K_L \gg 1$ close to the low density spinodal decomposition. By inverting the imaginary--time intermediate scattering function with the Genetic Inversion via Falsification of Theories method \cite{due}, we also evaluate the dynamical structure factor $S(q,\omega)$ in the whole range in $K_L$, exploring the behavior of the dynamical correlations beyond the limits of applicability of Luttinger liquid theory. We find that the famous phonon--maxon--roton excitation spectrum of $^4$He is not present in 1D. On the contrary, $S(q,\omega)$ manifests a particle--hole continuum typical of a fermionic system, as expected from the Bose-Fermi mapping valid for 1D hard-core interactions. In qualitative agreement with recent non--linear Luttinger liquid theories, we find that the main weight of density fluctuations continuously shifts from the lower threshold branch in the quasi--solid regime, to the upper Bogoliubov branch in the compressible low--density regime. At an intermediate density near $\rho = 0.15$ \AA$^{-1}$, the system corresponds to $K_L = 1$ and $S(q,\omega)$ maps to that of a non--interacting Fermi gas at very low energies $\hbar\omega$, while at higher energies it displays non--universal effects depending on the $^4$He interaction potential.
Title: Dynamical correlations in one-dimensional 4He beyond Luttinger theory Authors:
GALLI, DAVIDE EMILIO (First)
BERTAINA, GIANLUCA (Second)
VITALI, ETTORE (Penultimate)
Publication date: Nov 2014 Scientific Disciplinary Sector: FIS/03 - Physics of Matter Type: Conference Object Citation: Dynamical correlations in one-dimensional 4He beyond Luttinger theory / D.E. Galli, G. Bertaina, M. Motta, M. Rossi, E. Vitali. (Contribution presented at the conference Phase Transitions in Reduced Dimensions, held in Amherst in 2014.) Appears in type: 14 - Unpublished conference contribution
|
The separation-of-variables solution you quoted has two indices appearing in it: n and j (the subscripts of the coefficient $A_{nj}$). Here, n is the azimuthal mode order, i.e., it counts the number of nodes along the direction in which the polar angle $\theta$ varies (divided by 2).
The index j is needed because the wave is supposed to satisfy the boundary condition of being zero at the radius a. Here, it's best to employ a as the unit of length, so that r = 1 is the radius of the circle, and the boundary condition becomes $$J_{n}\!\left(\frac{\omega_j}{c}\right) = 0.$$ Here, I added the index j to the frequency, because only a discrete set of $\omega_j$ can satisfy the above equation.
To determine these allowed frequencies, you can use BesselJZero.
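For example, with $a = 1$ (so that $k_j = \omega_j/c$), the first few allowed wave numbers for azimuthal order n are just the zeros of $J_n$; the helper name allowedK below is my own, not part of the solution being discussed:
(* j-th allowed wave number k_j = ω_j/c for azimuthal order n, in units where a = 1 *)
allowedK[n_, j_] := N[BesselJZero[n, j]]
Table[allowedK[0, j], {j, 1, 3}]
(* {2.40483, 5.52008, 8.65373} *)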
For the plot, I'll convert the polar-coordinate form of the separated solution to Cartesian coordinates. This is done by defining the function fXY below. It takes the index n and the wave number $k_j\equiv\omega_j/c$ as inputs. I'll leave out the time-dependent factor for now, i.e., consider only the spatial variation of the wave at a given fixed time. Also, I'll choose the phase $\phi$ of the azimuthal solution to be such that I get a cosine instead of a sine:
fXY[n_, k_][x_?NumericQ, y_?NumericQ] :=
BesselJ[n, k Sqrt[x^2 + y^2]] Cos[n ArcTan[y, x]]
Now do the plot, given a pair of indices {n, j}:
wavePattern[n_, j_] := Module[
{k0 = N[BesselJZero[n, j]]},
DensityPlot[fXY[n, k0][x, y],
{x, -1.1, 1.1}, {y, -1.2, 1.1},
RegionFunction -> Function[{x, y}, x^2 + y^2 < 1],
ColorFunction -> "BlueGreenYellow",
PlotPoints -> 100,
MaxRecursion -> 0,
Epilog ->
Inset[Grid[{{"n", n}, {"j", j}}, Frame -> All], {-.9, .9}],
BaseStyle -> {FontFamily -> "Arial"}]
]
I count the index j starting at 1 because there is always at least one radial node (at the boundary).
As an example, here are some plots:
Show[GraphicsGrid@Table[wavePattern[m, n], {m, 0, 2}, {n, 1, 3}],
ImageSize -> 700]
In the DensityPlot inside the function wavePattern, I set the option MaxRecursion -> 0 to speed up the plotting. An additional ingredient in making the plot for this solution is the use of RegionFunction to define the circular domain of the wave.
Since I left out the time dependence, it may be better to apply a color scheme that emphasizes the nodal lines, because these lines are the only thing that stays constant in time for a standing wave such as this. So here is an alternative plotting function:
wavePattern[n_, j_] :=
Module[{k0 = N[BesselJZero[n, j]]},
DensityPlot[fXY[n, k0][x, y], {x, -1.1, 1.1}, {y, -1.2, 1.1},
RegionFunction -> Function[{x, y}, x^2 + y^2 < 1],
ColorFunction ->
Function[{x},
Blend[{White, Darker@Brown, White}, 2 ArcTan[10 x]/Pi + .5]],
ColorFunctionScaling -> False, PlotPoints -> 100,
MaxRecursion -> 0,
Epilog ->
Inset[Grid[{{"n", n}, {"j", j}}, Frame -> All], {-.9, .9}],
BaseStyle -> {FontFamily -> "Arial"}]]
Show[GraphicsGrid@Table[wavePattern[m, n], {m, 0, 2}, {n, 1, 3}],
ImageSize -> 700]
This is pretty close to the kind of pattern you would actually observe if you took a vibrating plate and spread sand on it: the grains will move to the places in the standing wave where the amplitude remains zero at all times, i.e., the nodes. That's what is shown in brown above.
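If you do want the time dependence back, each mode simply oscillates as a whole with its own frequency $\omega_j = c\,k_j$. A minimal sketch (assuming unit wave speed c = 1 and reusing fXY and BesselJZero from above; standingWave is my own helper name, not part of the original code):
(* full standing wave u(x, y, t) = fXY[n, k0][x, y] Cos[k0 t], with c = 1 so ω = k0 *)
standingWave[n_, j_][x_, y_, t_] :=
 With[{k0 = N[BesselJZero[n, j]]}, fXY[n, k0][x, y] Cos[k0 t]]
Animate[
 DensityPlot[standingWave[1, 2][x, y, t], {x, -1.1, 1.1}, {y, -1.1, 1.1},
  RegionFunction -> Function[{x, y}, x^2 + y^2 < 1],
  ColorFunction -> "BlueGreenYellow", PlotPoints -> 60, MaxRecursion -> 0],
 {t, 0, 2 Pi/N[BesselJZero[1, 2]]}]
The nodal lines are the only features that stay put from frame to frame, which is why the nodal-line color scheme above gives the more informative static picture.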
|