| url | text | date | metadata |
|---|---|---|---|
http://bricocasals.com/best-family-kpnrocw/5a22b2-determinant-of-2x2-matrix-in-c
|
Things to keep in mind: the determinant is defined only for square matrices, i.e. matrices of order n × n. For a 2×2 matrix with entries a, b, c, d, the determinant is given by the Leibniz formula ad − bc: the product of the main-diagonal elements minus the product of the off-diagonal elements. After that it is just basic arithmetic, and the same formula underlies 2×2 Cramer's rule and the inverse of a 2×2 matrix. The absolute value of the determinant equals the area of the parallelogram spanned by the matrix's rows. Note that the determinant of a matrix can be arbitrarily close to zero without conveying information about singularity: a tolerance test of the form abs(det(A)) < tol is likely to flag a well-scaled, perfectly invertible matrix as singular. Determinant calculators handle square matrices of any size (2×2, 3×3, 4×4, ...), and determinants turn out to be useful when we study more advanced topics such as inverse matrices and the solution of simultaneous equations. If a property of determinants seems confusing, try proving it for a 2×2 or 3×3 matrix first.
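The 2×2 rule, and the scaling caveat above, can be sketched in a few lines of Python (the helper name `det2` is ours, not from the page):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

print(det2([[4, 6], [3, 8]]))        # 4*8 - 6*3 = 14

# Scaling every entry by t scales a 2x2 determinant by t**2, so the
# determinant of a perfectly invertible matrix can be made arbitrarily
# small -- a tiny det(A) does not by itself mean A is nearly singular.
print(det2([[1e-4, 0], [0, 1e-4]]))  # about 1e-8, yet the matrix is invertible
```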
The pattern continues for 4×4 matrices. The determinant of a matrix is a special number that can be calculated from the elements of a square matrix. In a recursive implementation, the 2×2 case is the base case:

```c
if (n == 2)
    return (matrix[0][0] * matrix[1][1]) - (matrix[1][0] * matrix[0][1]);
```

If the size of the matrix is not 2, the determinant is calculated recursively by expanding along a row. We call the square matrix I with all 1's down the diagonal and zeros everywhere else the identity matrix. To invert a matrix, calculate the adjoint of the given matrix and divide by the determinant. The determinant measures how the matrix scales volume, and that is a meaningful question because the answer is the same no matter how you choose to measure volume. The determinant can tell us if the columns are linearly dependent, if a homogeneous system has any nonzero solutions, and if a matrix is invertible. Here is a complete C program for the 2×2 case:

```c
/* C program to find the determinant of a 2x2 matrix */
#include <stdio.h>

#define SIZE 2 /* matrix size */

int main(void)
{
    int A[SIZE][SIZE];
    int row, col;
    long det;

    /* Input elements of matrix A from the user */
    printf("Enter elements in matrix of size 2x2:\n");
    for (row = 0; row < SIZE; row++)
        for (col = 0; col < SIZE; col++)
            scanf("%d", &A[row][col]);

    /* det = ad - bc */
    det = (long)A[0][0] * A[1][1] - (long)A[0][1] * A[1][0];
    printf("Determinant of the matrix = %ld\n", det);

    return 0;
}
```
|
2021-04-19 12:48:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7997624278068542, "perplexity": 490.97330759933647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00271.warc.gz"}
|
https://mathoverflow.net/questions/291263/sections-of-pullback-line-bundle-via-cyclic-branched-cover
|
# Sections of pullback line bundle via cyclic branched cover
It is a basic question and I would be happy to be directed to some reference for it.
Let $f\colon X\to Y$ be a finite branched cover of smooth projective varieties, $M$ a line bundle on $Y$ and $L=f^\ast M$ its pullback to $X$. For simplicity, let's assume $\deg f=2$. Then $L$ is invariant by the involution defined by the double cover and one has a decomposition $H^0(L) = f^\ast H^0(M) \oplus V$ where $V$ is the eigenspace corresponding to $-1$ for the action of the involution on $H^0(L)$.
Questions: what are the sections in $V$? I guess it should be related to the vanishings along the branch locus, how? What happens for higher degrees (I guess the existence of the automorphism is then not automatic, but if we have it then how do we interpret the eigenspaces)?
A good reference for cyclic covers, which includes your degree 2 case, is "Lectures on Vanishing Theorems" by Esnault and Viehweg. Briefly, assuming characteristic different from 2, $f_*O_X$ decomposes as $O_Y\oplus R^{-1}$, where $R$ is a line bundle such that $R^{\otimes 2}=O_Y(B)$, where $B$ is the branch locus. By the projection formula $V= H^0(R^{-1}\otimes M)$.
In general, for a Galois cover, you could decompose spaces/sheaves using characters of the group. But it wouldn't be as explicit.
• For a cyclic cover of any degree the decomposition is very explicit. – Sasha Jan 22 '18 at 21:47
|
2020-02-25 00:46:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601286053657532, "perplexity": 138.26628056581623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00246.warc.gz"}
|
https://chemistry.stackexchange.com/questions/136749/entropy-increase-of-both-system-and-surroundings
|
# Entropy increase of both system and surroundings [closed]
In "why chemical reactions happen" the author (James Keeler) says that "a process which leads to an increase in the entropy of the system can be ... exothermic." Could someone please give me an example of a process which is exothermic and increases the entropy of the system? It seems strange to me how a system can release energy in the form of heat while also increasing its own entropy.
• Could you edit the question to specify which author said the quoted bit, and in which book or article? Could you also edit the question to tell us why you are asking this? – Karsten Theis Jul 29 at 13:49
## 1 Answer
We don't need to go to nuclear fusion to find an example. Lots of chemical reactions meet your criteria of being both exothermic (releasing heat to the surroundings) and increasing the entropy of the system.
A reaction is spontaneous if the free energy change of the reaction is negative ($$\Delta_rG < 0$$). The free energy change of a reaction at constant temperature equals the enthalpy change ($$\Delta_rH$$) minus the temperature multiplied by the entropy change ($$T\Delta_rS$$):
$$\Delta_rG = \Delta_rH - T\Delta_rS$$
One way for $$\Delta_rG$$ to be negative is for $$\Delta_rH$$ to be negative and $$\Delta_rS$$ to be positive. In a reaction conducted at constant pressure, the enthalpy change $$\Delta_rH$$ is equal to the heat exchanged between the system (reactants) and the surroundings. When $$\Delta_rH <0$$, the reaction is exothermic.
To find examples that meet your criteria, you only need a source of standard entropy and enthalpy data so that you can calculate $$\Delta_rH$$ and $$\Delta_rS$$ for a reaction. There are many reactions that are exothermic and have a positive entropy change of the system. Below is a simple example:
$$\ce{C(s) + O2(g) -> CO2(g)}$$
Using the NIST Chemistry WebBook, here are the standard enthalpies of formation and standard entropies for carbon, oxygen, and carbon dioxide:
$$\begin{array}{|l|c|c|} \hline & \Delta_fH^\circ\ \mathrm{(kJ/mol)} & S^\circ\ (\mathrm{J\ mol^{-1}\ K^{-1}}) \\ \hline \ce{C(s)} & 0 & 5.8 \\ \ce{O2(g)} & 0 & 205.15 \\ \ce{CO2(g)} & -393.52 & 213.79 \\ \hline \Delta_r & -393.52 & 2.84 \\ \hline \end{array}$$
At all temperatures, this reaction is exothermic and has a positive entropy change for the system.
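As a quick numerical check at room temperature (a sketch using widely tabulated standard values for this reaction, roughly $\Delta_rH \approx -393.5$ kJ/mol and $\Delta_rS \approx 2.84$ J/(mol·K); exact figures vary slightly between data sources):

```python
# Delta_r G = Delta_r H - T * Delta_r S for C(s) + O2(g) -> CO2(g)
dH = -393.52               # Delta_r H in kJ/mol (exothermic)
dS = 2.84                  # Delta_r S in J/(mol K) (positive)
T = 298.15                 # temperature in K
dG = dH - T * dS / 1000.0  # convert the entropy term from J to kJ
print(round(dG, 2))        # about -394.37 kJ/mol: spontaneous
```

Because $\Delta_rH < 0$ and $\Delta_rS > 0$, the two terms never compete, which is why the reaction is spontaneous at every temperature.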
|
2020-10-01 01:25:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6266148090362549, "perplexity": 347.47826662712305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00728.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-2-real-numbers-2-3-real-numbers-and-algebraic-expressions-problem-set-2-3-page-68/70
|
## Elementary Algebra
$= - \frac{31}{20}$
1. Substitute the values into the variables: $= 2(-\frac{2}{5}) - (-\frac{3}{4}) - 3(\frac{1}{2})$ 2. Follow the acronym PEDMAS (Parentheses, Exponents, Division, Multiplication, Addition, Subtraction), which determines the order in which operations are completed, from top to bottom. For example, you would complete the division of two numbers before the addition of another two numbers. In this case, we multiply before subtracting: $= -\frac{4}{5} + \frac{3}{4} - \frac{3}{2}$ $=-\frac{16}{20}+\frac{15}{20}-\frac{30}{20}$ $= - \frac{1}{20} - \frac{30}{20}$ $= - \frac{31}{20}$
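The evaluation can be double-checked with exact rational arithmetic (a quick sketch; the variable names are ours):

```python
from fractions import Fraction as F

# The three substituted values: -2/5, -3/4, 1/2
x, y, z = F(-2, 5), F(-3, 4), F(1, 2)
result = 2 * x - y - 3 * z   # multiply first, then subtract
print(result)                # -31/20
```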
|
2018-05-20 16:16:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7165103554725647, "perplexity": 1293.9469174446767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863626.14/warc/CC-MAIN-20180520151124-20180520171124-00242.warc.gz"}
|
https://ncas-cms.github.io/cf-python/3.8.0/function/cf.open_files_threshold_exceeded.html
|
# cf.open_files_threshold_exceeded¶
cf.open_files_threshold_exceeded()[source]
Return True if the total number of open files is greater than the current threshold. GNU/Linux only.
The threshold is defined as a fraction of the maximum possible number of concurrently open files (an operating system dependent amount). The fraction is retrieved and set with the of_fraction function.
Returns
bool
Whether or not the number of open files exceeds the threshold.
Examples:
In this example, the number of open files is 75% of the maximum possible number of concurrently open files:
>>> cf.of_fraction()
0.5
>>> cf.open_files_threshold_exceeded()
True
>>> cf.of_fraction(0.9)
>>> cf.open_files_threshold_exceeded()
False
|
2022-05-27 21:47:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977764844894409, "perplexity": 1703.6213524275674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00251.warc.gz"}
|
http://math.stackexchange.com/questions/479794/what-are-the-steps-to-create-an-explicit-function-from-an-implicit-instruction
|
# What are the steps to create an explicit function from an implicit instruction?
I want to develop a method to compute the seed value after n steps. I first wanted to use a loop, but I thought maybe someone comes up with a nice single expression and could also say something about the process.
I guess you have to use mathematical induction to solve that...
$x_0 = \mathtt{seed}$
$x_1 = (x_0 \cdot \mathtt{0x5DEECE66D} + \mathtt{0xB})~\mathtt{AND}~((1~ \mathtt{SHIFTL}~48) - 1)$
$x_n = \dots$
All the values are 64-bit Java long numbers. More information can be found here: http://docs.oracle.com/javase/6/docs/api/java/util/Random.html#next%28int%29. What I basically want to do is to copy an instance of a Random object, so that the copy behaves the same.
-
I managed to create a copy of a random object stackoverflow.com/a/18531276/1809463. But I'm still interested in the solution, though. – mike Aug 30 '13 at 13:39
I suggest using this service.
Or, if you want to write the code yourself, I suggest starting here.
-
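For what it's worth, the recurrence quoted in the question can be iterated, or collapsed into a closed form via the geometric sum, as in this Python sketch (the function names are our own; the constants are the ones from the Java docs cited above):

```python
A, B, M = 0x5DEECE66D, 0xB, 1 << 48

def step(x):
    """One update: x_{n+1} = (A * x_n + B) mod 2**48."""
    return (A * x + B) % M

def nth(seed, n):
    """Closed form: x_n = A**n * x_0 + B * (A**n - 1)/(A - 1)  (mod 2**48).

    A - 1 is even, so it has no inverse mod 2**48; instead note that
    A**n - 1 is exactly divisible by A - 1 (geometric sum), and compute
    A**n modulo (A - 1) * M so the division can be done exactly."""
    an = pow(A, n, (A - 1) * M)
    geom = (an - 1) // (A - 1) % M   # (A**n - 1)/(A - 1) mod M
    return (pow(A, n, M) * seed + B * geom) % M

x = 12345
for _ in range(10):
    x = step(x)
print(x == nth(12345, 10))  # True: closed form matches iteration
```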
|
2014-07-30 13:59:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3664185702800751, "perplexity": 207.8247545433446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270528.34/warc/CC-MAIN-20140728011750-00360-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://blog.plover.com/math/sqrt-3.html
|
# The Universe of Discourse
Sat, 25 Mar 2006
Archimedes and the square root of 3
In my recent discussion of why π might be about 3, I mentioned in passing that Archimedes, calculating the approximate value of π used 265/153 as a rational approximation to √3. The sudden appearance of the fraction 265/153 is likely to make almost anyone say "huh"? Certainly it made me say that. And even Dr. Chuck Lindsey, who wrote up the detailed explanation of Archimedes' work from which I learned about the 265/153 in the first place, says:
Throughout this proof, Archimedes uses several rational approximations to various square roots. Nowhere does he say how he got those approximations—they are simply stated without any explanation—so how he came up with some of these is anybody's guess.
It's a bit strange that Dr. Lindsey seems to find this mysterious, because I think there's only one way to do it, and it's really easy to find, so long as you ask the question "how would Archimedes go about calculating rational approximations to √3", rather than "where the heck did 265/153 come from?" It's like one of those pencil mazes they print in the Sunday kids' section of the newspaper: it looks complicated, but if you work it in the right direction, it's trivial.
Suppose you are a mathematician and you do not have a pocket calculator. You are sure to need some rational approximations to √3 somewhere along the line. So you should invest some time and effort into calculating some that you can store in the cupboard for when you need them. How can you do that?
You want to find pairs of integers a and b with a/b ≈ √3. Or, equivalently, you want a and b with a² ≈ 3b². But such pairs are easy to find: Simply make a list of perfect squares 1 4 9 16 25 36 49..., and their triples 3 12 27 48 75 108 147..., and look for numbers in one list that are close to numbers in the other list. 2² is close to 3·1², so √3 ≈ 2/1. 7² is close to 3·4², so √3 ≈ 7/4. 19² is close to 3·11², so √3 ≈ 19/11. 97² is close to 3·56², so √3 ≈ 97/56.
Even without the benefits of Hindu-Arabic numerals, this is not a very difficult or time-consuming calculation. You can carry out the tabulation to a couple of hundred entries in a few hours, and if you do you will find that 265² = 70225, and 3·153² is 70227, so that √3 ≈ 265/153.
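The tabulation is easy to mechanize; here's a Python sketch (taking "close" to mean within 2, which is what the pairs in the text satisfy):

```python
# For each b, try the nearest integer a to b*sqrt(3) and keep pairs
# where a**2 is within 2 of 3*b**2.
good = []
for b in range(1, 160):
    a = round((3 * b * b) ** 0.5)
    if abs(a * a - 3 * b * b) <= 2:
        good.append((a, b))
print(good)  # includes (2, 1), (7, 4), (19, 11), (97, 56), (265, 153)
```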
Once you understand this, it's clear why Archimedes did not explain himself. By saying that √3 was approximately 265/153, he had exhausted the topic. By saying so, you are asserting no more and no less than that 3·153² ≈ 265²; if the reader is puzzled, all they have to do is spend a minute carrying out the multiplication to see that you are right. The only interesting point that remains is how you found those two integers in the first place, but that's not part of Archimedes' topic, and it's pretty obvious anyway.
[ Addendum 20090122: Dr. Lindsey was far from the only person to have been puzzled by this. More here. ]
In my article about the peculiarity of π, I briefly mentioned continued fractions, saying that if you truncate the continued fraction representation of a number, you get a rational number that is, in a certain sense, one of the best possible rational approximations to the original number. I'll eventually explain this in detail; in the meantime, I just want to point out that 265/153 is one of these best-possible approximations; the mathematics jargon is that 265/153 is one of the "convergents" of √3.
The approximation of √n by rationals leads one naturally to the so-called "Pell's equation", which asks for integer solutions to ax² − by² = ±1; these turn out to be closely related to the convergents of √(a/b). So even if you know nothing about continued fractions or convergents, you can find good approximations to surds.
Here's a method that I learned long ago from Patrick X. Gallagher of Columbia University. For concreteness, let's suppose we want an approximation to √3. We start by finding a solution of Pell's equation. As noted above, we can do this just by tabulating the squares. Deeper theory (involving the continued fractions again) guarantees that there is a solution. Pick one; let's say we have settled on 7 and 4, for which 7² ≈ 3·4².
Then write √3 = √(48/16) = √(49/16·48/49) = 7/4·√(48/49). 48/49 is close to 1, and basic algebra tells us that √(1-ε) ≈ 1 - ε/2 when ε is small. So √3 ≈ 7/4 · (1 - 1/98). 7/4 is 1.75, but since we are multiplying by (1 - 1/98), the true approximation is about 1% less than this, or 1.7325. Which is very close—off by only about one part in 4000. Considering the very small amount of work we put in, this is pretty darn good. For a better approximation, choose a larger solution to Pell's equation.
More generally, Gallagher's method for approximating √n is: Find integers a and b for which a² ± 1 = nb²; such integers are guaranteed to exist unless n is a perfect square. Then write √n = √(nb²/b²) = √((a² ± 1)/b²) = √(a²/b² · (a² ± 1)/a²) = a/b · √((a² ± 1)/a²) = a/b · √(1 ± 1/a²) ≈ a/b · (1 ± 1/(2a²)).
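As a quick sanity check of Gallagher's trick for √3 with the Pell pair (7, 4):

```python
# sqrt(3) = (7/4) * sqrt(48/49) ~= (7/4) * (1 - 1/98)
approx = 7 / 4 * (1 - 1 / 98)
print(approx)                  # 1.7321428..., vs sqrt(3) = 1.7320508...
print(abs(approx - 3 ** 0.5))  # error below 1e-4
```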
Who was Pell? Pell was nobody in particular, and "Pell's equation" is a complete misnomer. The problem was (in Europe) first studied and solved by Lord William Brouncker, who, among other things, was the founder and the first president of the Royal Society. The name "Pell's equation" was attached to the problem by Leonhard Euler, who got Pell and Brouncker confused—Pell wrote up and published an account of the work of Brouncker and John Wallis on the problem.
G.H. Hardy says that even in mathematics, fame and history are sometimes capricious, and gives the example of Rolle, who "figures in the textbooks of elementary calculus as if he had been a mathematician like Newton." Other examples abound: Kuratowski published the theorem that is now known as Zorn's Lemma in 1923; Zorn published a different (although related) theorem in 1935. Abel's theorem was published by Ruffini in 1799, by Abel in 1824. Pell's equation itself was first solved by the Indian mathematician Brahmagupta around 628. But Zorn did discover, prove, and publish Zorn's lemma, Abel did discover, prove, and publish Abel's theorem, and Brouncker did discover, prove, and publish his solution to Pell's equation. Their only failing was to have been independently anticipated in their work. Pell, in contrast, discovered nothing about the equation that carries his name. Hardy might have mentioned Brouncker, whose significant contribution to number theory was attributed to someone else, entirely in error. I know of no more striking mathematical example of the capriciousness of fame and history.
|
2016-07-24 01:02:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814872145652771, "perplexity": 963.0342858860122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823805.20/warc/CC-MAIN-20160723071023-00034-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=2020_AMC_8_Problems/Problem_15&oldid=137297
|
# 2020 AMC 8 Problems/Problem 15
## Problem 15
Suppose $15\%$ of $x$ equals $20\%$ of $y.$ What percentage of $x$ is $y?$
$\textbf{(A) }5 \qquad \textbf{(B) }35 \qquad \textbf{(C) }75 \qquad \textbf{(D) }133 \frac13 \qquad \textbf{(E) }300$
## Solution 1
We set up the following equation based on the given information: $$\frac{15x}{100}=\frac{20y}{100}$$ Multiplying both sides by $100$ gives $$15x=20y$$ Solving for $x$ yields $$x=\frac{4}{3}y=1.\overline{3}y$$ so $x$ is $133\frac13\%$ of $y$, and the answer is $\textbf{(D)}$.
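A quick sketch to double-check the algebra with exact fractions (not part of the original solution):

```python
from fractions import Fraction as F

y = F(1)             # pick y = 1
x = F(20, 15) * y    # from 15x = 20y
print(x)             # 4/3, i.e. x = 133 1/3 % of y
```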
|
2021-02-27 13:45:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2502560317516327, "perplexity": 2314.9871953761494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00413.warc.gz"}
|
https://www.atqed.com/python-statistics-mean
|
# The average or mean of a Python list, tuple, set
The average or mean of a Python list can be calculated with the statistics module.
import statistics
a = [1, 2, 3]
m = statistics.mean(a)
print(m)
# 2
This function works for Python lists, tuples, and sets.
import statistics
a = (1.5, 2.1)
b = {3, 4}
print(type(a)) # <class 'tuple'>
print(type(b)) # <class 'set'>
m = statistics.mean(a)
n = statistics.mean(b)
print(m)
# 1.8
print(n)
# 3.5
A Python set contains only distinct values. So if you declare a set containing duplicate values, the duplicates are discarded before mean calculates the average.
import statistics
a = {1, 1, 2}
m = statistics.mean(a)
print(m)
# 1.5
a looks like {1, 1, 2} but is in fact only {1, 2}, so the mean is 1.5.
|
2022-08-19 14:16:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2368684709072113, "perplexity": 6342.781597207452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00324.warc.gz"}
|
https://www.ssccglapex.com/hi/a-dealer-buys-an-article-marked-at-rs-25000-with-20-and-5-off-he-spends-rs-2000-for-its-repairs-and-sells-it-for-rs-25000-what-is-his-gain-or-loss-per-cent/
|
### A dealer buys an article marked at Rs. 25,000 with 20% and 5% off. He spends Rs. 2,000 for its repairs and sells it for Rs. 25,000. What is his gain or loss per cent?
A. 21% loss
B. 10.50% loss
C. 19.05% gain
D. 25% gain

Answer: Option C
Total CP = (95% of 80% of Rs. 25,000) + Rs. 2,000 $\begin{array}{l} =\left(\frac{95}{100} \times \frac{80}{100} \times 25000\right)+2000 \\ =19000+2000=\text { Rs. } 21000 \end{array}$ Therefore CP = Rs. 21,000 and SP = Rs. 25,000, so Gain = 25,000 − 21,000 = Rs. 4,000, and Gain% $=\frac{4000}{21000} \times 100 \%=\frac{400}{21} \% \approx 19.05\%$
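The arithmetic checks out numerically (a quick sketch):

```python
cp = 0.95 * 0.80 * 25000 + 2000   # successive discounts, then repair cost
sp = 25000
gain_pct = (sp - cp) / cp * 100
print(round(cp), round(gain_pct, 2))  # 21000 19.05
```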
|
2022-09-30 19:43:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171995282173157, "perplexity": 10447.40442150686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00553.warc.gz"}
|
https://stacks.math.columbia.edu/tag/05K5
|
Lemma 38.17.3. Let $f : X \to S$ be a finite type, flat morphism of schemes with geometrically integral fibres. Then $X$ is universally pure over $S$.
Proof. Let $\xi \in X$ with $s' = f(\xi )$ and $s' \leadsto s$ a specialization of $S$. If $\xi$ is an associated point of $X_{s'}$, then $\xi$ is the unique generic point because $X_{s'}$ is an integral scheme. Let $\xi _0$ be the unique generic point of $X_ s$. As $X \to S$ is flat we can lift $s' \leadsto s$ to a specialization $\xi ' \leadsto \xi _0$ in $X$, see Morphisms, Lemma 29.24.9. Then $\xi \leadsto \xi '$ because $\xi$ is the generic point of $X_{s'}$, hence $\xi \leadsto \xi _0$. This means that $(\text{id}_ S, s' \to s, \xi )$ is not an impurity of $\mathcal{O}_ X$ above $s$. Since the assumption that $f$ is finite type, flat with geometrically integral fibres is preserved under base change, we see that there doesn't exist an impurity after any base change. In this way we see that $X$ is universally $S$-pure. $\square$
https://yingtongli.me/blog/2020/08/10/anki-forecast.html
Review Heatmap is an addon for Anki that adds a heatmap showing past and future card review activity. It is handy to see at a glance whether there are any particularly heavy or light days coming up, and a colourful reward for regular reviews.
However, Review Heatmap bases its review forecasts only on cards' current intervals. For example, if I add a bunch of new cards today, I expect to see that I will have a heavy review load in 3 days when those cards are up for review, and regular periods thereafter. Review Heatmap cannot natively handle this projection.
Similarly, in the screenshot above, I have limited my maximum review interval to 30 days, so Review Heatmap does not display any information beyond 30 days.
Anki Simulator is an addon that can ‘simulate Anki progress over time … to estimate your future workload’. We can combine Anki Simulator with Review Heatmap to get the better forecasting that we desire.
Below is a simple Anki addon that extends Review Heatmap to make use of Anki Simulator:
DECK_ID = 1234567890123  # replace with the deck ID of the deck you need to forecast

from datetime import datetime, timedelta

import review_heatmap
from review_simulator import collection_simulator, review_simulator


class MWStandin:
    pass


def mp_cardsDue(self, start=None, stop=None):
    mw = MWStandin()
    mw.col = self.col
    col_sim = collection_simulator.CollectionSimulator(mw)
    conf = self.col.decks.confForDid(DECK_ID)
    learningSteps = conf["new"]["delays"]
    lapseSteps = conf["lapse"]["delays"]
    dateArray = col_sim.generate_for_deck(
        DECK_ID,
        365,  # daysToSimulate
        conf["new"]["perDay"],  # newCardsPerDay
        conf["new"]["initialFactor"] / 10.0,  # startingEase
        len(learningSteps),
        len(lapseSteps),
        True,  # includeOverdueCards
        True,  # includeSuspendedNewCards
        0,  # newCardsToGenerate
    )
    sim = review_simulator.ReviewSimulator(
        dateArray,
        365,  # daysToSimulate
        conf["new"]["perDay"],  # newCardsPerDay
        conf["rev"]["ivlFct"],  # intervalModifier
        conf["rev"]["perDay"],  # maxReviewsPerDay
        learningSteps,
        lapseSteps,
        conf["lapse"]["mult"],  # newLapseInterval
        conf["rev"]["maxIvl"],  # maxInterval
        [100] * len(learningSteps),  # percentagesCorrectForLearningSteps
        [100] * len(lapseSteps),  # percentagesCorrectForLapseSteps
        100,  # percentageGoodYoung
        100,  # percentageGoodMature
        0,  # percentage hard is set to 0
        0,  # percentage easy is set to 0
        self.col.schedVer()
    )
    result = sim.simulate()
    result_dt = []
    for entry in result:
        if entry['y']:
            dt = datetime.strptime(entry['x'], '%Y-%m-%d')
            dt += timedelta(days=1)  # appears to be necessary for some reason
            result_dt.append([int(dt.timestamp()), -entry['y']])  # Review Heatmap expects negative numbers
    return result_dt


review_heatmap.activity.ActivityReporter._cardsDue = mp_cardsDue
This can be placed within the Anki addons directory, for example, at ~/.local/share/Anki2/addons21/review_forecast/__init__.py.
Anki Simulator by default uses the historical percentage of correct cards to forecast future load, thereby resulting in slightly different results each time it is run. The above code disables this by setting the percentage of correct cards to 100%, so the results are deterministic. This does mean, however, that the results will be slightly inaccurate, particularly if the actual correct percentage is significantly less than 100%.
With this modified code, Review Heatmap now projects future reviews out as far as we need!
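As an aside (not from the original post), the per-entry conversion the addon performs — parse the simulator's date string, shift it by one day, and emit the negative count Review Heatmap expects — can be sketched in isolation:

```python
from datetime import datetime, timedelta

def entry_to_heatmap(entry):
    # entry is shaped like a simulator result: {'x': 'YYYY-MM-DD', 'y': count}
    dt = datetime.strptime(entry['x'], '%Y-%m-%d') + timedelta(days=1)
    # timestamp() uses the local timezone, so the first element varies by machine
    return [int(dt.timestamp()), -entry['y']]

print(entry_to_heatmap({'x': '2020-08-10', 'y': 25}))
```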
https://www.vedantu.com/question-answer/write-the-numbers-using-roman-numerals-1-9-2-2-3-class-8-maths-cbse-5f60daf07d7dc34d3b87077e
Question
# Write the numbers using Roman numerals.
1) $9$  2) $2$  3) $4$  4) $11$
Hint: In this question we have to write the given numbers in Roman numerals. Numbers in this system are represented by combinations of letters from the Latin alphabet. Roman numerals, as used today, are based on seven symbols: I, V, X, L, C, D and M.
Formula used:
The numbers $1$ to $10$ are usually expressed in Roman numerals as follows: ${\mathbf{I}},{\text{ }}{\mathbf{II}},{\text{ }}{\mathbf{III}},{\text{ }}{\mathbf{IV}},{\text{ }}{\mathbf{V}},{\text{ }}{\mathbf{VI}},{\text{ }}{\mathbf{VII}},{\text{ }}{\mathbf{VIII}},{\text{ }}{\mathbf{IX}},{\text{ }}{\mathbf{X}}$.
We need to write the numbers $9,2,4,11$ using Roman numerals.
The number $9$ is written as $IX$ using Roman numerals.
The number $2$ is written as $II$ using Roman numerals.
The number $4$ is written as $IV$ using Roman numerals.
The number $11$ is written as $10 + 1$, so $XI$ using Roman numerals (as $10$ is written as $X$ and $1$ is written as $I$).
Thus, the numbers are written using Roman numerals as
$(1)\,9 ={IX}$
$(2)\,2 ={II}$
$(3)\,4 ={IV}$
$(4)\,11 ={XI}$
Additional Information: Roman numerals are used in astronomy to designate moons and in chemistry to denote groups of the periodic table. They can be seen in tables of contents and in manuscript outlines, as upper- and lower-case Roman numerals break information into an easily organized structure. Music theory employs Roman numerals in notation symbols.
Note: The use of Roman numerals continued long after the decline of the Roman Empire. From the ${14^{th}}$ century on, Roman numerals began to be replaced in most contexts by the more convenient Hindu-Arabic numerals; however, this process was gradual, and the use of Roman numerals persists in some minor applications to this day. The numbers $1$ to $15$ are usually expressed in Roman numerals as follows: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV.
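As an aside (not from the original answer), any positive integer can be written in Roman numerals with the standard greedy subtraction method; `to_roman` below is a hypothetical helper name used only for illustration:

```python
def to_roman(n):
    # Value/symbol pairs in descending order, including the subtractive forms
    symbols = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in symbols:
        while n >= value:        # greedily take the largest value that fits
            out.append(symbol)
            n -= value
    return "".join(out)

print([to_roman(n) for n in (9, 2, 4, 11)])  # ['IX', 'II', 'IV', 'XI']
```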
https://engineering-math.org/2017/04/15/mechanics-of-materials-3rd-edition-by-timothy-a-philpot-p1-4/
# Mechanics of Materials 3rd Edition by Timothy A. Philpot, P1.4
## Solution:
Cut a FBD through rod (1). The FBD should include the free end of the rod at A. We will assume that the internal force in rod (1) is tension (even though it obviously will be in compression). From equilibrium,
$\displaystyle \sum F_y=0$
$\displaystyle -F_1-15\:kips=0$
$\displaystyle F_1=-15\:kips$
$\displaystyle F_1=15\:kips\:\left(C\right)$
Next, cut a FBD through rod (2) that includes the free end of the rod at A. Again, we will assume that the internal force in rod (2) is tension. Equilibrium of this FBD reveals the internal force in rod (2):
$\displaystyle \sum F_y=0$
$\displaystyle -F_2-30\:kips-30\:kips-15\:kips=0$
$\displaystyle F_2=-75\:kips$
$\displaystyle F_2=75\:kips\:\left(Compression\right)$
From the given diameter of rod (1), the cross-sectional area of rod (1) is
$\displaystyle A_1=\frac{\pi }{4}\left(1.75\:in.\right)^2$
$\displaystyle A_1=2.4053\:in^2$
and thus, the normal stress in rod (1) is
$\displaystyle \sigma _1=\frac{F_1}{A_1}$
$\displaystyle \sigma _1=\frac{-15\:kips}{2.4053\:in^2}$
$\displaystyle \sigma _1=-6.23627\:ksi$
$\displaystyle \sigma _1=6.24\:ksi\:\left(Compression\right)$
From the given diameter of rod (2), the cross-sectional area of rod (2) is
$\displaystyle A_2=\frac{\pi }{4}\left(2.50\:in.\right)^2$
$\displaystyle A_2=4.9087\:in^2$
Accordingly, the normal stress in rod (2) is
$\displaystyle \sigma _2=\frac{F_2}{A_2}$
$\displaystyle \sigma _2=\frac{-75\:kips}{4.9087\:in^2}$
$\displaystyle \sigma _2=-15.2789\:ksi$
$\displaystyle \sigma _2=15.28\:ksi\:\left(Compression\right)$
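As a cross-check (not part of the textbook solution), the areas and stresses above can be recomputed in a few lines of Python:

```python
import math

# Values from the solution: internal forces in kips (negative = compression)
# and rod diameters in inches.
F1, F2 = -15.0, -75.0
d1, d2 = 1.75, 2.50

A1 = math.pi / 4 * d1**2   # cross-sectional areas in in^2
A2 = math.pi / 4 * d2**2

s1 = F1 / A1               # normal stresses in ksi
s2 = F2 / A2

print(round(A1, 4), round(A2, 4), round(s1, 2), round(s2, 2))
# 2.4053 4.9087 -6.24 -15.28
```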
https://math.stackexchange.com/questions/2863232/solve-the-initial-value-differential-equation/2863315
# Solve the initial value differential equation [closed]
Solve the following initial value differential equation: $20y''+4y'+y=0,\ y(0)=3.2,\ y'(0)=0$.
To solve this I substituted $D= \frac{\mathrm d}{\mathrm dx}$ and solved the auxiliary equation to get the roots of $D$. Then the solution was $y= \exp\left(-\dfrac{x}{10}\right)(A \cos(x/5)+ B \sin(x/5))$, and I tried to get the constants from the given initial conditions. But I am not sure if the answer is correct.
## closed as off-topic by 5xum, José Carlos Santos, Shailesh, Isaac Browne, Leucippus Jul 27 '18 at 0:05
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – 5xum, José Carlos Santos, Shailesh, Isaac Browne, Leucippus
• Hi and welcome to the site! Since this is a site that encourages and helps with learning, it is best if you show your own ideas and efforts in solving the question. Can you edit your question to add your thoughts and ideas about it? – 5xum Jul 26 '18 at 9:23
• Also, don't get discouraged by the downvote. I downvoted the question and voted to close it because at the moment, it is not up to site standards (you have shown no work you did on your own). If you edit your question so that you show what you tried and how far you got, I will not only remove the downvote, I will add an upvote. – 5xum Jul 26 '18 at 9:23
• Follow this link for MathJax tutorial math.meta.stackexchange.com/questions/5020/… – Indrajit Ghosh Jul 26 '18 at 9:32
• i have edited the question. please check. – d.s Jul 26 '18 at 9:48
Your solution is correct: $$20y′′+4y′+y=0$$ $$\implies 20r^2+4r+1=0$$ $$\Delta_r=16-4\cdot 20\cdot 1=-64=(8i)^2$$ $$r=\frac{-4\pm 8i}{40}=\frac {-1\pm 2i}{10}$$ Therefore $$y(x)=e^{-x/10}(K_1\cos(x/5)+K_2\sin(x/5))$$ Apply the initial conditions to find the constants $K_1, K_2$.
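Not part of the answer above — a quick numeric sanity check in plain Python. Applying the initial conditions gives $K_1 = 3.2$ and, from $y'(0) = -K_1/10 + K_2/5 = 0$, $K_2 = 1.6$; central finite differences then confirm the ODE is satisfied:

```python
import math

K1, K2 = 3.2, 1.6  # constants from y(0) = 3.2 and y'(0) = 0

def y(x):
    return math.exp(-x / 10) * (K1 * math.cos(x / 5) + K2 * math.sin(x / 5))

h = 1e-4
for x in (0.0, 0.5, 1.0, 2.0):
    yp = (y(x + h) - y(x - h)) / (2 * h)            # central difference for y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # central difference for y''
    assert abs(20 * ypp + 4 * yp + y(x)) < 1e-4     # 20y'' + 4y' + y = 0 holds

assert abs(y(0) - 3.2) < 1e-12  # initial condition y(0) = 3.2
```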
http://openmx.ssri.psu.edu/thread/673?q=thread/673
# Error message when using mxRowObjective
Joined: 10/14/2009 - 21:13
Hi All,
I am trying to fit the alternate forms comorbidity model with raw data. When I run the script (attached) I get the following error message:
Error: The entity 'MZExpectedFrequency' in model 'MZ' generated the error message: 'lbound' must have length equal to diag(covariance).
Any suggestions?
Cheers,
Jocilyn
Joined: 07/30/2009 - 14:03
Hi Jocilyn,
The script did not come through. Could you try attaching it again?
Cheers,
Steve
Joined: 10/14/2009 - 21:13
Sorry about that, here are the files.
Cheers,
Jocilyn
Joined: 07/31/2009 - 15:12
That looks like an error in the omxMnor call. If I'm reading your code right, your expected covariance and means matrices are for four variables, while your lower and upper bounds are for two variables. I haven't messed around with the GenEpi library enough to be absolutely sure, but it appears that omxMnor requires that the lengths of the mean, lbound and ubound vectors be equal to the number of rows (and columns) in the covariance matrix. The error showed up as an error in the mxAlgebra call 'MZExpectedFrequencies' because the GenEpi functions are helper functions outside of the core OpenMx library, so they evaluated as part of the algebra. Change the lbound and ubound arguments in omxMnor to nvar*2 and you should be fine.
Hope this helps.
ryne
Joined: 07/31/2009 - 15:24
I'll take a look at the script later today. But to respond to an earlier post, omxMnor is one of the matrix functions supported by mxAlgebra() expressions: http://openmx.psyc.virginia.edu/wiki/matrix-operators-and-functions.
Joined: 07/31/2009 - 15:12
Thanks for the clarification, and apologies for the error. I was able to replicate the error message using just the omxMnor function, which can be fixed by supplying the same dimensionality to the lbound and ubound vectors as the means vector. I didn't know that we had mxAlgebra expressions that could exist as objects outside of MxAlgebra objects, so I foolishly assumed that it was a GenEpi function. My mistake.
Joined: 10/14/2009 - 21:13
Thank you for the help. I made a few changes to the script (attached) and got the following:
Running AlternateForms
*** caught bus error ***
address 0x1, cause 'non-existent physical address'
Traceback:
1: .Call("callNPSOL", objective, startVals, constraints, matrices, parameters, algebras, data, intervalList, communication, options, state, PACKAGE = "OpenMx")
2: runHelper(model, frontendStart, intervals, silent, suppressWarnings, unsafe, checkpoint, useSocket, onlyFrontend)
3: mxRun(AltFormsModel)
Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection: summary(AltFormsRun)
Selection:
Any suggestions?
Thank you,
Jocilyn
Joined: 07/31/2009 - 15:24
The script needs to be altered in the following way to get it to run. It does crash.
data <- read.table("data.csv", header=TRUE, sep=",")
data[,c(selVars)] <- mxFactor(data[,c(selVars)], levels=c(0,1))
Joined: 10/14/2009 - 21:13
Hi all,
Thank you for the help. The script still seems to generate the same error message when I run it. Are there any changes I can make to prevent the crash?
Cheers,
Jocilyn
Joined: 07/31/2009 - 15:24
We're working on it. Evidence points to the presence of a definition variable that is triggering a bug. I imagine it would be difficult to transform the model into one without a definition variable. We'll find the bug within a few days from now.
Joined: 07/31/2009 - 15:24
The crash has been fixed. You can grab the latest version from the subversion repository, or wait until the next binary release. Subversion directions are here: http://openmx.psyc.virginia.edu/wiki/howto-build-openmx-source-repository
Joined: 10/14/2009 - 21:13
Thank you!
http://tex.stackexchange.com/questions/57781/latex-tab-issue-spacing
# latex tab issue spacing
I have a table of two rows, the second one consists of 32 columns, each containing its column number (from 0 to 31). I want each column to use as little space as possible (no inter column space), since there are quite a lot of columns. I have done this using @{}c@{}. However, I don't like the fact that the columns are not all of the same size (due to the fact that some numbers have 2 digits, and others have only one). Hence I have manually added @{\hspace{0.7 mm}}c@{\hspace{0.7 mm}} to each of the columns whose contents in the second row have only 1 digit (the first 10). I am quite pleased with the result.
My problem now is with the first row: the first row contains 4 columns, containing 0,1,2,3. Each column in the first row spans 8 columns of the second row. My problem is that the delimiters in the first row aren't aligned with the second row (in the sample below, notice that the delimiter between 7 and 8 is not aligned with the one above).
Is there an easier solution to my problem as a whole, or am I in the right direction and I just need to fix this alignment issue ?
Sample:
\begin{tabular}{|
% numbers with 1 digit
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
@{\hspace{0.7 mm}}c@{\hspace{0.7 mm}}|
% numbers with 2 digits
@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|@{}c@{}|}
\hline
\multicolumn{8}{c|}{0}
&\multicolumn{8}{c|}{1}
&\multicolumn{8}{c|}{2}
&\multicolumn{8}{c|}{3}\\
\hline
0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31
\end{tabular}
Is this a calendar? You might be better off with a more "dedicated" solution, such as a timeline, a gantt chart, or a "proper" calendar such as the ones possible with PGF/TikZ. – Jake May 29 '12 at 15:48
It's easier. :) Set a length to the width of two digits in the current type size (I've also added a small space just in order that the figures don't touch the vertical lines) and the parameter \tabcolsep to zero; then use p columns of the specified width and with \centering alignment:
\documentclass{article}
\usepackage{array}
\newlength{\twodigits}
\begin{document}
\begin{table}
\centering
\small
\settowidth{\twodigits}{00}
\setlength{\tabcolsep}{0pt}
\begin{tabular}{|*{32}{>{\centering}p{\twodigits}|}}
\hline
\multicolumn{8}{|c|}{0}
&\multicolumn{8}{c|}{1}
&\multicolumn{8}{c|}{2}
&\multicolumn{8}{c|}{3}\\
\hline
0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31 \tabularnewline
\hline
\end{tabular}
\end{table}
\end{document}
If you don't want to use \tabularnewline, then add the magic \arraybackslash:
\begin{tabular}{|*{32}{>{\centering\arraybackslash}p{\twodigits}|}}
\hline
\multicolumn{8}{|c|}{0}
&\multicolumn{8}{c|}{1}
&\multicolumn{8}{c|}{2}
&\multicolumn{8}{c|}{3}\\
\hline
0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31 \\
\hline
\end{tabular}
I tried to adapt your answer to my code for the past hour, but I failed. There are two errors on a line that has \cline{2-34}. Those are "misplaced omit" and "extra alignment tab has been changed to cr". My actual tabular has two extra columns that are unrelated to this issue. I tried appending c|c| to |*{32}, but it returned these errors. – bob May 29 '12 at 16:53
@bob It's impossible to say without seeing the code. Add it to your question. However, adding the two extra columns should be done as in \begin{tabular}{|*{32}{>{\centering}p{\twodigits}|}c|c|} – egreg May 29 '12 at 17:08
Sorry, after extensive tests, it appears it's not related to me having two more columns. To make it simple, if I use the very code above, and just add \hline before \end{tabular}, I get 'misplaced noalign'. I tried adding \\ to the end of the previous line (so, the one with 0&1&...), but the error is still the same – bob May 30 '12 at 9:19
@bob Now I understood; I've added two possible fixes. – egreg May 30 '12 at 9:26
Thank you, it works wonders. Can I have a quick explanation of why \\ doesn't work without \arraybackslash, and an explanation of the syntax of |*{32}{>{\centering}p{\twodigits}|} ? – bob May 30 '12 at 9:35
Another solution would be to rely on the tabularx package and its "X" column type. One of the neat things about this column type is that it does virtually all of the column width calculations for you. The following MWE implements this idea.
\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage{array,tabularx}
% define "Y" column type to be same as "X", but with contents centered
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\setlength{\tabcolsep}{0.1pt}
\begin{document}
\begin{table}
\begin{tabularx}{\textwidth}{|*{32}{Y|}}
\hline
\multicolumn{8}{|c|}{0}&
\multicolumn{8}{c|}{1} &
\multicolumn{8}{c|}{2} &
\multicolumn{8}{c|}{3} \\
\hline
0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31\\
\hline
\end{tabularx}
\end{table}
\end{document}
https://www.wptricks.com/question/php-need-help-understanding-call-shortcode-in-javascript/
## php – Need help understanding ‘call shortcode in javascript’
Question
I’m working on a website’s page that has about 20 PDF files. All those 20 PDF files are displayed with a “PDF viewer for Elementor” plugin in each separate template created using free “Elementor – Footer, Header & Blocks” plugin because I needed to paste shortcodes in tabs (tabs are simply named after their own PDF file simply for user to select which one he wants to read). I didn’t know how to display PDF’s in the page otherwise and selecting PDF file as an Image didn’t work and this is the approach I’ve come up with.
There is one problem though: when you go to that page, all 20 templates with PDFs are still being preloaded, even though Elementor's Tab widget hides them by default. This makes the user wait quite some time for the PDFs to appear. Worse yet, they sometimes don't even show up, though you can fix that by refreshing the page, so I'm guessing the server is overloading or something and as a result they don't show up every time. I'm still a newbie student in this, so I don't know.
My approach to fix the page loading all the PDF’s was to use custom jQuery script and append templates shortcodes into html in which the result should be that the template with PDF that the user selects would start loading and showing only just then – when it’s selected by the user pressing a certain tab button.
Here’s my TABS html:
<div class="custom-tabs">
<button class="tabs" id="my-tabid1" data-target=".2019-METŲ-I-KETV">2019 METŲ I KETV.</button>
<button class="tabs" id="my-tabid2" data-target=".2018-METŲ-METINIS">
2018 METŲ METINIS</button>
<button class="tabs" id="my-tabid3" data-target=".2018-METŲ-III-KETV">2018 METŲ III KETV.</button>
</div>
PDF’s html:
<div class="2019-METŲ-I-KETV tab_content active" id="my-tab1">
</div>
<div class="2018-METŲ-METINIS tab_content" id="my-tab2">
</div>
<div class="2018-METŲ-III-KETV tab_content" id="my_tab3">
</div>
And finally jQuery for this whole process:
var shortcodes = [
'<?php echo do_shortcode ("[pafe-template id="823"]"); ?>',
'<?php echo do_shortcode ("[pafe-template id="890"]"); ?>',
'<?php echo do_shortcode ("[pafe-template id="893"]"); ?>'
];
var target_strings = [
'2019-METŲ-I-KETV',
'2018-METŲ-METINIS',
'2018-METŲ-III-KETV'
];
jQuery('.tabs').on('click', function(){
var target = jQuery(jQuery(this).data('target'));
target.siblings().removeClass('active');
console.log('test');
var targettostring = target.attr('class');
var i;
for(i = 0; i < 3; i++){
console.log('doing loop');
console.log(targettostring);
console.log(target_strings[i]);
if(targettostring.includes(target_strings[i])){
console.log('found a match');
jQuery(target).append(shortcodes[i]);
}
}
});
At first in jQuery I only used the shorter [pafe-template id="890"] for the "shortcodes" array. That was before I came across this question:
call shortcode in javascript
I tried the PHP echo method before understanding that this goal can only be achieved using the AJAX method. This is the point where I need help.
I completely don’t understand the parts where @Douglas.Sesar wrote placeholders like:
jQuery.ajax({ url: yourSiteUrl + "/wp-admin/admin-ajax.php",
Is yourSiteUrl literally my site URL? Mine is "https://ikiwi.website/telsiugaisrine" (a placeholder which will change later, and /veikla/ is the page of the website that has this whole PDF thing), so this means I'll have to change yourSiteUrl. Not that it's a problem or anything, but is there a way to automate this?
'action':'your_ajax',
'fn':'run_shortcode_function',
'some_needed_value': jQuery('#input_for_needed_value').val()
},
What is “data” here and what values do I write say in my case? What did he mean by #input_for_needed_value? Is this where I write my class or id of the desired div where I want to place and execute the shortcode template? I completely don’t understand what should I write here..
success: function(results){
//do something with the results of the shortcode to the DOM
},
error: function(errorThrown){console.log(errorThrown);}
});// end of ajax
Do something with the results of the shortcode – is this something that’s optional to write like a console.log that I used in my jQuery before?
add_action('wp_ajax_nopriv_your_ajax', 'your_ajax_function');
Didn’t I wrote back in data: action that my ajax is called “your_ajax”? Thats how I don’t understand any purpose and where do those lines in quotes come from because ‘your_ajax_function’ has extra ‘_function’ line especially everything else.
http://openstudy.com/updates/55732de0e4b01d0053abdef9
## anonymous one year ago In a series circuit with a 9 V battery and two resistors, one at 100 Ohms and one at 350 Ohms, what is the voltage across each one, respectively?
1. Michele_Laino
the current in your circuit is: $\Large I = \frac{{\Delta V}}{{{R_1} + {R_2}}} = \frac{9}{{100 + 350}}$ since your resistors are connected in series, and the equivalent resistance is $R_1 + R_2$
2. anonymous
ok! and solving we get 0.02?
3. Michele_Laino
correct!
4. anonymous
is that the voltage across each one? :/ 0.02 is our solution?
5. Michele_Laino
now using Ohm's law, we can compute the requested voltage drops as below: $\Large \begin{gathered} \Delta {V_1} = I \times {R_1} = 0.02 \times 100 \hfill \\ \Delta {V_2} = I \times {R_2} = 0.02 \times 350 \hfill \\ \end{gathered}$
6. anonymous
2 & 7? so those are our solution? 2 and 7 ?
7. anonymous
is it ohms as well? :/
8. Michele_Laino
yes! 2 volts, and 7 volts
9. anonymous
oh oops volts haha :P
10. anonymous
yay so this problem is complete? :O
11. Michele_Laino
yes!
12. anonymous
yay!! thank you!:D
13. Michele_Laino
:)
14. Michele_Laino
please, as you can check, we have: 2 + 7 = 9 volts, which is the voltage of our battery
15. anonymous
ohh yes:) thank you!!:D
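The whole calculation in this thread fits in a few lines of Python (variable names are ours):

```python
# Series circuit: the same current flows through both resistors,
# and Ohm's law gives the voltage drop across each one.
V = 9.0                  # battery voltage (volts)
R1, R2 = 100.0, 350.0    # resistances (ohms)

I = V / (R1 + R2)        # current through the series combination
V1 = I * R1              # drop across the 100-ohm resistor
V2 = I * R2              # drop across the 350-ohm resistor

print(I, V1, V2)         # -> 0.02 A, 2.0 V, 7.0 V (up to float rounding)
```

Note that V1 + V2 adds back up to the 9 V of the battery, as the thread checks at the end.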
https://electronics.stackexchange.com/questions/352942/doubling-the-voltage-to-a-capacitor-doubles-the-stored-charge-but-does-not-make
# Doubling the voltage to a capacitor doubles the stored charge but does not make it charge slower. Why not?
This is a slightly theoretical question.
If you charge a capacitor from say a 3V battery you get Q charge in T seconds. If you double the voltage to 6V you get 2Q charge also in T seconds.
My question is, why doesn't T change? If you think of a capacitor like a room and charge like people, if you want to fit in twice as many people, surely you need more time?
I am thinking of the formula Q1 = Q0·e^(-t/RC). Is there something I am missing?
• You have a higher voltage 'pushing' the charge in. – TonyM Jan 30 '18 at 7:22
• You opened two doors this time – PlasmaHH Jan 30 '18 at 8:22
Current is rate of charge (1 ampere is 1 coulomb/sec). At time zero, the capacitor voltage is zero, so the voltage across your resistance is three volts. The instantaneous current is 3V/R. When you apply 6V (assuming the resistance is equal and ignoring @Whit3rd 's excellent point that adding a second battery will likely increase the resistance), the instantaneous current at time zero would be 6V/R, twice that of the first case. This means that in your analogy, the people would be walking into the room in the first case, and running in the second. In the second case, when the capacitor reaches 3 volts, the voltage across your resistor is (6-3) volts, and the current would be equal to the starting current with the 3 volt supply. At this point the people in the second case would have slowed down and be walking in at the same speed as the people in the first case had started.
charge a capacitor from say a 3V battery you get Q charge in T seconds. If you double the voltage to 6V you get 2Q charge also in T seconds
Actually, probably you don't. Depends on 'R'.
$$Q = C \times V_{capacitor}$$ $$V_{capacitor} = V_{battery} \times (1- e ^ {-T \over{R \times C}})$$
If your '3V' source is a battery, with some internal series resistance, and puts Q charge on an uncharged capacitor in a short time T, then a '6V' source might be two batteries in series, with twice the internal resistance, and will put 2Q charge on a capacitor in time 2T, through resistance 2R.
In very long time scales (times much greater than the product of resistance and capacitance), the capacitor and battery circuit can be deemed to have negligible resistance (because the current times resistance is very small when the capacitor is near fully charged), but that is an approximation that does NOT allow a prediction of time versus voltage, only allows the prediction of the at-equilibrium charge.
In a time model, you NEED to be explicit that there is a resistor, but if none is 'designed' in, the resistance comes from stray effects. Batteries' internal resistance when placed in series, for example.
The equation for capacitor is:
Q = C V
Then you can apply derivative over t, that is d/dt, for both sides of the equation to get:
dQ/dt = C dV/dt
I = C dV/dt
So over a time T you can put as much charge Q on the capacitor as you like; it just depends on the current you use to charge it.
Hopefully this clarifies how a capacitor is used and the relation between its charge Q and the change of its voltage V over time (dV/dt).
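The answers above can be condensed into a short numerical check (a sketch; the component values R and C are assumed for illustration and are not from the question). The charging law Q(t) = C·V·(1 − e^(−t/RC)) scales linearly with V, so the time profile is unchanged while both the stored charge and the charging current double:

```python
import math

def charge(V, R, C, t):
    """Charge (coulombs) on a capacitor charged through R from a source V, after time t."""
    return C * V * (1.0 - math.exp(-t / (R * C)))

R, C = 1e3, 1e-6        # assumed values: 1 kOhm, 1 uF -> time constant RC = 1 ms
tau = R * C

# Doubling the source voltage doubles the charge reached at any fixed time t...
q3 = charge(3.0, R, C, 5 * tau)   # effectively fully charged after ~5 time constants
q6 = charge(6.0, R, C, 5 * tau)
print(q6 / q3)          # ≈ 2

# ...because the initial current (the "speed of the people") also doubles:
i3 = 3.0 / R            # current at t = 0 with a 3 V source
i6 = 6.0 / R            # twice as large with a 6 V source
print(i6 / i3)          # 2.0
```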
http://musictheory.pugetsound.edu/mt21c/DiatonicChordsInMinor.html
Notice that both $\left.\text{VII}\right.$ (the “subtonic triad”) and $\left.\text{vii}^{\circ}{}\right.$ (the “leading–tone triad”) are included. The subtonic triad ($\left.\text{VII}\right.$), built on the lowered $\hat{7}$ that occurs in natural minor, regularly occurs in circle of fifth progressions in minor and in rock and pop music, while the leading–tone triad ($\left.\text{vii}^{\circ}{}\right.$), built on raised $\hat{7}$ , is usually either a passing harmony or has dominant function.
https://www.physicsforums.com/threads/wavelength-and-excited-shells.717078/
# Wavelength and excited shells
is the wavelength equal to the distance between the excited state shell and the rest state shell?
DrClaude
Mentor
is the wavelength equal to the distance between the excited state shell and the rest state shell?
No.
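For context (an addition, not part of the original answer): the photon's wavelength is set by the energy difference between the two levels, not by any spatial separation of the shells:

```latex
\lambda = \frac{hc}{\Delta E}, \qquad \Delta E = E_{\mathrm{excited}} - E_{\mathrm{ground}}
```

Larger energy gaps therefore give shorter wavelengths.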
https://easy.gems.dkrz.de/Processing/Intake/basic-plotting-with-intake.html
# Map Plotting with Intake and datashader¶
Now with the data, we can think about how we want to visualize it. We build on the setup from the previous tutorial; see https://easy.gems.dkrz.de/Processing/Intake/catalog-basics.html for the contents of the first box, with the addition of loading / installing the gribscan tool provided by Lukas Kluft and friends.
Note that 2m air temperature has different names depending on the model ('tas' for ICON, '2t' for IFS)
[1]:
import intake
import pandas as pd
# for ifs-fesom
try:
import gribscan
except:
%pip install gribscan
import gribscan
pd.set_option("max_colwidth", None) # makes the tables render better
catalog_file = "/work/ka1081/Catalogs/dyamond-nextgems.json"
cat = intake.open_esm_datastore(catalog_file)
hits = cat.search(
variable_id=["2t", "tas"],
frequency=["30minute", "1hour"],
experiment_id="nextgems_cycle2",
) # ONLY NEXTGEMS CYCLE 2
dataset_dict = hits.to_dataset_dict(cdf_kwargs={"chunks": {"time": 1}})
--> The keys in the returned dictionary of datasets are constructed as follows:
'project.institution_id.source_id.experiment_id.simulation_id.realm.frequency.time_reduction.grid_label.level_type'
/sw/spack-levante/mambaforge-4.11.0-0-Linux-x86_64-sobz6z/lib/python3.9/site-packages/intake_esm/merge_util.py:270: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
1. Consolidating metadata in this existing store with zarr.consolidate_metadata(),
2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
ds = xr.open_zarr(path, **zarr_kwargs)
100.00% [5/5 02:48<00:00]
[2]:
%run gem_helpers.ipynb
get_from_cat(cat.search(uri="refe.*ICMG.*"), ["simulation_id", "uri"])
[2]:
simulation_id uri
0 HQYS reference::/work/bm1235/a270046/cycle2-sync/tco2559-ng5/ICMGGall_update/json.dir/atm2d.json
1 HR0N reference::/work/bm1235/a270046/cycle2-sync/tco3999-ng5/ICMGGc2/json.dir/atm2d.json
2 HR2N reference::/work/bm1235/a270046/cycle2-sync/tco1279-orca025/nemo_deep/ICMGGc2/json.dir/atm2d.json
3 HR2N_nodeep reference::/work/bm1235/a270046/cycle2-sync/tco1279-orca025/nemo/ICMGGc2/json.dir/atm2d.json
[3]:
import os
for x in get_list_from_cat(cat.search(uri="refe.*"), "uri"):
if not os.access(x[11:], os.R_OK):
print(x[11:])
[4]:
# Additional imports for this example (yes, we are generous)
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import re
import os
import getpass
# map projections
import cartopy.crs as crs
# more colormaps
import cmocean
# datashader (used by plot_map below as `ds` and `dsshow`)
import datashader as ds
from datashader.mpl_ext import dsshow
import warnings, matplotlib
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=matplotlib.cbook.mplDeprecation)
A bunch of helper functions and temporary directories.
[5]:
def fix_time_axis(data):
"""Turn icon's yyyymmdd.f time axis into actual datetime format.
This will fail for extreme values, but should be fine for a few centuries around today.
"""
if (data.time.dtype != "datetime64[ns]") and (
data["time"].units == "day as %Y%m%d.%f"
):
data["time"] = pd.to_datetime(
["%8i" % x for x in data.time], format="%Y%m%d"
) + pd.to_timedelta([x % 1 for x in data.time.values], unit="d")
def get_from_cat(catalog, columns):
"""A helper function for inspecting an intake catalog.
Call with the catalog to be inspected and a list of columns of interest."""
if type(columns) == type(""):
columns = [columns]
return catalog.df[columns].drop_duplicates().sort_values(columns)
# make temporary directories in scratch
uid = getpass.getuser()
image_path = f"/scratch/{uid[0]}/{uid}/intake_demo_plots"
os.makedirs(image_path, exist_ok=True)
data_cache_path = f"/scratch/{uid[0]}/{uid}/intake_demo_data"
os.makedirs(data_cache_path, exist_ok=True)
## Preparing the data: Putting the data on a grid¶
For ICON Data, we can use the ‘grid_file_uri’ to access the grid. But we know where the file is saved locally and don’t need to download it via the internet. For IFS Data, the grid is served with the data.
[6]:
# to plot, we need to associate a grid file with the data. The easiest way is to get the info from the grid_file_uri attribute in the data.
data_dict = {}
for name, dataset in dataset_dict.items():
print(name)
if "ICON-ESM" in name: # ICON needs a bit of cleanup.
grid_file_path = "/pool/data/ICON" + dataset.grid_file_uri.split(".de")[1]
grid_data = xr.open_dataset(grid_file_path).rename(
cell="ncells", # the dimension has different names in the grid file and in the output.
clon="lon", # standardize the coordinate names.
clat="lat", # standardize the coordinate names.
)
data = xr.merge((dataset, grid_data))
fix_time_axis(data)
data_dict[name] = data
else:
data_dict[name] = dataset
nextGEMS.ECMWF-AWI.IFS-FESOM.nextgems_cycle2.HR0N.atm.1hour.inst-or-acc.gn.2d
nextGEMS.ECMWF-AWI.IFS-FESOM.nextgems_cycle2.HQYS.atm.1hour.inst-or-acc.gn.2d
nextGEMS.ECMWF-AWI.IFS-FESOM.nextgems_cycle2.HR2N.atm.1hour.inst-or-acc.gn.2d
nextGEMS.ECMWF-AWI.IFS-FESOM.nextgems_cycle2.HR2N_nodeep.atm.1hour.inst-or-acc.gn.2d
nextGEMS.MPI-M.ICON-ESM.nextgems_cycle2.ngc2009.atm.30minute.inst.gn.ml
Using datashader is much faster than the raw plot functions from matplotlib. Here we project the data once using cartopy. This can be skipped for simple lat/lon plots.
[7]:
# Very fast plotting on projected maps with Pandas and Datashader
def transform_coords(data, projection):
    """Projects coordinates of the dataset into the desired map projection.
    Expects data.lon to be the longitudes and data.lat to be the latitudes
    """
    lon = data.lon
    lat = data.lat
    if "rad" in data.lon.units:  # icon comes in radian, ifs in degree
        lon = np.rad2deg(lon)
        lat = np.rad2deg(lat)
    coords = projection.transform_points(  # this re-projects our data to the desired grid
        crs.Geodetic(), lon, lat
    )
    return coords
def plot_map(
data,
projection,
coords=None, # we can compute them ourselves, if given data with lat and lon attached.
colorbar_label="",
title="",
coastlines=True,
extent=None,
**kwargs,
):
"""Use datashader to plot a dataset from a pandas dataframe with a given projection.
Required: data and projection.
All other arguments are optional. Additional arguments will be passed directly to the plot routine (dsshow).
"""
# If we are not given projected data, we project it ourselves.
if coords is None:
coords = projection.transform_points(crs.Geodetic(), data.lon, data.lat)
# For datashader, we need the data in a pandas dataframe object.
df = pd.DataFrame(
data={
"val": np.squeeze(
data
), # here we select which data to plot! - the squeeze removes empty dimensions.
"x": coords[:, 0], # from the projection above
"y": coords[:, 1], # from the projection above
}
)
# the basis for map plotting.
fig, ax = plt.subplots(figsize=(10, 4), subplot_kw={"projection": projection})
fig.canvas.draw_idle() # necessary to make things work
# the plot itself
artist = dsshow(df, ds.Point("x", "y"), ds.mean("val"), ax=ax, **kwargs)
# making things pretty
plt.title(title)
if coastlines:
ax.coastlines(color="grey")
fig.colorbar(artist, label=colorbar_label)
# for regional plots:
if extent is not None:
ax.set_extent(extent)
return fig
[8]:
projection = (
crs.Mollweide()
) # We will plot in mollweide projection - needs to match the plot calls later.
# We store all coordinates in a dictionary for later use.
coords_dict = (
{}
) ### WARNING THIS WILL BECOME BIG IF YOU DO IT FOR MORE THAN A FEW DATASETS!
for name, data in data_dict.items():
coords_dict[name] = transform_coords(data, projection)
[9]:
# Plotting the projected data
time = "2020-02-28T12:00:00"
# level = 0 # there is only level 0 in tas
for name, data in data_dict.items():
var = data.intake_esm_varname[0]  # variable names vary a bit by model. Here we get the correct one.
timestr = str(data.time.sel(time=time).values)[:-10]  # take the correct time and remove trailing zeros
fig = plot_map(
data=data[var].sel(time=time), # our data for the plot
projection=projection, # generated in the cell above.
coords=coords_dict[name], # generated in the cell above.
cmap=cmocean.cm.thermal, # nice colorbar for temperature
vmin=240, # minimum for the plot ~-33C
vmax=310, # maximum for the plot ~37C
colorbar_label="near-surface air temperature in K",
title=f"{name}\n{timestr}", # dataset name and time stamp.
)
filename = f"{image_path}/{var}_datashader_mollweide_{name}_{re.sub(':','',timestr)}.png" # save to /scratch/
print("saving to ", filename)
plt.savefig(filename)
saving to /scratch/k/k202134/intake_demo_plots/2t_datashader_mollweide_nextGEMS.ECMWF-AWI.IFS-FESOM.nextgems_cycle2.HR0N.atm.1hour.inst-or-acc.gn.2d_2020-02-28T120000.png
## Regional plots just require one additional argument¶
[10]:
# Plotting the projected data
time = "2020-02-28T12:00:00"
level = 0 # there is only level 0 in tas, but we still need to provide it
for name, data in data_dict.items():
var = data.intake_esm_varname[0]  # variable names vary a bit by model. Here we get the correct one.
timestr = str(data.time.sel(time=time).values)[:-10]  # take the correct time and remove trailing zeros
print(timestr)
fig = plot_map(
data=data[var].sel(time=time), # our data for the plot
projection=projection, # generated in the cell above.
coords=coords_dict[name], # generated in the cell above.
cmap=cmocean.cm.thermal, # nice colorbar for temperature
vmin=240, # minimum for the plot ~-33C
vmax=310, # maximum for the plot ~37C
colorbar_label="near-surface air temperature in K",
title=f"{name}\n{timestr}", # dataset name and time stamp.
extent=[-20, 45, 25, 70],  # HERE WE CHOOSE THE EXTENT
)
filename = f"{image_path}/{var}_datashader_mollweide_regional_{name}_{re.sub(':','',timestr)}.png"  # a new filename for the regional plot (the "regional" tag is our choice)
print("saving to ", filename)
plt.savefig(filename)
2020-02-28T12:00:00
[ ]:
https://docs.celo.org/es/blog/2022/01/05/no-code-erc721
# Deploy an NFT to Celo
How to deploy ERC721 tokens (NFTs) on the Celo network using autogenerated code.
## Getting Started
In this example, we will be using IPFS for off-chain storage, but you can use whatever off-chain storage mechanism you want.
2. Add the Celo network to Metamask. We suggest adding the Alfajores testnet to Metamask as well, so you can test contract deployments before deploying to mainnet.
3. Add a small amount of CELO to your Metamask account. In this example, we will deploy to the Alfajores testnet, so we need Alfajores CELO, which you can get from the faucet here.
1. Go to https://app.pinata.cloud/ and sign up for an account if you don’t have one already. Pinata is a service that allows you to easily upload files to IPFS.
2. Upload your NFT images to IPFS. Because storing data on a blockchain can be expensive, NFTs often reference off-chain data. In this example, We are creating a set of NFTs that reference pictures of trees. We uploaded all of the images of trees to IPFS individually. The names of the images correspond to the token ID. This isn’t necessary, we just did it for convenience. Notice that each image has a corresponding CID hash, this is the unique identifier for that file or folder.
3. Once all of your images have been uploaded, you will need to prepare the token metadata in a new folder.
1. We created a folder called “prosper factory metadata”. You can view the contents of the folder here. The folder contains 14 files, numbered 0-13. The names of these files are important. These file names correspond to the token ID of each NFT that will be created by the contract. Make sure that there are no extensions (.txt, .json, .jpeg, .png) on your files.
2. Click on one of the files. The files contain the NFT metadata. In this simple example, the metadata only contains a reference to the unique tree image. You can view the image in a browser that supports IPFS (we are using Brave) here. Each file should have a unique image reference. You will need to create a similarly structured folder containing metadata for all of the NFTs that you want to create.
4. Upload the folder containing all of the token metadata to IPFS. This will make your NFT data publicly available. We called ours “prosper factory metadata”. Note the CID of this folder. We will need it shortly.
## Design and Deploy the Smart Contracts
1. Go to https://docs.openzeppelin.com/contracts/4.x/wizard
2. Select ERC721 as your token choice.
1. We are calling our token the ProsperityFactory, symbol PF.
2. We entered the IPFS CID of our token metadata folder (prosper factory metadata) in the “Base URI” field. Be sure to add a trailing “/” to the base URI, the token ID will be appended to the end of the base URI to get the IPFS metadata file. So the complete Base URI for our NFT contract is ipfs://QmdmA3gwGukA8QDPH7Ypq1WAoVfX82nx7SaXFvh1T7UmvZ/. Again, you can view the folder here.
3. We made the token mintable and it will automatically increment the token IDs as the tokens are minted. The counter starts at 0 and adds 1 to each successive token. It is important that the file contents of the IPFS metadata folder are labeled accordingly (ie. 0-13) and correspond to the token IDs.
4. The contract is also marked Ownable, meaning that only the owner of the contract (which is initially set to the account that deploys the contract) can mint new NFTs.
4. Click “Open in Remix”. Remix will pop open with your contract code already filled in.
5. Click the big blue button on the left side of Remix that says “Compile contract-xxxx.sol”.
6. Once the contract is compiled, click the Ethereum logo on the left side of the window. A new sidebar will appear.
7. In the “Environment” dropdown, select “Injected Web3”. This will connect Remix and Metamask. Make sure that Metamask is connected to the correct network. In this example, We are deploying to the Alfajores testnet, so we see a textbox below the dropdown that says Custom (44787) network. 44787 is the network id for Alfajores.
8. Select the contract that you want to deploy. We titled the contract the ProsperityFactory.
9. Click Deploy. Metamask will pop up asking you to confirm the transaction.
10. Once the contract is deployed, Remix will show a newly deployed contract on the bottom left corner of the window. Expand the ProsperityFactory dropdown to see all of the contract functions. You can see the deployed ProsperityFactory NFT contract here.
11. Let’s mint the first NFT. To do that we will call the safeMint function. The safeMint function needs an account address as an input, so it knows who to mint the token to. We’ll just enter the first Metamask address and click the orange button. Metamask will pop up; confirm the transaction. When the transaction is confirmed, this will have minted the first NFT, with token ID 0.
12. Check the token metadata. You can verify that the token was minted by calling the “tokenURI” function with the expected token ID. We call the contract with token ID 0 and it returns an IPFS reference.
13. Navigating to this IPFS reference will show the token metadata for that NFT.
14. Going to that IPFS reference will show the image for the token.
We went through the same process and minted all 14 NFTs from this contract.
That’s it! You now know how to create your own NFT contract with corresponding metadata!
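The Base URI mechanics described above can be sketched in a few lines (Python used for illustration): a standard ERC721 tokenURI is the base URI with the decimal token ID appended, which is why the trailing “/” and the extension-less file names 0–13 matter.

```python
# Base URI of the metadata folder uploaded to IPFS (from the wizard step above)
BASE_URI = "ipfs://QmdmA3gwGukA8QDPH7Ypq1WAoVfX82nx7SaXFvh1T7UmvZ/"

def token_uri(token_id: int) -> str:
    # ERC721's default tokenURI is baseURI + tokenId (decimal, no file extension),
    # so token 0 resolves to the file named "0" in the metadata folder
    return BASE_URI + str(token_id)

print(token_uri(0))   # ipfs://QmdmA3gwGukA8QDPH7Ypq1WAoVfX82nx7SaXFvh1T7UmvZ/0
```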
Let me know what you end up building and reach out if you have any questions, @critesjosh_ on Twitter or joshc#0001 on Discord. Join the Celo discord at https://chat.celo.org.
https://kili-technology.com/blog/webhooks/
# Kili Tutorial: How to chain annotation projects with webhooks on Kili
In this tutorial, we will show how to use webhooks to monitor actions in Kili, such as a label creation. The goal of this tutorial is to illustrate some basic components and concepts of Kili in a simple way, but also to dive into the actual process of iteratively developing real applications in Kili.
For an overview of Kili, visit kili-technology.com. You can also check out the Kili documentation at https://cloud.kili-technology.com/docs. Our goal is to export labels that can predict whether an image contains a Porsche or a Tesla.
The tutorial is divided into two parts:
1. Why use webhooks?
2. Using Kili’s webhook in Python
## 1. Why use webhooks?
Webhooks let you react to particular actions in Kili’s database by triggering a callback whenever an action is completed. For instance, here, every time a label is created in the frontend (upper panel), the label can be logged in Python (lower right panel):
from IPython.display import Image
Image(url='https://raw.githubusercontent.com/kili-technology/kili-playground/e8623b9a6e1de273da6494b5cb1c89f7b8005a9a/recipes/img/websockets.gif')
## 2. Using Kili’s webhook in Python
Kili Playground exposes a method label_created_or_updated that lets you listen for all actions on labels:
• creation of a new label
• update of an existing label
First of all, you need to authenticate:
import os
!pip install kili
from kili.authentication import KiliAuth
from kili.playground import Playground

email = os.getenv('KILI_EMAIL')
password = os.getenv('KILI_PASSWORD')
api_endpoint = 'https://cloud.kili-technology.com/api/label/graphql'
kauth = KiliAuth(email=email, password=password, api_endpoint=api_endpoint)
playground = Playground(kauth)
Then you can define a callback that will be triggered each time a label gets created/updated:
project_id = 'CHANGE_ME_FOR_YOUR_PROJECT_ID'

def callback(id, data):
    print(f'New data: {data}\n')

playground.label_created_or_updated(
    project_id=project_id, callback=callback)
## Summary
In this tutorial, we accomplished the following:
We introduced the concept of webhook and we used label_created_or_updated to trigger a webhook.
You can also visit the Kili website or Kili documentation for more info!
https://www.physicsforums.com/threads/evaluate-this-surface-integral.628447/
# Evaluate this surface Integral
1. Aug 15, 2012
### unscientific
1. The problem statement, all variables and given/known data
The problem is attached in the picture.
3. The attempt at a solution
The suggested solution went straight into the hardcore integration. I was trying a different approach by changing the variables (x,y) into (u,v) which appear to make the integration much easier...
The Jacobian was found to be J = ρ
Not sure where I went wrong!
2. Aug 15, 2012
### clamtrox
Where did you get the upper limit for ρ integral? Now you're integrating over a circle, not a rectangle.
3. Aug 15, 2012
### unscientific
Hmm, if you draw a rectangle with sides a and b, ρ ranges from 0 to a maximum of √(a²+b²), which is at the corner.
The angle ranges from 0 to π/2.
4. Aug 15, 2012
### HallsofIvy
Staff Emeritus
You cannot set the $\rho$ and $\theta$ limits independently. The upper limit on $\rho$ will be a rather complicated function of $\theta$.
5. Aug 15, 2012
### clamtrox
This is true, but if you integrate ρ from 0 to √(a²+b²) for each value of the angle, you are integrating over a circle. Actually the calculation is probably a bit simpler anyway in polar coordinates, so you should definitely work it out. The easiest way is to split the angular integral into two parts: consider first $\tan(\theta) < b/a$. You should be able to find a simple enough expression for the upper bound of ρ in this region.
6. Aug 15, 2012
### unscientific
Not sure how to do it... I know φ runs from 0 to π/2, but I don't even know the limits of ρ...
7. Aug 15, 2012
### clamtrox
You need to calculate the distance from the rectangle's corner to an arbitrary point on an opposing edge. Can you do that?
8. Aug 15, 2012
### unscientific
If tan φ < b/a, then ρ = a/cos φ.
If tan φ > b/a, then ρ = b/sin φ.
If tan φ = b/a, then ρ = √(a²+b²).
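[Editor's note: these piecewise limits can be sanity-checked numerically. A minimal sketch; `rho_max` is an illustrative name, not from the thread.]

```python
# Sanity check of the piecewise upper limit for rho: the distance from the
# corner at the origin to the far edge of the rectangle [0, a] x [0, b]
# along the direction phi.
from math import atan2, cos, sin, sqrt, pi, isclose

def rho_max(phi, a, b):
    if phi < atan2(b, a):      # ray exits through the edge x = a
        return a / cos(phi)
    return b / sin(phi)        # ray exits through the edge y = b

a, b = 3.0, 2.0
phi_c = atan2(b, a)  # diagonal angle, where tan(phi_c) = b/a
# At the diagonal both formulas agree and give the corner distance sqrt(a^2 + b^2).
assert isclose(a / cos(phi_c), sqrt(a * a + b * b))
assert isclose(b / sin(phi_c), sqrt(a * a + b * b))
# Along the axes the limit reduces to the side lengths.
assert isclose(rho_max(0.0, a, b), a)
assert isclose(rho_max(pi / 2, a, b), b)
```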
9. Aug 15, 2012
### clamtrox
Perfect. Those are your integration limits for the radius: now you just need to split the angle integral into
$$\int_0^{\pi/2} d\phi = \int_0^{\arctan(b/a)} d\phi + \int_{\arctan(b/a)}^{\pi/2} d\phi$$ plug in the radius limits and integrate.
10. Aug 16, 2012
### unscientific
Nice. But this really looks more complex than doing it the 'hardcore' way..
11. Aug 16, 2012
### clamtrox
I'm sure it does, if you're not used to changing variables in multidimensional integrals. Did you actually do the calculation? I think you would have found that the integrals you need to compute in polar coordinates are a lot easier than those in Cartesian coordinates. For the first term, you'd get
$$\int_0^{\arctan(b/a)} d\phi \int_0^{a/\cos(\phi)} dr \frac{r^2 \cos(\phi)}{r^2} = a \int_0^{\arctan(b/a)} d\phi = a \arctan(b/a)$$
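[Editor's note: the polar integrand $r^2\cos(\phi)/r^2$ above, with Jacobian $r$, is consistent with a Cartesian integrand $x/(x^2+y^2)$ over $[0,a]\times[0,b]$; the actual problem is only in the attachment, so this is an assumption. Under that assumption the second angular region contributes $b\ln\big(\sqrt{a^2+b^2}/b\big)$, and the full closed form can be checked against direct numerical quadrature:]

```python
# Cross-check of the split polar computation. Assumption (the problem statement
# is in an attachment): the Cartesian integrand is f(x, y) = x / (x^2 + y^2)
# on the rectangle [0, a] x [0, b].
from math import atan, log

def closed_form(a, b):
    # a*arctan(b/a) from the first angular region, b*ln(sqrt(a^2+b^2)/b) from the second.
    return a * atan(b / a) + b * log((a * a + b * b) ** 0.5 / b)

def midpoint_2d(f, a, b, n=400):
    # Midpoint rule; it never samples the (integrable) singularity at (0, 0).
    hx, hy = a / n, b / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            total += f(x, (j + 0.5) * hy)
    return total * hx * hy

a, b = 2.0, 1.0
numeric = midpoint_2d(lambda x, y: x / (x * x + y * y), a, b)
print(closed_form(a, b), numeric)  # the two values agree to roughly 2 decimal places
```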
|
https://proofwiki.org/wiki/Area_contained_by_Apotome_and_Binomial_Straight_Line_Commensurable_with_Terms_of_Apotome_and_in_same_Ratio/Porism
|
# Area contained by Apotome and Binomial Straight Line Commensurable with Terms of Apotome and in same Ratio/Porism
## Theorem
In the words of Euclid:
And it is made manifest to us by this also that it is possible for a rational area to be contained by irrational straight lines.
## Proof
Directly apparent from the construction.
$\blacksquare$
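A concrete modern instance (an editorial illustration, not part of the original text): the binomial $\sqrt 3 + \sqrt 2$ and the apotome $\sqrt 3 - \sqrt 2$ are both irrational straight lines, yet the rectangle they contain has area $(\sqrt 3 + \sqrt 2)(\sqrt 3 - \sqrt 2) = 3 - 2 = 1$, which is rational. A quick numerical check:

```python
# Two irrational straight lines containing a rational area.
from math import sqrt, isclose

binomial = sqrt(3) + sqrt(2)   # irrational
apotome = sqrt(3) - sqrt(2)    # irrational
assert isclose(binomial * apotome, 1.0)  # the contained area is rational
```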
## Historical Note
This proof is Proposition $114$ of Book $\text{X}$ of Euclid's The Elements.
|
https://www.sparrho.com/item/approximating-the-noise-sensitivity-of-a-monotone-boolean-function/21db184/
|
# Approximating the noise sensitivity of a monotone Boolean function
Research paper by Ronitt Rubinfeld, Arsen Vasilyan
Indexed on: 16 Apr '19. Published on: 14 Apr '19. Published in: arXiv - Computer Science - Data Structures and Algorithms
#### Abstract
The noise sensitivity of a Boolean function $f: \{0,1\}^n \rightarrow \{0,1\}$ is one of its fundamental properties. A function of a positive noise parameter $\delta$, it is denoted as $NS_{\delta}[f]$. Here we study the algorithmic problem of approximating it for monotone $f$, such that $NS_{\delta}[f] \geq 1/n^{C}$ for constant $C$, and where $\delta$ satisfies $1/n \leq \delta \leq 1/2$. For such $f$ and $\delta$, we give a randomized algorithm performing $O\left(\frac{\min(1,\sqrt{n} \delta \log^{1.5} n) }{NS_{\delta}[f]} \text{poly}\left(\frac{1}{\epsilon}\right)\right)$ queries and approximating $NS_{\delta}[f]$ to within a multiplicative factor of $(1\pm \epsilon)$. Given the same constraints on $f$ and $\delta$, we also prove a lower bound of $\Omega\left(\frac{\min(1,\sqrt{n} \delta)}{NS_{\delta}[f] \cdot n^{\xi}}\right)$ on the query complexity of any algorithm that approximates $NS_{\delta}[f]$ to within any constant factor, where $\xi$ can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on $n$. We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield previously unknown lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
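For intuition, $NS_{\delta}[f]$ can be estimated with the obvious sampling baseline: draw $x$ uniformly, flip each bit independently with probability $\delta$ to get $y$, and count how often $f(x) \neq f(y)$. The sketch below is this naive baseline (not the paper's query-efficient algorithm, whose point is precisely to use fewer queries); `majority` is just an example monotone function.

```python
# Naive Monte-Carlo estimate of the noise sensitivity NS_delta[f]: the
# probability that f(x) != f(y), where x is uniform over {0,1}^n and y
# flips each bit of x independently with probability delta.
import random

def majority(bits):
    # An example monotone Boolean function: 1 iff more than half the bits are 1.
    return int(sum(bits) > len(bits) // 2)

def estimate_ns(f, n, delta, trials=5000, rng=random):
    flips = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < delta) for b in x]
        flips += f(x) != f(y)
    return flips / trials

random.seed(0)
print(estimate_ns(majority, n=101, delta=0.1))  # around 0.2 for this f
```

Each trial costs two queries to f, so this baseline needs on the order of $1/NS_{\delta}[f]$ queries just to see one disagreement.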
|
https://boostezwp.com/edgx-order-godbvjp/does-up-to-mean-including-8d8ef4
|
"Up until" or "up to" are used to indicate the latest time at which something can happen. As a function word, "up to" indicates extension as far as a specified place ("up to, but not more than three meters"), and it can also mean doing, involved with (often with implications of mischief), or adequate: "What are you up to?", "Are you up to lifting something that heavy?". As part of a greeting, "what have you been up to?" means "What are you doing right now?" if the person sees you regularly; if the person does not see you very often and is checking in with you after 6 months or a year, it would mean "What is going on in your life?".

Whether "up to" includes its endpoint is a common source of confusion. "Up to 18" is poor usage for describing an age requirement; it should be "up to the age of 18, inclusive". Anything more would be 18 or over; anything less would be 17 or under. From 12.01am on your 18th birthday you are 18, and at 12.01am on your 25th birthday you are 25 and no longer in the 18-24 age bracket.

The same ambiguity appears in travel listings. "Room sleeps 3 guests (up to 2 children)" comes up routinely in room descriptions (in this case in Spain), but what does it actually mean: 3 adults plus two children? Likewise "Sleeps 6 people (including up to 5 children)": does it mean I can only have 6 guests, or can I have 6 guests and up to 5 more as long as they are children?

In mathematical notation the distinction is explicit. As a mnemonic, think of the square bracket as grabbing on to its value, meaning "up to and including", while the round parenthesis is softer and less restrictive, meaning "up to but not including". Separately, in mathematics the phrase "up to" conveys that some objects in the same class, while distinct, may be considered equivalent under some condition or transformation: considering all members of an equivalence class the same. In shell paths, only the "~" is substituted by "$HOME" and the rest of the string is then parsed, so "up to and including the slash" would mean "~/".

In contracts and other legal declarations, "you may be disciplined up to and including termination" means the employer may go as far as terminating you, for example for failing to report a DWI. (This answer does not constitute legal advice; you should contact an attorney to confirm or research further any statements made here.) The phrase also appears in sentences like "Access to employment, education and training advice and support; up to and including work through our social enterprise." Related contract language is "including, but not limited to", as in "permitting the company to change terms and conditions of employment with or without notice or cause, including, but not limited to, termination, demotion, promotion, transfer, compensation, benefits, duties and locations of work": the things named are part of something larger that may also have other parts. For example, the alphabet includes, but is not limited to, the letters A through E, as well as J, K, and W. Note that "including but not limited to" and "etc." are redundant expressions that mean the same thing, "the things on this list, plus other similar things", so a better-written sentence would use only one of the two.
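The bracket mnemonic maps directly onto half-open ranges in programming (a small illustration): Python's range(a, b) is "up to but not including" b, like the round parenthesis in [a, b), so an "up to and including" range needs b + 1.

```python
# "Up to but not including" vs. "up to and including": Python's range(a, b)
# is half-open, [a, b), matching the round-parenthesis convention.
exclusive = list(range(1, 5))      # up to but NOT including 5
inclusive = list(range(1, 5 + 1))  # up to AND including 5

print(exclusive)  # [1, 2, 3, 4]
print(inclusive)  # [1, 2, 3, 4, 5]
```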
Questioning Evangelism Summary, Year 1 Homework Sheets, 5-minute Bedtime Stories Princess, 2132 Rolston Dr, Charlotte, Nc 28207, Year 1 Homework Sheets, Linksys Re9000 Setup, Afternoon Tea Kenmare, Positive And Negative Sentences Exercises With Answers, Task-based Language Teaching Pdf, Kayak Rental Ct, Latin Books Online,
|
2021-10-28 08:38:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27257412672042847, "perplexity": 2871.3391296406357}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00653.warc.gz"}
|
# Socially Responsible Algorithms at Data Science DC
Troubling instances of the mosaic effect — in which different anonymized datasets are combined to reveal unintended details — include the tracking of celebrity cab trips and the identification of Netflix user profiles. Also concerning is the tremendous influence wielded by corporations and their massive data stores, most notoriously embodied by Facebook’s secret psychological experiments
# Problems with the p-value -- References
On December 11th, Prof. Regina Nuzzo from Gallaudet University talked at Data Science DC about Problems with the p-value. The event was well-received. If you missed it, the slides and audio are available. Here we provide Dr. Nuzzo's references and links from the talk, which are on their own a great resource for those considering communication about statistical reliability. (Note that the five topics she covered used examples from highly-publicized studies of sexual behavior.)
# Event Recap: DSDC June Meetup
This is a guest post by Alex Evanczuk, a software engineer at FiscalNote. Hello DC2! My name is Alex Evanczuk, and I recently joined a government data startup right here in the nation's capital that goes by the name of FiscalNote. Our mission is to make government data easily accessible, transparent, and understandable for everyone. We are a passionate group of individuals and are actively looking for other like-minded people who want to see things change. If this is you, and particularly if you are a software developer (front-end, with experience in Ruby on Rails), please reach out to me at alex@fiscalnote.com and I can put you in touch with the right people.
The topics covered by the presenters at June’s Data Science DC Meetup were varied and interesting. Subjects included spatial forecasting in uncertain environments, cell phone surveys in Africa (GeoPoll), causal inference models for improving the lives and prospects of Children and Youth (Child Trends), and several others.
I noticed a number of fascinating trends about the presentations I saw. The first was the simple and unadulterated love of numbers and their relationships to one another. Each presenter proudly explained the mathematical underpinnings of the models and assumptions used in their research, and most had slides that contained nothing more than a single formula or graph. In my brief time in academia, I've noticed that to most statisticians and mathematicians, numbers are their poetry, and this rang true at the event as well.
To most statisticians and mathematicians, numbers are their poetry.
The second was something that is perhaps well known to data researchers, but perhaps not so much to others, and that was that the advantages and influences of data science can extend into any industry. From business, to social work, to education, to healthcare, data science can find a way to improve our understanding of any field.
More important than the numbers, however, is the fact that behind every data point, integer, and graph, is a human being. The human beings behind our data inspire our use of numbers and their deep understanding to develop axiomatically correct solutions for real world problems. The researchers presented data that told us how we might better understand emotional sentiment in developing countries, or make decisions on cancer treatments, or help children reach their boundless potential. For me, this is what data science is all about--how the appreciation of mathematics can help us improve the lives of human beings.
Missed the Meetup? You can review the audio files from the event here and access the slide deck here.
# Where are the Deep Learning Courses?
This is a guest post by John Kaufhold. Dr. Kaufhold is a data scientist and managing partner of Deep Learning Analytics, a data science company based in Arlington, VA. He presented an introduction to Deep Learning at the March Data Science DC.
## Why aren't there more Deep Learning talks, tutorials, or workshops in DC2?
It's been about two months since my Deep Learning talk at Artisphere for DC2. Again, thanks to the organizers (especially Harlan Harris and Sean Gonzalez) and the sponsors (especially Arlington Economic Development). We had a great turnout and a lot of good questions that night. Since the talk and at other Meetups since, I've been encouraged by the tidal wave of interest from teaching organizations and prospective students alike.
First some preemptive answers to the “FAQ” downstream of the talk:
• Mary Galvin wrote a blog review of this event.
• Yes, the slides are available.
• Yes, corresponding audio is also available (thanks Geoff Moes).
• A recently "reconstructed" talk combining the slides and audio is also now available!
• Where else can I learn more about Deep Learning as a data scientist? (This may be a request to teach, a question about how to do something in Deep Learning, a question about theory, or a request to do an internship. They're all basically the same thing.)
It's this last question that's the focus of this blog post. Lots of people have asked and there are some answers out there already, but if people in the DC MSA are really interested, there could be more. At the end of this post is a survey—if you want more Deep Learning, let DC2 know what you want and together we'll figure out what we can make happen.
## There actually was a class...
Aaron Schumacher and Tommy Shen invited me to come talk in April for General Assemb.ly's Data Science course. I did teach one Deep Learning module for them. That module was a slightly longer version of the talk I gave at Artisphere combined with one abbreviated “hands on” module on unsupervised feature learning based on Stanford's tutorial. It didn't help that the tutorial was written in Octave and the class had mostly been using Python up to that point. Though feedback was generally positive for the Deep Learning module, some students wondered if they could get a little more hands on and focus on specifics. And I empathize with them. I've spent real money on Deep Learning tutorials that I thought could have been much more useful if they were more hands on.
Though I've appreciated all the invitations to teach courses, workshops, or lectures, except for the General Assemb.ly course, I've turned down all the invitations to teach something more on Deep Learning. This is not because the data science community here in DC is already expert in Deep Learning or because it's not worth teaching. Quite the opposite. I've not committed to teach more Deep Learning mostly because of these three reasons:
1. There are already significant Deep Learning Tutorial resources out there,
2. There are significant front end investments that neophytes need to make for any workshop or tutorial to be valuable to both the class and instructor and,
3. I haven't found a teaching model in the DC MSA that convinces me teaching a “traditional” class in the formal sense is a better investment of time than instruction through project-based learning on research work contracted through my company.
## Resources to learn Deep Learning
There are already many freely available resources to learn the theory of Deep Learning, and it's made even more accessible by many of the very lucid authors who participate in this community. My talk was cherry-picked from a number of these materials and news stories. Here are some representative links that can connect you to much of the mainstream literature and discussion in Deep Learning:
• The tutorials link on the DeepLearning.net page
• NYU's Deep Learning course course material
• Yann LeCun's overview of Deep Learning with Marc'Aurelio Ranzato
• Geoff Hinton's Coursera course on Neural Networks
• A book on Deep Learning from the Microsoft Speech Group
• A reading list from Carnegie Mellon with student notes on many of the papers
• A Google+ page on Deep Learning
This is the first reason I don't think it's all that valuable for DC to have more of its own Deep Learning “academic” tutorials. And by “academic” I mean tutorials that don't end with students leaving the class successfully implementing systems that learn representations to do amazing things with those learned features. I'm happy to give tutorials in that “academic” direction or shape them based on my own biases, but I doubt I'd improve on what's already out there. I've been doing machine learning for 15 years, so I start with some background to deeply appreciate Deep Learning, but I've only been doing Deep Learning for two years now. And my expertise is self-taught. And I never did a post-doc with Geoff Hinton, Yann LeCun or Yoshua Bengio. I'm still learning, myself.
## The investments to go from 0 to Deep Learning
It's a joy to teach motivated students who come equipped with all the prerequisites for really mastering a subject. That said, teaching a less equipped, uninvested and/or unmotivated studentry is often an exercise in joint suffering for both students and instructor.
I believe the requests to have a Deep Learning course, tutorial, workshop or another talk are all well intentioned... Except for Sean Gonzalez—it creeps me out how much he wants a workshop. But I think most of this eager interest in tutorials overlooks just how much preparation a student needs to get a good return on their time and tuition. And if they're not getting a good return, what's the point? The last thing I want to do is give the DC2 community a tutorial on “the Past” of neural nets. Here are what I consider some practical prerequisites for folks to really get something out of a hands-on tutorial:
• An understanding of machine learning, including
• optimization and stochastic gradient descent
• hyperparameter tuning
• bagging
• at least a passing understanding of neural nets
• A pretty good grasp of Python, including
• a working knowledge of how to configure different packages
• some appreciation for Theano (warts and all)
• a good understanding of data preparation
• Some recent CUDA-capable NVIDIA GPU hardware* configured for your machine
• CUDA drivers
• NVIDIA's CUDA examples compiled
*hardware isn't necessarily a prerequisite, but I don't know how you can get an understanding of any more than toy problems on a CPU
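For readers taking stock of the prerequisites above, the first item, stochastic gradient descent, can be sketched in a few lines of plain Python. This is a toy one-parameter least-squares fit with invented data; real Deep Learning applies the same update rule to millions of parameters:

```python
import random

# Toy stochastic gradient descent (SGD): fit y = w*x by updating w from one
# randomly chosen example per step. The learning rate lr is the kind of
# hyperparameter you would tune in a real system.
random.seed(0)
data = [(0.1 * i, 3.0 * (0.1 * i)) for i in range(1, 21)]  # true slope is 3

w = 0.0    # start from an uninformed guess
lr = 0.1   # learning rate (hyperparameter)
for _ in range(2000):
    x, y = random.choice(data)     # the "stochastic" part: one sample at a time
    grad = 2 * (w * x - y) * x     # gradient of the squared error (w*x - y)**2
    w -= lr * grad                 # step downhill

print(round(w, 2))  # converges to the true slope, 3.0
```

The same loop, with the gradient computed by backpropagation through many layers, is what churns through those millions of training images on a GPU.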
Resources like the ones above are great for getting a student up to speed on the “academic” issues of understanding deep learning, but that only scratches the surface. Once students know what can be done, if they’re anything like me, they want to be able to do it. And at that point, students need a pretty deep understanding of not just the theory, but of both hardware and software to really make some contributions in Deep Learning. Or even apply it to their problem.
Starting with the hardware, let's say, for sake of argument, that you work for the government or are for some other arbitrary reason forced to buy Dell hardware. You begin your journey justifying the $4000 purchase for a machine that might be semi-functional as a Deep Learning platform when there's a $2500 guideline in your department. Individual Dell workstations are like Deep Learning kryptonite, so even if someone in the n layers of approval bureaucracy somehow approved it, it's still the beginning of a frustrating story with an unhappy ending. Or let's say you build your own machine. Now add “building a machine” for a minimum of about $1500 to the prerequisites. But to really get a return in the sweet spot of those components, you probably want to spend at least $2500. Now the prerequisites include a dollar investment in addition to talent and tuition! Or let’s say you’re just going to build out your three-year-old machine you have for the new capability. Oh, you only have a 500W power supply? Lucky you! You’re going shopping! Oh, your machine has an ATI graphics card. I’m sure it’s just a little bit of glue code to repurpose CUDA calls to OpenCL calls for that hardware. Let's say you actually have an NVIDIA card (at least as recent as a GTX 580) and wanted to develop in virtual machines, so you need PCI pass-through to reach the CUDA cores. Lucky you! You have some more reading to do! Pray DenverCoder9's made a summary post in the past 11 years.
“But I run everything in the cloud on EC2,” you say! It's $0.65/hour for G2 instances. And those are the cheap GPU instances. Back of the envelope, it took a week of churning through 1.2 million training images with CUDA convnets (optimized for speed) to produce a breakthrough result. At $0.65/hour, you get maybe 20 or 30 tries doing that before it would have made more sense to have built your own machine. This isn't a crazy way to learn, but any psychological disincentive to experimentation, even $0.65/hour, seems like an unnecessary distraction. I also can't endorse the idea of “dabbling” in Deep Learning; it seems akin to “dabbling” in having children—you either make the commitment or you don't.

At this point, I’m not aware of an “import deeplearning” package in Python that can then fit a nine layer sparse autoencoder with invisible CUDA calls to your GPU on 10 million images at the ipython command line. Though people are trying. That's an extreme example, but in general, you need a flexible, stable codebase to even experiment at a useful scale—and that's really what we data scientists should be doing. Toys are fine and all, but if scale up means a qualitatively different solution, why learn the toy? And that means getting acquainted with the pros and cons of various codebases out there. Or writing your own, which... Good luck!

## DC Metro-area teaching models

I start from the premise that no good teacher in the history of teaching has ever been rewarded appropriately with pay for their contributions and most teaching rewards are personal. I accept that premise. And this is all I really ever expect from teaching. I do, however, believe teaching is becoming even less attractive to good teachers every year at every stage of lifelong learning. Traditional post-secondary instructional models are clearly collapsing.
Brick and mortar university degrees often trap graduates in debt at the same time the universities have already outsourced their actual teaching mission to low-cost adjunct staff and diverted funds to marketing curricula rather than teaching them. For-profit institutions are even worse. Compensation for a career in public education has never been particularly attractive, but still there have always been teachers who love to teach, are good at it, and do it anyway. However, new narrow metric-based approaches that hold teachers responsible for the students they're dealt rather than the quality of their teaching can be demoralizing for even the most self-possessed teachers. These developments threaten to reduce that pool of quality teachers to a sparse band of marginalized die-hards.

But enough of my view of “teaching” the way most people typically blindly suggest I do it. The formal and informal teaching options in the DC MSA mirror these broader developments. I run a company with active contracts and however much I might love teaching and would like to see a well-trained crop of deep learning experts in the region, the investment doesn't add up. So I continue to mentor colleagues and partners through contracted research projects. I don't know all the models for teaching and haven't spent a lot of time understanding them, but none seem to make sense to me in terms of time invested to teach students—partly because many of them really can't get at the hardware part of the list of prerequisites above.

This is my vague understanding of compensation models generally available in the online space*:

• Udemy – produce and own a "digital asset" of the course content and sell tuition and advertising as a MOOC. I have no experience with Udemy, but some people seemed happy to have made $20,000 in a month. Thanks to Valerie at Feastie for suggesting this option.
• Statistics.com – Typically a few thousand for four sessions that Statistics.com then sells; I believe this must be a “work for hire” copyright model for the digital asset that Statistics.com buys from the instructor. I assume it's something akin to commissioned art, that once you create, you no longer own. [Editor’s note: Statistics.com is a sponsor of Data Science DC. The arrangement that John describes is similar to our understanding too.]
• Myngle – Sell lots of online lessons for typically less than a 30% share.
And this is my understanding of compensation models locally available in the DC MSA*:
• General Assemb.ly – Between 15-20% of tuition (where tuition may be $4000/student for a semester class).
• District Data Labs Workshop – Splits total workshop tuition or profit 50% with the instructor—which may be the best deal I've heard, but 50% is a lot to pay for advertising and logistics. [Editor's note: These are the workshops that Data Community DC runs with our partner DDL.]
• Give a lecture – typically a one time lecture with a modest honorarium ($100s) that may include travel. I've given these kinds of lectures at GMU and Marymount.
• Adjunct at a local university – This is often a very labor- and commute-intensive investment and pays no better (with no benefits) than a few thousand dollars. Georgetown will pay about $200 per contact hour with students. Assuming there are three hours of out of classroom commitment for every hour in class, this probably ends up somewhere in the $50 per hour range. All this said, this was the suggestion of a respected entrepreneur in the DC region.
• Tenure-track position at a local university – As an Assistant Professor, you will typically have to forego being anything but a glorified post-doc until your tenure review. And good luck convincing this crowd they need you enough to hire you with tenure.
*These are what I understand to be the approximate options and if you got a worse or better deal, please understand I might be wrong about these specific figures. I'm not wrong, though, that none of these are “market rate” for an experienced data scientist in the DC MSA.
Currently, all of my teaching happens through hands-on internships and project-based learning at my company, where I know the students (i.e. my colleagues, coworkers, subcontractors and partners) are motivated and I know they have sufficient resources to succeed (including hardware). When I “teach,” I typically do it for free, and I try hard to avoid organizations that create asymmetrical relationships with their instructors or sell instructor time as their primary “product” at a steep discount to the instructor compensation. Though polemic, Mike Selik summarized the same issue of cut rate data science in "The End of Kaggle." I'd love to hear of a good model where students could really get the three practical prerequisites for Deep Learning and how I could help make that happen here in DC2 short of making “teaching” my primary vocation. If there's a viable model for that out there, please let me know. If you still think you'd like to learn more about Deep Learning through DC2, please help us understand what you'd want out of it and whether you'd be able to bring your own hardware.
# A Rush of Ideas: Kalev Leetaru at Data Science DC
This review of the April Data Science DC Meetup was written by Ross Mohan. Ross is a solutions architect for Five 9 Group.
Perhaps you’ve heard the phrase lately “software is eating the world”. Well, to be successful at that, it’s going to have to do at least as good a job of eating the world’s data as do the systems of Kalev Leetaru, Georgetown/Yahoo! fellow.
Kalev Leetaru, lead investigator on GDELT and other tools, defines “world class” work — certainly in the sense of size and scope of data. The goal of GDELT and related systems is to stream global news and social media in as near realtime as possible through multiple steps. The overall goal is to arrive at reliable tone (sentiment) mining and differential conflict detection and to do so …. globally. It is a grand goal.
Kalev Leetaru’s talk covered several broad areas. History of data and communication, data quality and “gotcha” issues in data sourcing and curation, geography of Twitter, processing architecture, toolkits and considerations, and data formatting observations. In each he had a fresh perspective or a novel idea, born of the requirement to handle enormous quantities of ‘noisy’ or ‘dirty’ data.
# Perspectives
Leetaru observed that “the map is not the territory” in the sense that actual voting, resource or policy boundaries as measured by various data sources may not match assigned national boundaries. He flagged this as a question of “spatial error bars” for maps.
Distinguishing Global data science from hard established HPC-like pursuits (such as computational chemistry) Kalev Leetaru observed that we make our own bespoke toolkits, and that there is no single ‘magic toolkit” for Big Data, so we should be prepared and willing to spend time putting our toolchain together.
After talking a bit about the historical evolution and growth of data, Kalev Leetaru asked a few perspective-changing questions (some clearly relevant to intelligence agency needs): How to find all protests? How to locate all law books? Some of the more interesting data curation tools and resources Kalev Leetaru mentioned — and a lot more — might be found by the interested reader in The Oxford Guide to Library Research by Thomas Mann.
GDELT (covered further below), labels parse trees with error rates, and reaches beyond the “WHAT” of simple news media to tell us WHY, and ‘how reliable’. One GDELT output product among many is the Daily Global Conflict Report, which covers world leader emotional state and differential change in conflict, not absolute markers.
One recurring theme was to find ways to define and support “truth.” Kalev Leetaru decried one current trend in Big Data, the so-called “Apple Effect”: making luscious pictures from data, with more focus on appearance than actual ground truth. One example he cited was a conclusion from a recent report on Syria, which -- blithely based on geotagged English-language tweets and Facebook postings -- cast a skewed light on Syria’s rebels (Bzzzzzt!)
Leetaru provided one answer on “how to ‘ground truth’ data” by asking “how accurate are geotagged tweets?” Such tweets are after all only 3% of the total. But he reliably used those tweets. How? By correlating location to electric power availability (r = .89). He talked also about how to handle emoticons, irony, sarcasm, and other affective language, cautioning analysts to think beyond blindly plugging data into pictures.
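Leetaru's validation trick, checking one data source against an independent physical signal, can be illustrated with a toy Pearson correlation. The per-region counts below are invented for illustration, not his data:

```python
import math

# Illustrative only: made-up per-region geotagged-tweet counts vs. a
# made-up electric-power availability index for the same regions.
tweets = [120, 340, 80, 560, 210, 450, 30, 300]
power  = [0.3, 0.7, 0.2, 0.95, 0.5, 0.8, 0.1, 0.6]

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

print(round(pearson_r(tweets, power), 2))  # high r: tweets track power (≈0.99)
```

A high r on real data is what lets an analyst trust a noisy, self-selected source like geotagged tweets as a proxy for where people actually are.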
Kalev Leetaru talked engagingly about Geography of Twitter, encouraging us to do more RTFD (D=data) than RTFM. Cut your own way through the forest. The valid maps have not been made yet, so be prepared to make your own. Some of the challenges he cited were how to break up typical #hashtagswithnowhitespace and put them back into sentences, and how to build — and maintain — sentiment/tone dictionaries; expect, therefore, to spend the vast majority of time in innovative projects on human tuning of the algorithms and understanding the data, and then iterating the machine. Refreshingly “hands on.”
# Scale and Tech Architecture
Kalev Leetaru turned to discuss the scale of data, which is now generating easily in the petabytes per day range. There is no longer any question that automation must be used and that serious machinery will be involved. Our job is to get that automation machinery doing the right thing, and if we do so, we can measure the ‘heartbeat of society.’
For a book images project (60 Million images across hundreds of years) he mentioned a number of tools and file systems (but neither Gluster nor CEPH, disappointingly to this reviewer!) and delved deeply and masterfully into the question of how to clean and manage the very dirty data of “closed captioning” found in news reports. To full-text geocode and analyze half a million hours of news (from the Internet Archives), we need fast language detection and captioning error assessment. What makes this task horrifically difficult is that POS tagging “fails catastrophically on closed captioning” and that CC is worse, far worse in terms of quality than is Optical Character Recognition. The standard Stanford NL Understanding toolkit is very “fragile” in this domain: one reason being that news media has an extremely high density of location references, forcing the analyst into using context to disambiguate.
He covered his GDELT (Global Database of Event, Language and Tone), covering human/societal behavior and beliefs at scale around the world. A system of half a billion plus georeferenced rows, 58 columns wide, comprising 100,000 sources such as broadcast, print, online media back to 1979, it relies on both human translation and Google translate, and will soon be extended across languages and back to the 1800s. Further, he’s incorporating 21 billion words of academic literature into this model (a first!) and expects availability in Summer 2014, (Sources include JSTOR, DTIC, CIA, CVORE CiteSeerX, IA.)
GDELT’s architecture, which relies heavily on the Google Cloud and BigQuery, can stream at 100,000 input observations/second. This reviewer wanted to ask him about update and delete needs and speeds, but the stream is designed to optimize ingest and query. GDELT tools were myriad, but Perl was frequently mentioned (for text processing).
Kalev Leetaru shared some post-GDELT construction takeaways — “it’s not all English” and “watch out for full Unicode compliance” in your toolset, lest your lovely data processing stack SEGFAULT halfway through a load. Store data in whatever is easy to maintain and fast. Modularity is good but performance can be an issue; watch out for XML, which bogs down processing on highly nested data, and use it for interchange more than anything. Sharing seems “nice” but “you can’t share a graph,” and “RAM disk is your friend” even more so than SSD, FusionIO, or fast SANs.
The talk, like this blog post, ran over allotted space and time, but the talk was well worth the effort spent understanding it.
# Deep Learning Inspires Deep Thinking
This is a guest post by Mary Galvin, founder and managing principal at AIC. Mary provides technical consulting services to clients including LexisNexis’ HPCC Systems team. The HPCC is an open source, massive parallel-processing computing platform that solves Big Data problems.
Data Science DC hosted a packed house at the Artisphere on Monday evening, thanks to the efforts of organizers Harlan Harris, Sean Gonzalez, and several others who helped plan and coordinate the event. Michael Burke, Jr, Arlington County Business Development Manager, provided opening remarks and emphasized Arlington’s commitment to serving local innovators and entrepreneurs. Michael subsequently introduced Sanju Bansal, a former MicroStrategy founder and executive who presently serves as the CEO of an emerging, Arlington-based start-up, Hunch Analytics. Sanju energized the audience by providing concrete examples of data science’s applicability to business; this was no better illustrated than by the $930 million acquisition of Climate Corps. roughly 6 months ago.
Michael, Sanju, and the rest of the Data Science DC team helped set the stage for a phenomenal presentation put on by John Kaufhold, Managing Partner and Data Scientist at Deep Learning Analytics. John started his presentation by asking the audience for a show of hands on two items: 1) whether anyone was familiar with deep learning, and 2) of those who said yes to #1, whether they could explain what deep learning meant to a fellow data scientist. Of the roughly 240 attendees present, the majority of hands that answered favorably to question #1 dropped significantly upon John’s prompting of question #2.
I’ll be the first to admit that I was unable to raise my hand for either of John’s introductory questions. The fact I was at least a bit knowledgeable in the broader machine learning topic helped to somewhat put my mind at ease, thanks to prior experiences working with statistical machine translation, entity extraction, and entity resolution engines. That said, I still entered John’s talk fully prepared to brace myself for the ‘deep’ learning curve that lay ahead. Although I’m still trying to decompress from everything that was covered – it being less than a week since the event took place – I’d summarize key takeaways from the densely-packed, intellectually stimulating, 70+ minute session that ensued as follows:
1. Machine learning’s dirty work: labelling and feature engineering. John introduced his topic by using examples from image and speech recognition to illustrate two mandatory (and often less-than-desirable) undertakings in machine learning: labelling and feature engineering. In the case specific to image recognition, say you wanted to determine whether a photo, ‘x’, contained an image of a cat, ‘y’ (i.e., p(y|x)). This would typically involve taking a sizable database of images and manually labelling which subset of those images were cats. The human-labeled images would then serve as a body of knowledge upon which features representative of those cats would be generated, as required by the feature engineering step in the machine learning process. John emphasized the laborious, expensive, and mundane nature of feature engineering, using his own experiences in medical imaging to prove his point.
All that said, various machine learning algorithms can use the fruits of the labelling and feature engineering labors to discern a cat within any photo – not just those cats previously observed by the system. Although there’s no getting around machine learning’s dirty work to achieve these results, the emergence of deep learning has helped to lessen it.
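As a concrete (and entirely toy) illustration of learning p(y|x) from human-labeled data, here is a one-feature logistic regression in plain Python. The "ear pointiness" feature and every number below are invented for illustration; a deep network would learn such features instead of having them engineered by hand:

```python
import math

# Hypothetical hand-engineered feature ("ear pointiness"), one value per image,
# paired with human labels: 1 = cat, 0 = not cat.
xs = [0.9, 0.8, 0.85, 0.1, 0.2, 0.15]
ys = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit p(y=1|x) = sigmoid(w*x + b) by gradient descent on the log-loss.
w, b = 0.0, 0.0
lr = 1.0
for _ in range(5000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # prediction error for this example
        gw += err * x / len(xs)
        gb += err / len(xs)
    w -= lr * gw
    b -= lr * gb

print(sigmoid(w * 0.88 + b))  # well above 0.5: pointy ears, probably a cat
print(sigmoid(w * 0.12 + b))  # well below 0.5: probably not a cat
```

The expensive part in practice is not this fit; it is producing the labels and the feature in the first place, which is exactly the dirty work John described.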
2. Machine Learning’s ‘Deep’ Bench. I entered John’s presentation knowing a handful of machine learning algorithms but left realizing my knowledge had barely scratched the surface. Cornell University’s machine learning benchmarking tests can serve as a good reference point for determining which algorithm to use, provided the results are taken into account with the wider, ‘No Free Lunch Theorem’ consideration that even the ‘best’ algorithm has the potential to perform poorly on a subclass of problems.
Given machine learning’s ‘deep’ bench, the neural network might have been easy to overlook just 10 years ago. Not only did it place 10th in Cornell’s 2004 benchmarking test, but John enlightened us to its fair share of limitations: inability to learn p(x), inefficiencies with more than 3 layers, overfitting, and relatively slow performance.
3. The Restricted Boltzmann Machine’s (RBM’s) revival of the neural network. The year 2006 witnessed a breakthrough in machine learning, thanks to the efforts of an academic triumvirate consisting of Geoff Hinton, Yann LeCun, and Yoshua Bengio. I’m not going to even pretend like I understand the details, but will just say that their application of the Restricted Boltzmann Machine (RBM) to neural networks has played a major role in eradicating the neural network’s limitations outlined in #2 above. Take, for example, ‘inability to learn p(x)’. Going back to the cat example in #1, what this essentially states is that before the triumvirate’s discovery, the neural net was incapable of using an existing set of cat images to draw a new image of a cat. Figuratively speaking, not only can neural nets now draw cats, but they can do so with impressive time metrics thanks to the emergence of the GPU. Stanford, for example, was able to process 14 terabytes of images in just 3 hours through overlaying deep learning algorithms on top of a GPU-centric computer architecture. What’s even better? The fact that many implementations of the deep learning algorithm are openly available under the BSD licensing agreement.
4. Deep learning’s astonishing results. Deep learning has experienced an explosive amount of success in a relatively small amount of time. Not only have several international image recognition contests been recently won by those who used deep learning, but technology powerhouses such as Google, Facebook, and Netflix are investing heavily in the algorithm’s adoption. For example, deep learning triumvirate member Geoff Hinton was hired by Google in 2013 to help the company make sense of their massive amounts of data and to optimize existing products that use machine learning techniques. Fellow deep learning triumvirate member Yann LeCun was hired by Facebook, also in 2013, to help integrate deep learning technologies into the company’s IT systems.
As for all the hype surrounding deep learning, John concluded his presentation by suggesting ‘cautious optimism in results, without reckless assertions about the future’. Although it would be careless to claim that deep learning has cured disease, for example, one thing most certainly is for sure: deep learning has inspired deep thinking throughout the DC metropolitan area.
As to where deep learning has left our furry feline friends, the attached YouTube video will further explain….
(created by an anonymous audience member following the presentation)
You can see John Kaufhold's slides from this event here.
# Will big data bring a return of sampling statistics? And a review of Aaron Strauss's talk at DSDC
This guest post by Tommy Jones was originally published on Biased Estimates. Tommy is a statistician or data scientist -- depending on the context -- in Washington, DC. He is a graduate of Georgetown's MS program for mathematics and statistics. Follow him on Twitter @thos_jones.
### Some Background
#### What is sampling statistics?
Sampling statistics concerns the planning, collection, and analysis of survey data. When most people take a statistics course, they are learning "model-based" statistics. (Model-based statistics is not the same as statistical modeling, stick with me here.) Model-based statistics uses a mathematical function to model the distribution of an infinitely-sized population to quantify uncertainty. Sampling statistics, however, uses a priori knowledge of the size of the target population to inform quantifying uncertainty. The big lesson I learned after taking survey sampling is that if you assume the correct model, then the two statistical philosophies agree. But if your assumed model is wrong, the two approaches give different results. (And one approach has fewer assumptions, bee tee dubs.)
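One place the two philosophies visibly diverge is the finite population correction (FPC), which a sampling statistician applies when the population size N is known a priori. A minimal plain-Python sketch with invented numbers: as the sample grows toward the whole population, the standard error shrinks toward zero, whereas a model-based analysis of an effectively infinite population would omit the correction entirely.

```python
import math

def srs_mean_estimate(sample, population_size):
    """Mean and standard error for a simple random sample drawn
    without replacement from a population of known size N.
    The FPC term is exactly what a model-based (infinite-population)
    analysis would leave out."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    fpc = 1 - n / population_size          # -> 0 as n approaches N
    se = math.sqrt(fpc * var / n)
    return mean, se

# Same toy sample, judged against two population sizes: knowing we've
# already measured half the population shrinks the standard error.
mean, se_small_pop = srs_mean_estimate([1, 2, 3, 4, 5], 10)
_, se_huge_pop = srs_mean_estimate([1, 2, 3, 4, 5], 10**6)
```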
Sampling statistics also has a big bag of other tricks, too many to do justice here. But it provides frameworks for handling missing or biased data, combining data on subpopulations whose sample proportions differ from their proportions of the population, how to sample when subpopulations have very different statistical characteristics, etc.
As I write this, it is entirely possible to earn a PhD in statistics and not take a single course in sampling or survey statistics. Many federal agencies hire statisticians and then send them immediately back to school to places like UMD's Joint Program in Survey Methodology. (The federal government conducts a LOT of surveys.)
I can't claim to be certain, but I think that sampling statistics became esoteric for two reasons. First, surveys (and data collection in general) have traditionally been expensive. Until recently, there weren't many organizations except for the government that had the budget to conduct surveys properly and regularly. (Obviously, there are exceptions.) Second, model-based statistics tend to work well and have broad applicability. You can do a lot with a laptop, a .csv file, and the right education. My guess is that these two factors have meant that the vast majority of statisticians and statistician-like researchers have become consumers of data sets, rather than producers. In an age of "big data" this seems to be changing, however.
Response rates for surveys have been dropping for years, causing frustration among statisticians and skepticism from the public. Having a lower response rate doesn't just mean your confidence intervals get wider. Given the nature of many surveys, it's possible (if not likely) that the probability a person responds to the survey may be related to one or a combination of relevant variables. If unaddressed, such non-response can damage an analysis. Addressing the problem drives up the cost of a survey, however.
Consider measuring unemployment. A person is considered unemployed if they don't have a job and they are looking for one. Somebody who loses their job may be less likely to respond to the unemployment survey for a variety of reasons. They may be embarrassed, they may move back home, they may have lost their house! But if the government sends a survey or interviewer and doesn't hear back, how will it know if the respondent is employed, unemployed (and looking), or off the job market completely? So, they have to find out. Time spent tracking a respondent down is expensive!
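A toy simulation makes the danger concrete. The response rates below are invented for illustration; the point is only that when the decision to respond is correlated with the quantity being measured, the naive respondents-only estimate is biased, and collecting more respondents does not fix it.

```python
import random

def simulate_nonresponse(n=100_000, p_unemployed=0.10,
                         respond_if_employed=0.80,
                         respond_if_unemployed=0.40, seed=1):
    """Toy model: employment status is observed only for people who
    answer the survey, and the unemployed answer less often.
    All rates here are made up for illustration."""
    rng = random.Random(seed)
    unemployed_count = 0
    respondents = []  # unemployment status of responders only
    for _ in range(n):
        unemployed = rng.random() < p_unemployed
        unemployed_count += unemployed
        p_respond = respond_if_unemployed if unemployed else respond_if_employed
        if rng.random() < p_respond:
            respondents.append(unemployed)
    true_rate = unemployed_count / n
    naive_rate = sum(respondents) / len(respondents)
    return true_rate, naive_rate

# The respondents-only estimate understates unemployment.
true_rate, naive_rate = simulate_nonresponse()
```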
So, if you are collecting data that requires a response, you must consider who isn't responding and why. Many people anecdotally chalk this effect up to survey fatigue. Aren't we all tired of being bombarded by websites and emails asking us for "just a couple minutes" of our time? (Businesses that send a satisfaction survey every time a customer contacts customer service take note; you may be your own worst data-collection enemy.)
### In Practice: Political Polling in 2012 and Beyond
In the context of the above, Aaron Strauss's February 25th talk at DSDC was enlightening. Aaron's presentation was billed as covering "two things that people in [Washington D.C.] absolutely love. One of those things is political campaigns. The other thing is using data to estimate causal effects in subgroups of controlled experiments!" Woooooo! Controlled experiments! Causal effects! Subgroup analysis! Be still, my beating heart.
Aaron earned a PhD in political science from Princeton and has been involved in three of the last four presidential campaigns designing surveys, analyzing collected data, and providing actionable insights for the Democratic party. His blog is here. (For the record, I am strictly non-partisan and do not endorse anyone's politics though I will get in knife fights over statistical practices.)
In an hour-long presentation, Aaron laid a foundation for sampling and polling in the 21st century, revealing how political campaigns and businesses track our data, how they analyze it, and what the future of surveying may be. The most profound insight I took away was seeing how the traditional practices of sampling statistics are being blended with 21st-century data collection methods, through apps and social media. Whether these changes will address the decline in response rates or only temporarily offset it remains to be seen. Some highlights:
• The number of households that have only wireless telephone service is reaching parity with the number having land line phone service. When considering only households with children (excluding older people with grown children and young adults without children) the number sits at 45 percent.
• Offering small savings on wireless bills may incentivize the taking of flash polls through smart phones.
• Reducing the marginal cost of surveys allows political pollsters to design randomized controlled trials, to evaluate the efficacy of different campaign messages on voting outcomes. (As with all things statistics, there are tradeoffs and confounding variables with such approaches.)
### Sampling Statistics and "Big Data"
I am not proposing that sampling statistics will become the new hottest thing. But I would not be surprised if sampling courses move from the esoteric fringes to being a core course in many or most statistics graduate programs in the coming decades. (And we know it may take over a hundred years for something to become the new hotness anyway.)
The professor who taught the sampling statistics course I took a few years ago is the chief of the Statistical Research Division at the U.S. Census Bureau. When I last saw him, at an alumni/prospective-student mixer for Georgetown's math/stat program in 2013, he was wearing a button that said "ask me about big data." In a time when some think statistics is the old-school discipline relevant only for small data, seeing that button on a man whose specialty – considered so "old school" that even most statisticians have moved on – made me chuckle. But it also made me think: things may be coming full circle for sampling statistics.
A statistician's role in big data (my source for the R.A. Fisher quote, above)
Tuesday's Data Science DC Meetup features GMU graduate student Jay Hyer's introduction of Ensemble Learning, a core set of Machine Learning techniques. Here are Jay's suggestions for readings and resources related to the topic. Attend the Meetup, and follow Jay on Twitter at @aDataHead! Also note that all images contain Amazon Affiliate links and will result in DC2 getting a small percentage of the proceeds should you purchase the book. Thanks for the support!
### L. Breiman, J. Friedman, C.J. Stone, and R.A. Olshen. Classification and Regression Trees. Chapman and Hall/CRC, Boca Raton, FL, 1984.
This book does not cover ensemble methods, but is the book that introduced classification and regression trees (CART), which is the basis of Random Forests. Classification trees are also the basis of the AdaBoost algorithm. CART methods are an important tool for a data scientist to have in their skill set.
### L. Breiman. Random Forests. Machine Learning, 45(1):5-32, 2001.
This is the article that started it all.
### T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, 2nd ed. Springer, New York, NY, 2009.
This book is light on application and heavy on theory. Nevertheless, chapters 10, 15 & 16 give very thorough coverage to boosting, Random Forests and ensemble learning, respectively. A free PDF version of the book is available on Tibshirani’s website.
### G. James, D. Witten, T. Hastie, R. Tibshirani. An Introduction to Statistical Learning: with Applications in R, Springer, New York, NY, 2013.
As the name and co-authors imply, this is an introductory version of the previous book in this list. Chapter 8 covers bagging, Random Forests, and boosting.
### Y. Freund and R.E. Schapire. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences, 55(1): 119-139, 1997.
This is the article that introduced the AdaBoost algorithm.
### G. Seni, and J. Elder. Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions. Morgan & Claypool Publishers, USA, 2010.
This is a good book with great illustrations and graphs. There is a lot of R code, too!
### Z.H. Zhou. Ensemble Methods: Foundations and Algorithms. Chapman & Hall/CRC, 2012.
This is an excellent book that covers ensemble learning from A to Z and is well suited for anyone from an eager beginner to a critical expert.
# SynGlyphX: Hello and Thank You DC2!
The following is a sponsored post brought to you by one of the supporters of two of Data Community's five meetups.
### Hello and Thank You DC2!
This week was my, and my company’s, introduction to Data Community DC (DC2). We could not have asked for a more welcoming reception. We attended and sponsored both Tuesday’s DVDC event on Data Journalism and Thursday’s DSDC event on GeoSpatial Data Analysis. They were both pretty exciting, and timely, events for us.
As I mentioned, I’m new to DC2 and new to the “data as a science” community. Don’t get me wrong: while I’m new to DC2, I’ve been awash in data my entire career. I started as a young consultant reconciling discrepancies in the databases of a very early Client-Server implementation. Basically, I had to make sure that all the big department store orders on the server were in sync with the home delivery client application. A lot of manual reconciling ultimately led to me programming code to semi-automatically reconcile the two databases. Eventually (I think) they solved the technical issues that led to the Client-Server databases being out of sync.
More recently, I was working for a company with a growing professional services organization. The company typically hired new employees after a contract was signed, but the new professional services work involved short project durations; if we waited to hire, the project would be over before someone started. We developed a probability-adjusted portfolio analysis approach to compare the supply of available resources (which is always changing as people finish projects, get extended, or leave the organization) against demand (which is always changing as well), which enabled us to determine a range of positions and skillsets to hire for in a defined timeframe.
In both instances, it was data science that drove effective decision making. Sure, you can apply some “gut” to any decision, but having some data science behind you makes the case much stronger.
So, I was fascinated to listen to the journalists discuss how they are applying data analysis to help: 1) support existing story lines; and 2) develop new story lines. Nathan’s presentation on analyzing AIS data was interesting (and a bit timely, as we had just gotten a verbal win with a client for similar – but not exactly the same – work).
I know the power of data to solve complex business, operational, and other problems. With our new company, SynGlyphX, we are focused on helping people both visualize and interact with their data. We live in a world with sight and three dimensions. We believe that by visualizing the data (unstructured, filtered, analyzed – any kind of data), we can help people leverage the power of the brain to identify patterns, spot trends, and detect anomalies. We joined DC2 to get to know folks in the community, generate some awareness for our company, and get your feedback on what we are doing. Thank you all for welcoming us and our company, SynGlyphX, to the community. We appreciated everyone’s interest in the demonstrations of our interactive visualization technology. Our website traffic was up significantly last week, so I am hoping this is a sign that you were interested in learning more about us. Additionally, I have heard from a number of you since the events, and welcome hearing from more.
Here’s my call to action: I encourage you to tweet us your answer to the following question: “Why do you find it helpful to visually interact with your data?”
See you at upcoming events.
Mark Sloan
As CEO of SynGlyphX, Mark brings over two decades of experience. Mark began his career at Accenture, co-founded the global consulting firm RTM Consulting, and served as Vice President and General Manager of Convergys’ Consulting and Professional Services Group.
Mark has a M.B.A. from The Wharton School of the University of Pennsylvania, and a B.S. in Civil Engineering from the University of Notre Dame. He is a frequent speaker at industry events and has served as an Advisory Board Member for the Technology Professional Services Association (now Technology Services Industry Association (TSIA)).
# November Data Science DC Event Review: Identifying Smugglers: Local Outlier Detection in Big Geospatial Data
This is a guest post from Data Science DC member and quantitative political scientist David J. Elkind. At the November Data Science DC Meetup, Nathan Danneman, an Emory University PhD and analytics engineer at Data Tactics, presented an approach to detecting unusual units within a geospatial data set. For me, the most enjoyable feature of Dr. Danneman’s talk was his engaging presentation. I suspect that other data consultants have also spent quite some time reading statistical articles and lost quite a few hours attempting to trace back the authors’ incoherent prose. Nathan approached his talk in a way that placed a minimal quantitative demand on the audience, instead focusing on the three essential components of his analysis: his analytical task, the outline of his approach, and the presentation of his findings. I’ll address each of these in turn.
Nathan was presented with the problem of locating maritime vessels in the Strait of Hormuz engaged in smuggling activities: sanctions against Iran have made it very difficult for Iran to engage in international commerce, so improving detection of smugglers crossing the Strait from Iran to Qatar and the United Arab Emirates would improve the effectiveness of the sanctions regime and increase pressure on the regime. (I’ve written about issues related to Iranian sanctions for CSIS’s Project on Nuclear Issues Blog.)
Having collected publicly accessible satellite positioning data of maritime vessels, Nathan had four fields for each craft at several time intervals within some period: speed, heading, latitude and longitude.
But what do smugglers look like? Unfortunately, Nathan’s data set did not itself include any examples of watercraft which had been unambiguously identified by, e.g., the US Navy, as smugglers, so he could not rely on historical examples of smuggling as a starting point for his analysis. Instead, he had to puzzle out how to leverage information about a craft’s spatial location.
I’ve encountered a few applied quantitative researchers who, when faced with a lack of historical examples, would be entirely stymied in their progress, declaring the problem too hard. Instead of throwing up his hands, Dr. Danneman dug into the topic of maritime smuggling and found that many smuggling scenarios involve ship-to-ship transfers of contraband which take place outside of ordinary shipping lanes. This qualitatively-informed understanding transforms the project from mere speculation about what smugglers might look like into the problem of discovering maritime vessels which deviate too far from ordinary traffic patterns.
Importantly, framing the research in this way entirely premises the validity of inferences on the notion that unusual ships are smugglers and smugglers are unusual ships. But in reality, there are many reasons that ships might not conform to ordinary traffic patterns – for example, pleasure craft and fishing boats might have irregular movement patterns that don’t coincide with shipping lanes, and so look similar to the hypothesized smugglers.
## Outline of Approach
The basic approach can be split into three parts: partitioning the strait into many grid squares, generating fake boats to compare against the real boats, and then training a logistic regression to use the four data fields (speed, heading, latitude and longitude) to differentiate the real boats from the fake ones.
Partitioning the strait into grid squares helps emphasize the local character of ships’ movements in that region. For example, a grid square partially containing a shipping channel will have many ships located in the channel, each on a heading taking it along that channel. Fake boats, generated with a bivariate uniform distribution over the grid square, will tend not to fall in the path of ordinary traffic – just like the hypothesized smugglers. The same goes for the uniformly-distributed timestamps and otherwise randomly-assigned attributes of the comparison set: they will all tend to stand apart from ordinary traffic. Training a model to differentiate between these two classes of behavior therefore advances the goal of differentiating between smugglers and ordinary traffic.
Dr. Danneman described this procedure as unsupervised-as-supervised learning – a novel term for me, so forgive me if I’m loose with the particulars – but in this case it refers to the notion that there are two classes of data points: one drawn i.i.d. from some unknown density, and another simulated via Monte Carlo methods from some known density. Pooling both samples gives one a mixture of the two densities; the problem then becomes one of comparing the relative densities of the two classes of data points – that is, this problem is actually a restatement of the problem of logistic regression! Additional details can be found in The Elements of Statistical Learning (2nd edition, section 14.2.4, p. 495).
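To make the unsupervised-as-supervised recipe concrete, here is a minimal sketch. Everything in it is invented for illustration – a one-feature toy "shipping lane" data set and a from-scratch logistic regression – not Dr. Danneman's actual data, features, or code:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=200):
    """Bare-bones SGD logistic regression: just enough to show
    the trick; any real implementation would do fine."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            g = 1.0 / (1.0 + math.exp(-z)) - yi   # gradient of log-loss wrt z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def score(w, b, x):
    """Model's P(real | x)."""
    return 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, x)))))

rng = random.Random(42)
# Invented "real" traffic hugging a shipping lane near y = 0.5 ...
real = [(rng.random(), 0.5 + rng.gauss(0, 0.02)) for _ in range(200)]
# ... plus one real-but-odd craft far off the lane: our would-be smuggler.
real.append((0.5, 0.95))
# Monte Carlo fakes: uniform over the grid square, like the generated boats.
fake = [(rng.random(), rng.random()) for _ in range(len(real))]

# One toy feature (squared distance from the lane) stands in for
# speed/heading/lat/lon; label real = 1, fake = 0.
feats = [[(y_ - 0.5) ** 2] for _, y_ in real + fake]
labels = [1] * len(real) + [0] * len(fake)
w, b = fit_logistic(feats, labels)

# Real boats the model can barely tell from the fakes are the outliers.
real_scores = [score(w, b, f) for f in feats[:len(real)]]
outlier = min(range(len(real)), key=lambda i: real_scores[i])
```

With the fitted weight negative (off-lane means "looks fake"), the lone off-lane craft gets the lowest "looks real" score – exactly the outlier flag the talk described.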
## Presentation of Findings
After fitting the model, we can examine which of the real boats the model rated as having low odds of being real – that is, boats which looked so similar to the randomly-generated boats that the model had difficulty differentiating the two. These are the boats that we might call “outliers,” and, given the premise that ship-to-ship smuggling likely takes place aboard boats with unusual behavior, they are the ones more likely to be engaged in smuggling.
I will repeat here a slight criticism that I noted elsewhere and point out that the model output cannot be interpreted as a true probability, contrary to the results displayed in slide 39. In this research design, Dr. Danneman did not randomly sample from the population of all shipping traffic in the Strait of Hormuz to assemble a collection of smuggling craft and ordinary traffic in proportions roughly equal to their occurrence in nature. Rather, he generated one fake boat for each real boat. This is a case-control research design, so the intercept term of the logistic regression model is fixed to reflect the ratio of positive cases to negative cases in the data set. All of the terms in the model, including the intercept, are still MLE estimators, and all of the non-intercept terms are perfectly valid for comparing the odds of one observation being in a class versus another. But to obtain true probabilities, one would have to correct the intercept term using outside knowledge of the overall ratio of positives to negatives in the population.
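To spell out the mechanics of that fix: a standard prior-correction remedy is to offset the fitted logit by the difference between the population log-odds and the sample log-odds. A small sketch, with the 10% population rate invented purely for illustration (in practice that rate is exactly the outside knowledge the case-control design cannot supply):

```python
import math

def corrected_probability(score_logit, sample_pos_rate, population_pos_rate):
    """Shift a case-control logit by the prior-correction offset:
    remove the sample log-odds baked into the intercept and add
    the (externally known) population log-odds."""
    logit = lambda p: math.log(p / (1 - p))
    z = score_logit + logit(population_pos_rate) - logit(sample_pos_rate)
    return 1.0 / (1.0 + math.exp(-z))

# One fake per real boat gives sample_pos_rate = 0.5. If only 10% of all
# traffic truly behaved "real-like" (a made-up figure), a boat scored at
# even odds in-sample maps to a corrected probability of 0.10.
p = corrected_probability(0.0, 0.5, 0.10)
```

Note that when the sample and population rates agree, the correction is the identity, so nothing is lost by applying it routinely.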
In the question-and-answer session, some in the audience pushed back against the limited data set, noting that one could improve the results by incorporating other information specific to each of the ships (its flag, its shipping line, the type of craft, or other pieces of information). First, I believe that any application would leverage this information – were it available – and model it appropriately; however, as befits a pedagogical talk on geospatial outlier detection, this talk focused on leveraging geospatial data for outlier detection.
Second, it should be intuitive that including more information in a model might improve the results: the more we know about the boats, the more we can differentiate between them. Collecting more data is, perhaps, the lowest-hanging fruit of model improvement. I think it’s more worthwhile to note that Nathan’s highly parsimonious model achieved very clean separation between fake and real boats despite the limited amount of information collected for each boat-time unit.
The presentation and code may be found on Nathan Danneman's web site. The audio for the presentation is also available.
# Smashing with a CW-complex preserves weak equivalences between well-pointed spaces (MathOverflow)

**Question** (Christian Wimmer, 2012-10-05): By well-pointed I mean that the inclusion of the base point is an h-cofibration; weak equivalences are the usual weak homotopy equivalences between spaces. This is claimed as part of Theorem 6.9(i) in [Model categories of diagram spectra](http://www.math.uni-bonn.de/people/schwede/MMSS.pdf), but as far as I can see without reference. Can anyone point me to some place in the literature, or indicate where this statement comes from?

**Answer** (Peter May, 2012-10-05): As usual, I apologize for excess concision. Let $A$ be a based CW complex, $X$ and $Y$ well-pointed spaces, and $f\colon X\to Y$ a (weak) equivalence. The first claim in 6.9(i) is that $[X\wedge A,Z] \cong [X,F(A,Z)]$, and a proof is indicated. Since $f$ clearly induces a bijection on the right side, it must induce a bijection on the left side. By Yoneda (take $Z = X\wedge A$ to find an inverse to $f\wedge id$ in the homotopy category), that means that $f\wedge id\colon X\wedge A \to Y\wedge A$ is a weak equivalence.

**Answer** (Peter May, 2012-10-06): I would like to just comment but don't see how. Christian, here's an answer to your last question. For based spaces, you can just do what you want by hand, as Fernando suggested, taking care to use disjoint basepoints to make your attaching maps of $A$ based. You are right to complain that the interplay of $h$- and $q$-model structures is not obvious. Based spaces are of course spaces over and under a point. In "Parametrized homotopy theory", Sigurdsson and I generalize to parametrized spaces, which are spaces over and under a given space, and then the combination of $h$, $q$, and related model structures is surprisingly delicate. In that book, the answer to your original question is axiomatized in a general model-categorical context in 5.4.1 (see (v)) and the axioms are verified for parametrized spaces in 5.4.9. But that is like hitting a thumb tack with a sledgehammer. Maybe it will help to add that in the direct argument you do need to know that a wedge of weak equivalences is a weak equivalence, and that uses the well-pointed hypothesis.
Volume 31, Issue 6
A New Direct Discontinuous Galerkin Method with Symmetric Structure for Nonlinear Diffusion Equations
J. Comp. Math., 31 (2013), pp. 638-662.
Published online: 2013-12
Abstract
In this paper we continue the study of discontinuous Galerkin finite element methods for nonlinear diffusion equations following the direct discontinuous Galerkin (DDG) methods for diffusion problems [17] and the direct discontinuous Galerkin (DDG) methods for diffusion with interface corrections [18]. We introduce a numerical flux for the test function, and obtain a new direct discontinuous Galerkin method with symmetric structure. Second order derivative jump terms are included in the numerical flux formula and explicit guidelines for choosing the numerical flux are given. The constructed scheme has a symmetric property and an optimal $L^2(L^2)$ error estimate is obtained. Numerical examples are carried out to demonstrate the optimal $(k+1)$th order of accuracy for the method with $P^k$ polynomial approximations for both linear and nonlinear problems, under one-dimensional and two-dimensional settings.
Mathematics Subject Classification: 65M60.
Keywords: Discontinuous Galerkin finite element method, diffusion equation, stability, convergence.
Chad Vidden & Jue Yan. (2013). A New Direct Discontinuous Galerkin Method with Symmetric Structure for Nonlinear Diffusion Equations. Journal of Computational Mathematics. 31 (6). 638-662. doi:10.4208/jcm.1307-m4273
https://brilliant.org/problems/circular-eddy-current-brakes/
# Circular Eddy Current Brakes
Classical Mechanics Level 3
Eddy current brakes are brakes that use eddy currents generated by electromagnetic induction to slow down an object. Eddy current brakes are widely used in high speed trains and roller coasters. Since there is no physical contact, eddy current brakes are much quieter and last much longer before wearing down as compared to frictional brakes.
The diagram above shows the working of a circular eddy current brake. To slow down the rotating disk, the electromagnet is switched on. This causes circular loops of currents, called eddy currents, to be formed in the disk. Since the disk has resistance, and current flows through it, heat is generated and the kinetic energy of the disk decreases.
Initially, the disk D rotates with angular velocity $$\omega$$. After the brakes are applied for some time, the angular velocity of the disk becomes $$\omega /10$$.
Find $$\dfrac{K_{\text{final}}}{K_{\text{initial}}}$$, the ratio of final rotational kinetic energy to the initial rotational kinetic energy of the disk.
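For reference, since the disk's moment of inertia $$I$$ is unchanged and rotational kinetic energy is $$K = \frac{1}{2} I \omega^2$$, the ratio follows directly (a worked check added here, not part of the original problem statement):

$$\frac{K_{\text{final}}}{K_{\text{initial}}} = \frac{\frac{1}{2} I (\omega/10)^2}{\frac{1}{2} I \omega^2} = \frac{1}{100}$$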
###### Image Credit: Chetvorno Wikimedia Commons
https://derivativedribble.wordpress.com/2020/08/27/a-note-on-computability/
# A Note on Computability
When I studied the non-computable numbers in college, I remember thinking that existing proofs were pointlessly longwinded, since you could show quite plainly, that because the reals are uncountable, and the number of programs that can be run on a UTM is countable, there are more numbers than there are programs, which proves the existence of non-computable numbers. That is, there aren’t enough programs to calculate all of the reals, and so it must be the case that some real numbers aren’t associated with programs, and are therefore non-computable.
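The countability of programs can be made concrete (a sketch of mine, treating programs as finite ASCII strings): enumerating all strings by length assigns every possible program a natural-number index, which is exactly what "the set of programs is countable" means.

```python
from itertools import count, product
import string

def programs():
    """Enumerate every finite ASCII string: length 0, then length 1, ..."""
    for n in count(0):
        for chars in product(string.printable, repeat=n):
            yield "".join(chars)

# the enumeration pairs each possible program with a natural-number index,
# so the programs (and hence the computable reals) form a countable set
gen = programs()
first = [next(gen) for _ in range(5)]
print(first[0] == "")                       # the empty string comes first
print(all(len(s) == 1 for s in first[1:]))  # then all length-1 strings
```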
But it just dawned on me, that you could have this result, even without resorting to the fact that the reals are uncountable, since it could simply be the case that the set of all inputs to a UTM maps to some countable, proper subset of a countable subset of the reals. Expressed formally, let $S \subset R$, and assume that $S$ is countable. That is, $S$ is a countable subset of the reals, and because it's countable, it's a proper subset (i.e., obviously, $S$ does not contain all of the reals). It could simply be the case that the set of all inputs to a UTM, $A$, maps to only a subset of $S$. That is, even though $S$ is countable, it could still be the case that $A$ maps to only a subset of $S$, and not the entire set.
Here’s a concrete example: Let $K$ be the set of all non-computable numbers, which you can easily show is uncountable. Now let $S$ be a countable subset of $K$. It follows that there is no mapping at all from $A$ to $S$, since by definition, for each $x \in S$, there is no program that calculates $x$.
So aside from this little result, I suppose the point is, that even if all of the reals don’t exist, you could still be stuck with non-computable numbers.
https://math.stackexchange.com/questions/1237511/riccati-equation-for-falling-particle
# Riccati Equation for falling particle.
I'm trying to solve the differential equation for a falling particle of mass 1 with air resistance proportional to $v^2$ ($v$ is velocity): $$v'=g-v^2$$ This is a Riccati equation with stationary solution $\hat{v}=\sqrt{g}$, so we can transform it to the Bernoulli equation $$u'=-u^2-2\sqrt{g}u$$ Substituting $z=u^{-1}$ gives \begin{align*}z'=2\sqrt{g}z+1\\ \Rightarrow z(t)=Ce^{2\sqrt{g}t}-\frac1{2\sqrt{g}}\\ \Rightarrow u(t)=\frac{1}{Ce^{2\sqrt{g}t}-\frac1{2\sqrt{g}}}\\ \Rightarrow v(t)=\frac{1}{Ce^{2\sqrt{g}t}-\frac1{2\sqrt{g}}}+\sqrt{g}\end{align*}
The problem is that this solution does not make physical sense to me, since it's decreasing and does not match my reference solution. Can you explain to me where I have gone wrong?
• You might want to tell us your reference solution so we can see what you were expecting... – Chappers Apr 16 '15 at 14:30
• @Chappers The one Dr. Sonnhard Graubner postet below. This is intuitive because it aproaches the stationary solution. – Achilles Apr 16 '15 at 14:33
• Well, so does your answer. It is increasing if $C>0$. You can get from one solution to the other by choosing the right value of $C$ (or perhaps easier is to start from the tanh and fiddle about with $c_1$). – Chappers Apr 16 '15 at 14:40
• For $C=1$(and g=9) Wolfram Alpha plots this – Achilles Apr 16 '15 at 14:48
• Glad you fixed the spelling of the name Riccati; in Italian, ricatti is the plural of ricatto, which means blackmail. ;-) – egreg Apr 16 '15 at 15:35
$$\alpha \tanh \left(c_1 \alpha+\alpha t\right)=\alpha\frac{Be^{2t\alpha}-1}{Be^{2t\alpha}+1},\text{ with } B=e^{2c_1\alpha}$$
$$\frac{1}{Ce^{2t\alpha}-\frac{1}{2\alpha}}+\alpha=\alpha\left(\frac{2}{2C\alpha e^{2t\alpha}-1}+1\right)=\alpha\frac{1+2\alpha Ce^{2t\alpha}}{2C\alpha e^{2t\alpha}-1}=\alpha\frac{-1-2\alpha Ce^{2t\alpha}}{-2C\alpha e^{2t\alpha}+1}$$
Now put $C:=-\frac{B}{2\alpha}$ and...
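As a sanity check (my own addition, assuming the initial condition $v(0)=0$, which forces the constant in the tanh solution to zero), the closed form $v(t)=\sqrt{g}\tanh(\sqrt{g}\,t)$ can be compared against a direct numerical integration of $v'=g-v^2$:

```python
import math

g = 9.81
a = math.sqrt(g)  # terminal speed, the stationary solution v-hat

def v_closed(t):
    # v(t) = sqrt(g) * tanh(sqrt(g) * t) satisfies v' = g - v^2, v(0) = 0
    return a * math.tanh(a * t)

# forward-Euler integration of v' = g - v^2 from v(0) = 0
dt, v, t = 1e-4, 0.0, 0.0
while t < 2.0:
    v += (g - v * v) * dt
    t += dt

print(abs(v - v_closed(2.0)) < 1e-3)   # numerics agree with the closed form
print(abs(v_closed(10.0) - a) < 1e-6)  # v approaches the terminal speed
```

Both checks pass: the solution is increasing toward $\sqrt{g}$, as the physics requires.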
https://www.math.ntnu.no/conservation/1999/040.html
### Hyperbolic Systems with Supercharacteristic Relaxations and Roll Waves
Shi Jin and Markos Katsoulakis
Abstract: We study a distinguished limit for general $2 \times 2$ hyperbolic systems with relaxation, which is valid in both the subcharacteristic and supercharacteristic cases. This is a weakly nonlinear limit, which leads the underlying relaxation systems into Burgers equation with a source term; the sign of the source term depends on the characteristic interleaving condition. In the supercharacteristic case, the problem admits a periodic solution known as the roll-wave, generated by a small perturbation around equilibrium constants. Such a limit is justified in the presence of artificial viscosity, using the energy method.
Paper:
Available as PostScript (1.9 Mbytes) or gzipped PostScript (369 Kbytes; uncompress using gunzip).
Author(s):
Shi Jin, <jin@math.gatech.edu>
Markos Katsoulakis
Publishing information:
SIAM J. Appl. Math., to appear
https://www.physicsforums.com/threads/easy-for-you-to-solve-need-an-anwser-pls.12312/
Easy for you to solve, need an answer pls
1. Jan 9, 2004
sero
If a train is going 35 mph and there's a dude standing in a boxcar and he jumps 2 feet in the air, where will he land? In the same spot, or ahead of or behind the spot he jumped from?
Currently some friends and I are debating this. Some say the same spot, some say behind. I say the same spot due to Newton's first law of motion, and general relativity.
1. what is the answer
2. what is the equation to solve it?
2. Jan 9, 2004
Staff: Mentor
If he jumps straight up, he will fall straight down...landing on the same spot in the boxcar.
No equations needed, but if you insist, the horizontal distance the jumper travels is:
$$d=v_0t$$
This formula works in the frame of the ground or the boxcar. From the boxcar's viewpoint, $v_0 = 0$; from the viewpoint of someone on the ground, $v_0 = 35\ \text{mph}$, the same speed as the boxcar itself.
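To put rough numbers on it (mine, not from the thread; assuming $g = 9.81\ \text{m/s}^2$ and a jump that rises 2 ft): the jumper is airborne for about 0.7 s, during which, in the ground frame, he and the boxcar each cover the same distance of roughly 11 m.

```python
import math

g = 9.81            # m/s^2
h = 2 * 0.3048      # 2 ft jump height in metres
v0 = 35 * 0.44704   # 35 mph in m/s (the shared forward speed, ground frame)

t_air = 2 * math.sqrt(2 * h / g)  # time going up plus time coming down
d = v0 * t_air                    # horizontal distance d = v0 * t (ground frame)

print(round(t_air, 2))  # about 0.71 s in the air
print(round(d, 1))      # jumper and boxcar each cover about 11 m
```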
3. Jan 9, 2004
sero
cont
thanks Doc
4. Jan 9, 2004
Staff Emeritus
Well there is a caveat. Before the guy jumps he's at rest with respect to the train. This means he shares the same forward speed the train does, relative to the ground, and has a forward momentum of his mass times that speed. When he jumps straight up, his momentum - and therefore his forward velocity - would be conserved if there was no air. And in that case he would still have the same speed as the train and would come down where he started.
But there is air resistance; some of his momentum is transferred to the air, and he loses a little speed. So when he comes down he hasn't moved quite as far ahead relative to the ground as the train has, and he lands just a little behind his starting spot. Maybe only an inch or two, but it would be measurable.
I don't think we need to consider coriolis force. Imagine the train is at the equator.
5. Jan 9, 2004
Staff: Mentor
It's a boxcar, not a flatcar. Assume it's closed and the air is carried along.
https://en.wikipedia.org/wiki/Coercivity
# Coercivity
A family of hysteresis loops for grain-oriented electrical steel, a soft magnetic material. BR denotes retentivity and HC is the coercivity. The wider the outside loop is, the higher the coercivity. Movement on the loops is counterclockwise.
Coercivity, also called the magnetic coercivity, coercive field or coercive force, is a measure of the ability of a ferromagnetic material to withstand an external magnetic field without becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted HC.
An analogous property in electrical engineering and materials science, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized.
Ferromagnetic materials with high coercivity are called magnetically hard, and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft. The latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding.
## Definitions
Graphical definition of different coercivities in flux-vs-field hysteresis curve (B-H curve), for a hypothetical hard magnetic material.
Equivalent definitions for coercivities in terms of the magnetization-vs-field (M-H) curve, for the same magnet.
Coercivity in a ferromagnetic material is the intensity of the applied magnetic field (H field) required to demagnetize that material, after the magnetization of the sample has been driven to saturation by a strong field. This demagnetizing field is applied opposite to the original saturating field. There are however different definitions of coercivity, depending on what counts as 'demagnetized', thus the bare term "coercivity" may be ambiguous:
• The normal coercivity, HCn, is the H field required to reduce the magnetic flux (average B field inside the material) to zero.
• The intrinsic coercivity, HCi, is the H field required to reduce the magnetization (average M field inside the material) to zero.
• The remanence coercivity, HCr, is the H field required to reduce the remanence to zero, meaning that when the H field is finally returned to zero, then both B and M also fall to zero (the material reaches the origin in the hysteresis curve).[1]
The distinction between the normal and intrinsic coercivity is negligible in soft magnetic materials, however it can be significant in hard magnetic materials.[1] The strongest rare-earth magnets lose almost none of the magnetization at HCn.
## Experimental determination
Coercivities of some magnetic materials

| Material | Coercivity (kA/m) |
| --- | --- |
| Supermalloy (16Fe:79Ni:5Mo) | 0.0002[2]: 131, 133 |
| Permalloy (Fe:4Ni) | 0.0008–0.08[3] |
| Iron filings (0.9995 wt) | 0.004–37.4[4][5] |
| Electrical steel (11Fe:Si) | 0.032–0.072[6] |
| Raw iron (1896) | 0.16[7] |
| Nickel (0.99 wt) | 0.056–23[5][8] |
| Ferrite magnet (ZnxFeNi1−xO3) | 1.2–16[9] |
| 2Fe:Co,[10] iron pole | 19[5] |
| Cobalt (0.99 wt) | 0.8–72[11] |
| Alnico | 30–150[12] |
| Disk drive recording medium (Cr:Co:Pt) | 140[13] |
| Neodymium magnet (NdFeB) | 800–950[14][15] |
| 12Fe:13Pt (Fe48Pt52) | ≥980[16] |
| ?(Dy,Nb,Ga(Co):2Nd:14Fe:B) | 2040–2090[17][18] |
| Samarium-cobalt magnet (2Sm:17Fe:3N; 10 K) | <40–2800[19][20] |
| Samarium-cobalt magnet | 3200[21] |
Typically the coercivity of a magnetic material is determined by measurement of the magnetic hysteresis loop, also called the magnetization curve, as illustrated in the figure above. The apparatus used to acquire the data is typically a vibrating-sample or alternating-gradient magnetometer. The applied field where the data line crosses zero is the coercivity. If an antiferromagnet is present in the sample, the coercivities measured in increasing and decreasing fields may be unequal as a result of the exchange bias effect.[citation needed]
The coercivity of a material depends on the time scale over which a magnetization curve is measured. The magnetization of a material measured at an applied reversed field which is nominally smaller than the coercivity may, over a long time scale, slowly relax to zero. Relaxation occurs when reversal of magnetization by domain wall motion is thermally activated and is dominated by magnetic viscosity.[22] The increasing value of coercivity at high frequencies is a serious obstacle to the increase of data rates in high-bandwidth magnetic recording, compounded by the fact that increased storage density typically requires a higher coercivity in the media.[citation needed]
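In practice the zero crossing is read off the measured curve numerically. A minimal sketch (mine, not from the article, with synthetic data): given sampled points of the descending M–H branch, linearly interpolate the field at which M crosses zero; the coercivity is the magnitude of that field.

```python
import math

def coercivity(h, m):
    """Field at which magnetization crosses zero, found by linear
    interpolation between the two bracketing samples of the M-H branch."""
    points = list(zip(h, m))
    for (h0, m0), (h1, m1) in zip(points, points[1:]):
        if (m0 >= 0 > m1) or (m0 <= 0 < m1):       # sign change between samples
            return h0 - m0 * (h1 - h0) / (m1 - m0)  # linear interpolation
    raise ValueError("no zero crossing in the data")

# synthetic descending branch: M = tanh((H + 50) / 20) crosses zero at H = -50,
# i.e. an intrinsic coercivity of 50 in whatever field units H uses
H = list(range(100, -201, -10))
M = [math.tanh((x + 50) / 20) for x in H]
print(abs(coercivity(H, M)))  # 50.0
```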
## Theory
At the coercive field, the vector component of the magnetization of a ferromagnet measured along the applied field direction is zero. There are two primary modes of magnetization reversal: single-domain rotation and domain wall motion. When the magnetization of a material reverses by rotation, the magnetization component along the applied field is zero because the vector points in a direction orthogonal to the applied field. When the magnetization reverses by domain wall motion, the net magnetization is small in every vector direction because the moments of all the individual domains sum to zero. Magnetization curves dominated by rotation and magnetocrystalline anisotropy are found in relatively perfect magnetic materials used in fundamental research.[23] Domain wall motion is a more important reversal mechanism in real engineering materials since defects like grain boundaries and impurities serve as nucleation sites for reversed-magnetization domains. The role of domain walls in determining coercivity is complicated since defects may pin domain walls in addition to nucleating them. The dynamics of domain walls in ferromagnets is similar to that of grain boundaries and plasticity in metallurgy since both domain walls and grain boundaries are planar defects.[citation needed]
## Significance
As with any hysteretic process, the area inside the magnetization curve during one cycle represents the work that is performed on the material by the external field in reversing the magnetization, and is dissipated as heat. Common dissipative processes in magnetic materials include magnetostriction and domain wall motion. The coercivity is a measure of the degree of magnetic hysteresis and therefore characterizes the lossiness of soft magnetic materials for their common applications.
The saturation remanence and coercivity are figures of merit for hard magnets, although maximum energy product is also commonly quoted. The 1980s saw the development of rare-earth magnets with high energy products but undesirably low Curie temperatures. Since the 1990s new exchange spring hard magnets with high coercivities have been developed.[24]
## References
1. ^ a b Giorgio Bertotti (21 May 1998). Hysteresis in Magnetism: For Physicists, Materials Scientists, and Engineers. Elsevier Science. ISBN 978-0-08-053437-4.
2. ^ Tumanski, S. (2011). Handbook of magnetic measurements. Boca Raton, FL: CRC Press. ISBN 9781439829523.
3. ^ M. A. Akhter-D. J. Mapps-Y. Q. Ma Tan-Amanda Petford-Long-R. Doole; Mapps; Ma Tan; Petford-Long; Doole (1997). "Thickness and grain-size dependence of the coercivity in permalloy thin films". Journal of Applied Physics. 81 (8): 4122. Bibcode:1997JAP....81.4122A. doi:10.1063/1.365100.
4. ^ [1] Archived February 4, 2008, at the Wayback Machine
5. ^ a b c "Magnetic Properties of Solids". Hyperphysics.phy-astr.gsu.edu. Retrieved 22 November 2014.
6. ^ "timeout". Cartech.ides.com. Retrieved 22 November 2014.
7. ^ Thompson, Silvanus Phillips (1896). Dynamo-electric machinery. Retrieved 22 November 2014.
8. ^ M. S. Miller-F. E. Stageberg-Y. M. Chow-K. Rook-L. A. Heuer; Stageberg; Chow; Rook; Heuer (1994). "Influence of rf magnetron sputtering conditions on the magnetic, crystalline, and electrical properties of thin nickel films". Journal of Applied Physics. 75 (10): 5779. Bibcode:1994JAP....75.5779M. doi:10.1063/1.355560.
9. ^ Zhenghong Qian; Geng Wang; Sivertsen, J.M.; Judy, J.H. (1997). "Ni Zn ferrite thin films prepared by Facing Target Sputtering". IEEE Transactions on Magnetics. 33 (5): 3748–3750. Bibcode:1997ITM....33.3748Q. doi:10.1109/20.619559.
10. ^ Orloff, Jon (2017-12-19). Handbook of Charged Particle Optics, Second Edition. ISBN 9781420045550. Retrieved 22 November 2014.
11. ^ Luo, Hongmei; Wang, Donghai; He, Jibao; Lu, Yunfeng (2005). "Magnetic Cobalt Nanowire Thin Films". The Journal of Physical Chemistry B. 109 (5): 1919–22. doi:10.1021/jp045554t. PMID 16851175.
12. ^
13. ^ Yang, M.M.; Lambert, S.E.; Howard, J.K.; Hwang, C. (1991). "Laminated CoPt Cr/Cr films for low noise longitudinal recording". IEEE Transactions on Magnetics. 27 (6): 5052–5054. Bibcode:1991ITM....27.5052Y. doi:10.1109/20.278737.
14. ^ C. D. Fuerst-E. G. Brewer; Brewer (1993). "High‐remanence rapidly solidified Nd‐Fe‐B: Die‐upset magnets (invited)". Journal of Applied Physics. 73 (10): 5751. Bibcode:1993JAP....73.5751F. doi:10.1063/1.353563.
15. ^ "WONDERMAGNET.COM - NdFeB Magnets, Magnet Wire, Books, Weird Science, Needful Things". Wondermagnet.com. Archived from the original on 11 February 2015. Retrieved 22 November 2014.
16. ^ Chen & Nikles 2002
17. ^ Bai, G.; Gao, R.W.; Sun, Y.; Han, G.B.; Wang, B. (January 2007). "Study of high-coercivity sintered NdFeB magnets". Journal of Magnetism and Magnetic Materials. 308 (1): 20–23. Bibcode:2007JMMM..308...20B. doi:10.1016/j.jmmm.2006.04.029.
18. ^ Jiang, H.; Evans, J.; O’Shea, M.J.; Du, Jianhua (2001). "Hard magnetic properties of rapidly annealed NdFeB thin films on Nb and V buffer layers". Journal of Magnetism and Magnetic Materials. 224 (3): 233–240. Bibcode:2001JMMM..224..233J. doi:10.1016/S0304-8853(01)00017-8.
19. ^ Nakamura, H.; Kurihara, K.; Tatsuki, T.; Sugimoto, S.; Okada, M.; Homma, M. (October 1992). "Phase Changes and Magnetic Properties of Sm 2 Fe 17 N x Alloys Heat-Treated in Hydrogen". IEEE Translation Journal on Magnetics in Japan. 7 (10): 798–804. doi:10.1109/TJMJ.1992.4565502.
20. ^ Rani, R.; Hegde, H.; Navarathna, A.; Cadieu, F. J. (15 May 1993). "High coercivity Sm 2 Fe 17 N x and related phases in sputtered film samples". Journal of Applied Physics. 73 (10): 6023–6025. Bibcode:1993JAP....73.6023R. doi:10.1063/1.353457. INIST:4841321.
21. ^ de Campos, M. F.; Landgraf, F. J. G.; Saito, N. H.; Romero, S. A.; Neiva, A. C.; Missell, F. P.; de Morais, E.; Gama, S.; Obrucheva, E. V.; Jalnin, B. V. (July 1998). "Chemical composition and coercivity of SmCo5 magnets". Journal of Applied Physics. 84 (1): 368–373. Bibcode:1998JAP....84..368D. doi:10.1063/1.368075.
22. ^ Gaunt 1986
23. ^ Genish et al. 2004
24. ^ Kneller & Hawig 1991
https://www.nature.com/articles/s41467-018-07181-2?error=cookies_not_supported&code=e94c6384-a747-40da-991e-644f580f54a7
# Synthetic RNA-based logic computation in mammalian cells
## Abstract
Synthetic biological circuits are designed to regulate gene expressions to control cell function. To date, these circuits often use DNA-delivery methods, which may lead to random genomic integration. To lower this risk, an all RNA system, in which the circuit and delivery method are constituted of RNA components, is preferred. However, the construction of complexed circuits using RNA-delivered devices in living cells has remained a challenge. Here we show synthetic mRNA-delivered circuits with RNA-binding proteins for logic computation in mammalian cells. We create a set of logic circuits (AND, OR, NAND, NOR, and XOR gates) using microRNA (miRNA)- and protein-responsive mRNAs as decision-making controllers that are used to express transgenes in response to intracellular inputs. Importantly, we demonstrate that an apoptosis-regulatory AND gate that senses two miRNAs can selectively eliminate target cells. Thus, our synthetic RNA circuits with logic operation could provide a powerful tool for future therapeutic applications.
## Introduction
Synthetic biology approaches in mammalian cells have rapidly progressed in a variety of fields, suggesting great potential in medical applications including drug discovery, vaccine production, disease diagnosis, and cell therapy1,2,3,4. For example, researchers have designed several synthetic circuits that interface with endogenous gene networks to control apoptosis, differentiation, cell proliferation, and cell–cell communication5,6,7,8. For future therapeutic applications, it is important to improve the safety and specificity of the circuits, especially for the purpose of cell therapy in the field of regenerative medicine. A delivery method using modified messenger RNAs (modRNAs) could provide safer means to control gene expressions compared with DNA delivery, because modRNAs exhibit a short half-life in cells and do not cause random genomic integration9,10,11,12.
To reduce off-target effects in non-target cells, it is important to produce desired outputs dependent on the cell state. One strategy is to design systems that determine the output of the circuits by sensing cell-specific, intracellular molecules as inputs. MicroRNAs (miRNAs) are a class of small noncoding RNAs that post-transcriptionally regulate gene expression by binding to target mRNAs13,14. It has been reported that more than 2600 different miRNAs exist in humans (miRBase ver.22)15. The miRNA expression profile is related to important biological processes, including development16, cancer, and cell reprogramming17,18, and thus can be used to classify the cell state5,19,20,21. These properties suggest that miRNA-responsive, synthetic circuits could provide useful tools for future therapeutic applications. We have previously designed miRNA-responsive, synthetic mRNAs (miRNA switches) that enable the detection and purification of target cells differentiated from human pluripotent stem cells based on endogenous miRNA activity9,22. However, the information from a single miRNA input may be insufficient to distinguish cells in a heterogeneous population. In these cases, it is crucial to detect multiple miRNA inputs and logically control the outputs (e.g. cell fate). Although synthetic circuits using modRNAs that encode RNA binding proteins (RBPs) have been constructed in mammalian cells10, complex synthetic RNA-delivered circuits that can detect multiple miRNAs and regulate output protein through logic computation have not been demonstrated. Thus, we aimed to design synthetic RNA-delivered logic circuits that function in mammalian cells by improving the performance of miRNA- and protein-responsive modRNAs.
In this study, we construct a set of RNA-based logic circuits with RBPs that detect multiple miRNA inputs and regulate the output protein expression (Fig. 1a). We create five logic gates (AND, OR, NAND, NOR, and XOR) in mammalian cells using an RNA-only delivery approach. A 3-input AND circuit produces the output protein only in the presence of all target miRNAs. Additionally, we selectively control cell-death pathways between target and non-target cells by connecting a 2-input AND gate with apoptotic regulatory circuits.
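The decision logic can be sketched abstractly. Extending the single-miRNA device described for Fig. 1b, one plausible Boolean abstraction of a 2-input AND gate (my sketch, not the paper's quantitative model) is: each input miRNA knocks down one L7Ae-coding mRNA, and any surviving L7Ae represses the Kt-tagged output mRNA, so the output is translated only when both repressors are removed.

```python
# Toy Boolean sketch (an abstraction, not the paper's biochemical model) of a
# double-repression AND wiring built from two miRNA-responsive L7Ae mRNAs.

def and_gate(mir_a: bool, mir_b: bool) -> bool:
    l7ae_1 = not mir_a             # repressor 1 survives unless miR-A is present
    l7ae_2 = not mir_b             # repressor 2 survives unless miR-B is present
    return not (l7ae_1 or l7ae_2)  # output translated only if both are knocked down

table = {(a, b): and_gate(a, b) for a in (False, True) for b in (False, True)}
print(table)  # output is ON only for the (True, True) input pair
```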
## Results
### Improving the performance of miRNA-responsive circuits
RBPs can function as both the input and the output of RNA-based regulatory devices10. For example, L7Ae, a kink-turn (Kt) RNA binding protein, associates with the Kt of archaeal box C/D sRNAs23,24. An L7Ae-Kt interaction at the 5′-UTR efficiently inhibits translation of the mRNA (Supplementary Figure 1b, d, f), probably by blocking translation initiation and ribosome function25,26. We have previously used the L7Ae-Kt interaction to construct modRNA-based regulatory devices that detect one target miRNA and regulate the production of one output protein10. The circuit topology of this device consists of two types of modRNAs (Fig. 1b); one is an L7Ae-coding mRNA with four miRNA target sites that are completely complementary to the mature miRNA within the 3′ untranslated region (3′-UTR), and the other is an output-gene-coding mRNA with a Kt motif within the 5′-UTR. We refer to this device as L7-4xTX, where 4x represents the number of miRNA target sites, TX represents target sites to the specific miRNA, and the position of TX in the device name represents the location of the target site in the device (i.e., 5′-UTR or 3′-UTR). In the absence of the input miRNAs, the circuit produces no output protein due to L7Ae expression (OFF state), but produces the output protein in the presence of the input (ON state). However, the fold-change of the designed circuit between ON state and OFF state was moderate. As a first step toward realizing robust logic circuits with modRNAs, we aimed to improve the fold-change (ON/OFF ratio in output level) by enhancing sensitivity to the input miRNAs and reducing leaky protein expression, which would lead to a higher output expression level in ON state. We found that the knockdown effect of miRNAs on the miRNA switch is high when the target site (antisense sequence of the miRNA) was inserted into the 5′-UTR9,22 (Supplementary Figure 1a, c, e). 
Thus, we hypothesized that the insertion of a miRNA target site into both the 5′-UTR and 3′-UTR may have a stronger effect than insertion in only one UTR and thus improve the fold-change between ON state and OFF state (Fig. 2a). Accordingly, we constructed L7Ae-coding modRNAs with miRNA target sites within both the 5′-UTR and 3′-UTR, and EGFP-coding reporter modRNA with a Kt motif within the 5′-UTR. We tested the performance of this miR-21-responsive RNA device by co-transfecting it with miR-21 mimic (chemically modified RNA that mimics endogenous miRNA) into 293FT cells. iRFP670-coding modRNAs without the miRNA target sites were also introduced as a transfection control. In this study, we chose miR-21 along with miR-302a as representative miRNA markers, because they are highly expressed in several human cancer cells27 and pluripotent stem cells28,29, respectively. We expected that the EGFP expression level would increase in a miR-21 mimic-dependent manner because 293FT cells do not express endogenous miR-219,30. We used 8 nM miR-21 mimic because the proportional activity of 8 nM miR-21 mimic in 293FT cells (up to 15.3-fold, Supplementary Figure 1c) is almost equal to that of endogenous miR-21 in HeLa cells (up to 15.8-fold, Supplementary Figure 2), indicating that 8 nM miR-21 mimic reflects naturally occurring miRNA activity. Twenty-four hours after the transfection, we observed the circuit performance by flow cytometry analysis. We found that circuits with the device that contained miRNA target sites within both 5′-UTR and 3′-UTR (T21-L7-4xT21) showed the highest fold-change (9.2-fold) compared with standard circuits containing miRNA target sites only within the 3′-UTR (L7-4xT21, 2.7-fold) or within the 5′-UTR (T21-L7, 5.2-fold) (Fig. 2b–d). The circuit with T21-L7-4xT21 modRNA was much more effective at distinguishing cell populations in ON and OFF states compared with the other modRNA devices (Fig. 2c).
To investigate whether the T21-L7-4xT21 circuit can detect endogenous miRNAs, we transfected it into HeLa cells, which express endogenous miR-219,30. The increased fold-change and cell separation between ON and OFF (with 8 nM miR-21 inhibitor) states with the T21-L7-4xT21 circuit were confirmed in HeLa cells (from 1.1- to 3.1-fold) (Fig. 2e, f). In addition, we used the T302a-L7-4xT302a circuit to detect another type of miRNA (miR-302a). We confirmed a significant fold-change between ON and OFF states (from 4.6- to 9.0-fold) in 293FT cells (Supplementary Figure 3). The results were consistent with those for miR-21 (Fig. 2), confirming that the improvement in circuit performance from using modRNAs that contain the miRNA target site within both the 5′- and 3′-UTR is independent of the miRNA sequence or cell line.
### Construction of logic circuits using modRNA-delivered device
Next, we constructed a modRNA-deliverable set of logic circuits that can sense the activities of multiple miRNAs to regulate output protein production. First, we designed a 2-input (miR-21 and miR-302a) AND circuit with EGFP as the output (first row in Fig. 3). The AND circuit expresses the output only in the presence of both miRNAs (input pattern [11] in the truth table of Fig. 3). The circuit consists of miR-21- or miR-302a-responsive L7Ae-coding mRNAs, and EGFP-coding mRNA with a Kt motif (Kt-EGFP). We tested the performance of the circuit in 293FT cells, which have no activity for either miRNA, thereby enabling examination of all four input patterns (denoted as [00], [10], [01], and [11]) by treatment with miR-21 (302a) mimics. As expected, the AND circuit functioned only in the presence of both miRNAs (Fig. 3 and Supplementary Figure 4). Because L7Ae efficiently represses translation of Kt-EGFP mRNA25,26 (Supplementary Figure 1b, d, f), we found that the presence of either L7Ae-coding mRNA was sufficient to repress EGFP expression. We calculated the fold-change by dividing the averaged output level in each ON state ([11] for AND gates) by that in each OFF state ([00], [10] and [01] for AND gates) and found it to be 7.1-fold (Fig. 3). We next designed an OR circuit (second row in Fig. 3), which expresses the output when either one or both inputs are present (ON state = [10], [01], or [11]). This circuit consists of a single mRNA responsive to miR-21 and/or miR-302a, and an EGFP-coding mRNA with a Kt motif. The designed OR circuit functioned with a fold-change of 5.4 (Fig. 3).
To generate more complex circuits, we next used a bacteriophage MS2 coat protein, MS2CP31, as a second translational repressor protein in addition to L7Ae. First, we improved the response of MS2CP-responsive mRNA by engineering the surrounding sequence containing the binding motif (MS2box). The sc2xMS2box motif-inserted mRNA, which consists of two MS2box motifs and a scaffold structure to stabilize the MS2CP-binding motif32, showed the highest fold-change (14.7-fold) compared with that of other MS2CP-responsive mRNAs, which had one (1.85-fold, 1xMS2box) or two (2.8-fold, 2xMS2box) MS2box motifs inserted into the 5′-UTR (Supplementary Figure 5). Using this improved repressor device, we designed NAND, NOR, and XOR circuits (Fig. 3) by connecting L7Ae- and MS2CP-responsive mRNAs. NAND (Not AND) and NOR (Not OR) circuits can be designed by inverting the output of AND and OR circuits, respectively. We used MS2CP-coding mRNAs with a Kt motif (Kt-MS2CP) and EGFP-coding mRNA with two MS2CP binding motifs within the 5′-UTR (sc2xMS2box-EGFP) as a second repressor device. The NAND circuit should produce no output only if both input miRNAs are present ([11]). The NOR circuit should produce outputs only when both inputs are absent ([00]). The XOR (eXclusive OR) circuit produces outputs only when exactly one input miRNA is present ([10] or [01]). Our NAND, NOR, and XOR circuits worked as expected, with fold-changes of 3.1, 3.5, and 5.5, respectively (Fig. 3). From these results, we confirmed that all the designed basic circuits (AND, OR, NAND, NOR, XOR) worked in mammalian cells using a modRNA-delivery approach (Fig. 3 and Supplementary Figure 4).
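The layered repression described above can be abstracted as chains of boolean inversion: a miRNA silences its target mRNA, and an expressed repressor protein (L7Ae or MS2CP) silences any output mRNA carrying its motif. The sketch below is our own illustrative model, not the authors' code; the function names are invented, and the XOR composition is only logically equivalent to the circuit in Fig. 3, whose actual mRNA wiring may differ.

```python
# Boolean abstraction of the repressor cascades. Each inversion layer
# models one translational-repression step.

def and_gate(m1: bool, m2: bool) -> bool:
    # Two L7Ae mRNAs, each silenced by one miRNA; any remaining L7Ae
    # represses the Kt-EGFP output, so EGFP needs both miRNAs present.
    l7ae = (not m1) or (not m2)
    return not l7ae

def or_gate(m1: bool, m2: bool) -> bool:
    # A single L7Ae mRNA carrying target sites for both miRNAs.
    l7ae = (not m1) and (not m2)
    return not l7ae

def nand_gate(m1: bool, m2: bool) -> bool:
    # Kt-MS2CP takes the place of Kt-EGFP in the AND stage; MS2CP then
    # adds one more inversion before the sc2xMS2box-EGFP output.
    ms2cp = and_gate(m1, m2)
    return not ms2cp

def nor_gate(m1: bool, m2: bool) -> bool:
    # The same extra MS2CP inversion layered onto the OR stage.
    ms2cp = or_gate(m1, m2)
    return not ms2cp

def xor_gate(m1: bool, m2: bool) -> bool:
    # Logically equivalent composition: ON when either input is present
    # but not both.
    return or_gate(m1, m2) and nand_gate(m1, m2)

inputs = [(False, False), (True, False), (False, True), (True, True)]
for gate in (and_gate, or_gate, nand_gate, nor_gate, xor_gate):
    print(gate.__name__, [int(gate(a, b)) for a, b in inputs])
```

Running the loop reproduces the truth tables in Fig. 3 for the input order [00], [10], [01], [11].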
In addition, we designed a 3-input AND circuit using miR-21-, miR-302a-, and miR-206-responsive, L7Ae-coding mRNAs with Kt-EGFP mRNAs (Fig. 4a). As expected, the circuit produced the EGFP output only in the presence of all three miRNAs, with a fold-change of 4.4 (Fig. 4b and Supplementary Figure 6).
### Apoptosis regulatory 2-input AND circuit
Finally, we validated whether RNA-based circuits can control cell-death signals through a logic operation. We designed a 2-input (miR-206 and miR-302a) AND circuit with human Bax (hBax), a pro-apoptotic gene, as the endpoint output. In addition, Bcl-2, an anti-apoptotic gene, was fused with L7Ae through P2A peptides to reinforce the repression of apoptosis against leaky hBax expression in OFF states (Fig. 5a, b). In this design, we expected that the circuits should kill cells only in the presence of both target miRNAs ([11] state). We co-transfected the circuits with miR-206 and/or miR-302a mimics into 293FT cells. Twenty-four hours after the transfection, we stained the cells with SYTOX red for dead cells and Annexin V for apoptotic cells to quantitatively assess the apoptosis level. The circuits induced apoptosis only when both input miRNAs were present. The apoptosis level in ON state was comparable to that with hBax mRNA transfection (Fig. 5c, d). Thus, our apoptosis regulatory 2-input AND circuit can selectively regulate cell death by sensing two target miRNAs.
## Discussion
In this study, we designed and constructed multi-input logic circuits that can distinguish differences in the activities of multiple miRNAs in a cell with an RNA-only delivery approach. We found that several basic logic circuits (AND, OR, NAND, NOR and XOR gates) can be constructed with a set of RBPs and mRNAs, without the requirement of DNA-based transcriptional regulation, by improving the performance of RNA regulatory devices (Figs. 3 and 4). To quantitatively evaluate the performance of each circuit, we calculated the cosine similarity and net fold-change (Supplementary Figure 7). Cosine similarity is an index for evaluating the error between the ideal implementation and the observed behavior of a circuit33 (see Analysis of cosine similarity and net fold-change in Methods). The net fold-change was defined as the ratio of the averaged output level in ON and OFF states. From these analyses, AND and OR circuits showed better performance in net fold-change and cosine similarity, respectively, compared with NAND and NOR circuits (Supplementary Figure 7), which we attribute to circuit complexity. All of the circuits showed statistically significant performance (Supplementary Tables 1, 2, and 3).
Although DNA-based circuits have great potential for applications such as designer cells7,34 (e.g., CAR-T cells), RNA-delivered circuits have an advantage in terms of safety, which makes them preferable for therapeutic applications in the field of regenerative medicine, such as the elimination of unwanted cells and the purification of target cells from a heterogeneous population differentiated from human pluripotent stem cells9,22. MicroRNA-responsive circuits will be especially useful, because miRNA expression profiles are signatures of cell identity and cell state. However, to date, most studies using synthetic circuits have required a DNA delivery method5,35,36,37,38. We and others have developed synthetic RNA-delivered, miRNA-responsive circuits10. To control cell fate more precisely, multi-input circuits with logic computation are necessary, as demonstrated in DNA-based systems5,33,39,40. However, the construction of logic circuits in cells with RNA-only delivery has not been achieved previously, because previous RNA-based circuits show relatively low fold-change (ON/OFF ratio in outputs) and have a limited repertoire of devices with high repression capacity. By improving sensitivity to the input miRNA (Fig. 2) and engineering a repressor device (MS2CP-responsive mRNA) to increase the ON/OFF ratio (Supplementary Figure 5), here we report five kinds of 2-input basic logic circuits compatible with RNA-only delivery. These basic logic circuits, which are composed of two simple types of repressors (miRNA- and protein-responsive mRNAs), are an important milestone toward the construction of scalable and more complex RNA-only circuits.
To further engineer and improve synthetic RNA-based circuits that can respond to multiple miRNA targets, three issues regarding the circuit design should be considered. First, to increase the fold-change between ON and OFF states, we need to reduce undesired leakiness in protein expression prior to miRNA-mediated post-transcriptional repression. For this purpose, the circuit may benefit from adding a post-translational system that controls the stability of the output protein products (such as the degron system41). As an alternative approach, the insertion of multiple miRNA target sites into the 5′-UTR of the sensor mRNAs may enhance the miRNA sensitivity and circuit performance30. Although we do not know why the insertion of a miRNA target site into both the 5′-UTR and 3′-UTR results in enhanced downregulation of the target mRNAs42, we assume that each designed miRNA target site is completely complementary to the miRNA of interest and would thus induce AGO2-mediated mRNA cleavage43. Second, to perform more complex logic computations in cells, we need to scale up the RNA-based circuits by using a set of orthogonal RBPs. Currently, the limited availability of translational repressors, such as L7Ae or MS2CP, makes it difficult to design more complex circuits. Finding new RBP-mediated repressors, such as the CRISPR-Cas effector Cas13, may expand the repertoire of RNA-based circuits44,45,46. Lastly, it is important to choose appropriate input miRNAs in order to detect and control the target cells in a heterogeneous population. For this purpose, we need to measure not only the expression profiles but also the activity profiles of the miRNAs, because many miRNAs detected by RNA sequencing or microarray have weak or little activity according to reporter analyses42,47,48.
In conclusion, we have developed a framework for constructing basic logic circuits with RNA-only delivery, expanding the potential of RNA-based gene circuits to detect and control the cell state. We demonstrated that a 2-input AND circuit with an apoptotic gene as the output regulated cell death according to differences in two miRNA activities. Such a multi-input system enables us to purify target cells or control cell fate more precisely compared with a single-miRNA-input system9,22. Synthetic mRNA-based, multi-input miRNA-responsive circuits will contribute to the development of more sophisticated circuits for future medical applications.
## Methods
### Cell culture
293FT cells (Invitrogen, USA) were cultured in DMEM high glucose (Nacalai Tesque, Japan) supplemented with 10% FBS (JBS, Japan), 0.1 mM MEM non-essential amino acids (Life Technologies, USA), 2 mM l-glutamine (Life Technologies) and 1 mM sodium pyruvate (Nacalai Tesque). HeLa CCL2 cells (ATCC) were cultured in DMEM High Glucose (Nacalai Tesque) supplemented with 10% FBS (JBS). All cell lines were cultured at 37 °C with 5% CO2.
### RNA transfection
All transfections were performed in 24-well format using the Stemfect RNA Transfection Reagent Kit (Stemgent, USA) or Lipofectamine® MessengerMAX (Thermo Fisher Scientific, USA) according to the manufacturer’s protocol. Opti-MEM (Thermo Fisher Scientific) was used as the buffer for MessengerMAX. The MessengerMAX reagent and buffer were mixed for 10 min. The mRNAs, with or without miRNA mimics or miRNA inhibitors, were diluted with buffer and mixed with the above reagent for 5 min. 293FT cells (1 × 10^5 cells per well) and HeLa cells (5 × 10^4 cells per well) were seeded in 24-well plates 24 h before the transfection for all experiments. The medium was not changed before or after the transfection. All subsequent assays were performed 24 h after the transfection. The transfection details for each experiment are shown in Supplementary Table 4.
### Preparation of DNA template for in vitro transcription (IVT)
A DNA template for IVT was generated by PCR using KOD-Plus-Neo (TOYOBO, Japan). A forward primer containing the T7 promoter and a reverse primer containing a 120-nucleotide-long poly(T) tract, transcribed into a poly(A) tail, were used. PCR products amplified from the plasmids were subjected to digestion by DpnI restriction enzyme (TOYOBO). The PCR products were purified using a MinElute PCR purification Kit (QIAGEN, UK) according to the manufacturer’s protocol.
### Preparation of modified mRNA
All mRNAs were generated using the above PCR products and the MEGAscript T7 Kit (Ambion, USA). In the reaction, pseudouridine-5′-triphosphate and 5-methylcytidine-5′-triphosphate (TriLink BioTechnologies, USA) were used instead of uridine triphosphate and cytidine triphosphate, respectively. For IVT of the MS2CP-responsive mRNA used in Fig. 3, N1-methylpseudouridine-5′-triphosphate (m1pU) (TriLink BioTechnologies) was used instead of uridine-5′-triphosphate. Guanosine-5′-triphosphate was 5-fold diluted with an anti-reverse cap analog (TriLink BioTechnologies) before the IVT reaction. Reaction mixtures were incubated at 37 °C for up to 6 h, then mixed with TURBO DNase (Ambion) and further incubated at 37 °C for 30 min to remove the template DNA. The resulting mRNAs were purified using a FavorPrep Blood/Cultured Cells total RNA extraction column (Favorgen Biotech, Taiwan), incubated with Antarctic Phosphatase (New England Biolabs) at 37 °C for 30 min, and then purified again using an RNeasy MinElute Cleanup Kit (QIAGEN).
### Synthetic miRNA mimics and inhibitors
MiRNA mimics are small, chemically modified double-stranded RNAs that mimic endogenous miRNAs. RNA mimics of human miR-21-5p, miR-302a-5p, and miR-206 and a negative control miRNA were used (Thermo Fisher Scientific). The negative control mimic is a random sequence validated not to have any downstream mRNA target for repression. MiRNA inhibitors for miR-21-5p (Thermo Fisher Scientific) were used in experiments with HeLa cells.
### Fluorescent microscopy
Fluorescent images were taken 24 h after the transfection with an IX81 microscope connected to a CCD camera (Olympus, Japan). Images were edited to adjust the brightness and contrast using ImageJ software (NIH, Bethesda, MD, USA).
### Flow cytometry and data analysis
All flow cytometry measurements were performed 24 h after the transfection using BD Accuri™ C6 (BD Biosciences, USA). For all fluorescence assays, clumps and doublets were excluded based on forward and side scatter. EGFP and iRFP670 were detected by FL1 (533/30 nm) and FL4 (675/25 nm) filters, respectively. The data were analyzed using FlowJo 7.6.5 software.
The output level was calculated by the following formula:
$$\frac{\text{Mean EGFP intensity in iRFP670}^{+}\text{ cells}}{\text{Mean iRFP670 intensity in iRFP670}^{+}\text{ cells}}\tag{1}$$
iRFP670+ gating was determined from the mock sample, with 99.9% of cells outside the gate. Each data set was normalized by an appropriate control sample, and values were then averaged over 3 data sets.
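As a rough sketch of Eq. (1) and the mock-based gating step, the following uses synthetic lognormal fluorescence values; all numbers, distributions, and the variable names are invented for illustration, not taken from the paper's data.

```python
import random
import statistics

random.seed(0)
n = 10_000

# Hypothetical per-cell fluorescence intensities (arbitrary units).
# Real analyses would first exclude clumps/doublets by scatter gating.
egfp = [random.lognormvariate(6.0, 0.5) for _ in range(n)]
irfp = [random.lognormvariate(7.0, 0.5) for _ in range(n)]
mock = [random.lognormvariate(3.0, 0.5) for _ in range(n)]  # untransfected

# iRFP670+ gate: threshold placed so ~99.9% of mock events fall below it.
gate = sorted(mock)[int(0.999 * n)]

# Eq. (1): mean EGFP over mean iRFP670, within iRFP670+ cells only.
pos = [i for i in range(n) if irfp[i] > gate]
output_level = (statistics.mean(egfp[i] for i in pos)
                / statistics.mean(irfp[i] for i in pos))
print(f"{len(pos)} gated cells, output level = {output_level:.3f}")
```

Dividing by the transfection-control intensity normalizes out cell-to-cell differences in mRNA uptake.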
### Apoptosis and cell death assays
Sample cells, including those in the supernatant, were collected 24 h after the transfection, washed with PBS and stained with Annexin V, Alexa Fluor 488 conjugate (Life Technologies) and SYTOX red (Life Technologies) for 15 min at room temperature. The cells were analyzed by BD Accuri™ C6. Annexin V, Alexa Fluor 488 conjugate was detected with the FL1 filter, and SYTOX red dead-cell staining was detected with the FL4 filter.
### Analysis of cosine similarity and net fold-change
In Supplementary Figure 7, the correctness of multi-input logic circuits was quantitatively evaluated by calculating the cosine similarity between vectors x and y using the following formula:
$$\cos\theta = \frac{\mathbf{x}\cdot\mathbf{y}}{\left|\mathbf{x}\right|\left|\mathbf{y}\right|}\tag{2}$$
x is a truth table vector containing the ideal output (0 or 1) for each state ([00], [10], [01], and [11]). For example, x = (0 0 0 1) for the AND circuit. y is an output signal vector containing the observed output levels (= EGFP/iRFP670) of each state ([00], [10], [01], and [11]). Thus, cos θ ranges from 0 (worst) to 1 (best). Net fold-change was calculated by dividing the averaged output level in each ON state by that in each OFF state.
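Both metrics are straightforward to compute; the sketch below applies Eq. (2) and the net fold-change to a made-up set of observed output levels for an AND circuit (the numbers are illustrative, not measured values from the paper).

```python
import math

def cosine_similarity(x, y):
    # Eq. (2): dot product over the product of vector norms.
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

truth_and = [0, 0, 0, 1]              # ideal AND outputs for [00],[10],[01],[11]
observed  = [0.08, 0.10, 0.12, 0.85]  # hypothetical EGFP/iRFP670 levels

cos_theta = cosine_similarity(truth_and, observed)

# Net fold-change: averaged ON-state output over averaged OFF-state output.
on_states  = [observed[3]]
off_states = observed[:3]
net_fold = (sum(on_states) / len(on_states)) / (sum(off_states) / len(off_states))

print(f"cos theta = {cos_theta:.3f}, net fold-change = {net_fold:.1f}")
```

A circuit whose observed vector points in the same direction as its truth-table vector scores cos θ near 1 regardless of absolute output scale, which is why the fold-change is reported separately.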
### Statistical analysis
All data are presented as the mean ± s.d. Unpaired two-tailed Student’s t-test was used for the statistical analysis in Fig. 2 and Supplementary Figure 3. Tukey’s method was used for the statistical analysis in Figs. 3–5 (Supplementary Tables 1, 2 and 3). The levels of significance are denoted as *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001, and n.s., not significant (P ≥ 0.05). All statistical tests were performed using R.
## Data availability
All relevant data are available from the corresponding author upon reasonable request. Primer sequences are provided in Supplementary Table 5.
## Change history
### 26 April 2019
The original version of this Article contained an error in the fourth sentence of the second paragraph of the ‘Improving the performance of miRNA-responsive circuits’ section of the Results, which incorrectly read ‘We confirmed a significant fold-change between ON and OFF states (from 3.5- to 9.0-fold) in 293FT cells (Supplementary Figure 3).’ The correct version states ‘4.6’ in place of ‘3.5’. This has been corrected in both the PDF and HTML versions of the Article.
The original version of the Supplementary Information contained a corresponding error in Supplementary Figure 3. The HTML has been updated to include a corrected version of the Supplementary Information.
## References
1. Sedlmayer, F., Aubel, D. & Fussenegger, M. Synthetic gene circuits for the detection, elimination and prevention of disease. Nat. Biomed. Eng. 2, 399–415 (2018).
2. Kis, Z., Sant’Ana Pereira, H., Homma, T., Pedrigi, R. M. & Krams, R. Mammalian synthetic biology: emerging medical applications. J. R. Soc. Interface 12, 1–18 (2015).
3. Andries, O., Kitada, T., Bodner, K., Sanders, N. N. & Weiss, R. Synthetic biology devices and circuits for RNA-based ‘smart vaccines’: a propositional review. Expert. Rev. Vaccin. 14, 313–331 (2014).
4. Lim, W. A. & June, C. H. The principles of engineering immune cells to treat cancer. Cell 168, 724–740 (2017).
5. Xie, Z., Wroblewska, L., Prochazka, L., Weiss, R. & Benenson, Y. Multi-input RNAi-based logic circuit for identification of specific cancer cells. Science 333, 1307–1311 (2011).
6. Wong, R. S., Chen, Y. Y. & Smolke, C. D. Regulation of T cell proliferation with drug-responsive microRNA switches. Nucleic Acids Res. 46, 1541–1552 (2018).
7. Roybal, K. T. et al. Engineering T cells with customized therapeutic response programs using synthetic notch receptors. Cell 167, 419–432.e16 (2016).
8. Matsuda, M., Koga, M., Woltjen, K., Nishida, E. & Ebisuya, M. Synthetic lateral inhibition governs cell-type bifurcation with robust ratios. Nat. Commun. 6, 1–12 (2015).
9. Miki, K. et al. Efficient detection and purification of cell populations using synthetic microRNA switches. Cell Stem Cell 16, 699–711 (2015).
10. Wroblewska, L. et al. Mammalian synthetic circuits with RNA binding proteins for RNA-only delivery. Nat. Biotechnol. 33, 839–841 (2015).
11. Kawasaki, S., Fujita, Y., Nagaike, T., Tomita, K. & Saito, H. Synthetic mRNA devices that detect endogenous proteins and distinguish mammalian cells. Nucleic Acids Res. 45, e117 (2017).
12. Stadler, C. R. et al. Elimination of large tumors in mice by mRNA-encoded bispecific antibodies. Nat. Med. 23, 815–817 (2017).
13. Bartel, D. P. MicroRNAs: target recognition and regulatory functions. Cell 136, 215–233 (2009).
14. Iwakawa, H. & Tomari, Y. The functions of microRNAs: mRNA decay and translational repression. Trends Cell Biol. 25, 651–665 (2015).
15. Kozomara, A. & Griffiths-Jones, S. miRBase: annotating high confidence microRNAs using deep sequencing data. Nucleic Acids Res. 42, 68–73 (2014).
16. Shenoy, A. & Blelloch, R. H. Regulation of microRNA function in somatic stem cell proliferation and differentiation. Nat. Rev. Mol. Cell Biol. 15, 565–576 (2014).
17. Lu, J. et al. MicroRNA expression profiles classify human cancers. Nature 435, 834–838 (2005).
18. Liao, B. et al. MicroRNA cluster 302–367 enhances somatic cell reprogramming by accelerating a mesenchymal-to-epithelial transition. J. Biol. Chem. 286, 17359–17364 (2011).
19. Gentner, B. et al. Identification of hematopoietic stem cell-specific miRNAs enables gene therapy of globoid cell leukodystrophy. Sci. Transl. Med. 2, 58ra84 (2010).
20. Ma, D., Peng, S. & Xie, Z. Integration and exchange of split dCas9 domains for transcriptional controls in mammalian cells. Nat. Commun. 7, 1–7 (2016).
21. Endo, K., Hayashi, K. & Saito, H. High-resolution identification and separation of living cell types by multiple microRNA-responsive synthetic mRNAs. Sci. Rep. 6, 1–8 (2016).
22. Parr, C. J. C. et al. MicroRNA-302 switch to identify and eliminate undifferentiated human pluripotent stem cells. Sci. Rep. 6, 1–14 (2016).
23. Nolivos, S., Carpousis, A. J. & Clouet-d’Orval, B. The K-loop, a general feature of the Pyrococcus C/D guide RNAs, is an RNA structural motif related to the K-turn. Nucleic Acids Res. 33, 6507–6514 (2005).
24. Huang, L. & Lilley, D. M. J. The kink turn, a key architectural element in RNA structure. J. Mol. Biol. 428, 790–801 (2016).
25. Saito, H. et al. Synthetic translational regulation by an L7Ae–kink-turn RNP switch. Nat. Chem. Biol. 6, 71–78 (2010).
26. Endo, K., Stapleton, J. A., Hayashi, K., Saito, H. & Inoue, T. Quantitative and simultaneous translational control of distinct mammalian mRNAs. Nucleic Acids Res. 41, 1–12 (2013).
27. Feng, Y.-H. & Tsao, C.-J. Emerging role of microRNA-21 in cancer. Biomed. Rep. 5, 395–402 (2016).
28. Gao, Z., Zhu, X. & Dou, Y. The miR-302/367 cluster: a comprehensive update on its evolution and functions. Open Biol. 5, 150138 (2015).
29. Lee, Y. J. et al. Dissecting microRNA-mediated regulation of stemness, reprogramming, and pluripotency. Cell Regen. 5, 2 (2016).
30. Hirosawa, M. et al. Cell-type-specific genome editing with a microRNA-responsive CRISPR-Cas9 switch. Nucleic Acids Res. 45, e118 (2017).
31. Yamagishi, M., Ishihama, Y., Shirasaki, Y., Kurama, H. & Funatsu, T. Single-molecule imaging of β-actin mRNAs in the cytoplasm of a living cell. Exp. Cell Res. 315, 1142–1147 (2009).
32. Zalatan, J. G. et al. Engineering complex synthetic transcriptional programs with CRISPR RNA scaffolds. Cell 160, 339–350 (2015).
33. Weinberg, B. H. et al. Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells. Nat. Biotechnol. 35, 453–462 (2017).
34. Nissim, L. et al. Synthetic RNA-based immunomodulatory gene circuits for cancer immunotherapy. Cell 171, 1138–1150.e15 (2017).
35. Haynes, K. A., Ceroni, F., Flicker, D., Younger, A. & Silver, P. A. A sensitive switch for visualizing natural gene silencing in single cells. ACS Synth. Biol. 1, 99–106 (2012).
36. Lapique, N. & Benenson, Y. Digital switching in a biosensor circuit via programmable timing of gene availability. Nat. Chem. Biol. 10, 1020–1027 (2014).
37. Quarton, T. et al. Mapping the operational landscape of microRNAs in synthetic gene circuits. NPJ Syst. Biol. Appl. 4, 6 (2018).
38. Schreiber, J., Arter, M., Lapique, N., Haefliger, B. & Benenson, Y. Model-guided combinatorial optimization of complex synthetic gene networks. Mol. Syst. Biol. 12, 899 (2016).
39. Bonnet, J., Yin, P., Ortiz, M. E., Subsoontorn, P. & Endy, D. Amplifying genetic logic gates. Science 340, 599–603 (2013).
40. Siuti, P., Yazbek, J. & Lu, T. K. Synthetic circuits integrating logic and memory in living cells. Nat. Biotechnol. 31, 448–452 (2013).
41. Natsume, T. & Kanemaki, M. T. Conditional degrons for controlling protein expression at the protein level. Annu. Rev. Genet. 51, 83–102 (2017).
42. Gam, J. J., Babb, J. & Weiss, R. A mixed antagonistic/synergistic miRNA repression model enables accurate predictions of multi-input miRNA sensor activity. Nat. Commun. 9, 2430 (2018).
43. Meister, G. et al. Human Argonaute2 mediates RNA cleavage targeted by miRNAs and siRNAs. Mol. Cell 15, 185–197 (2004).
44. Abudayyeh, O. O. et al. RNA targeting with CRISPR-Cas13. Nature 550, 280–284 (2017).
45. Cox, D. B. T. et al. RNA editing with CRISPR-Cas13. Science 358, 1019–1027 (2017).
46. Konermann, S. et al. Transcriptome engineering with RNA-targeting type VI-D CRISPR effectors. Cell 173, 665–676.e14 (2018).
47. Mullokandov, G. et al. High-throughput assessment of microRNA activity and function using microRNA sensor and decoy libraries. Nat. Methods 9, 840–846 (2012).
48. Thomson, D. W. et al. Assessing the gene regulatory properties of Argonaute-bound small RNAs of diverse genomic origin. Nucleic Acids Res. 43, 470–481 (2015).
## Acknowledgements
We thank Saito laboratory members for kind advice about the experimental conditions, data analysis, and discussion. We also thank Dr. Peter Karagiannis (Kyoto University) and Ms. Yukiko Nakagawa and Miho Nishimura for critical reading of the manuscript and administrative support, respectively.
## Author information
S.M., Y.F., and H.S. conceived the project and designed the experiments. S.M. performed all the experiments except for Supplementary Figure 5. S.K. and Y.K. designed the MS2CP-responsive mRNAs. H.O. supported the experiments in Fig. 2e, f and Supplementary Figure 5. S.M., Y.F., and H.S. wrote the manuscript. All authors discussed the results and commented on the manuscript.
Correspondence to Hirohide Saito.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Matsuura, S., Ono, H., Kawasaki, S. et al. Synthetic RNA-based logic computation in mammalian cells. Nat Commun 9, 4847 (2018). https://doi.org/10.1038/s41467-018-07181-2
# Displacement and Distance
Category : Straight Line Motion
The motion of objects in one dimension is described in a problem-solution based approach.
## Displacement in one dimension
Let's explain this concept with an example. We want to study a car's motion along a straight line. Treat the car as a point particle moving in one dimension. To specify the particle's location in one dimension we need only one axis, which we call $x$, lying along the straight-line path. First we must define the quantity from which the other kinematic quantities are built: displacement. To describe the motion of the car, we must know its position and how that position changes with time. The change in the car's position from an initial position $x_i$ to a final position $x_f$ is called the displacement, $\Delta x= x_f-x_i$ (in physics the Greek letter $\Delta$ indicates the change in a quantity). Displacement is a vector pointing from $A$ to $B$; in one dimension it is written $\Delta x=x_B-x_A$. In the figure below, the car starts at point $A$ at $x=2\, {\rm m}$, drives to $x=9\,{\rm m}$, then returns and stops at point $B$ at $x=6\,{\rm m}$. The car's displacement is therefore $\Delta x=6-2=+4\,{\rm m}$.
Another quantity, sometimes confused with displacement, is the distance traveled (or simply distance): the overall length of the path covered by the particle. In our example the distance is computed as follows: first find the length of the leg to the turning point, $d_1=x_C-x_A=9-2=7\,{\rm m}$, then the leg from the turning point $x_C$ to the final point $x_B$, i.e. $d_2=x_B-x_C=6-9=-3\,{\rm m}$. We take the absolute value of the latter, since distance is a scalar quantity and a negative value would be meaningless. The total distance covered by our car is therefore $d_{tot}=d_1+|d_2|=7+|-3|=7+3=10\,{\rm m}$. In general, if there are several turning points along a straight path, or if the path lies in a plane or in three dimensions, one should divide the overall path into segments without turning points, compute the difference between the endpoints of each segment, and then add the absolute values to obtain the distance traveled along that specific path (see the examples below).
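As a quick numeric check of the car example (a short Python sketch, not part of the original lesson): the positions visited are the start, the turning point, and the stopping point.

```python
# Positions (in metres) the car visits along the x axis:
# start x = 2 m, turning point x = 9 m, stop x = 6 m.
positions = [2, 9, 6]

# Displacement depends only on the endpoints.
displacement = positions[-1] - positions[0]  # 6 - 2 = +4 m

# Distance sums the (unsigned) length of each straight segment.
distance = sum(abs(b - a) for a, b in zip(positions, positions[1:]))  # 7 + 3 = 10 m

print(displacement, distance)  # 4 10
```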
## Displacement in two and three dimensions
In more than one dimension the computations are a bit more involved and we need some additional concepts. In this section we learn how vectors describe the position of an object, and how manipulating them characterizes the displacement and the other related kinematic quantities (such as velocity and acceleration).
In a coordinate system, the position of an object is described by a so-called position vector, which extends from the reference origin $O$ to the location of the object $P$ and is denoted by $\vec{r}=\vec{OP}$. In a Cartesian coordinate system this vector can be expressed as a linear combination of the unit vectors $\hat{i},\hat{j},\hat{k}$ (vectors of unit length) as
$\textbf{r}= r_x \hat{i}+r_y \hat{j} +r_z \hat{k}$
where $r_x , r_y$ and $r_z$ are called the components of the vector $\vec{r}$; in two dimensions the $\hat{k}$ term is simply absent.
Now the only thing that remains is adding or subtracting these vectors, known as vector algebra, to build the kinematic quantities. To do this, simply add or subtract the components along each axis (as below). Consider adding two vectors $\vec{a}$ and $\vec{b}$ in two dimensions,
\begin{eqnarray*}
\textbf{a}+\textbf{b}&=&\left(a_x \hat{i}+a_y \hat{j} \right)+\left(b_x \hat{i}+b_y \hat{j}\right)\\
&=&\left(a_x+b_x\right)\hat{i}+\left(a_y+b_y\right)\hat{j}\\
&=&c_x\, \hat{i}+c_y\, \hat{j}
\end{eqnarray*}
in the last line, the components of the final (resultant) vector are denoted by $c_x$ and $c_y$.
The magnitude and direction of the resulting vector are given by the following relations
\begin{eqnarray*}
|\textbf{c}| &=& \sqrt{\left(a_x+b_x\right)^2 +\left(a_y+b_y\right)^2 } \qquad \text{magnitude}\\
\theta &=& \tan^{-1} \left(\frac{a_y+b_y}{a_x+b_x}\right) \qquad \text{direction}
\end{eqnarray*}
where $\theta$ is the angle with respect to the $x$ axis.
We meet two types of problems on the topic of displacement. In the first case, the initial and final coordinates (positions) of an object are given. Write the position vector of each point; the displacement vector extends from the tip of the initial position vector to the tip of the final one and is computed as the difference of those vectors, i.e. $\vec{c}=\vec{b}-\vec{a}$.
In the second case, the overall path of the object between the initial and final points is given as consecutive vectors, as in the figure below. Here one should decompose each vector with respect to its origin, then add the components along the $x$ and $y$ axes separately. The displacement vector points from the tail of the first vector to the tip of the last vector, and it equals the vector sum $\vec{d}=\vec{a}+\vec{b}+\vec{c}$.
#### Example $1$:
A moving object is displaced from $A(2,-1)$ to $B(-5,3)$ in a two-dimensional plane. What is the displacement vector of this object?
Solution
First, note that this is the first case mentioned above, so construct the position vectors of points $A$ and $B$ as below
\begin{eqnarray*}
\overrightarrow{OA}&=&2\,\hat{i}+(-1)\,\hat{j}\\
\overrightarrow{OB}&=&-5\,\hat{i}+3\,\hat{j}
\end{eqnarray*}
Now, by definition, the difference of the final and initial position vectors gives the displacement vector $\vec{d}$:
\begin{eqnarray*}
\vec{d} &=& \overrightarrow{OB}-\overrightarrow{OA}\\
&=& \left(-5\,\hat{i}+3\,\hat{j}\right)-\left(2\,\hat{i}+(-1)\,\hat{j}\right)\\
&=& -7\,\hat{i}+4\,\hat{j}
\end{eqnarray*}
Its magnitude and direction are obtained as follows
\begin{eqnarray*}
|\vec{d} |&=&\sqrt{\left(-7\right)^2 +\left(4\right)^2 }\\
&=&\sqrt{49+16}\approx 8.06\,{\rm m}
\end{eqnarray*}
$\theta = \tan^{-1} \left(\frac{d_y}{d_x}\right)=\tan^{-1}\left(\frac{4}{-7}\right)$
The angle $\theta$ may be $-29.74^\circ$ or $150.26^\circ$, but since $d_x$ is negative and $d_y$ is positive the resultant vector lies in the second quadrant of the coordinate system. Therefore the desired angle with the $x$-axis is $150.26^\circ$.
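The arithmetic of this example can be verified with a short Python sketch (not part of the original lesson); `math.atan2` resolves the quadrant ambiguity automatically.

```python
import math

A = (2, -1)   # initial position
B = (-5, 3)   # final position

# Displacement vector: final minus initial, componentwise.
d = (B[0] - A[0], B[1] - A[1])                 # (-7, 4)

magnitude = math.hypot(d[0], d[1])             # sqrt(49 + 16), about 8.06
angle = math.degrees(math.atan2(d[1], d[0]))   # about 150.26 deg, second quadrant

print(d, round(magnitude, 2), round(angle, 2))
```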
#### Example $2$:
An airplane flies $276.9\,{\rm km}$ $\left[{\rm W}\, 76.70^\circ\, {\rm S}\right]$ from Edmonton to Calgary and then continues $675.1\,{\rm km}$ $\left[{\rm W}\, 11.45^\circ\,{\rm S}\right]$ from Calgary to Vancouver. Using components, calculate the plane's total displacement. (Nelson 12, p. 27).
Solution
In these problems there is a compact notation for directions, used in many textbooks, stated in brackets: $\left[{\rm W}\, 76.70^\circ\, {\rm S}\right]$ reads "point west, and then turn $76.70^\circ$ toward the south".
To solve such exercises, first sketch a diagram of all the vectors, decompose them, and then use the vector algebra explained above to compute the desired quantity (here $\vec{d}$).
The two successive legs are denoted by the vectors $\vec{d_1}$ and $\vec{d_2}$ and in terms of components read
\begin{eqnarray*}
\vec{d_1} &=& |\vec{d_1}|\,\cos \theta \, (-\hat{i})+|\vec{d_1}|\,\sin \theta \, (-\hat{j})\\
\vec{d_2} &=& |\vec{d_2}|\,\cos \alpha \, (-\hat{i})+|\vec{d_2}|\,\sin \alpha \, (-\hat{j})
\end{eqnarray*}
Putting in the numbers, one obtains
\begin{eqnarray*}
\vec{d_1} &=& 276.9\,\cos 76.70^{\circ} \, (-\hat{i})+276.9\,\sin 76.7^{\circ} \, (-\hat{j})\\
\vec{d_2} &=& 675.1\,\cos 11.45^{\circ} \, (-\hat{i})+675.1\,\sin 11.45^{\circ} \, (-\hat{j})\\
\end{eqnarray*}
The total displacement is drawn from the tail of $\vec{d_1}$ to the tip of $\vec{d_2}$. In the language of vector addition, $\vec{d}=\vec{d_2}+\vec{d_1}$, so
\begin{eqnarray*}
\vec{d} &=& \vec{d_2}+\vec{d_1}\\
&=& \left(661.664+63.700 \right)\,\left(-\hat{i}\right)+\left(134.016+269.47 \right)\,\left(-\hat{j}\right)\\
&=& \left(725.364 \right)\,\left(-\hat{i}\right)+\left(403.486 \right)\,\left(-\hat{j}\right) \qquad [{\rm km}]
\end{eqnarray*}
Therefore, the magnitude of the airplane's total displacement from Edmonton to Vancouver, and its direction with respect to the $x$-axis, are
\begin{eqnarray*}
|\vec{d}| &=& \sqrt{\left(725.364 \right)^2 +\left(403.486 \right)^2 }\\
&=& 830.032\,{\rm km}
\end{eqnarray*}
\begin{eqnarray*}
\gamma &=& \tan^{-1} \left(\frac{d_y}{d_x}\right)\\
&=& \tan^{-1} \left(\frac{403.486}{725.364}\right)\\
&=& 29.09^\circ
\end{eqnarray*}
As one can see, the resultant vector points toward the south-west, i.e. $\left[{\rm W}\,29.09^{\circ}\,{\rm S}\right]$.
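The same bookkeeping can be sketched in Python (the helper name `components` is ours, chosen for illustration): in an x-east, y-north frame, a $[{\rm W}\,\theta\,{\rm S}]$ leg has negative $x$ and $y$ components.

```python
import math

def components(length, angle_deg):
    # [W angle S]: point west, then turn angle_deg toward the south,
    # so both components are negative in an x-east, y-north frame.
    a = math.radians(angle_deg)
    return (-length * math.cos(a), -length * math.sin(a))

d1 = components(276.9, 76.70)    # Edmonton -> Calgary
d2 = components(675.1, 11.45)    # Calgary  -> Vancouver

dx, dy = d1[0] + d2[0], d1[1] + d2[1]
magnitude = math.hypot(dx, dy)                          # about 830 km
direction = math.degrees(math.atan(abs(dy) / abs(dx)))  # about 29.09 deg south of west

print(round(magnitude, 1), round(direction, 2))
```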
#### Example $3$:
A moving particle moves over the surface of a solid cube in such a way that passes through $A$ to $B$. What is the magnitude of displacement vector in this change of location of the particle?
Solution:
In three-dimensional cases, just as in two dimensions, we only need to know the coordinates of the object; the relations below then give the displacement of a moving particle in any number of dimensions.
Points $A$ and $B$ lie on the $x$-$z$ plane and on the $y$ axis, respectively, so their coordinates are $(10,0,10)$ and $(0,10,0)$, where the triples denote $(x,y,z)$. This is a problem of the first type.
\begin{eqnarray*}
\overrightarrow{OA}&=& 10\,\hat{i}+0\,\hat{j}+10\,\hat{k}\\
\overrightarrow{OB}&=& 0\,\hat{i}+10\,\hat{j}+0\,\hat{k}
\end{eqnarray*}
\begin{eqnarray*}
\vec{d} &=&\overrightarrow{OB}-\overrightarrow{OA}\\
&=& -10\,\hat{i}+10\,\hat{j}-10\,\hat{k}
\end{eqnarray*}
Therefore the desired vector is computed in terms of its components as above. Its magnitude is the square root of the sum of the squares of its components:
$|\vec{d}|=\sqrt{(-10)^2 +(10)^2 + (-10)^2 }=10\sqrt{3}$.
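The same recipe in Python for this three-dimensional case (an illustrative sketch):

```python
import math

A = (10, 0, 10)   # on the x-z plane
B = (0, 10, 0)    # on the y axis

# Displacement: final minus initial, componentwise.
d = tuple(b - a for a, b in zip(A, B))        # (-10, 10, -10)
magnitude = math.sqrt(sum(c * c for c in d))  # 10 * sqrt(3), about 17.32

print(d, round(magnitude, 2))
```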
#### Example $4$:
A car moves around a circle of radius of $20\,{\rm m}$ and returns to its starting point. What is the distance and displacement of the car? ($\pi = 3$)
Solution:
As mentioned above, displacement depends only on the initial and final points of the motion. Since the car returns to its initial position, its displacement is zero. The distance traveled, however, is simply the perimeter of the circle (this scalar quantity depends on the shape of the path): $d=2 \pi r=2 \times 3 \times 20 =120\,{\rm m}$, where $r$ is the radius of the circle.
In summary: displacement is a vector that depends only on the initial and final positions of the particle, not on the details of the motion and path. Vector quantities require both a magnitude and a direction to be specified. By contrast, distance is characterized by a single value (a scalar) and is path dependent. In general, the distance traveled and the magnitude of the displacement vector between two points are not the same.
|
2018-07-17 17:14:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391712546348572, "perplexity": 449.2434304900916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589757.30/warc/CC-MAIN-20180717164437-20180717184437-00074.warc.gz"}
|
https://tex.stackexchange.com/questions/389523/how-do-i-include-a-global-bibliography-at-the-end-of-the-document-with-bibunits/389525
|
# How do I include a global bibliography at the end of the document with bibunits?
I have this code:
\documentclass[10pt]{article}
\usepackage[globalcitecopy]{bibunits}
\usepackage{filecontents}
\usepackage[round]{natbib}
\begin{filecontents*}{mwecitations.bib}
@book{goossens93,
author = "Frank Mittelbach and Michel Goossens and Johannes Braams and David Carlisle and Chris Rowley",
title = "The {LaTeX} Companion",
year = "1993",
}
@article{fujita2010economic,
title={Economic effects of the unemployment insurance benefit},
author={Fujita, Shigeru},
volume={4},
year={2010}
}
@article{rothstein2011unemployment,
title={Unemployment insurance and job search in the {Great Recession}},
author={Rothstein, Jesse},
journal={NBER},
volume={w17534},
year={2011}
}
\end{filecontents*}
\defaultbibliography{mwecitations}
\defaultbibliographystyle{plainnat}
\begin{document}
\begin{bibunit}
\section{something}
First discuss \cite*{goossens93}.
\putbib
\end{bibunit}
\begin{bibunit}
\section{something else}
Now discuss \cite*{rothstein2011unemployment} and \cite*{fujita2010economic}.
\putbib
\end{bibunit}
\renewcommand{\refname}{Global bibliography}
\bibliography[mwecitations]
\end{document}
which outputs this:
There is no global bibliography and the name of the citation file is included erroneously. I'm trying to get a global bibliography that includes every citation from every bibunit at the end of the document. Per the bibunits documentation:
You can create a global bibliography as usual with the commands \bibliography[〈BibTeX files〉] and \bibliographystyle[〈style〉]. Use \cite and \nocite to generate citations that appear in the local bibliography. Use \cite* and \nocite* inside a unit to generate citations for both the local and global bibliography.
As far as I can tell, I've applied this correctly. If I use \bibliography instead of \bibliography[mwecitations] (since I'm hoping to use the default .bib file without having to call it by name), I get the error
! Paragraph ended before \bibliography was complete.
What am I doing wrong? I compile the document with
xelatex mwe
bibtex bu1.aux
bibtex bu2.aux
xelatex mwe
xelatex mwe
I'm using natbib for the author-year citations.
• I think you have to replace \bibliography[mwecitations] with \bibliography{mwecitations} (Not tested but easy to understand from your input and output) – koleygr Sep 2 '17 at 18:24
• @koleygr The documentation clearly uses a bracket [ instead of a brace, but I'll try that. – Michael A Sep 2 '17 at 18:53
• I saw it there in your question after I answered. But just some seconds later @Ulrike Fischer provided an answer using {} instead of []... Anyway, if that fails, you could also try including the extension (.bib), because 〈BibTeX files〉 are given with their extension unless another form is mentioned. – koleygr Sep 2 '17 at 19:04
• It may be a typo, as far as I checked. See section 3.3 of the documentation you provided. – koleygr Sep 2 '17 at 19:11
There are three errors: you have a typo (brackets instead of braces), the global \bibliographystyle is missing, and you didn't run bibtex on your main file (i.e. bibtex mwe, in addition to bibtex bu1.aux and bibtex bu2.aux).
\documentclass[10pt]{article}
\usepackage[globalcitecopy]{bibunits}
\usepackage{filecontents}
\usepackage[round]{natbib}
\begin{filecontents*}{mwecitations.bib}
@book{goossens93,
author = "Frank Mittelbach and Michel Goossens and Johannes Braams and David Carlisle and Chris Rowley",
title = "The {LaTeX} Companion",
year = "1993",
}
@article{fujita2010economic,
title={Economic effects of the unemployment insurance benefit},
author={Fujita, Shigeru},
volume={4},
year={2010}
}
@article{rothstein2011unemployment,
title={Unemployment insurance and job search in the {Great Recession}},
author={Rothstein, Jesse},
journal={NBER},
volume={w17534},
year={2011}
}
\end{filecontents*}
\defaultbibliography{mwecitations}
\defaultbibliographystyle{plainnat}
\begin{document}
\begin{bibunit}
\section{something}
First discuss \cite*{goossens93}.
\putbib
\end{bibunit}
\begin{bibunit}
\section{something else}
Now discuss \cite*{rothstein2011unemployment} and \cite*{fujita2010economic}.
\putbib
\end{bibunit}
\renewcommand{\refname}{Global bibliography}
\bibliographystyle{plainnat}
\bibliography{mwecitations}
\end{document}
|
2020-01-18 12:30:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765113353729248, "perplexity": 5780.9107220398255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592565.2/warc/CC-MAIN-20200118110141-20200118134141-00184.warc.gz"}
|
https://tug.org/pipermail/tex-live/2004-January/004919.html
|
# [tex-live] path search
Olaf Weber olaf at infovore.xs4all.nl
Mon Jan 19 18:07:54 CET 2004
Hans Hagen writes:
> [...] I think that when one provides an explicit path, i.e. when the
> filename starts with one of
> .
> /
> <char>:
> no path searching should take place. (unless maybe a filename end with // -)
I agree. My in-development code handles this case by checking the
given name: if it is absolute or explicit relative ("./<name>") it
constructs a one-element path that will be used to search in. (Some
kind of search is still necessary because several suffixes may be
possible.)
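The rule under discussion can be sketched roughly as follows (Python pseudocode of the *proposed* behaviour, not the actual libkpathsea implementation; the `//` suffix case mentioned above is left out):

```python
def is_explicit(name):
    # True when `name` carries an explicit path, in which case no path
    # searching should take place (only trying possible suffixes remains).
    if name.startswith('/'):                              # absolute
        return True
    if name.startswith('./') or name.startswith('../'):   # explicit relative
        return True
    # "<char>:" -- a one-letter drive prefix, as on DOS/Windows.
    if len(name) >= 2 and name[0].isalpha() and name[1] == ':':
        return True
    return False

for n in ['thisorthat', './thisorthat', '../figures/thisorthat',
          'c:foo', '/usr/share/x']:
    print(n, is_explicit(n))
```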
> Can someone shed some light on this (was it changed, when, how do i
> find out, how to prevent, etc)
It looks like the logic that ought to accomplish the same in
libkpathsea is broken. :-(
> Btw, there are situations when one would like to be more specific in
> the path search; i fear the moment that files with the same names end
> up in context trees, which is possible when users have their own
> extensions (e.g. symbol definitions, typescripts, modules); currently
> i can handle that with a rigurous name scheme but ...; maybe there
> should be some option like
> \input [tex/context/private/]thisorthatfile
> (not an explicit path since it has [] which are not used by tex users
> for filenames i guess)
> i.e. [...] tells the kpse library that tex/context/private/thisorthat
> is to be searched (in the tree), not to be confused with an explicit
> path under the current directory.
Interesting. It looks quite doable to fit something like this in the
new code, though I do believe some details would still have to be
sorted out.
> so:
> \input thisorthat : search controlled by
> kpse variables
> \input ./thisorthat : search controlled by
> user (first example)
> \input ../figures/thisorthat : search controlled by
> user (second example)
> \input [somewhere;somewhereelse]/thisorthat : a combination
> I can even imagine:
> \input [*somewhere]/thisorthat : with *somewhere being
> \$somehwere, i.e. an env/kpse variable
> [ ] and * are not used in filenames normally so ...
--
Olaf Weber
(This space left blank for technical reasons.)
|
2019-03-23 22:49:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982356190681458, "perplexity": 9188.774263617812}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00146.warc.gz"}
|
http://mathonline.wikidot.com/the-wronskian-of-a-linear-homogeneous-nth-order-ode
|
The Wronskian of a Linear Homogeneous nth Order ODE
# The Wronskian of a Linear Homogeneous nth Order ODE
Recall from the Fundamental Sets and Matrices of a Linear Homogeneous nth Order ODE page that if we have a linear homogeneous $n^{\mathrm{th}}$ order ODE $y^{(n)} + a_{n-1}(t)y^{(n-1)} + ... + a_1(t)y' + a_0y = 0$ then a linearly independent set of solutions $\{ \psi_1, \psi_2, ..., \psi_n \}$ to this ODE is called a fundamental set of solutions, and the corresponding fundamental matrix is given by:
(1)
\begin{align} \quad \Psi(t) = \begin{bmatrix} \psi_1(t) & \psi_2(t) & \cdots & \psi_n(t) \\ \psi_1'(t) & \psi_2'(t) & \cdots & \psi_n'(t) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_1^{(n-1)}(t) & \psi_2^{(n-1)}(t) & \cdots & \psi_n^{(n-1)}(t) \end{bmatrix} \end{align}
We give a special name to the determinant of such a matrix.
Definition: The Wronskian of a linear homogeneous $n^{\mathrm{th}}$ order ODE $y^{(n)} + a_{n-1}(t)y^{(n-1)} + ... + a_1(t)y' + a_0y = 0$ with Fundamental matrix $\Psi$ is defined to be $W(\psi_1, \psi_2, ..., \psi_n)(t) = \det \Psi (t)$.
Recall that for linear homogeneous systems of first order ODEs $\mathbf{x}' = A(t) \mathbf{x}$, we have that the determinant of a fundamental matrix $\Phi(t)$ is given by:
(2)
\begin{align} \quad \det \Phi(t) = \det \Phi (\tau) \cdot \mathrm{exp} \left ( \int_{\tau}^{t} \mathrm{tr}(A(s)) \: ds \right ) \end{align}
For a linear homogeneous $n^{\mathrm{th}}$ order ODE, we have that the matrix $A(t)$ is the companion matrix for this system, and $\mathrm{tr}(A(t)) = -a_{n-1}(t)$. Therefore, the Wronskian for the fundamental matrix $\Psi$ is given by:
(3)
\begin{align} \quad W(\psi_1, \psi_2, ..., \psi_n)(t) = W(\psi_1, \psi_2, ..., \psi_n)(\tau) \cdot \mathrm{exp} \int_{\tau}^{t} -a_{n-1}(s) \: ds \end{align}
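A quick numerical sanity check of formula (3) for $n = 2$ (an illustrative sketch): for $y'' - 3y' + 2y = 0$ we have $a_1(t) = -3$ and fundamental solutions $\psi_1 = e^t$, $\psi_2 = e^{2t}$, so the formula predicts $W(t) = W(\tau)\,e^{3(t-\tau)}$.

```python
import math

def wronskian(t):
    # Fundamental matrix columns for y'' - 3y' + 2y = 0: (e^t, e^{2t}).
    psi1, dpsi1 = math.exp(t), math.exp(t)
    psi2, dpsi2 = math.exp(2 * t), 2 * math.exp(2 * t)
    return psi1 * dpsi2 - psi2 * dpsi1   # det of the 2x2 fundamental matrix

t, tau = 0.7, 0.0
# Formula (3): W(t) = W(tau) * exp(integral of -a_1(s) ds from tau to t).
predicted = wronskian(tau) * math.exp(3 * (t - tau))

print(abs(wronskian(t) - predicted) < 1e-9)
```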
|
2019-04-25 08:46:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999840259552002, "perplexity": 611.484700386282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578711882.85/warc/CC-MAIN-20190425074144-20190425100144-00168.warc.gz"}
|
https://www.physicsforums.com/threads/derivative-product-rule-and-other-rule-proofs.588768/
|
# Derivative product rule and other rule proofs.
1. Mar 20, 2012
### mtayab1994
1. The problem statement, all variables and given/known data
Prove that the functions $u+v$, $\alpha u$ and $u\cdot v$ are differentiable at $x_0$.
2. Relevant equations
in other words prove that :
$$(u+v)'(x_{0})=u'(x_{0})+v'(x_{0})$$
$$(\alpha u)'(x_{0})=\alpha u'(x_{0})$$
$$(u\cdot v)'(x_{0})=u'(x_{0})\cdot v(x_{0})+u(x_{0})\cdot v'(x_{0})$$
3. The attempt at a solution
Can someone give me some heads up on how to start this proof and should i use the limit of (f(x)-f(x0))/x-x0 to prove this? We have never done f'(x)=lim as h approaches 0 of (f(x+h)-f(x))/h
Last edited: Mar 20, 2012
2. Mar 20, 2012
### lanedance
yes, first principles is the way to go with all of these, here's some tex to write it if that helps for future (right click to see the code)
$$f'(x) = \lim_{h \to 0}\frac{f(x+h)-f(x)}{h}$$
3. Mar 20, 2012
### Staff: Mentor
Yes, it looks to me like you need to use the definition of the derivative for all three parts.
4. Mar 20, 2012
### mtayab1994
yes but we never did derivatives with (f(x+h)-f(x))/h
5. Mar 20, 2012
### cepheid
Staff Emeritus
You realize that these two things in bold are equivalent, right? The interval "h" is just the difference between the two values x and x0 where you are evaluating the function. So saying that h → 0, is the same as saying that x → x0. So if you've seen the former, then you know how to solve the problem.
6. Mar 20, 2012
### mtayab1994
Ok i've proved all of them and the one with alpha is proved in one line i don't know if it's that easy, but that's what i got.
7. Mar 20, 2012
### lanedance
should be pretty straight forward once you get the definition down
Last edited: Mar 20, 2012
8. Mar 20, 2012
### mtayab1994
Yes thank you very much i've done all of them and by the way proving the quotient rule and (1/v)(x0) should be the same right?
9. Mar 20, 2012
### lanedance
will need a little more manipulation, but the same approach will work
10. Mar 20, 2012
### mtayab1994
Yes thank you I've just proven all of the stuff i had to do.
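For reference, the add-and-subtract step that drives the product-rule proof from the $x \to x_0$ definition (a sketch; it uses continuity of $v$ at $x_0$, which follows from differentiability):

```latex
\begin{aligned}
(u\cdot v)'(x_0)
 &= \lim_{x \to x_0}\frac{u(x)v(x)-u(x_0)v(x_0)}{x-x_0}\\
 &= \lim_{x \to x_0}\frac{\bigl(u(x)-u(x_0)\bigr)v(x)+u(x_0)\bigl(v(x)-v(x_0)\bigr)}{x-x_0}\\
 &= u'(x_0)\,v(x_0)+u(x_0)\,v'(x_0).
\end{aligned}
```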
|
2017-08-17 05:16:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084052205085754, "perplexity": 1151.2911364915847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00093.warc.gz"}
|
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=HNSHCY_2016_v38n2_337
|
FIXED POINT THEOREMS FOR WEAK CONTRACTION IN INTUITIONISTIC FUZZY METRIC SPACE
• Journal title : Honam Mathematical Journal
• Volume 38, Issue 2, 2016, pp.337-357
• Publisher : The Honam Mathematical Society
• DOI : 10.5831/HMJ.2016.38.2.337
Title & Authors
FIXED POINT THEOREMS FOR WEAK CONTRACTION IN INTUITIONISTIC FUZZY METRIC SPACE
Vats, Ramesh Kumar; Grewal, Manju;
Abstract
The notion of weak contraction in intuitionistic fuzzy metric space is well known and its study is well entrenched in the literature. This paper introduces the notion of $(\psi,\alpha,\beta)$-weak contraction in intuitionistic fuzzy metric space. In this context, we prove certain coincidence point results in partially ordered intuitionistic fuzzy metric spaces for functions which satisfy a certain inequality involving three control functions. In the course of the investigation, we found that by imposing some additional conditions on the mappings, the coincidence point turns out to be a fixed point. Moreover, we establish a theorem as an application of our results.
Keywords
common fixed point;fuzzy metric space;control function;weak contraction;
Language
English
Cited by
References
1.
K. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems 20 (1986), 87-96.
2.
C. Alaca, D. Turkoglu and C. Yildiz, Fixed points in intuitionistic fuzzy metric spaces, Chaos Solitons Fractals, 29 (2006), 1073-1078.
3.
I. Beg, C. Vetro, D. Gopal and M. Imdad, ($\phi$, $\psi$ )-weak contractions in intuitionistic fuzzy metric spaces, J. Intell. Fuzzy Systems, 26(5) (2014), 2497-2504.
4.
S. Banach, Théorie des opérations linéaires, Monografie Matematyczne, Warsaw, Poland, 1932.
5.
M. Edelstein, On fixed and periodic points under contractive mappings, J. London Math. Soc., 37 (1962), 74-79.
6.
M. Grabiec, Fixed points in fuzzy metric spaces, Fuzzy Sets and Systems, 27 (1988), 385-389.
7.
G. Jungck and B.E. Rhoades, Fixed Point for set valued function without continuity, J. Pure Appl. Math., 29(3) (1998), 227-238.
8.
I. Kramosil and J. Michalek, Fuzzy metric and statistical metric spaces, Kybernetika, 11 (1975), 336-344.
9.
M.S. Khan, M. Swaleh and S. Sessa, Fixed points theorems by altering distances between the points, Bull. Aust. Math.Soc., 30 (1984), 1-9.
10.
A. Kumar and R. K. Vats, Common fixed point theorem in fuzzy metric space using control function, Commun. Korean Math. Soc., 28(3) (2013) 517-526.
11.
R. Lowen, Fuzzy set theory, Kluwer Academic Publishers, Dordrecht, 1996.
12.
K. Menger, Statistical metrics, Proc. Nat. Acad. Sci. USA, 28 (1942), 535-537.
13.
J.H. Park, Intutionistic fuzzy metric space, Chaos Solitons Fractals, 22 (2004), 1039-1046.
14.
B. Schweizer and A. Sklar, Statistical metric spaces, Pacific. J. Math, 10 (1960), 313-334.
15.
D. Turkoglu, C. Alaca and C. Yildiz, Compatible maps and compatible maps of types ($\alpha$) and ($\beta$) in intuitionistic fuzzy metric spaces, Demonstratio Math., 39(3) (2006), 671-684.
16.
L.A. Zadeh, Fuzzy Sets, Inform. Control, 89 (1965), 338-353.
17.
http://en.wikipedia.org/wiki/Partially_ordered_set.
|
2018-03-18 08:20:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9227809309959412, "perplexity": 4500.864084209845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00297.warc.gz"}
|
https://math.stackexchange.com/questions/3151953/higher-inductive-type-what-for
|
# Higher inductive type: what for?
The typical example of higher inductive type (HIT) is the circle $$S^1$$ that is nicely described here. I understand HITs are convenient if you want to do homotopy theory within type theory. But what does it bring the computer scientist? Are there interesting data structures that could not be defined without an HIT?
• Just to amend Mike's answer, Thorsten Altenkirch has some applications of HITs to the theory of containers. – Ingo Blechschmidt Mar 18 at 22:21
• Assuming a type A and an equivalence relation R : A -> A -> Prop, how would you define the HIT A/R that is the true quotient of A by R? – Bob Mar 18 at 19:39
• A constructor [-] : A -> A/R, a constructor forall (x y : A), R x y -> [x] = [y], and probably a 0-truncation constructor forall (x y : A/R) (p q : x = y), p = q. See section 6.10 in the HoTT Book. – Mike Shulman Mar 18 at 21:08
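The quotient HIT sketched in the comments above corresponds closely to Lean's built-in `Quot` type. A rough Lean 4 illustration (the `parity` relation and all names here are made up for the example; note Lean omits the 0-truncation constructor because propositional equality is already proof-irrelevant):

```lean
-- Quot.mk r      : α → Quot r                     -- the [-] constructor
-- Quot.sound     : r a b → Quot.mk r a = Quot.mk r b  -- the path constructor
def parity : Nat → Nat → Prop := fun a b => a % 2 = b % 2

def Z2 := Quot parity                  -- Nat quotiented by parity
def toZ2 (n : Nat) : Z2 := Quot.mk parity n

-- 0 and 2 become equal in the quotient, witnessed by the path constructor
example : toZ2 0 = toZ2 2 := Quot.sound (rfl : 0 % 2 = 2 % 2)
```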
https://dialinf.wordpress.com/2008/07/
|
## Model Theory and Category Theory (July 29, 2008)
Posted by dcorfield in Uncategorized.
David Kazhdan has some interesting things to say about model theory, and in particular its relationship to category theory, in his Lecture notes in Motivic Integration.
In spite of it successes, the Model theory did not enter into a “tool box” of mathematicians and even many of mathematicians working on “Motivic integrations” are content to use the results of logicians without understanding the details of the proofs.
I don’t know any mathematician who did not start as a logician and for whom it was “easy and natural” to learn the Model theory. Often the experience of learning of the Model theory is similar to the one of learning of Physics: for a [short] while everything is so simple and so easily reformulated in familiar terms that “there is nothing to learn” but suddenly one find himself in a place when Model theoreticians “jump from a tussock to a hummock” while we mathematicians don’t see where to “put a foot” and are at a complete loss.
## Same but multifaceted (July 11, 2008)
Posted by Alexandre Borovik in Uncategorized.
Continuing the topic of “sameness”, it is interesting to compare behaviour of two familiar objects: the field of real numbers $\mathbb{R}$ and the field of complex numbers $\mathbb{C}$.
$\mathbb{C}$ is uncountably categorical, that is, it is uniquely described in a language of first order logic among the fields of the same cardinality.
In case of $\mathbb{R}$, its elementary theory, that is, the set of all closed first order formulae that are true in $\mathbb{R}$, has infinitely many models of cardinality continuum $2^{\aleph_0}$.
In naive terms, $\mathbb{C}$ is rigid, while $\mathbb{R}$ is soft and spongy and shape-shifting. However, $\mathbb{R}$ has only trivial automorphisms (an easy exercise), while $\mathbb{C}$ has a huge automorphism group, of cardinality $2^{2^{\aleph_0}}$ (this also follows with relative ease from basic properties of algebraically closed fields). In naive terms, this means that there is only one way to look at $\mathbb{R}$, while $\mathbb{C}$ can be viewed from an incomprehensible variety of different points of view, most of them absolutely transcendental. Actually, there are just two comprehensible automorphisms of $\mathbb{C}$: the identity automorphism and complex conjugation. It looks like the construction of all other automorphisms involves the Axiom of Choice. When one looks at what happens at the model-theoretic level, it appears that the "uniqueness" and "canonicity" of an uncountable structure is directly linked to its multifacetedness. I am still hunting for appropriate references for this fact. Meanwhile, I got the following e-mail from a model theorist colleague, Zoe Chatzidakis:
Models of uncountably categorical theories behave really like vector spaces: if inside a model $M$ you take a maximal independent set $X$ of elements realizing the generic type, and take any permutation of $X$, it extends to an automorphism of the model. So, if $M$ is of size $\kappa > \aleph_0$, then any basis has size $\kappa$, and its automorphism group has size $2^\kappa$.
I don’t know a reference, but it should be in any model theory book which talks about strongly minimal sets. Or maybe in the paper by ??? Morley ??? which shows that you have a notion of dimension and so on? I.e., that $\aleph_1$ categorical theories and strongly minimal sets are the same.
It is really a well-known result, so you probably don’t need a reference if you cite it in a paper.
## Facing Eternity (July 4, 2008)
Posted by Alexandre Borovik in Uncategorized.
https://electronics.stackexchange.com/questions/84432/problem-with-warnings-in-xilinx-tools
|
# Problem with warnings in Xilinx tools [closed]
I am interfacing a VGA monitor with Spartan 3e kit. I have a problem in the code and I'm getting many warnings, as shown below.
Could anyone explain the warnings?
WARNING:Xst:1780 - Signal <reg_led> is never used or assigned. This unconnected signal will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <v_count_reg_3> (without init value) has a constant value of 0 in block <vsync_unit>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <v_count_reg_2> (without init value) has a constant value of 0 in block <vsync_unit>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <v_count_reg_1> (without init value) has a constant value of 0 in block <vsync_unit>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <v_count_reg_0> (without init value) has a constant value of 0 in block <vsync_unit>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <mod2_reg> (without init value) has a constant value of 0 in block <vsync_unit>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1710 - FF/Latch <h_sync_reg> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_sync_reg> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_9> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_8> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_7> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_6> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_5> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_4> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_3> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_2> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_1> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <h_count_reg_0> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_9> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_8> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_7> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_6> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_5> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_4> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_3> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_1> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_0> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <mod2_reg> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch will be trimmed during the optimization process.
WARNING:Xst:1895 - Due to other FF/Latch trimming, FF/Latch <v_count_reg_2> (without init value) has a constant value of 0 in block <vga_sync>. This FF/Latch wiProcess "Synthesize - XST" completed successfully
WARNING:Pack:1543 - The register rgb_reg_1_3 has the property IOB=TRUE, but was
not packed into the input side of an I/O component. The IFF1 BEL site already
contains the register symbol "rgb_reg_1_1".
The IFF2 BEL site already contains the register symbol "rgb_reg_1_2".
• Kit I am using is Spartan 3E XC3S1200E 4FG320, and I have googled about these warnings but didn't get any info. – Red1 Oct 5 '13 at 19:53
• There isn't much we can do unless you show us the code that generated these warnings. – Dave Tweed Oct 5 '13 at 20:14
• Sir, these are my codes for Spartan 3E XC3S1200E, package 4FG320. Could you please correct this if you could? Dropbox link for my .v and .ucf code and the reference manual of the Spartan 3E kit — just see them. – Red1 Oct 5 '13 at 20:20
• @Red1 Don't use Dropbox, put it in your question. Don't ask the same question again, improve this one. – Kortuk Oct 14 '13 at 14:27
The warnings are pretty self-explanatory. The reg_led signal is not connected to an FPGA output pin so the tools have deleted it. It seems that the value in the v_count_reg never changes so the tools have deleted the unnecessary flip-flops. I suspect that you have not created the User Constraints File and connected the signals in your design to specific pins on the FPGA. If you have in fact created a UCF file, then run a simulation and figure out why v_count_reg doesn't ever change.
http://nodus.ligo.caltech.edu:8080/40m/page2?mode=full&attach=1&rsort=Date
|
40m Log, Page 2 of 339
ID Date Author Type Category Subject
17013 | Mon Jul 18 16:49:57 2022 | Yehonathan | Update | BHD | add Laser RIN to MICH budget
I measured the RIN by taking the spectrum of C1:MC_TRANS_SUMFILT_OUT and dividing it by the mean count on that channel (~13800 cts). Attachment 1 shows the result.
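The RIN estimate described above (spectrum of the transmitted power divided by its mean) can be sketched as follows. This is a minimal illustration with fake data; the sample rate, record length, and noise level are made up, and only the ~13800 cts mean comes from the entry:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2048.0                  # sample rate in Hz (made up)
n = 1 << 16                  # number of samples (made up)
mean_counts = 13800.0        # mean of the transmitted-power channel (from the entry)
rel_noise = 1e-3             # fractional fluctuation per sample (made up)
x = mean_counts * (1.0 + rel_noise * rng.standard_normal(n))

# One-sided periodogram of the fluctuations, normalized as a PSD (cts^2/Hz)
xf = np.fft.rfft(x - x.mean())
psd = 2.0 * np.abs(xf) ** 2 / (fs * n)

# RIN amplitude spectral density, in 1/sqrt(Hz)
rin_asd = np.sqrt(psd) / x.mean()
```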
I updated the MICH AS55 noise budget but got a very low contribution (gold trace in attachment 2).
It seems too low, I think. What could have gone wrong? Finesse calculates that the transfer function from laser amplitude modulation to AS55 is ~ 1.5e-9 at DC. If I turn off HOMs I get 1e-11 at DC, so this coupling is a result of some contrast defect. Should I include some RMS imbalances in the optics to account for this? Should I include it as a second-order effect due to MICH RMS deviation from zero crossing?
Quote: the main laser noise coupling for a Michelson is because of the RIN, not the frequency noise. You can measure the RIN, in MC trans or at the AS port by getting a single bounce beam from a single ITM.
Attachment 1: Laser_RIN.pdf
Attachment 2: MICH_AS55_Noise_Budget.pdf
17012 | Mon Jul 18 16:39:07 2022 | Paco | Summary | LSC | FPMI locking procedure using REFL55 and AS55
[Yuta, Paco]
In summary, we locked FPMI using REFL55_I, REFL55_Q, and AS55_Q. The key to success was to mix POX11_I and POY11_I in the right way to emulate CARM/DARM, and to find out the correct demodulation phase for AS55.
Procedure
1. Close PSL shutter and zero offsets in AS55, REFL55, POX11, POY11, and ASDC
• For ASDC run python3 resetOffsets.py -c C1:LSC-ASDC_IN1, otherwise use the zero offsets on I and Q inputs from the RFPD medm screen.
2. Lock XARM/YARM using POX/POY to tune demodulation phase.
• Today, the demod phase in POX11 changed to 104.801 deg, and POY11 to -11.256 deg.
3. XARM and YARM are used in the following configuration
• INMAT
• 0.5 * POX11_I - 0.5 * POY11_I --> XARM
• 0.5 * POX11_I + 0.5 * POY11_I --> YARM
• REFL55_Q --> MICH (** this should be turned on after POX11/POY11)
• LSC Filter gains
• XARM = 0.012
• YARM = 0.012
• MICH = +40 (note the sign flip from last time)
• OUTMAT
• XARM --> 0.5 * ETMX - 0.5 * ETMY
• YARM --> MC2
• MICH --> BS
• UGFs (sanity check)
• XARM (DARM) ~ 100 Hz
• YARM (CARM) ~ 200 Hz
• MICH (MICH) ~ 40 Hz
4. Run MICHOpticalGainCalibration.ipynb to see if ASDC vs REFL55_Q looks nice (ellipse in the XY plot), and find any residual offset in REFL55_Q.
• If the plot doesn't look nice in this regard, the IFO needs to be aligned.
5. Sensing matrix for CARM/DARM and MICH.
• With the DARM, CARM and MICH lines on, verify the demod error signals look ok both in mag and phase.
• For example, we found that CARM error signals were correctly represented by either 0.5 * POX11_I + 0.5 * POY11_I or 0.5 * REFL55_I.
• Similarly, we found that DARM error signal was correctly represented by either 0.5 * POX11_I - 0.5 * POY11_I or 2.5 * AS55_Q.
• To find this, we minimized CARM content in AS55_Q, as well as CARM content in REFL55_Q.
6. We acquired the lock by re-configuring the error point as below:
• INMAT
• 0.5*REFL55_I --> YARM (CARM)
• 2.5 * AS55_Q --> XARM (DARM)
• During the hand-off trials, we repeatedly ran the sensing matrix and UGF measurements while stopping at various intermediate mixed error points to check how the error signal calibrations changed if at all.
• Attachment #1 shows the DARM OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
• Attachment #2 shows the CARM OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
• Attachment #3 shows the MICH OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
• The sensing matrix after handoff is below:
Sensing Matrix with the following demodulation phases
{'AS55': 192.8, 'REFL55': 95.63177865911078, 'POX11': 104.80089727128349, 'POY11': -11.256509422276006}
Sensors DARM CARM MICH
C1:LSC-AS55_I_ERR_DQ 5.09e-02 (89.6761 deg) 2.03e-01 (-114.513 deg) 1.28e-04 (-28.9254 deg)
C1:LSC-AS55_Q_ERR_DQ 4.78e-02 (88.7876 deg) 3.61e-03 (-68.7198 deg) 8.34e-05 (-39.193 deg)
C1:LSC-REFL55_I_ERR_DQ 5.18e-02 (-92.2555 deg) 1.20e+00 (65.2507 deg) 1.15e-04 (-102.027 deg)
C1:LSC-REFL55_Q_ERR_DQ 1.81e-04 (59.0854 deg) 1.09e-02 (-114.716 deg) 1.77e-05 (-23.6485 deg)
C1:LSC-POX11_I_ERR_DQ 8.51e-02 (91.2844 deg) 4.77e-01 (67.1709 deg) 7.97e-05 (-72.5252 deg)
C1:LSC-POX11_Q_ERR_DQ 2.63e-04 (114.584 deg) 1.32e-03 (-113.505 deg) 2.10e-06 (118.146 deg)
C1:LSC-POY11_I_ERR_DQ 1.58e-01 (-88.9295 deg) 6.16e-01 (67.6098 deg) 8.71e-05 (172.73 deg)
C1:LSC-POY11_Q_ERR_DQ 2.89e-04 (-89.1114 deg) 1.09e-03 (70.2784 deg) 3.77e-07 (110.206 deg)
Lock gpstimes:
1. [1342220242, 1342220260]
2. [1342220420, 1342220890]
3. [1342221426, 1342221574]
4. [1342222753, 1342223230]
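The POX/POY mixing used above to emulate the CARM/DARM error points is just a 2x2 input matrix; a numerical sketch (the instantaneous signal values here are made up):

```python
import numpy as np

# Hypothetical instantaneous PDH signals from the two arms (arbitrary units)
pox11_i, poy11_i = 0.8, 0.6

# Input matrix from the procedure: DARM/CARM as difference/sum of the arm signals
inmat = np.array([[0.5, -0.5],    # XARM (DARM) = 0.5*POX11_I - 0.5*POY11_I
                  [0.5,  0.5]])   # YARM (CARM) = 0.5*POX11_I + 0.5*POY11_I

darm_err, carm_err = inmat @ np.array([pox11_i, poy11_i])
```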
### Sensitivity estimate (NANB)
Using diaggui, we look at the AS55_Q error point and the DARM control point (C1:LSC-XARM_OUT). We roughly calibrate the error point using the sensing matrix element and the actuation gain at the DARM oscillator frequency: 4.78e-2 / (10.91e-9 / 307.880^2). The control point is calibrated with a 0.95 Hz SUS pole. Attachment #4 shows the sensitivity estimate.
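The quoted error-point calibration factor works out as below. This is a sketch of the arithmetic only; the interpretation of the units (cts per meter, with a 1/f^2 actuator response above the suspension resonance) is an assumption on my part:

```python
f_line = 307.880                # DARM oscillator frequency, Hz (from the entry)
sens = 4.78e-2                  # AS55_Q sensing-matrix element for DARM (from the entry)
act = 10.91e-9 / f_line ** 2    # assumed actuator response at f_line, m/cts
cal = sens / act                # error-point calibration, cts per meter
```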
Attachment 1: DARM_07_18_2022_FMPI.pdf
Attachment 2: CARM_07_18_2022_FPMI.pdf
Attachment 3: MICH_07_18_2022_FPMI.pdf
Attachment 4: fpmi_darm_nb_2022_07.pdf
17011 | Mon Jul 18 15:17:51 2022 | Hang | Update | Calibration | Error propagation to astrophysical parameters from detector calibration uncertainty
1. In the error propagation equation, it should be $\Delta \Theta = -\mathbf{H}^{-1} \mathbf{M} \Delta \Lambda$, instead of the fractional error.
2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.
3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz.
4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal.
17010 | Mon Jul 18 04:42:54 2022 | Anchal | Update | Calibration | Error propagation to astrophysical parameters from detector calibration uncertainty
We can calculate how much detector calibration uncertainty affects the estimation of astrophysical parameters using the following method:
Let $\overrightarrow{\Theta}$ be the set of astrophysical parameters (like component masses, distance, etc.) and $\overrightarrow{\Lambda}$ be the set of detector parameters (like detector pole, gain, or simply the transfer function value for each frequency bin). If the true GW waveform is given by $h(f; \overrightarrow{\Theta})$ and the detector transfer function is given by $\mathcal{R}(f; \overrightarrow{\Lambda})$, then the detected gravitational waveform becomes:
$g(f; \Theta, \Lambda) = \frac{\mathcal{R}(f; \overrightarrow{\Lambda_t})}{\mathcal{R}(f; \overrightarrow{\Lambda})} h(f; \overrightarrow{\Theta})$
One can calculate a derivative of waveform with respect to the different parameters and calculate Fisher matrix as (see correction in 40m/17017):
$\Gamma_{ij} = \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j}\right )$
where the bracket denotes the inner product, defined as:
$\left( k_1 | k_2 \right) = 4 Re \left( \int df \frac{k_1(f)^* k_2(f))}{S_{det}(f)}\right)$
where $S_{det}(f)$ is strain noise PSD of the detector.
With the gamma matrix in hand, the error propagation from detector parameter fractional errors $\frac{\Delta \Lambda_j}{\Lambda_j}$ to astrophysical parameter fractional errors $\frac{\Delta \Theta_i}{\Theta_i}$ is given by (eq 26 in Evans et al 2019 Class. Quantum Grav. 36 205006):
$\frac{\Delta \Theta_j}{\Theta_j} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda_j}{\Lambda_j}$
where $\mathbf{H}_{ij} = \left( \frac{\partial g}{\partial \Theta_i} | \frac{\partial g}{\partial \Theta_j}\right )$ and $\mathbf{M}_{ij} = \left( \frac{\partial g}{\partial \Lambda_i} | \frac{\partial g}{\partial \Theta_j}\right )$.
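The noise-weighted inner product above can be evaluated numerically as a discretized integral. A toy one-parameter sketch (the waveform model, flat PSD, and frequency band here are all made up for illustration):

```python
import numpy as np

def inner(k1, k2, s_det, df):
    """Noise-weighted inner product (k1|k2) = 4 Re ∫ conj(k1)*k2 / S_det df."""
    return 4.0 * np.real(np.sum(np.conj(k1) * k2 / s_det) * df)

# Toy model: g(f; A) = A * exp(2j*pi*f*t0), with a flat (made-up) noise PSD
f = np.linspace(20.0, 500.0, 1000)
df = f[1] - f[0]
s_det = np.full_like(f, 1e-2)

A, t0 = 1.0, 0.01
dg_dA = np.exp(2j * np.pi * f * t0)        # ∂g/∂A for this toy model

gamma_AA = inner(dg_dA, dg_dA, s_det, df)  # 1x1 Fisher "matrix"
sigma_A = 1.0 / np.sqrt(gamma_AA)          # Fisher estimate of the error on A
```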
Using the above-mentioned formalism, I looked into two ways of calculating error propagation from detector calibration error to astrophysical parameter estimations:
## Using detector response function model:
If we model the detector response function as a simple DC gain (4.2 W/nm) and one pole (500 Hz), we can plot the conversion of pole frequency error into astrophysical parameter errors. I took two cases:
• Binary Neutron Star merger with star masses of 1.3 and 1.35 solar masses at 100 Mpc distance with a $\tilde{\Lambda}$ of 500. (Attachment 1)
• Binary black hole merger with black hole masses of 35 and 30 solar masses at 400 Mpc distance, with spins along the z direction of 0.5 and 0.8. (I do not fully understand the meaning of these spin components, but a pycbc waveform generation model still lets me calculate the effect of detector errors.) (Attachment 2)
The plots are shown on both log-log and linear scales, to show both the order of magnitude of the effect and how the error propagation slope differs between parameters. I'm still not sure which way is the best to convey the information. The way to read these plots: for a given error, say 4% in the pole frequency determination, what is the expected error in the component masses, merger distance, etc.
Note that the overall gain of detector response is not sensitive to astrophysical error estimation.
## Using detector transfer function as frequency bin wise multi-parameter function
Alternatively, we can choose to not fit any model to the detector transfer function and simply use the errors in magnitude and phase at each frequency point as an independent parameter in the above formalism. This then lets us see what is the error propagation slope for each frequency point. The hope is to identify which parts of the calibration function are more important to calibrate with low uncertainty to have the least effect on astrophysical parameter estimation. Attachment 3 and 4 show these plots for BNS and BBH cases mentioned above. The top panel is the error propagation slope at each frequency due to error in magnitude of the detector transfer function at that frequency and the bottom panel is the error propagation slope at each frequency due to error in phase of the detector transfer function.
The calibration error in magnitude and phase as a function of frequency would be multiplied by the curves and summed together, to get total uncertainty in each parameter estimation.
This is my first attempt at this problem, so I expect to have made some mistakes. Please let me know if you can point out any. Like, do the order of magnitude and shape of error propagation makes sense? Also, comments/suggestions on the inference of these plots would be helpful.
Finally, I haven't yet tried seeing how these curves change for different true values of the merger event parameters. I'm not yet sure what is the best way to extract some general information for a variety of merger parameters.
Future goals are to utilize this information in informing system identification method i.e. multicolor calibration scheme parameters like calibration line frequencies and strength.
Code location
Attachment 1: BNSparamsErrorwrtfdError-merged.pdf
Attachment 2: BBHparamsErrorwrtfdError-merged.pdf
Attachment 3: BNSparamsEPSwrtCalError.pdf
Attachment 4: BBHparamsEPSwrtCalError.pdf
17009 | Sat Jul 16 02:44:10 2022 | Koji | Update | IOO | IMC servo tuning
I wasn't sure how the IMC servo was optimized recently. We used to have the FSS overall gain (C1:PSL-FSS_MGAIN) at +6dB a few years back. It is now 0dB. So I decided to do a couple of measurements.
1) Default setting:
C1:IOO-MC_REFL_GAIN +4
C1:IOO-MC_BOOST2 +3
C1:IOO-MC_VCO_GAIN +13
C1:PSL-FSS_MGAIN +0
C1:PSL-FSS_FASTGAIN +19
2) Looked at the power spectrum at TEST1A output (error signal)
TEST1A is the signal right after the input gain stage (C1:IOO-MC_REFL_GAIN). Prior to the measurement, I've confirmed that the UGF is ~100Hz even at +0dB (see next section). It was not too bad even with the current default. Just wanted to check if we can increase the gain a bit more.
The input gain was fixed at +4dB and the FSS overall gain C1:PSL-FSS_MGAIN was swept from +0 to +6.
At +5dB and +6dB, the servo bump was very much visible (Attachment 1).
I decided to set the default to be +4dB (Attachment 3).
3) Took OLTF at 0dB and 4dB for the FSS overall gain.
Now the comparison of the open loop transfer functions (OLTF) for C1:PSL-FSS_MGAIN at 0dB and 4dB. The OLTFs were taken by injecting the network analyzer signal into EXCA and measuring the ratio between TEST1A and TEST1B (A/B).
C1:PSL-FSS_MGAIN +0 -> UGF 100kHz / Phase Margin ~50deg
C1:PSL-FSS_MGAIN +4 -> UGF 200kHz / Phase Margin 25~30deg
The phase margin was a bit less but it was acceptable.
4) IMC FSR
Took the opportunity to check the FSR of the IMC. Connected a cable to the RF MON of the IMC REFL demod board. Looked at the peak at 40.56MHz (29.5MHz + 11.066MHz). The peak was not so clear at 11.066195MHz (see 40m ELOG 15845). The peak was anyway minimized and the new modulation frequency was set to be 11.066081MHz (new FSR). The change is 10ppm level and it is within the range of the temp drift.
Attachment 1: ErrorPSD.pdf
Attachment 2: OLTF.pdf
Attachment 3: Screen_Shot_2022-07-16_at_03.59.05.png
17008 | Fri Jul 15 22:36:04 2022 | rana | Summary | LSC | FPMI with REFL/AS55 demod phase adjust
Very nice!
DARM feedback should go to ETMY - ETMX, not just a single mirror: Differential ARM.
For it to work with 1 mirror the UGF of the CARM loop must be much larger than DARM UGF. But in our case, both have a UGF of ~150 Hz.
In principle, you could run the CARM loop with higher gain by using the CM servo board, but maybe that can wait until the X,Y -> CARM, DARM handoff.
17007 | Fri Jul 15 19:13:22 2022 | Paco | Summary | LSC | FPMI with REFL/AS55 demod phase adjust
[Yuta, Paco]
• We first zero the offsets in ASDC, AS55, REFL55, POX11, and POY11 when PSL shutter is closed.
• After this, we checked the offsets with only ITMX aligned. Some of the RFPDs had ~2 counts of offset, which indicates some RFAM of the sidebands, but we decided not to tune the Marconi frequencies since the offsets were small enough.
• We went over the demod phases for AS55, REFL55, POX11, and POY11.
• For POX11/POY11 first we just minimized the Q in each locked XARM/YARM individually. The newfound values were
• C1:LSC-POX11_PHASE_R = 106.991
• C1:LSC-POY11_PHASE_R = -12.820
• Then we misaligned the XARM by getting rid of the MICH fringe in the ASDC port with ITMX yaw offset, and locked YARM using AS55_Q and REFL55_I and found the demod phase that minimized the AS55_I and REFL55_Q. The newfound values were
• C1:LSC-AS55_PHASE_R = -65.9586
• C1:LSC-REFL55_PHASE_R = -78.6254
• Repeating the above, but now misaligning YARM with ITMY yaw offset, locking XARM with AS55_Q and REFL55_I, we found the demod phases that minimized AS55_I and REFL55_Q. The newfound values were
• C1:LSC-AS55_PHASE_R = -61.4361
• C1:LSC-REFL55_PHASE_R = -71.0434
• From the above demod phase differences, the Schnupp asymmetry between X and Y was measured. We repeated the measurement three times to derive the error.
• Optimal demod phase difference between X arm and Y arm for both AS55 and REFL55 were measured to be -4.5 +/- 0.1 deg, which means that lx-ly = 3.39 +/- 0.05 cm (Marconi frequency: 11.066195 MHz).
• We measured the gain difference between AS55_Q and POX11/POY11 = -0.5
• We measured the gain difference between REFL55_I and POX11/POY11 = -2.5
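The quoted Schnupp asymmetry can be reproduced from the demod-phase difference. This is a hedged sketch of the arithmetic: the round-trip phase relation and the factor of 5 (assuming the "55" signals are the fifth harmonic of the 11.066 MHz Marconi frequency) are my assumptions, not stated in the entry:

```python
import math

c = 299_792_458.0          # m/s
f_marconi = 11.066195e6    # Marconi frequency, Hz (from the entry)
f_sb = 5 * f_marconi       # 55 MHz sideband = 5th harmonic (assumption)

dphi = math.radians(4.5)   # measured demod-phase difference between arms

# Round-trip sideband phase difference: dphi = 4*pi*f_sb*(lx - ly)/c
dl = dphi * c / (4 * math.pi * f_sb)   # -> ~0.0339 m, i.e. ~3.39 cm
```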
After this, we locked DARM, CARM and MICH using POX11_I, POY11_I and AS55 error signals respectively, and actuating on ETMX, MC2, and BS with NO TRIGGERS (but FM triggers were on for boosts as usual). Under this condition, FM5 is used for lock acquisition, and FM1, FM2, FM3, FM6 are turned on with FM triggers. No FM4 was on. We also noticed:
• CARM FM6 "BounceRoll" is slightly different than "YARM" FM6 "Bounce". The absent roll resonant gain actually makes it easier to control the CARM, we just had to use YARM filter for locking it.
• When CARM is controlled, we often just kick the ETMX to bring it near resonance, since the frequency noise drops and we otherwise have to wait long.
17006 | Fri Jul 15 16:20:16 2022 | Cici Hanna | Update | General | Finding UGF
I have temporarily abandoned vectfit and aaa since I've been pretty unsuccessful with them and I don't need poles/zeroes to find the unity gain frequency. Instead I'm just fitting the transfer function linearly (on a log-log scale). I've found the UGF at about 5.5 kHz right now, using old data - next step is to get the Red Pitaya working so I can take data with that. Also need to move this code from matlab to python. Uncertainty's propagated using the 95% confidence bounds given by the fit, using curvefit - so just from the standard error, and all points are weighted equally. Ideally would like to propagate uncertainty accounting for the coherence data too, but haven't figured out how to do that correctly yet.
[UPDATE 7/22/2022: added raw data files]
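The log-log linear fit can be sketched like this; synthetic 1/f data stands in for the SR785 measurement, and (as noted above) all points are weighted equally rather than by coherence:

```python
import numpy as np

# Synthetic open-loop TF magnitude falling as 1/f, UGF near 5.5 kHz,
# standing in for the SR785 data (all points weighted equally).
f = np.logspace(2, 5, 50)   # 100 Hz .. 100 kHz
mag = 5.5e3 / f             # |H(f)| crosses unity at 5.5 kHz

# Linear fit of log10|H| vs log10(f)
slope, intercept = np.polyfit(np.log10(f), np.log10(mag), 1)

# Unity gain: log10|H| = 0  =>  log10(f_ugf) = -intercept / slope
f_ugf = 10 ** (-intercept / slope)
print(f"UGF = {f_ugf / 1e3:.2f} kHz")  # ~5.50 kHz
```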
Attachment 1: UGF_4042.png
Attachment 2: UGF_5650.png
Attachment 3: TFSR785_29-06-2022_114042.txt
# SR785 Measurement - Timestamp: Jun 29 2022 - 11:40:42
# Parameter File: TFSR785template.yml
#---------- Measurement Setup ------------
# Start frequency (Hz) = 100000.000000
# Stop frequency (Hz) = 100.000000
# Number of frequency points = 30
# Excitation amplitude (mV) = 10.000000
# Settling cycles = 5
# Integration cycles = 100
#---------- Measurement Parameters ----------
... 52 more lines ...
Attachment 4: TFSR785_29-06-2022_115650.txt
# SR785 Measurement - Timestamp: Jun 29 2022 - 11:56:50
# Parameter File: TFSR785template.yml
#---------- Measurement Setup ------------
# Start frequency (Hz) = 100000.000000
# Stop frequency (Hz) = 2000.000000
# Number of frequency points = 300
# Excitation amplitude (mV) = 5.000000
# Settling cycles = 5
# Integration cycles = 200
#---------- Measurement Parameters ----------
... 322 more lines ...
17005 Fri Jul 15 12:21:58 2022 JCUpdateElectronicsChecking Sorensen Power Supplies
Of the 7 Sorenson Power Supplies I tested, 5 are working fine, 1 cannot output more than 20 Volts before shorting, and the other does not output current. Six Sorensons are behind the X-Arm.
Quote: [JC] I went around 40m picking up any Sorensens that were laying around to test if they worked, or in need of repair. I gathered up a total of 7 Sorensens and each one with a Voltmeter. I made sure the voltage would rise on the Sorenson as well as the voltmeter, maxing out at ~33.4 Volts. For the current, the voltmeter can only rise to 10 Amps before it is fused. Many of the Sorensons that I found did not have their own wall connection, so I had to use the same one for multiple. From these 7, I have found 5 that are well. One Sorenson I have tested has a output shortage above 20V and the other has yet to be tested.
Attachment 2: FA4CF579-6C1E-48D5-B152-74F35B4EE90B.jpeg
17004 Thu Jul 14 19:56:15 2022 ranaUpdateIOOmc wfs demod
It looks like Tomislav's measurements of the WFS demod board noise were actually of the cable that goes from the whitening to the ADC. So the huge low frequency excess that he saw is not due to wind, but just the inverse whitening of the digital system?
In any case, today, I looked at the connections from the Whitening to the ADC. It goes through an interface chassis to go from ribbon to SCSI. The D-Sub connectors there have the common problem in many of the LIGO D-sub connectors: namely that the strain relief nuts are too tall and so the D connector doesn't seat firmly - its always about to fall out. JC, can you please take a look at this and order a set of low profile nuts so that we can rework this chassis? Its the one between the WFS whitening and the SCSI cables which go to the ADCs.
After pushing them in, I confirmed that the WFS are working, by moving all 6 DoF of the MC mirrors via bias slider, and looking at the step responses (attached). As you can see, all sensors see all mirrors, even if they are noisy.
Next up: get a breakout for the demod output connector and measure the noise there.
For today, I aligned the IMC by hand, then centred the WFS beams by unlocking the IMC and aligning the bright beam. I noticed that the WFS1 beam was being dumped randomly, so I angled the WFS1 by ~3 deg and dumped the specular reflection on a razor blade dump. To handle the sign change in the MC1 actuation (?), I changed the sign in the MC1 ASC filter banks. MCWFS loops still not closing, but they respond to mirror alignment.
Attachment 1: mcwfs-steps.pdf
17003 Thu Jul 14 19:09:51 2022 ranaUpdateGeneralEQ recovery
There was an EQ in Ridgecrest (approximately 200 km north of Caltech). It was around 6:20 PM local time.
All the suspensions tripped. I have recovered them (after some struggle with the weird profusion of multiple conflicting scripts/ directories that have appeared in the recent past...)
ETMY is still giving me some trouble. Maybe because of the HUGE bias on that within the fast CDS system, it had some trouble damping. Also the 'reenable watchdog' script in one of the many scripts directories seems to do a bad job. It re-enables optics, but doesn't make sure that the beams are on the optical lever QPD, and so the OL servo can smash the optic around. This is not good.
Also what's up with the bashrc.d/ in some workstations and not others? Was there something wrong with the .bashrc files we had for the past 15 years? I will revert them unless someone puts in an elog with some justification for this "upgrade".
This new SUS screen is coming along well, but some of the fields are white. Are they omitted or is there something non-functional in the CDS? Also, the PD variances should not be in the line between the servo outputs and the coil. It may mislead people into thinking that the variances are of the coils. Instead, they should be placed elsewhere as we had it in the old screens.
Attachment 1: ETMY-screen.png
17002 Thu Jul 14 00:10:08 2022 yutaSummaryLSCFPMI with REFL/AS55 trial continued
[Paco, Koji, Yuta]
We managed to lock MICH using REFL55_Q by setting the demodulation phases and offsets right.
The following is the current FPMI locking configuration we achieved so far.
DARM: POX11_I / gain 0.007 / 0.5*ETMX-0.5*ETMY (or 1*ETMX) / UGF of ~100 Hz
CARM: POY11_I / gain 0.018 / 1*MC2 / UGF of ~200 Hz
MICH: REFL55_Q / gain -10 / 0.5*BS / UGF of ~30 Hz
Transitioning DARM error signal from POX11_I to 0.5*POX11_I+0.5*POY11_I was possible with FM4 filter off in DARM filter bank, but not to AS55_Q yet.
REFL55 and AS55 demodulation phase tuning:
- We found that both AS55 and REFL55 are contaminated by large non-MICH signal, by making a ASDC vs RF plot (see 40m/16929).
- After both arms are locked with POX and POY, MICH was locked with AS55_Q. ASDC was minimized by putting an offset to MICH filter.
- With this, REFL55 offsets were zeroed and demodulation phase was tuned to minimize REFL55_Q.
- Locked MICH with REFL55_Q, and did the same thing for AS55_Q.
- Resulting ASDC vs RF plots were attached. REFL55_Q now looks great, but REFL55_I and AS55 are noisy (due to signals from the arms?).
Jupyter notebook: https://git.ligo.org/40m/scripts/-/blob/main/CAL/MICH/MICHOpticalGainCalibration.ipynb
Sensing matrix:
- With FPMI locked using POX/POY, DARM and CARM lines were injected at around 300 Hz to measure the sensing gains. For line injection, C1:CAL-SENSMAT was used, but for the demodulation we used a script. The following is the result.
Sensors              DARM (ETMX)              CARM (MC2)
C1:LSC-AS55_I_ERR    3.10e+00 (-34.1143 deg)  1.09e+01 (-14.907 deg)
C1:LSC-AS55_Q_ERR    9.96e-01 (-33.9848 deg)  3.30e+00 (-27.9468 deg)
C1:LSC-REFL55_I_ERR  6.75e+00 (-33.7723 deg)  2.92e+01 (-34.0958 deg)
C1:LSC-REFL55_Q_ERR  7.07e-01 (-33.4296 deg)  3.08e+00 (-33.4437 deg)
C1:LSC-POX11_I_ERR   3.97e+00 (-33.9164 deg)  1.51e+01 (-30.7586 deg)
C1:LSC-POY11_I_ERR   6.25e-02 (-20.3946 deg)  3.59e+00 (38.4207 deg)
Jupyter notebook: https://git.ligo.org/40m/scripts/-/blob/main/CAL/SensingMatrix/MeasureSensMat.ipynb
- By taking the ratios of POX11_I and AS55_Q for DARM, and of POY11_I and REFL55_I for CARM, we found the gain corrections needed to use REFL55 and AS55 for DARM and CARM: AS55_Q requires x3.96 the gain of POX11_I, and REFL55_I requires x0.123 the gain of POY11_I.
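The quoted gain factors follow from the sensing-matrix magnitudes; a quick sanity check (magnitudes copied from the table above, ignoring the phases, so the result differs slightly from the quoted x3.96):

```python
# Sensing gain magnitudes from the measured sensing matrix
sens_darm = {"AS55_Q": 9.96e-01, "POX11_I": 3.97e+00}
sens_carm = {"REFL55_I": 2.92e+01, "POY11_I": 3.59e+00}

# Servo gain factor needed when switching error signals
as55_gain = sens_darm["POX11_I"] / sens_darm["AS55_Q"]      # ~3.99
refl55_gain = sens_carm["POY11_I"] / sens_carm["REFL55_I"]  # ~0.123
print(f"AS55_Q: x{as55_gain:.2f}, REFL55_I: x{refl55_gain:.3f}")
```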
Next:
- Try locking the arms with no triggering, and then try locking FPMI with REFL/AS without triggering. No FM4 for this, since FM4 kills gain margin.
- Lock single arm with AS55_Q and make a noise budget. Make sure to misalign ITMX(Y) completely when locking Y(X)arm.
- Lock single arm with REFL55_I and make a noise budget.
- Repeat Xarm noise budget with Yarm locked with POY11_I and MC2 (40m/16975).
- Check IMC to reduce frequency noise (40m/17001)
Attachment 1: AS55_I.png
Attachment 2: AS55_Q.png
Attachment 3: REFL55_I.png
Attachment 4: REFL55_Q.png
17001 Wed Jul 13 18:58:17 2022 KojiUpdateIOOIMC suspicion
This is just my intuition but the IMC servo seems not so optimized. I can increase the servo gain by 6~10dB easily. And I couldn't see that the PC drive went mad (red) as I increase the gain (=UGF).
The IMC needs careful OLTF measurements as well as the high freq spectrum observation.
It seems that I have worked on the IMC servo tuning in 2014 July/Aug. Checking these elogs would be helpful.
17000 Wed Jul 13 17:30:19 2022 KojiUpdateCDSToo huge script_archive
I wanted to check the script archive to see some old settings. I found that the script archive inflated to huge volume (~1TB).
The size of the common NFS volume (/cvs/cds) is 3TB. So it is really significant.
- The scripts living in /opt/rtcds/caltech/c1/scripts are archived daily in /cvs/cds/caltech/scripts_archive as bz2 files. This is done by crontab of megatron (see https://wiki-40m.ligo.caltech.edu/Computers_and_Scripts/CRON)
- In fact, the script folder (say old script folder) /opt/rtcds/caltech/c1/scripts has the size of 10GB. And we have a compressed copy of this every day.
- This large script folder is due to a couple of huge files/folders listed below
• (scripts)/MEDMtab is 5.3GB / This is probably related to the web MEDM view (on nodus) but I don't think the web page is updated (i.e. the images are unused)
• (scripts)/MC/logs/AutoLocker.log 2.9GB / This is just the accumulated MC autolocker log.
• (scripts)/GigE 780M / This does not look like scripts but source and object files
• (scripts)/Admin/n2Check.log 224M / This is important but increases every minute.
• (scripts)/ZI 316MB / Zurich Instrument installation. This should not be here.
Here I propose some changes.
For the script archive
• We can remove most of the scripts for the past (say ~2019). We leave an archive file per month.
• For the scripts in 2020, we leave a weekly archive.
• For 2021 and 2022, we leave all the archive files.
For the existing large files/folders
• MEDMtab: the stored files are redundant with the burt snapshots. Remove the image files. Also, we want to move the image-saving location.
• Autolocker.log: simply zap it
• n2Check.log: we should move the saving location
• GigE /ZI: they need a new home where the daily copy is not taken.
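The proposed archive retention policy can be sketched as a selection function. This assumes the archive filenames contain an ISO date (e.g. scripts_2018-03-14.tar.bz2); the actual naming convention should be checked before deleting anything:

```python
import re
from datetime import date

DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def keep(fname):
    """Retention policy from the proposal: monthly up to 2019,
    weekly for 2020, everything from 2021 on."""
    m = DATE_RE.search(fname)
    if m is None:
        return True                 # leave unrecognized files alone
    d = date(*map(int, m.groups()))
    if d.year <= 2019:
        return d.day == 1           # one archive per month
    if d.year == 2020:
        return d.isoweekday() == 1  # one archive per week (Mondays)
    return True                     # 2021 and 2022: keep all

print(keep("scripts_2018-03-01.tar.bz2"))  # True
print(keep("scripts_2018-03-14.tar.bz2"))  # False
```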
16999 Wed Jul 13 13:30:48 2022 YehonathanUpdateBHDadd Laser RIN to MICH budget
the main laser noise coupling for a Michelson is because of the RIN, not the frequency noise. You can measure the RIN, in MC trans or at the AS port by getting a single bounce beam from a single ITM.
16998 Wed Jul 13 13:26:44 2022 ranaSummaryElectronicsElectronics noise measurements
as I said to you yesterday, I don't think image 2a shows the output of the demod board. The output of the demod board is actually the output connector ON the demod board. What you are showing in 2a is the signal that goes from the whitening board to the ADC, I believe. I may be mistaken, so please check with Tega for the signal chain.
16997 Wed Jul 13 12:49:25 2022 PacoSummarySUSSUS frozen
[Paco, JC, Yuta]
This morning, while investigating the source of a burning smell, we turned off the c1SUS 1X4 power strip powering the sorensens. After this, we noticed the MC1 refl was not on the camera, and in general other vertex SUS were misaligned even though JC had aligned the IFO in the morning to almost optimum arm cavity flashing. After a c1susaux modbusIOC service restart and burt restore, the problem persisted.
We started to debug the sus rack chain for PRM since the oplev beam was still near its alignment so we could use it as a sensor. The first weird thing we noticed was that no matter how much we "kicked" PRM, we wouldn't see any motion on the oplev. We repeatedly kicked UL coil and looked at the coil driver inputs and outputs, and also verified the eurocard had DC power on which it did. Somehow disconnecting the acromag inputs didn't affect the medm screen values, so that made us suspicious that something was weird with these ADCs.
Because all the slow channels were in a frozen state, we tried restarting c1susaux and the acromag chassis and this fixed the issue.
16996 Wed Jul 13 10:54:39 2022 YehonathanUpdateBHDMICH AS55 noise budget
I fixed some mistakes in the budget:
1. The BS pendulum resonance was corrected from 0.8Hz to 1Hz
2. Added missing X3 filter in the coil filters
3. Optical gain is now computed from MICH to AS55 instead of BS to AS55 and is calculated to be: 9.95e8 cts/m.
4. Coil driver gain is still unmeasured but it is found to be 1.333 to make the actuation calibration from BS to MICH match the measurement (see attachment 1).
Attachment 2 shows the resulting MICH OLTF.
Laser noise was added to the budget in a slightly ad-hoc fashion (will fix later): Yuta and I measured MC_F and computed MC_F*(Schnupp asymmetry)/(Laser frequency). Attachment 3 shows the updated noise budget.
Attachment 1: BS_MICH_ACtuation_Calibration.pdf
Attachment 2: MICH_AS55_Model_Measurement_Comparison.pdf
Attachment 3: MICH_AS55_Noise_Budget.pdf
16995 Wed Jul 13 07:16:48 2022 JCUpdateElectronicsChecking Sorensen Power Supplies
[JC]
I went around 40m picking up any Sorensens that were laying around to test if they worked, or were in need of repair. I gathered up a total of 7 Sorensens and tested each one with a Voltmeter. I made sure the voltage would rise on the Sorenson as well as the voltmeter, maxing out at ~33.4 Volts. For the current, the voltmeter can only rise to 10 Amps before it is fused. Many of the Sorensons that I found did not have their own wall connection, so I had to use the same one for multiple.
From these 7, I have found 5 that are well. One Sorenson I have tested has an output short above 20V and the other has yet to be tested.
Attachment 1: 658C5D39-11BD-4EE3-90E2-34CBBC1DBD3C.jpeg
Attachment 2: 5328312A-7918-44CC-82B7-54B57840A336.jpeg
16994 Tue Jul 12 19:46:54 2022 PacoSummaryALSHow (not) to take NPRO PZT transfer function
[Paco, Deeksha, rana]
Quick elog for this evening:
• Rana disabled MC servo .
• Slow loop also got disengaged.
• AUX PSL beatnote is best taken with *free running lasers* since their relative frequency fluctuations are lower than when locked to cavities.
• DFD may be better to get PZT transfer funcs, or get higher bandwidth phase meter.
• Multi instrument to be done with updated moku
• Deeksha will take care of updated moku
16993 Tue Jul 12 18:35:31 2022 Cici HannaSummaryGeneralFinding Zeros/Poles With Vectfit
Am still working on using vectfit to find my zeros/poles of a transfer function - now have a more specific project in mind, which is to have a Red Pitaya use the zero/pole data of the transfer function to find the UGF, so we can check what the UGF is at any given time and plot it as a function of time to see if it drifts (hopefully it doesn't). Wrestled with vectfit more on matlab, found out I was converting from dB's incorrectly (should be 10^(dB/20)....) Intend to read a bit of a book by Bendat and Piersol to learn a bit more about how I should be weighting my vectfit. May also check out an algorithm called AAA for fitting instead.
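For the record, the dB conversion in question (the factor of 20 applies to amplitude quantities like a transfer function magnitude; dividing by 10 would be for power):

```python
def db_to_mag(db):
    """Amplitude ratio from dB: |H| = 10**(dB/20)."""
    return 10 ** (db / 20.0)

print(db_to_mag(0))    # 1.0
print(db_to_mag(20))   # 10.0
print(db_to_mag(-6))   # ~0.501
```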
16992 Tue Jul 12 14:56:17 2022 TomislavSummaryElectronicsElectronics noise measurements
[Paco, Tomislav]
We measured the electronics noise of the demodulation board, whitening board, and ADC for WFSs, and OPLEV board and ADC for DC QPD in MC2 transmission. We were using SR785.
Regarding the demodulation board, we did 2 series of measurements. For the first series of measurements, we were blocking the WFS (attachment 1) and measuring noise at the output of the demod board (attachment 2a). This measurement includes the dark noise of the WFS, the electronics noise of the demod board, and phase noise from the LO. For the second series of measurements, we unplugged the input to the demod board (attachments 2b & 2c show how it looked before unplugging) (the mistake we made here was not putting a 50-ohm terminator) and again measured at the output of the demod board. This measurement doesn't include the dark noise of the WFS. We measured it for all 8 segments (I1, I2, I3, I4, Q1, Q2, Q3, Q4). The dark noise contribution is negligible with respect to the demod board noise. In attachments 3 & 4 please find plots that include detection and demodulation contributions for both WFSs.
For whitening board electronics noise measurement, we were terminating the inputs (attachment 5) and measuring the outputs (attachment 6). Electronics noise of the whitening board is in the attachments 7 & 8.
For ADC electronics noise we terminated ADC input and measured noise using diaggui (attachments 9 & 10). Please find these spectra for WFS1, WFS2, and MC TRANS in attachments 11, 12 & 13.
For MC2 TRANS we measured OPLEV board noise. We did two sets of measurements, as for demod board of WFSs (with and without QPD dark noise) (attachments 14, 15 & 16). In the case of OPLEV board noise without dark noise, we were terminating the OPLEV input. Please find the electronics noise of OPLEV's segment 1 (including dark noise which is again much smaller with respect to the OPLEV's electronics noise) in attachment 17.
For the transfer functions, demod board has flat tf, whitening board tf please find in attachment 18, ADC tf is flat and it is (2**16 - 1)/20 [cts/V], and dewhitening tf please find in attachment 19. Also please find the ASD of the spectral analyzer noise (attachment_20).
Measurements for WFS1 demod and whitening were done on 5th of July between 15h and 18h local time. Measurements for WFS2 demod and whitening were done on 6th of July between 15h and 17h local time. All the rest were done on July 7th between 14h and 19h. In attachment 21 also find the comparison between electronics noise for WFSs and cds error signal (taken on the 28th of June between 17h and 18h). Sorry for bad quality of some pictures.
Attachment 1: attachment_1.jpg
Attachment 2: attachment_2a.jpg
Attachment 3: attachment_2b.jpg
Attachment 4: attachment_2c.jpg
Attachment 5: attachment_3.png
Attachment 6: attachment_4.png
Attachment 7: attachment_5.jpg
Attachment 8: attachment_6.jpg
Attachment 9: attachment_7.png
Attachment 10: attachment_8.png
Attachment 11: attachment_9.jpg
Attachment 12: attachment_10.jpg
Attachment 13: attachment_11.png
Attachment 14: attachment_12.png
Attachment 15: attachment_13.png
Attachment 16: attachment_14.jpg
Attachment 17: attachment_15.jpg
Attachment 18: attachment_16.jpg
Attachment 19: attachment_17.png
Attachment 20: attachment_18.png
Attachment 21: attachment_19.png
Attachment 22: attachment_20.png
Attachment 23: attachment_21.png
16991 Tue Jul 12 13:59:12 2022 ranaSummaryComputersprocess monitoring: Monit
I've installed Monit on megatron and nodus just now, and will set it up to monitor some of our common processes. I'm hoping that it can give us a nice web view of what's running where in the Martian network.
16990 Tue Jul 12 09:25:09 2022 ranaUpdateIOOIMC WFS
MC WFS Demod board needs some attention.
Tomislav has been measuring a very high noise level in the MC WFS demod output (which he promised to elog today!). I thought this was a bogus measurement, but when he, and Paco and I tried to measure the MC WFS sensing matrix, we noticed that there is no response in any WFS, although there are beams on the WFS heads. There is a large response in MC2 TRANS QPD, so we know that there is real motion.
I suspect that the demod board needs to be reset somehow. Maybe the PLL is unlocked or some cable is wonky. Hopefully not both demod boards are fried.
Please leave the WFS loops off until demod board has been assessed.
16989 Tue Jul 12 09:14:50 2022 ranaUpdateBHDMICH AS55 noise budget
Looking good:
• I think the notches you see in he measured noise are a clue as to the excess noise source. You can try turning some notches on/off.
• Laser noise does matter a bit more subtley: the low freq noise couples to AS55 through the RMS deviation of the MICH loop from the zero crossing, and the noise of the 55 MHz modulation.
• Jitter in the IMC couples to MICH through the misalignment of the Michelson.
• As you rightly note, the optical lever feedback on the ITMs and BS also make length noise through the suspension actuator imbalance and the spot mis-centering.
16988 Mon Jul 11 19:29:23 2022 PacoSummaryGeneralFinalizing recovery -- timing issues, cds, MC1
[Yuta, Koji, Paco]
## Restarting CDS
We were having some trouble restarting all the models on the FEs. The error was the famous 0x4000 DC error, which has to do with time de-synchronization between fb1 and a given FE. We tried a combination of things haphazardly, such as reloading the gpstime process using
controls@fb1:~ 0$ sudo systemctl stop daqd_*
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ sudo modprobe gpstime
controls@fb1:~ 0$ sudo systemctl start daqd_*
controls@fb1:~ 0$ sudo systemctl restart open-mx.service
without much success, even when doing this again after hard rebooting FE + IO chassis combinations around the lab. Koji prompted us to check the local times as reported by the gpstime module, and comparing it to network reported times we saw the expected offset of ~3.5 s. On a given FE ("c1***") and fb1 separately, we ran:
controls@c1***:~ 0$ timedatectl
Local time: Mon 2022-07-11 16:22:39 PDT
Universal time: Tue 2022-07-11 23:22:39 UTC
Time zone: America/Los_Angeles (PDT, -0700)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2022-03-13 01:59:59 PST
Sun 2022-03-13 03:00:00 PDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2022-11-06 01:59:59 PDT
Sun 2022-11-06 01:00:00 PST
controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

which meant a couple of things:
1. fb1 was serving its time (broadcast to the local (martian) network)
2. fb1 was not getting its time from the internet
3. c1*** was not synchronized even though fb1 was serving the time

By looking at previous elogs with similar issues, we tried two things:
1. First, from the FEs, run sudo systemctl restart systemd-timesyncd to get the FE in sync; this didn't immediately solve anything.
2. Then, from fb1, we tried pinging google.com and failed! The fb1 was not connected to the internet!!! We tried rebooting fb1 to see if it connected, but eventually what solved this was restarting the bind9 service on chiara! Now we could ping google, and saw this output:
controls@fb1:~ 0$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+tor.viarouge.ne 85.199.214.102 2 u 244 1024 377 144.478 0.761 0.566
*ntp.exact-time. .GPS. 1 u 93 1024 377 174.450 -1.741 0.613
time.nullrouten .STEP. 16 u - 1024 0 0.000 0.000 0.000
+ntp.as43588.net 129.6.15.28 2 u 39m 1024 314 189.152 4.244 0.733
192.168.123.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
meaning fb1 was getting its time served. Going back to the FEs, we still couldn't see the ntp synchronized flag up, but it just took time: after a few minutes we saw the FEs in sync! This also meant that we could finally restart all FE models, which we successfully did following the script described in the wiki. Then we had to reload the modbusIOC service on all the slow machines (sometimes this required us to call sudo systemctl daemon-reload) and performed a burt restore to last Friday's snap file collection.
## IMC realign and MC1 glitch?
With Koji's help PMC locked, and then Yuta and Paco manually increased the input power to the IFO by rotating the waveplate picomotor to 37.0 deg. After this, we noticed that the MC REFL spot was not hitting the camera, so maybe MC1 was misaligned. Paco checked the AP table and saw the spot horizontally misaligned on the camera, which gave us the initial YAW correction on MC1. After some IMC recovery, we saw only MC1 got spontaneously kicked along both PIT and YAW, making our alignment futile. Though not hard to recover, we wondered why this happened.
We went into the 1X4 rack and pushed MC1 suspension cables in to rule out loose connections, but as we came back into the control room we again saw it being kicked randomly! We even turned damping off for a little while and this random kicking didn't stop. There was no significant seismic motion at the time, so it is still unclear what is happening.
16987 Mon Jul 11 17:41:52 2022 KojiHowToVACStartup after Power Outage
- Once the FRG gauge readings are back (see next elog by Tega), I could open V1 to pump down the main vacuum manifold.
- TP2/TP3 were brought back to stand-by mode (slower spinning)
- V7 was closed to separate the annuli side and TP1
During the vacuum recovery, I saw TPs were automatically turned on as soon as the backing pumps were engaged. I could not figure out what caused this automation.
Also, I saw some gate valve states changed while I was not touching them, e.g. V7 was closed / VM3 was open / etc
I really had no idea what/who was handling these.
As of ~18:00 local, the main volume pressure is ~2e-5 torr and ready to open the PSL shutter.
Attachment 1: Screen_Shot_2022-07-11_at_18.13.00.png
16986 Mon Jul 11 17:25:43 2022 TegaUpdateVACfixed obsolete reference bug in serial_XGS600 service
Koji noticed that the FRG sensors were not updating due to a reference to an obsolete modbusIOC_XGS service, which was used temporarily to test the operation of the serial XGS sensor readout to EPICS. The information in this service was later moved into modbusIOC.service, but the dependence on modbusIOC_XGS.service was not removed from serial_XGS600.service. This did not present any issue before the shutdown, probably because the obsolete service was already loaded, but after the restart of c1vac the obsolete service file modbusIOC_XGS.service was no longer available. This resulted in serial_XGS600.service throwing a failure-to-load error for the missing obsolete modbusIOC_XGS service. The fix involved replacing two references to 'modbusIOC_XGS' with 'modbusIOC' in /opt/target/services/serial_XGS600.service.
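For reference, the kind of edit involved looks like the sketch below; the exact directive names in /opt/target/services/serial_XGS600.service (Requires=, After=, or similar) are an assumption, not a copy of the actual unit file:

```ini
[Unit]
# Before (referenced the obsolete unit, which no longer exists):
#   Requires=modbusIOC_XGS.service
#   After=modbusIOC_XGS.service
Requires=modbusIOC.service
After=modbusIOC.service
```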
I also noticed that the date logged in the commit message was Oct 2010 and that I could not do a push from c1vac due to an error in resolving git.ligo.org. I was able to push the commit from my laptop git repo but was unable to do a pull on c1vac to keep it synced with the remote repo.
16985 Mon Jul 11 15:26:12 2022 JCHowToVACStartup after Power Outage
[Koji, Jc]
Koji and I began starting the vacuum system up.
1. Perform step 2 of the electronics shutdown in reverse order. Anything after that, turn on manually.
2. If C1vac does not come back, then restart by holding the reset button.
3. Open VA6
4. Open VASE, VASV,VABSSCI, VABS, VABSSCO, VAEV, and VAEE
5. Open V7
6. Check P3 and P2, if they are at high pressure, approx. 1 Torr range, then you must use the roughing pumps.
7. Connect Rotary pump tube. (Manually)
8. Turn on AUX Pump
9. Manually open TP2 and TP3 valves.
10. Turn on TP2 and TP3, when the pumps finish startup, turn off Standby to bring to nominal speed.
11. Turn on RP1 and RP3
12. Open V6
13. Once P3 reaches <<1 Torr, close V6 to isolate the Roughing pumps.
14. When TP2 and TP3 are at nominal speed, open V5 and V4.
15. Now that TP1 is well backed, turn on TP1.
16. When TP1 is at nominal speed, Open V1.
16984 Mon Jul 11 11:56:40 2022 YehonathanUpdateBHDMICH AS55 noise budget
I calculated a noise budget for the MICH using AS55 as a sensor. The calculation includes closed-loop TF calculations.
The notebook and associated files can be found on https://git.ligo.org/40m/bhd/-/blob/master/controls/compute_MICH_noisebudget.ipynb.
Attachment 1 shows the loop diagram I was using. The equation describing the steady-state of the loop is
$\left[\mathbb{I}-G \right]\begin{pmatrix} \gamma \\ \delta \\ \Delta\end{pmatrix} = \begin{pmatrix} \alpha \\ \beta \\ \epsilon\end{pmatrix}$
, where G is the adjacency matrix given by
$G=\begin{pmatrix} 0 & 0 & AE_2\\ 0 & 0 & BE_2 \\ E_1C & E_1D & 0 \end{pmatrix}$
First, the adjacency matrix G is constructed by stitching the small ABCDE matrices together. Once the inverse of (I-G) is calculated we can simply propagate any noise source to $\delta$ and then calculate $\left[\mathbb{I}-E(CA+DB)\right]B^{-1}\delta$ to estimate the displacement of the optics.
Attachment 2 shows the calculated noise budget together with Yuta's measurement.
All the input and output electronics are clumped together for now. Laser noise is irrelevant as this is a heterodyne measurement at 55MHz.
It seems like there is some mismatch in the calibration of the optical gain between the measurement and model. The missing noise at 3-30Hz could be due to angle-to-length coupling which I haven't included in the model.
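The steady-state equation above can be sketched numerically with scalar stand-ins for A, B, C, D, E1, E2. The values are illustrative only; the notebook uses frequency-dependent transfer functions:

```python
import numpy as np

# Illustrative scalar stand-ins for the loop elements at one frequency
A, B, C, D, E1, E2 = 0.2, 1.5, 0.3, 0.8, -2.0, 0.5

# Adjacency matrix G from the entry
G = np.array([[0.0,    0.0,    A * E2],
              [0.0,    0.0,    B * E2],
              [E1 * C, E1 * D, 0.0   ]])

# Injected noises (alpha, beta, epsilon)
inj = np.array([1e-3, 0.0, 0.0])

# Steady state: (I - G) (gamma, delta, Delta)^T = (alpha, beta, epsilon)^T
gamma, delta, Delta = np.linalg.solve(np.eye(3) - G, inj)
print(delta)  # the noise propagated to the delta node
```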
Attachment 1: Control_Diagram.pdf
Attachment 2: MICH_AS55_Noise_Budget.pdf
16983 Mon Jul 11 11:16:45 2022 JCSummaryElectronicsStartup after Shutdown
[Paco, Yehonathan, JC]
We began starting up all the electronics this morning beginning in the Y-end. After following the steps on the Complete_Power_Shutdown_Procedures on the 40m wiki, we only came across 2 issues.
1. The Green beam at the Y-End: Turn on the controller; the indicator light will begin flashing. Wait until the blinking light becomes constant, then turn on the beam.
2. C1lsc "could not find operating system"-unable to SSH from Rossa : We found an Elog of how to restart Chiara and this worked. We proceeded by adding this to the procedures of startup.
16982 Fri Jul 8 23:10:04 2022 KojiSummaryGeneralJuly 9th, 2022 Power Outage Prep
The 40m team worked on the power outage preparation. The details are summarized on this wiki page. We will still be able to access the wiki page during the power outage as it is hosted somewhere in Downs.
https://wiki-40m.ligo.caltech.edu/Complete_power_shutdown_2022_07
16981 Fri Jul 8 16:18:35 2022 ranaUpdateLSCActuator calibration of MC2 using Yarm
although I know that Yuta knows this, I will just put this here to be clear: the NNN/f^2 calibration is only accurate above the pendulum POS eigenfrequency, so when we estimate the DC part (in diaggui, for example), we have to assume that we have a pendulum with f = 1 Hz and Q ~5, to get the value of DC gain to put into the diaggui Gain field in the calibration tab.
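Concretely: for a pendulum H(f) = g0 / (f0^2 - f^2 + i*f0*f/Q), the magnitude falls as g0/f^2 well above f0, so a measured NNN/f^2 coefficient gives g0 = NNN and the DC gain is g0/f0^2. A sketch with the assumed f0 = 1 Hz and Q = 5, using the MC2 number as an example coefficient:

```python
# Single pendulum: H(f) = g0 / (f0**2 - f**2 + 1j*f0*f/Q)
F0, Q = 1.0, 5.0  # assumed pendulum eigenfrequency [Hz] and quality factor
NNN = 14.17e-9    # example /f^2 coefficient magnitude [m/counts] (MC2)

def pendulum(f, g0=NNN, f0=F0, q=Q):
    return g0 / (f0**2 - f**2 + 1j * f0 * f / q)

dc_gain = abs(pendulum(0.0))           # = g0 / f0**2, the diaggui Gain value
hf_coeff = abs(pendulum(100.0)) * 1e4  # recovers ~g0 well above f0
print(dc_gain, hf_coeff)
```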
16980 Fri Jul 8 14:03:33 2022 JCHowToVACVacuum Preparation for Power Shutdown
[Koji, JC]
Koji and I have prepared the vacuum system for the power outage on Saturday.
1. Closed V1 to isolate the main volume.
2. Closed off VASE, VASV, VABSSCI, VABS, VABSSCO, VAEV, and VAEE.
3. Closed V6, then close VM3 to isolate RGA
4. Turn off TP1 (You must check the RPMs on the TP1 Turbo Controller Module)
5. Close V5
6. Turn off TP3 (There is no way to check the RPMs, so be patient)
7. Close V4 (System State changes to 'All pneumatic valves are closed')
8. Turn off TP2 (There is no way to check the RPMs, so be patient)
9. Close Vacuum Valves (on TP2 and TP3) which connect to the AUX Pump.
10. Turn off AUX Pump with the breaker switch wall plug.
From here, we shutdown electronics.
1. Run /sbin/shutdown -h now on c1vac to shut the host down.
2. Manually turn off power to electronic modules on the rack.
• GP316a
• GP316b
• Vacuum Acromags
• PTP3
• PTP2
• TP1
• TP2 (Unplugged)
• TP3 (Unplugged)
Attachment 1: Screen_Shot_2022-07-12_at_7.02.14_AM.png
16979 Thu Jul 7 21:25:48 2022 TegaSummaryCDSUse osem variance to turn off SUS damping instead of coil outputs
[Anchal, Tega]
Implemented ramp down of coil bias voltage when the BHD optics watchdog is tripped. Also added a watchdog reset button to the SUS medm screen that turns on damping and ramps up the coil PIT/YAW bias voltages to their nominal values. I believe this concludes the watchdog work.
Quote: TODO Figure out the next layer of watchdogging needed for the BHD optics.
16978 Thu Jul 7 18:22:12 2022 yutaUpdateLSCActuator calibration of MC2 using Yarm
(This is also a restore of elog 40m/16971 from Jul 5, 2022 at 17:36)
MC2 actuator calibration was also done using Yarm in the same way as we did in 40m/16970 (now 40m/16977).
The result is the following;
MC2 : -14.17e-9 /f^2 m/counts in arm length (-2.9905 times ITMY)
MC2 : 5.06e-9 /f^2 m/counts in IMC length
MC2 : 1.06e+05 /f^2 Hz/counts in IR laser frequency
What we did:
- Measured TF from C1:LSC-MC2_EXC to C1:LSC-YARM_IN1 during YARM lock using ETMY (see Attachment #1). Note that the sign of MC2 actuation and ITMY actuation is flipped.
- Took the ratio between ITM actuation and MC2 actuation to calculate MC2 actuation. For ITM actuation, we used the value measured using MICH (see 40m/16929). The average of the ratio in the frequency range 70-150 Hz was used (see Attachment #2).
- The actuation efficiency in meters in arm length was converted into meters in IMC length by multiplying it by IMCLength/ArmLength, where IMCLength=13.5 m is half of IMC round-trip length, ArmLength=37.79 m is the arm length.
- The actuation efficiency in meters in arm length was converted into Hz in IR laser frequency by multiplying it by LaserFreq/ArmLength, where LaserFreq = c / 1064 nm is the laser frequency.
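The two unit conversions above can be written out explicitly (a sketch using the numbers quoted in this entry; magnitudes only, with the sign following the arm-length convention):

```python
# Sketch of the MC2 calibration unit conversions described above.
c = 299792458.0            # speed of light, m/s
ArmLength = 37.79          # m
IMCLength = 13.5           # m, half of the IMC round-trip length
LaserFreq = c / 1064e-9    # Hz, IR laser frequency

arm_cal = 14.17e-9                          # MC2, /f^2 m/counts in arm length
imc_cal = arm_cal * IMCLength / ArmLength   # ~5.06e-9 /f^2 m/counts, IMC length
freq_cal = arm_cal * LaserFreq / ArmLength  # ~1.06e5 /f^2 Hz/counts
```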
Files:
- Measurement files live in https://git.ligo.org/40m/measurements/-/tree/main/LSC/YARM
- Script for calculation lives at https://git.ligo.org/40m/scripts/-/blob/main/CAL/ARM/ETMActuatorCalibration.ipynb
Summary of actuation calibration so far:
BS : 26.08e-9 /f^2 m/counts (see 40m/16929)
ITMX : 5.29e-9 /f^2 m/counts (see 40m/16929)
ITMY : 4.74e-9 /f^2 m/counts (see 40m/16929)
ETMX : 2.65e-9 /f^2 m/counts (0.5007 times ITMX)
ETMY : 10.91e-9 /f^2 m/counts (2.3017 times ITMY)
MC2 : -14.17e-9 /f^2 m/counts in arm length (-2.9905 times ITMY)
MC2 : 5.06e-9 /f^2 m/counts in IMC length
NOTE ADDED by YM on July 7, 2022
To account for the gain imbalance in ETMX, ETMY, and MC2, LSC violin filter gains were set to:
C1:LSC-ETMX_GAIN = 4.12
C1:LSC-MC2_GAIN = -0.77
This is a temporary solution to make the ETMX and MC2 actuation efficiencies from LSC, in terms of arm length, the same as ETMY (10.91e-9 /f^2 m/counts).
I think it is better to make C1:LSC-ETMX_GAIN = 1, and put 4.12 in C1:SUS-ETMX_TO_COIL gains. We need to adjust local damping gains and XARM ASS afterwards. As for MC2, it is better to put -0.77 in LSC output matrix, since this balancing depends on LSC topology.
Attachment 1: TF.png
Attachment 2: MC2.png
16977 Thu Jul 7 18:18:19 2022 yutaUpdateLSCActuator calibration of ETMX and ETMY
(This is a complete restore of elog 40m/16970 from July 5, 2022 at 14:34)
ETMX and ETMY actuators were calibrated using single arm lock by taking the actuation efficiency ratio between ITMs. Below is the result.
ETMX : 2.65e-9 /f^2 m/counts (0.5007 times ITMX)
ETMY : 10.91e-9 /f^2 m/counts (2.3017 times ITMY)
Motivation:
- ETMX and ETMY actuators seemed to be unbalanced when locking DARM (see 40m/16968)
What we did:
- Reverted to C1:LSC-ETMX_GAIN = 1
- XARM was locked using POX11_I_ERR (42dB whitening gain, 132.95 deg for demod phase) with ETMX and C1:LSC-XARM_GAIN=0.06
- YARM was locked using POY11_I_ERR (18dB whitening gain, -66.00 deg for demod phase) with ETMY and C1:LSC-YARM_GAIN=0.02
- OLTFs for each were measured (Attachment #1); UGF was ~180 Hz for XARM, ~200 Hz for YARM.
- Measured TF from C1:LSC-(E|I)TM(X|Y)_EXC to C1:LSC-(X|Y)ARM_IN1 (see Attachment #2)
- Took the ratio between ITM actuation and ETM actuation to calculate ETM actuation. For ITM actuation, we used the value measured using MICH (see 40m/16929). The average of the ratio in the frequency range 70-150 Hz was used.
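The ratio method above can be sketched as follows (hypothetical input arrays and helper name; the real analysis lives in ETMActuatorCalibration.ipynb):

```python
# Minimal sketch of the ratio-based ETM calibration: average the magnitude
# of TF_ETM / TF_ITM over the 70-150 Hz band, then scale the known ITM
# actuation efficiency (in /f^2 m/counts) by that ratio.
def etm_from_itm(freqs, tf_etm, tf_itm, itm_cal, fmin=70.0, fmax=150.0):
    """Return the ETM actuation efficiency inferred from the in-lock TF ratio."""
    ratios = [abs(e / i) for f, e, i in zip(freqs, tf_etm, tf_itm)
              if fmin <= f <= fmax]
    return (sum(ratios) / len(ratios)) * itm_cal
```

With synthetic TFs whose ratio is 2.3017 and the ITMY value 4.74e-9, this reproduces the ETMY figure of 10.91e-9 /f^2 m/counts.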
Files:
- Measurement files live in https://git.ligo.org/40m/measurements/-/tree/main/LSC/XARM and YARM
- Script for calculation lives at https://git.ligo.org/40m/scripts/-/blob/main/CAL/ARM/ETMActuatorCalibration.ipynb
Discussion:
- ETMX actuation is 4.12 times weaker than ETMY's. This is more or less consistent with what we measured in 40m/16968, but we didn't do loop-correction at that time.
- We should check if this imbalance is as expected or not.
Summary of actuation calibration so far:
BS : 26.08e-9 /f^2 m/counts (see 40m/16929)
ITMX : 5.29e-9 /f^2 m/counts (see 40m/16929)
ITMY : 4.74e-9 /f^2 m/counts (see 40m/16929)
ETMX : 2.65e-9 /f^2 m/counts (0.5007 times ITMX)
ETMY : 10.91e-9 /f^2 m/counts (2.3017 times ITMY)
Attachment 1: Screenshot_2022-07-05_14-52-01_OLTF.png
Attachment 2: Screenshot_2022-07-05_14-54-03_TF.png
Attachment 3: Screenshot_2022-07-05_14-56-41_Ratio.png
16976 Wed Jul 6 22:40:03 2022 TegaSummaryCDSUse osem variance to turn off SUS damping instead of coil outputs
I updated the database files for the 7 BHD optics to separate the OSEM variance trigger and the LATCH_OFF trigger operations so that an OSEM variance value exceeding the max of say 200 cnts turns off the damping loop whereas pressing the LATCH_OFF button cuts power to the coil. I restarted the modbusIOC service on c1susaux2 and checked that the new functionality is behaving as expected. So far so good.
TODO
Figure out the next layer of watchdogging needed for the BHD optics.
Quote: [Anchal, JC, Ian, Paco] We have now fixed all issues with the PD mons of c1susaux2 chassis. The slow channels are now reading same values as the fast channels and there is no arbitrary offset. The binary channels are all working now except for LO2 UL which keeps showing ENABLE OFF. This was an issue earlier on LO1 UR and it magically disappeared and now is on LO2. I think the optical isolators aren't very robust. But anyways, now our watchdog system is fully functional for all BHD suspended optics.
16975 Wed Jul 6 19:58:16 2022 PacoSummaryNoiseBudgetXARM noise budget
[Anchal, Paco, Rana]
We locked the XARM using POX11 and made a noise budget for the single arm displacement; see Attachment #1. The noise budget is rough in that we use simple calibrations to get it going; for example we calibrate the measured error point C1:LSC-XARM_IN1_DQ using the single cavity pole and some dc gain to match the UGF point. The control point C1:LSC-XARM_OUT_DQ is calibrated using the actuator gain measured recently by Yuta. We also overlay an estimate of the seismic motion using C1:PEM-SEIS_BS_X_OUT_DQ (calibrated using a few poles to account for stack and pendulum), and finally the laser frequency noise as proxied by the mode cleaner C1:IOO-MC_F_DQ.
A couple of points are taken with this noise budget, apart from it needing a better calibration;
1. Overall the inferred residual displacement noise is high, even for our single arm cavity.
1. By looking at the sim OLTF in foton, it seemed that the single arm cavity loop TF could easily become unstable due to some near-UGF-funkiness likely from FM3 (higher freq boost), so we disabled the automatic triggering on it; the arm stayed locked and we changed the error signal (light blue vs gold (REF1) trace)
2. The arm cavity is potentially seeing too much noise from the IMC in the 1 to 30 Hz band in the form of laser frequency noise.
1. Need IMC noise budget to properly debug.
3. At high frequency (>UGF), there seem to be a bunch of "wiggles" which remain unidentified.
1. We actually tried to investigate a bit into these features, thinking they might have something to do with misalignment, but we couldn't really find significant correlation.
RXA edit:
1. we also noticed some weirdness in the calibration of MC_F v. Arm. We think MC_F should be in units of Hz, and Paco calculated the resulting motion as seen by the arm, but there was a factor of several between these two. Need to calibrate MC_F and check. In principle, MC_F will show up directly in ALS_BEATX (with the green PDH lock off), and I assume that one is accurately calibrated. Somehow we should get MC_F, XARM, and ALS_BEAT to all agree. JC is working on calibrating the Mini-Circuits frequency counter, so once that is done we will be in good shape.
2. we may need to turn on some MC_L feedback for the IMC, so that the MC length follows the NPRO frequency below ~20 Hz.
3. Need to estimate where the IMC WFS noise is in all of this. Does it limit the MC length stability in any frequency band? How do we determine this?
4. Also, we want to redo this noise budget today, whilst using AS55 instead of POX. Please measure the Schnupp asymmetry by checking the optimum demod phase in AS55 for locking Xarm v Yarm.
Attachment 1: xarm_nb_2022_07.pdf
16974 Wed Jul 6 18:51:20 2022 DeekshaUpdateElectronicsMeasuring the Transfer Function of the PZT
Yesterday, we set up the loop to measure the transfer function of the PZT - the MokuLab sends an excitation (note - a swept sine of 1.0 V) to the PZT. The cavity is locked to the PSL and the AUX is locked to the cavity. In order to measure the effect of our excitation, we take the beat note of the PSL and the AUX. This gives us a transfer function as seen in Attachment 1. The sampling rate of the MokuLab is set to 'ultrafast' (125 kHz), so we can expect accurate performance up to 62.5 kHz; however, in order to improve our readings beyond this frequency, modifications must be made to the script (MokuPhaseMeterTF) to avoid aliasing of the signal. A script should also be written to obtain and plot the coherence between the excitation and our output.
Also attached are - Attachment 2 - the circuit diagram of the setup, and Attachment 3 - the TF data calculated.
Edit - the SR560 as shown in the circuit diagram has since been replaced by a broadband splitter (Minicircuits ZFRSC-42-S+).
Attachment 1: pzt_transfer_fn.png
Attachment 2: ckt_diagram.jpeg
Attachment 3: MokuPhaseMeterTFData_20220706_174753_TF_Data.txt
2.000000000000000364e+04 1.764209350625748560e+07 2.715833132756984014e+00
1.928351995884991265e+04 1.695301366919569671e+07 1.509398637395631626e+00
1.859270710016814337e+04 1.647055321367538907e+07 -2.571975165101855865e+00
1.792664192275710593e+04 1.558169995329630189e+07 6.272729335836754183e-01
1.728443786563210961e+04 1.500850042360494658e+07 -1.500422400597591466e+00
1.666524012797089381e+04 1.456986577652360499e+07 2.046163000975175894e+00
1.606822453133765885e+04 1.376167843637173250e+07 1.736835046956476614e+00
1.549259642266657283e+04 1.326192932667389885e+07 -1.272425049850132606e+00
1.493758961654484847e+04 1.283127345074228011e+07 -2.026149685362535369e+00
1.440246537538758821e+04 1.208854709974890016e+07 -3.248352694840740407e-01
... 11 more lines ...
16973 Wed Jul 6 15:28:18 2022 TegaUpdateSUSOutput matrix diagonalisation : F2P coil balancing
local dir: /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/OutMatCalc
Here is an update on our recent attempt at diagonalization of the SUS output matrices. There are two parts to this: the first is coil balancing using the existing F2P code, which had stopped working because of an old-style use of the print function; the second should now focus on the mixing amongst the various degrees of freedom (dof), without a DC/AC split I believe. The F2P code is now working and has been consolidated in the git repo.
TODO:
• The remaining task is to make it so that we only call a single file that combines the characterization code and filter generation code, preferably with the addition of a safety feature that restores any changed values in case of an error or interruption from the user. The safety functionality is already implemented in the output matrix diagonalization stem of the code, so we just need to copy this over.
• Improve the error minimization algorithm for minimizing the cross-coupling between the various dof by adjusting the elements of the output matrix.
Previous work
https://nodus.ligo.caltech.edu:8081/40m/4762
https://nodus.ligo.caltech.edu:8081/40m/4719
https://nodus.ligo.caltech.edu:8081/40m/4688
https://nodus.ligo.caltech.edu:8081/40m/4682
https://nodus.ligo.caltech.edu:8081/40m/4673
https://nodus.ligo.caltech.edu:8081/40m/4327
https://nodus.ligo.caltech.edu:8081/40m/4326
https://nodus.ligo.caltech.edu:8081/40m/4762
16972 Tue Jul 5 20:05:06 2022 TomislavUpdateElectronicsWhitening electronics noise
For whitening electronics noise for WFS1, I get (attachment). This doesn't seem right, right?
Attachment 1: whitening_noises.png
16969 Fri Jul 1 12:49:52 2022 KojiUpdateIOOMC2 seemed misaligned / fixed
I found the IMC was largely misaligned and was not locking. The WFS feedback signals were saturated and MC2 was still largely misaligned in yaw after resetting the saturation.
It seemed that the MC WFS started to put the large offset at 6:30AM~7:00AM (local).
MC2 was aligned and the lock was recovered then the MC WFS seems working for ~10min now.
Attachment 1: C1-MULTI_FBDB3F_TIMESERIES-1340668818-86400.png
Attachment 2: C1-LOCKED_MC_5E4267_TIMESERIES-1340668818-86400.png
16968 Fri Jul 1 08:50:48 2022 yutaSummaryLSCFPMI with REFL/AS55 trial
[Anchal, Paco, Yuta]
We tried to lock FPMI with REFL55 and AS55 this week, but no success yet.
FPMI locks with POX11, POY11 and ASDC for MICH stably, but handing over to 55's couldn't be done yet.
What we did:
- REFL55: Increased the whitening gain to 24dB. Demodulation phase tuned to minimize MICH signal in I when both arms are locked with POX and POY. REFL55 is noisier than AS55. Demodulation phase and amplitude of the signal seem to drift a lot also. Might need investigation.
- AS55: Demodulation phase tuned to minimize MICH signal in I when both arms are locked with POX and POY. Whitening gain is 24dB.
- Script for demodulation phase tuning lives in https://git.ligo.org/40m/scripts/-/blob/main/RFPD/getPhaseAngle.py
- Locking MICH with REFL55 Q: Kicks BS a lot and is not so stable, probably because of noisy REFL55. Offset also needs to be adjusted to lock MICH to dark fringe.
- BS coil balancing: When MICH is "locked" with REFL55 Q, TRX drops rapidly and AS fringe gets worse, indicating BS coil balancing is not good. We balanced the coils by dithering POS with different coil output matrix gains to minimize oplev PIT and YAW output manually using LOCKINs.
- Locking MICH with ASDC: Works nicely. Offset is set to -0.1 in MICH filter and reduced to -0.03 after lock acquisition.
- ETMX/ETMY actuation balancing: We found that feedback signal to ETMX and ETMY at LSC output is unbalanced when locking with POX and POY. We dithered MC2 at 71 Hz, and checked feedback signals when Xarm/Yarm are locked to find out actuation efficiency imbalance. A gain of 2.9874 is put into C1:LSC-ETMX filter to balance ETMX/ETMY. I think we need to check this factor carefully again.
- TRX and TRY: We normalized TRX and TRY to give 1 when arms are aligned. Before doing this, we also checked the alignment of TRX and TRY DC PDs (also reduced green scattering for TRY). Together with ETMX/ETMY balancing, this helped make the filter gains for the POX and POY locks the same, 0.02 (see also 40m/16888).
- Single arm with REFL55/AS55: We checked that single arm locking with both REFL55_I and AS55_Q works. Single arm locking feeding back to MC2 also worked.
- Handing over to REFL55/AS55: After locking Xarm and Yarm using POX to ETMX and POY to ETMY, MICH is locked with ASDC to BS. Handing over to REFL55_I for CARM using ETMX+ETMY and AS55_Q for DARM using -ETMX+ETMY was not successful. Changing an actuator for CARM to MC2 also didn't work. There might be an unstable point when turning off XARM/YARM filter modules and switching on DARM/CARM filter modules with a ramp time. We also need to re-investigate correct gains and signs for DARM and CARM. (Right now, gains are 0.02 for POX and POY, -0.02 for DARM with AS55_Q (-ETMX+ETMY), -0.02 for CARM with REFL55_I with MC2 are the best we found so far)
Next:
- Measure ETMX and ETMY actuation efficiencies with Xarm/Yarm to balance the output matrix for DARM.
- Measure optical gains of POX11, POY11, AS55 and REFL55 when FPMI is locked with POX/POY/ASDC to find out correct filter gains for them.
- Make sure to measure OLTFs when doing above to correct for loop gains.
- Lock CARM with POY11 to MC2, DARM with POX11 to ETMX. Use input matrix to hand over instead of changing filter modules from XARM/YARM to DARM/CARM.
- Try using ALS to lock FPMI.
16967 Thu Jun 30 19:24:24 2022 ranaSummaryPEMeffect of nearby CES construction
For the proposed construction in the NW corner of the CES building (near the 40m BS chamber), they did a simulated construction activity on Wednesday from 12-1.
In the attached image, you can see the effect as seen in our seismometers:
this image is calculated by the 40m summary pages codes that Tega has been shepherding back to life, luckily just in time for this test.
Since our local time PDT = UTC - 7 hours, 1900 UTC = noon local. So most of the disturbance happens from 1130-1200, presumably while they are setting up the heavy equipment. If you look in the summary pages for that day, you can also see the IM lost lock. Unclear if this was due to their work or if it was coincidence. Thoughts?
16966 Thu Jun 30 19:04:55 2022 ranaSummaryPSLPSL HEPA: How what when why
For the PSL HEPA, we wanted it to remain at full speed during the vent, when anyone is working on the PSL, or when there is a lot of dust due to outside conditions or cleaning in the lab.
For NORMAL conditions, the policy is to turn it to 30% for some flow, but low noise.
I think we ought to lock one of the arms on IR PDH and change the HEPA flow settings and plot the arm error signal, and transmitted power for each flow speed to see what's important. Record the times of each setting so that we can make a specgram later
16965 Thu Jun 30 18:06:22 2022 PacoUpdateALSOptimum ALS recovery - part I
[Paco]
In the morning I took some time to align the AUX beams in the XEND table. Later in the afternoon, I did the same on the YEND table. I then locked the AUX beams to the arm cavities while they were stabilized using POX/POY and turned the PSL HEPA off temporarily (this should be turned on after today's work).
After checking the temperature slider sign on the spectrum analyzer of the control room I took some out-of-loop measurements of both ALS beatnotes (Attachment #1) by running diaggui /users/Templates/ALS/ALS_outOfLoop_Ref_DQ.xml and by comparing them against their old references (red vs magenta and blue vs cyan); it seems that YAUX is not doing too bad, but XAUX has increased residual noise around and above 100 Hz; perhaps as a result of the ongoing ALS SURF loop investigations? It does look like the OLTF UGF has dropped by half from ~ 11 kHz to ~ 5.5 kHz.
Anyways let this be a reference measurement for current locking tasks, as well as for ongoing SURF projects.
Attachment 1: als_ool_06_2022.png
16964 Thu Jun 30 17:19:55 2022 DeekshaSummaryElectronicsMeasured Transfer Functions of the Control Loop, Servo (OLTF); got Vectfit working
[Cici, Deeksha]
We were able to greatly improve the quality of our readings by changing the parameters in the config file (particularly increasing the integration and settle cycles, as well as gradually increasing our excitation signals' amplitude). Attached are the readings taken from the same (the files directly printed by ssh'ing the SR785 (apologies)) - Attachment 1 depicts the graph w/ 30 data points and attachment 2 depicts the graph with 300 data points.
Cici successfully fit the data with vectfit, as included in Attachment 3. (This is the vectfit of the entire control loop's OLTF.) There are two main concerns that need to be looked into: firstly, the manner in which to get the poles and zeros to input into the vectfit program. Similarly, the program works best when the option to enforce stable poles is disabled; once again, it may be worth looking into how the program works on a deeper level in order to understand how to proceed.
Just as the servo's individual transfer function was taken, we also came up with a plan to measure the PZT's individual transfer function (using the MokuLab). The connections for the same have been made and the Moku is at the Xend (disconnected). We may also have to build a highpass filter (similar to the one whose signal enters the PZT) to facilitate taking readings at high frequencies using the Moku.
Attachment 1: TFSR785_29-06-2022_114042.pdf
Attachment 2: TFSR785_29-06-2022_114650.pdf
Attachment 3: TF_OLG_vectfit.png
16963 Wed Jun 29 18:53:38 2022 ranaUpdateElectronicsElectronics noise
this is just the CDS error signal, but it is not the electronics noise. You have to go into the lab and measure the noise at several points. It can't be done from the control room. You must measure before and after the whitening.
Quote: I measured electronics noise of WFSs and QPD (of the WFS/QPD, whitening, ADC...) by closing PSL and measuring the error signal. It was needed to put the offset in C1:IOO-MC_TRANS_SUMFILT_OFFSET to 14000 cts (without offset the sum of quadrants would give zero, and 14000 cts is the value when the cavity is locked). For WFS that are RF, if there is intensity noise at low frequencies, it is not affecting the measurement. In the attachment please find the power spectrum of the error signal when the PSL shutter is on and off.
16962 Wed Jun 29 14:28:06 2022 PacoSummaryALSALS beat allan deviation (XARM)
I guess it didn't make sense since f_beat can be arbitrarily moved, but the beat is taken around the PSL freq ~ 281.73 THz. Attachment #1 shows the overlapping tau allan deviation for the exact same dataset but using the python package allantools, where this time I used the PSL freq as the base frequency. This time, I can see the minimum fractional deviation of 1.33e-13 happening at ~ 20 seconds.
Quote: what's the reasoning behind using df/f_beat instead of df/f_laser ?
## Another, more familiar interpretation
The allan variance is related to the beatnote spectral density as a mean-square integral (the deviation is then like the rms) with a sinc window.
$\sigma^2_\nu = 2 \int_0^{\infty} S_\nu(f) \left| \frac{\sin({\pi f \tau})}{\pi f \tau} \right|^2 df$
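As a rough cross-check of the allantools result, here is a plain-Python sketch of a (non-overlapped) Allan deviation estimator for fractional-frequency samples (`adev` is a hypothetical helper, not the allantools API; samples assumed uniformly spaced):

```python
# Sketch: non-overlapped Allan deviation from fractional-frequency samples y,
# sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, with averaging time
# tau = m * tau0 set by the averaging factor m.
def adev(y, m):
    """Allan deviation of samples y at averaging factor m."""
    n = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(n - 1)]
    return (0.5 * sum(diffs) / len(diffs)) ** 0.5
```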
# Question #99695
Jun 18, 2017
The logs more or less disappear!
#### Explanation:
$(x^2 + 6)^{\log_2 x} = (5x)^{\log_2 x}$

A special problem. If it were possible for the bases to be either positive or negative, we might encounter a case involving an absolute value. For now, don't worry about what that is. Why? Because the base $5x$ is negative if and only if $x$ is negative.

We know that $x$ cannot be negative because $\log_2 x$ is undefined unless $x > 0$. Also, $x^2 + 6$ is always positive; therefore $5x$ must be positive.

In this case,

$(x^2 + 6)^{\log_2 x} = (5x)^{\log_2 x}$

if and only if the bases are equal (strictly speaking, $x = 1$ also satisfies the equation, since it makes the common exponent $\log_2 x = 0$ and both sides equal 1). Setting the bases equal gives

$x^2 + 6 = 5x$

Solve this as

$x^2 - 5x + 6 = 0$

by factoring.
Finish it off now. There are two solutions.
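(As an aside, not part of the original answer: the two roots of the quadratic can be checked numerically against the original equation.)

```python
# Quick numerical check: both roots of x^2 - 5x + 6 = 0 satisfy
# (x^2 + 6)^(log2 x) = (5x)^(log2 x).
from math import log2

for x in (2.0, 3.0):
    lhs = (x**2 + 6) ** log2(x)
    rhs = (5 * x) ** log2(x)
    assert abs(lhs - rhs) < 1e-9       # original equation holds
    assert abs(x**2 - 5 * x + 6) < 1e-12  # root of the quadratic
```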
Alternating Series Remainder
A quantity that measures how accurately the nth partial sum of an alternating series estimates the sum of the series. If an alternating series is not convergent then the remainder is not a finite number.
Consider the following alternating series (where ak > 0 for all k) and/or its equivalents.

$\sum\limits_{k = 1}^\infty {\left( { - 1} \right)^{k + 1}}{a_k} = {a_1} - {a_2} + {a_3} - {a_4} + \cdots$

If the series converges to S, then the nth partial sum Sn and the corresponding remainder Rn can be defined as follows.

${S_n} + {R_n} = S$

${S_n} = \sum\limits_{k = 1}^n {\left( { - 1} \right)^{k + 1}}{a_k}$

${R_n} = \sum\limits_{k = n + 1}^\infty {\left( { - 1} \right)^{k + 1}}{a_k}$

This gives us the following:

${R_n} = S - \sum\limits_{k = 1}^n {\left( { - 1} \right)^{k + 1}}{a_k}$

If the series converges to S by the alternating series test, then the remainder Rn can be estimated as follows for all n ≥ N:

$\left| {{R_n}} \right| \le {a_{n + 1}}$

Note that the alternating series test requires that the numbers a1, a2, a3, ... must eventually be nonincreasing. The number N is the point at which the values of an become nonincreasing: an ≥ an+1 for all n ≥ N, where N ≥ 1.
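As an illustration (a sketch, not from the original page), the bound |Rn| ≤ an+1 can be checked on the alternating harmonic series, which converges to ln 2:

```python
# Sketch: verify |R_n| <= a_{n+1} for sum (-1)^(k+1)/k = ln(2), n = 100.
from math import log

def partial_sum(n):
    """n-th partial sum of the alternating harmonic series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

n = 100
Rn = log(2) - partial_sum(n)   # remainder after n terms
bound = 1 / (n + 1)            # a_{n+1} for this series
```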
this page updated 19-jul-17 Mathwords: Terms and Formulas from Algebra I to Calculus written, illustrated, and webmastered by Bruce Simmons Copyright © 2000 by Bruce Simmons All rights reserved
High Frequency
High frequency (HF) radio frequencies are between 3 and 30 MHz. The band is also known as the decameter band or decameter wave, as the wavelengths range from one to ten decameters (ten to one hundred metres). Frequencies immediately below HF are denoted medium frequency (MF), and the next higher frequencies are known as very high frequency (VHF). The HF band is a major part of the shortwave band of frequencies, so communication at these frequencies is often called shortwave radio. Because radio waves in this band can be reflected back to Earth by the ionosphere layer in the atmosphere, called skip or skywave propagation, these frequencies can be used for long-distance communication at intercontinental distances. The band is used by international shortwave broadcasting stations (2.310 - 25.820 MHz), aviation communication, government time stations, weather stations, amateur radio and citizens band services, among other uses.
IEVref: 121-11-03
Language: en
Status: Standard
Term: electric constant
Synonym: permittivity of vacuum [Preferred]
Symbol: ε0
Definition: scalar constant linking the electric quantities and the mechanical quantities, obtained from the relation $F=\frac{1}{4\pi {\epsilon }_{0}}\cdot \frac{|{Q}_{1}{Q}_{2}|}{{r}^{2}}$ based on Coulomb's law in a vacuum, where F is the magnitude of the force between two particles with electric charges Q1 and Q2 respectively, placed at a distance r apart.
Note 1 to entry: In a vacuum, the product of the electric constant ε0 and the electric field strength E is equal to the electric flux density D: D = ε0E.
Note 2 to entry: The electric constant ε0 is related to the magnetic constant μ0 and to the speed of light in vacuum c0 by the relation ε0μ0c02 = 1.
Note 3 to entry: The value of the electric constant ε0 is equal to $8.854\,187\,812\,8(13)\cdot {10}^{-12}\ \frac{\text{A}\cdot \text{s}}{\text{V}\cdot \text{m}}$.
Publication date: 2021-01
Replaces: 121-11-03:1998-08
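As a quick numerical illustration of the defining relation (a sketch, not part of the IEC entry):

```python
# Sketch: Coulomb force between two 1 C charges 1 m apart in vacuum,
# using the electric constant ε0 from the definition above. The prefactor
# 1/(4*pi*eps0) is Coulomb's constant, ~8.99e9 N·m²/C².
from math import pi

eps0 = 8.8541878128e-12   # A·s/(V·m)

def coulomb_force(q1, q2, r):
    """Magnitude of the Coulomb force between charges q1, q2 at distance r."""
    return abs(q1 * q2) / (4 * pi * eps0 * r**2)

F = coulomb_force(1.0, 1.0, 1.0)
```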
|
2021-08-01 01:28:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9486024975776672, "perplexity": 2441.1862578048754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00319.warc.gz"}
|
https://mathsgee.com/10509/given-the-set-of-data-below-draw-the-box-and-whiskers-plot-11
|
Given the set of data below, draw the box and whiskers plot $4, 17, 7, 14, 18, 12, 3, 16, 10, 4, 4, 11$
Given the set of data 4, 17, 7, 14, 18, 12, 3, 16, 10, 4, 4, 11:
In ascending order: 3, 4, 4, 4, 7, 10, 11, 12, 14, 16, 17, 18
Smallest value = 3
Largest value = 18
Q1: position 1/4 × 12 = 3, so Q1 = average of the 3rd and 4th values = (4 + 4)/2 = 4
Q2 (median): position 1/2 × 12 = 6, so Q2 = average of the 6th and 7th values = (10 + 11)/2 = 10.5
Q3: position 3/4 × 12 = 9, so Q3 = average of the 9th and 10th values = (14 + 16)/2 = 15
Then plot these five values on a box-and-whisker diagram using a suitable scale.
by Gold Status (26,653 points)
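The five-number summary above can be checked with a short script (a sketch; the quartile convention used here averages the two values straddling each quartile position, matching the worked solution, and assumes n is divisible by 4):

```python
def five_number_summary(data):
    """Five-number summary (min, Q1, median, Q3, max) for n divisible by 4."""
    s = sorted(data)
    n = len(s)

    def quartile(q):
        # Position q*n is a whole number here, so the quartile lies between
        # that value and the next one (1-based positions).
        pos = int(q * n)
        return (s[pos - 1] + s[pos]) / 2

    return s[0], quartile(0.25), quartile(0.5), quartile(0.75), s[-1]

print(five_number_summary([4, 17, 7, 14, 18, 12, 3, 16, 10, 4, 4, 11]))
# → (3, 4.0, 10.5, 15.0, 18)
```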
0 like 0 dislike
2 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
1 like 0 dislike
|
2023-03-20 08:55:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5119028687477112, "perplexity": 9377.743226813212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00242.warc.gz"}
|
https://www.weiyeying.com/ask/4491548
|
WYSIWYG output displayed as raw HTML in the view - Ruby on Rails
In my Post table the field I am attempting to affect is called body. When I use the WYSIWYG editor and save it, the display from both the index and the show views actually shows the HTML. For instance, if I make something bold in the WYSIWYG editor, the view will output the literal HTML markup (the bold tags) instead of rendering it, in both the index and show views.
Best answer
Perhaps you are using the h method in your views, so all your HTML tags are escaped, or you are escaping the tags in the controller while saving the post.
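For context: Rails 3+ escapes all strings rendered in ERB views by default, and you opt out for trusted content with `raw` or `html_safe`. A minimal sketch of the escaping behaviour using only Ruby's standard library (the view helpers themselves are Rails-specific):

```ruby
require "cgi"

body = "<b>bold text</b>"

# What default (escaped) rendering produces: the literal tags are shown to the user.
escaped = CGI.escapeHTML(body)
puts escaped  # → &lt;b&gt;bold text&lt;/b&gt;

# In a Rails view you would bypass escaping for trusted content with:
#   <%= raw @post.body %>    or    <%= @post.body.html_safe %>
# (only do this for sanitized content; otherwise it is an XSS risk)
```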
|
2020-08-07 14:57:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24608424305915833, "perplexity": 2495.543388407663}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00000.warc.gz"}
|
https://www.gradesaver.com/textbooks/engineering/other-engineering/materials-science-and-engineering-an-introduction/chapter-16-composites-questions-and-problems-page-677/16-2
|
## Materials Science and Engineering: An Introduction
$k_{max}= 31.0 W/mK$ $k_{min} =28.71 W/mK$
Required: the maximum and minimum thermal conductivity values for a cermet that contains 90 vol% titanium carbide (TiC) particles in a nickel matrix, assuming thermal conductivities of 27 and 67 W/m.K for TiC and Ni, respectively.
Solution: Using the modified form of Equation 16.1, the maximum thermal conductivity is: $k_{max} = k_{m}V_{m} + k_{p}V_{p} = k_{Ni}V_{Ni} + k_{TiC}V_{TiC} = (67 W/m.K)(0.10)+ (27 W/m.K)(0.90) = 31.0 W/mK$
Using the modified form of Equation 16.2, the minimum thermal conductivity is: $k_{min} = \frac{k_{Ni}k_{TiC}}{V_{Ni}k_{TiC} + V_{TiC}k_{Ni}} = \frac{(67 W/m.K)(27 W/m.K)}{(0.10)(27 W/m.K) + (0.90)(67 W/m.K)} = 28.71 W/mK$
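The two equations are the standard parallel (rule-of-mixtures) and series bounds for a two-phase composite; a quick check of the arithmetic, with the problem's values hard-coded:

```python
def conductivity_bounds(k_m, k_p, v_p):
    """Upper (parallel) and lower (series) bounds on composite conductivity."""
    v_m = 1.0 - v_p
    k_max = k_m * v_m + k_p * v_p                   # rule of mixtures (Eq. 16.1 form)
    k_min = (k_m * k_p) / (v_m * k_p + v_p * k_m)   # series model (Eq. 16.2 form)
    return k_max, k_min

# Ni matrix (67 W/m.K), 90 vol% TiC particles (27 W/m.K)
k_max, k_min = conductivity_bounds(k_m=67.0, k_p=27.0, v_p=0.90)
print(round(k_max, 1), round(k_min, 2))  # → 31.0 28.71
```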
|
2019-11-18 23:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348137378692627, "perplexity": 1844.2318450589833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00468.warc.gz"}
|
https://gamedev.stackexchange.com/questions/179259/gamma-adjustment-slider-implementation
|
Various online sources talk in sufficient detail about gamma correction. By following them, I achieved a rendering pipeline that looks somewhat like this:
// Both are set to 2.2
uniform float gammaIn;
uniform float gammaOut;
void main()
{
vec3 color = pow(texture(material.albedo, texCoord).rgb, vec3(gammaIn));
// Do lighting and then HDR tone mapping here.
// color = ...
FragColor = pow(color, vec3(1.0 / gammaOut));
}
That is the easy part.
Now, many games implement the gamma correction adjustment slider that is meant to accommodate for display differences in various monitors that players may have. This presents some questions that I couldn't find any definitive answer to:
1. Which value should the adjustment slider affect? I reckon it will be gammaOut, because gammaIn deals with decoding of the sRGB picture and, assuming that all textures are sRGB, this should always be the constant 2.2. With this approach a lower gammaOut means a darker picture.
2. What's the reasonable value range for the slider? Should it start at 1.0 or somewhere higher? Where should it end?
3. If I were to display the "barely visible / invisible" comparison combo picture to help the user with doing the adjustment correctly, what should be the color values for the "should be barely visible" dark picture and its background and ditto for the "should be invisible" bright picture and its background?
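The decode/encode pair in the shader is just pow(x, g) and pow(x, 1/g); a small sketch in Python mirroring the GLSL above, showing that the round trip is the identity when the two gammas match and that a lower output gamma darkens mid-tones:

```python
gamma_in = 2.2    # decode sRGB-ish texture values to linear
gamma_out = 2.2   # re-encode linear values for the display

def decode(c):
    """Texture value -> linear (mirrors pow(color, vec3(gammaIn)))."""
    return c ** gamma_in

def encode(c):
    """Linear -> display value (mirrors pow(color, vec3(1.0 / gammaOut)))."""
    return c ** (1.0 / gamma_out)

# Round trip is the identity when gamma_in == gamma_out:
x = 0.5
print(abs(encode(decode(x)) - x) < 1e-9)  # → True

# Sweeping the slider: lower gamma_out darkens mid-tones, higher brightens them.
linear = decode(0.5)
for g in (1.8, 2.2, 2.6):
    print(g, round(linear ** (1.0 / g), 3))
```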
|
2021-05-08 11:28:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2908635139465332, "perplexity": 4410.967707183741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00226.warc.gz"}
|
https://chemistry.stackexchange.com/questions/59296/suppose-a-heated-5-00-g-sample-of-sodium-bicarbonate-lost-a-mass-of-1-00-g-what
|
# Suppose a heated 5.00 g sample of sodium bicarbonate lost a mass of 1.00 g, what percent of impurity is in the sample?
The 5.00 g is a mixture of sodium bicarbonate and an unknown substance, making it impure.
Balanced decomposition equation:
$$\ce{2NaHCO3(s) -> Na2CO3(s) + H2O(g) + CO2(g)}$$
There are 0.0595 mol of $\ce{NaHCO3}$ in a pure 5.00 g sample of sodium bicarbonate.
Theoretically, 0.0298 moles each of $\ce{H2O}$ and $\ce{CO2}$ would be lost during the decomposition of 5.00 g of sodium bicarbonate.
0.0298 mol converted to grams is:
0.536 g of $\ce{H2O}$
1.31 g of $\ce{CO2}$
First there is a big gotcha. We'll let $x$ be the grams of sodium bicarbonate and $y$ be the grams of the impurity. Since we were given no information about the impurity we must assume that the impurity neither loses mass nor gains mass when heated with the sodium carbonate. So: $$x + y = 5.00$$
$\ce{2NaHCO3(s) -> Na2CO3(s) + H2O(g) + CO2(g)}$
so 2 moles of $\ce{NaHCO3}$ yields 1 mole of $\ce{Na2CO3}$. Thus the mass conversion factor is: $$\dfrac{0.5*\text{MW(}\ce{Na2CO3}\text{)}}{\text{MW(}\ce{NaHCO3}\text{)}} = \dfrac{0.5*106.0}{84.0}= 0.6310$$
So we now also have: $$0.6310x + y = 4.00$$
Subtracting the second equation from the first we get: $$0.3690x=1.000$$ so $$x=2.71g$$ and $$\text{Purity} = \dfrac{2.71}{5} = 54.2\%$$ $$\text{Impurity} = 100\% - 54.2\% = 45.8\%$$
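The algebra above (two linear equations from the total mass before and after heating) can be verified numerically; a sketch with the molar masses used in the answer:

```python
# Mass of Na2CO3 retained per gram of NaHCO3 decomposed: 2 NaHCO3 -> 1 Na2CO3
MW_NaHCO3 = 84.0
MW_Na2CO3 = 106.0
retained = 0.5 * MW_Na2CO3 / MW_NaHCO3

total_before = 5.00   # g: NaHCO3 (x) + inert impurity (y)
total_after = 4.00    # g: after losing 1.00 g of H2O + CO2

# Solve  x + y = 5.00  and  retained*x + y = 4.00  by subtraction:
x = (total_before - total_after) / (1.0 - retained)   # g of NaHCO3
purity = x / total_before

print(round(retained, 4))                                          # → 0.631
print(round(x, 2), round(100 * purity, 1), round(100 * (1 - purity), 1))
# → 2.71 54.2 45.8
```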
|
2022-01-22 04:08:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467964172363281, "perplexity": 1207.6261721636363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/warc/CC-MAIN-20220122012907-20220122042907-00625.warc.gz"}
|
http://groupprops.subwiki.org/wiki/Aut-abelian_not_implies_abelian
|
# Abelian automorphism group not implies abelian
This article gives the statement and possibly, proof, of a non-implication relation between two group properties. That is, it states that every group satisfying the first group property (i.e., group whose automorphism group is abelian) need not satisfy the second group property (i.e., abelian group)
View a complete list of group property non-implications | View a complete list of group property implications
Get more facts about group whose automorphism group is abelian|Get more facts about abelian group
## Statement
There exist non-abelian groups (in fact, non-abelian finite p-groups for every prime p) that are groups whose automorphism group is abelian: the automorphism group is an abelian group.
|
2014-07-28 06:14:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8890404105186462, "perplexity": 1005.1460795729637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256757.9/warc/CC-MAIN-20140728011736-00158-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://www.nature.com/articles/s41598-020-68786-6?error=cookies_not_supported&code=7c9de787-6d22-4cf8-9e38-a0621f1944d4
|
# A Mendelian randomization study of telomere length and blood-cell traits
## Abstract
Whether telomere attrition reducing proliferative reserve in blood-cell progenitors is causal has important public-health implications. Mendelian randomization (MR) is an analytic technique using germline genetic variants as instrumental variables. If certain assumptions are met, estimates from MR should be free from most environmental sources of confounding and reverse causation. Here, two-sample MR is performed to test whether longer telomeres cause changes to hematological traits. Summary statistics for genetic variants strongly associated with telomere length were extracted from a genome-wide association (GWA) study for telomere length in individuals of European ancestry (n = 9190) and from GWA studies of blood-cell traits, also in those of European ancestry (n ~ 173,000 participants). A standard deviation increase in genetically influenced telomere length increased red blood cell and white blood cell counts, decreased mean corpuscular hemoglobin and mean cell volume, and had no observable impact on mean corpuscular hemoglobin concentration, red cell distribution width, hematocrit, or hemoglobin. Sensitivity tests for pleiotropic distortion were mostly inconsistent with glaring violations to the MR assumptions. Similar to germline mutations in telomere biology genes leading to bone-marrow failure, these data provide evidence that genetically influenced common variation in telomere length impacts hematologic traits in the population.
## Introduction
In humans, telomeres, DNA repeats and protein structures that “cap” and stabilize the ends of chromosomes, progressively shorten across the lifespan1. As a result, telomere length has generally been viewed as a biomarker of aging2. Natural variation for telomere length exists, moreover, with a portion of the variance resulting from genetic variation3 (e.g., heritability estimates ranging from 36 to 82%4). While heritable influences likely capture a variety of contributing mechanisms, there is reason to suspect that they heavily reflect both the activity of the ribonucleoprotein enzyme telomerase, which acts to maintain telomere length, as well as the effects of environmental (e.g. exogenous oxidative or inflammatory) stressors that can accelerate telomere shortening5,6.
Blood, like telomere length, affects and is affected by its environment: what happens in and to blood impacts physiological processes throughout the body, and blood cells also carry the signatures of environmental stressors (e.g., smoking- and alcohol-related DNA methylation changes in leukocytes7,8). Telomere length can be thought of as an environmentally pliant and endogenous exposure for cells. A DNA damage response may be elicited and cellular senescence ensue if telomeres become critically short9. Telomere length in white blood cells (WBCs) may approximately reflect telomerase activity in hematopoietic stem cells (HSCs)10. This becomes broadly relevant, to be sure, as telomerase activity appears to be inadequate at preventing telomere erosion11. As hypothesized by Mazidi et al.12, telomere attrition may be a marker for reduced proliferative reserve in hematopoietic progenitor cells.
An established relationship exists between rare telomerase mutations and bone marrow failure syndromes13. As Savage and Bertuch4 discuss in their review, telomere biology disorders are, indeed, defined by very short telomeres. Affected persons are highly susceptible to cancer, pulmonary fibrosis, and bone marrow failure (BMF)4. BMF can be the first sign of a telomere biology disorder, in fact. Aplastic anemia, a BMF condition which can be acquired or inherited and which results from damage to HSCs14, leads to hallmark cytopenia: low circulating RBCs, WBCs, and platelets. Fascinatingly, telomere length can be shorter than normal in those with acquired aplastic anemia, but not as short as for those with canonical telomere biology disorders4. Likewise, lower red blood cell (RBC) counts, larger RBC size, and lower platelet counts have been observed in subjects with germline telomerase reverse transcriptase (TERT) mutations compared with family controls15.
At least 70% of patients with dyskeratosis congenita, which has the most severe phenotype of the telomere biology disorders4, carry germline mutations in telomere maintenance genes13. Moreover, similar to telomere length in those with acquired anemia sometimes being shorter than in non-affected individuals (but not as short as in those with established telomere biology disorders4), telomere length in those with inherited bone marrow failure syndromes (IBMFSs) other than dyskeratosis congenita has been documented to be shorter than in unaffected individuals. But it is not as short as in those with dyskeratosis congenita13. This implies a dose-like relationship between the heritable component of telomere length and disease severity, which is especially notable with BMF in dyskeratosis congenita, associated with accelerated telomere shortening16.
The heritable component to telomere length may not necessarily be limited to mutations directly in telomere biology, however. For instance, there is a lack of evidence for mutations in telomere biology genes in patients with Fanconi anemia (an IBMFS), for which BMF is the most likely adverse outcome during childhood (relative to solid tumors being more likely in adulthood), and in which shorter telomeres have been documented17. With Fanconi anemia, Pang and Andreassen and Sarkar and Liu postulate that telomere defects occur secondary to endogenous oxidative damage18,19.
By extension, these hints of dose-like relationships raise the possibility that common variation (like a small dose in susceptibility from a single nucleotide polymorphism [SNP] rather than a large dose from a rare mutation) in telomere maintenance genes might confer an effect on blood-cell traits, though in a blunted, ostensibly subclinical, manner in comparison to the mutations causing IBMFS. And, while subclinical, these changes could still be relevant to the pathophysiology of common diseases of aging in the population.
Telomere length is, indeed, hypothesized to be associated with hematologic traits in the general population. For instance, in an observational study of 3156 subjects, Kozlitina et al. found that shorter telomere lengths were associated with lower RBC counts, larger mean RBC size, increased red blood cell distribution width (RDW), higher hemoglobin levels, and lower platelet counts6.
The Kozlitina study was small, however, and observational studies are prone to potential confounding and reverse causation. Indeed, a few other small studies have examined telomere length and blood-cell traits with results that are discrepant between the studies12. To wit, the results for the study by Mazidi et al.12 partly conflict with those by Kozlitina et al.6. They observed a negative relationship between telomere length and monocyte count, for instance12. Due to this, it is unclear whether a causal relationship exists between telomere length and blood-cell traits in the general population.
Using Mendelian randomization (MR), Haycock et al. observed that longer telomeres (in leukocytes) increased the risk for various cancer outcomes, but decreased the risk for coronary heart disease20. Since variation in blood-cell subtype is associated with a wide variety of systemic diseases21 and since blood cells play an essential role in key physiological processes (e.g., oxygen transport, hemostasis, and innate and acquired immune responses)22,23,24, the activity of blood-cell traits may mediate or exacerbate some of the effects of telomere length on various disease outcomes and/or reflect changes resulting from various diseases. Teasing out the underlying causal relationships has important clinical and public-health implications.
To that aim, MR is an analytic, quasi-experimental technique that uses germline genetic variants as instrumental variables. If certain assumptions are met, estimates from MR should be free from most social and environmental sources of confounding and reverse causation (see Methods)25, and can, therefore, help sort out the nature of complexly woven traits. Here, MR is used to examine the effect of genetically influenced longer telomeres on nine blood-cell traits.
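The inverse-variance weighted (IVW) estimator reported in the Results combines per-variant ratio (Wald) estimates, weighting each by its inverse variance. A minimal sketch of that standard estimator (illustrative made-up numbers, not the study's data; `beta_x` are SNP effects on the exposure, `beta_y`/`se_y` SNP effects on the outcome):

```python
import math

def ivw_estimate(beta_x, beta_y, se_y):
    """Inverse-variance-weighted MR estimate from per-SNP summary statistics."""
    ratios = [by / bx for bx, by in zip(beta_x, beta_y)]
    # First-order SE of each Wald ratio, ignoring uncertainty in beta_x
    ses = [sy / abs(bx) for bx, sy in zip(beta_x, se_y)]
    weights = [1.0 / s ** 2 for s in ses]
    est = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Illustrative summary statistics for three genetic instruments:
beta_x = [0.10, 0.08, 0.12]
beta_y = [0.010, 0.007, 0.011]
se_y = [0.002, 0.002, 0.003]
est, se = ivw_estimate(beta_x, beta_y, se_y)
print(round(est, 3), round(se, 3))
```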
## Results
### Red blood cell (RBC) and white blood cell (WBC) counts
Genetically increased telomere length was associated with higher inverse-variance weighted (IVW) estimates (95% CIs) for 2 of 9 blood-cell traits (P < 0.006): RBC count 0.09 (0.04, 0.14) and WBC count 0.06 (0.02, 0.11). The sensitivity estimators aligned in the direction and magnitudes of their effects, and the MR-Egger intercept tests (Supplementary Tables 10–11) indicated no evidence for directional pleiotropy. The I2 statistic for the MR-Egger test pointed to some potential regression dilution, though the values were close to 90%, indicating that the MR-Egger tests suffered at most from up to 14% potential dilution from this bias. The simulation extrapolation (SIMEX) correction of the MR-Egger estimates revealed that the corrected MR-Egger intercepts were different than zero (an MR-Egger intercept test consistent with zero indicates a lack of evidence for pleiotropy in the IVW estimate). While this could imply distortion from pleiotropy in the IVW estimate (Supplementary Tables 10–11), the SIMEX correction may be overly conservative, since the potential dilution was small.
### Reticulocyte count
There was intermediate evidence that genetically increased telomere length was associated with higher reticulocyte count (P < 0.05): IVW estimate 0.04 (95% CI 0.00, 0.08). The sensitivity estimators aligned in the direction and magnitudes of their effects, except, crucially, for the MR-Egger estimate, for which the direction of effect was reversed. The MR-Egger intercept test (Supplementary Table 12) indicated no evidence for directional pleiotropy. The I2 statistic for the MR-Egger test pointed to some potential regression dilution, indicating that the MR-Egger test suffered from up to 13% potential dilution from this bias (potential bias of < 10% is typically considered acceptable). The SIMEX correction of the MR-Egger estimate made no difference in the interpretation of the findings (the MR-Egger intercept remained null). Overall, however, due to the discordance between the IVW and the MR-Egger estimates, the finding could be influenced by violations to the MR assumption against horizontal pleiotropy.
### Mean corpuscular hemoglobin (MCH) and mean corpuscular (cell) volume (MCV)
Genetically increased telomere length was associated at the Bonferroni threshold with lower IVW estimates [95% CIs] for 2 of the 9 blood-cell traits (P < 0.006): MCH (− 0.12 [− 0.18, − 0.07]) and MCV (− 0.13 [− 0.18, − 0.08]). The sensitivity estimators aligned in their direction of effects and mostly in their magnitudes. The MR-Egger intercept tests (Supplementary Tables 13–14) indicated no evidence for directional pleiotropy. The I2 statistic for the MR-Egger test pointed to minimal (up to 11%) potential dilution. SIMEX correction of the MR-Egger estimates indicated that the corrected MR-Egger intercepts remained consistent with zero after correction (Supplementary Tables 13–14).
### Mean corpuscular hemoglobin concentration (MCHC), red cell distribution width (RDW), hematocrit (Hct), and hemoglobin (Hgb)
Genetically increased telomere length was not associated with 4 of the 9 blood-cell traits: MCHC (− 0.02 [− 0.06, 0.02]), RDW (0.02 [− 0.02, 0.06]), Hct (0.03 [− 0.02, 0.08]), Hgb (0.04 [0.00, 0.08]). The sensitivity estimators mostly aligned in their magnitudes of effects, but for the MCHC estimate, several of the estimators were reversed in direction from that of the IVW. The MR-Egger intercept tests (Supplementary Tables 15–18) indicated no evidence for directional pleiotropy. The I2 statistic for the MR-Egger test indicated up to ~ 15% potential dilution from this bias. The SIMEX correction of the MR-Egger estimates indicated that the corrected MR-Egger intercepts remained consistent with zero after correction (Supplementary Tables 15–18): no change in interpretation.
## Discussion
### Summary
The MR analysis of telomere length on blood-cell traits support the notion that telomere length influences some of the traits in a causal manner. Longer telomeres increased RBC and WBC counts and decreased the measures of MCH and MCV, but had no observable impact on MCHC, RDW, hematocrit, or hemoglobin. The sensitivity tests for potential pleiotropic distortion were mostly inconsistent with glaring violations to the MR assumptions. Supposing the MR assumptions have not been violated, which can never fully be tested (a limitation of this study and of all MR studies generally), these data support the growing evidence that longer telomeres impact blood cells.
### Relevance for disease and aging
As for what this means for the clinic and for the population at large, De Meyer et al. suggested that there may be an age-dependent effect of telomere length on blood-cell traits. In their population-based study, they observed a positive correlation between telomere length and RBC count that was, importantly, stronger for those older than 45 years of age26. This suggests that the impact of telomere length on RBCs may be negligible in middle age but have important clinical consequences in the elderly, related to the anemia of chronic disease26. Thus, longer telomeres increasing RBC count may be beneficial and protective, especially during the later years of life.
The protective effect may be dampened some, however, by the fact that no relation was observed for longer telomere length and hemoglobin concentration in the present study. Nonetheless, a pattern of longer telomeres increasing RBCs and WBCs but not hemoglobin concentration comports with those of an observational study among Polish individuals over 6510. For men specifically, Gutmajster et al.10 detected weak positive correlations between telomere length and RBCs and WBCs and no signal for hemoglobin. The present data dovetails with theirs, supporting the hypothesis that telomere shortening hinders hematopoiesis capacity.
Moreover, in the Kozlitina et al. study, an inverse association was observed between telomere length and MCV6. Our findings for MCV comport with Kozlitina et al.’s and support the idea that telomere shortening is a mechanism for the macrocytosis of aging.
Haycock et al. observed that increased telomere length increased the odds for lung adenocarcinoma (odds ratio and 95% CI 3.19 [2.40–4.22]) and other cancers20, and Sprague et al. documented that higher WBC count increased the hazard estimates for lung cancer27. Thus, it is conceivable that there is a pathway from longer telomeres to lung cancer through an impact on increasing WBC count, a marker of inflammation28. A body of evidence suggests that chronic low-grade inflammation is involved in the pathogenesis of various cancers, where cancer causes inflammatory, microenvironmental, and immune changes to the host29,30,31,32,33. Therefore, since longer telomeres appear to contribute to the initiation of some cancers and these early carcinogenic changes potentially promote increased WBC count, increased WBC count might reflect undiagnosed cancer. If longer telomeres increase WBC counts and contribute to cancer initiation, and, in turn, the subsequent (possibly subclinical) tumor increases WBC, this would be an example of bidirectional causality.
### Bidirectional causality versus reverse causation
Bidirectional causality is different conceptually from reverse causation, and because it could be relevant to our findings, it is useful to draw out the distinctions. With a true bidirectional relationship, causation occurs in both directions, as the name suggests. For example, there is a bidirectionally causal relationship between fluid intelligence and years of schooling. Higher intelligence causes people to stay in school longer, and staying in school longer feeds back on and increases intelligence34. This bidirectionality is different than what is typically meant by “reverse causation”. In non-experimental studies, where temporality cannot be confidently sussed out, the direction of the correlation is challenging, or impossible, to infer. When the actual underlying causal direction is opposite from what investigators hypothesize, this is an example of reverse causation. MR studies are specifically designed to subvert reverse causation. This is so, owing to the fact that genotype assignment at conception temporally precedes other physiological parameters of interest. This robustness to reverse causality is one of the overarching strengths of MR and why MR is an essential tool for population-based medical research. Relative to observational designs, the risk for reverse causation is greatly reduced.
However, it is not impossible, as VanderWeele et al.35 point out. Scenarios can arise in which an exposure and outcome of interest are each partitioned into several time points. For instance, imagine an outcome at its “time 1” that affects an exposure at the exposure’s “time 2”. This can approximate reverse causation in MR since the genetic variant proxying for the exposure at “time 1” is now not independent of the final outcome at “time 2”, conditional on both exposure time points and potential confounders35. Fortunately, this should be the exception rather than the rule. While it is possible that one exists, absent an empirical, a priori reason to think that telomeres impact blood-cell traits at multiple time points in such a way that blood-cell traits influence telomeres at “time 2”, a more parsimonious explanation is that the relationships between telomeres and blood cells are bidirectional, if indeed blood-cell traits impact telomeres. Blood-cell traits might very well do so, given that blood-cell traits are involved in oxidative and inflammatory processes.
Whether blood-cell traits cause changes in telomere length is an important avenue for future MR research. In order to examine the influence of blood-cell traits on telomere length with two-sample MR (the design used here), the full summary statistics for a large telomere-length GWA study would be needed. While the findings deemed “statistically significant” for the telomere-length GWA study used here to instrument telomere length are public, the full summary statistics, including the non-top findings, are not available. For now, this precludes MR investigations of bidirectional relationships between longer telomeres and blood-cell traits using the Mangino et al. GWA data as the outcome data source. Additionally, and more importantly, a telomere-length GWA study that is substantially larger than Mangino et al.’s3, which included ~ 9,200 participants, would be needed to best capitalize on the power of large samples to detect effects. Another GWA study of telomere length does exist: Codd et al. contains ~ 38,000 individuals36. This is the more-apt GWA dataset to use when treating telomere length as the outcome, (or even better, an even larger one). At present the full summary statistics are not publicly available. To further emphasize this issue, the Astle et al.21 GWA studies for blood-cell traits we used here were performed on ~ 173,000 individuals. This makes the Astle et al.21 GWA studies strong resources for studying blood-cell traits as outcome variables in two-sample MR.
Returning briefly to the example of the bidirectional relationship between intelligence and education years, a bidirectional MR analysis of these traits was able to determine the prevailing direction of effect. Specifically, and intriguingly, while intelligence influences how long someone stays in school, the magnitude of the impact of staying in school on intelligence is much larger34. This has important implications for interventional policies. Similarly, determining the prevailing direction of effect, if a bidirectional relationship between telomere length and blood-cell traits exists, has the potential to inform and refine strategies for disease prevention.
The issue of bidirectionality gets thicker when turning attention to coronary heart disease (CHD). A potentially puzzling relationship exists between longer telomeres, more WBCs, and CHD. Haycock et al.’s MR study, along with evidence from clinical studies, suggests a protective effect of longer telomeres against CHD20,37. But a vast body of clinical and epidemiologic literature suggests that higher WBC counts contribute to CHD’s pathogenesis. This makes it challenging to explain the impact of telomeres increasing WBCs, as they relate to CHD. The proposed mechanisms for the deleterious effects of WBCs on CHD, as entrenched and entangled as they are with likely bidirectional causality, are hard to ignore. For instance, previously suggested “mechanistic” explanations include the fact that higher WBC counts are a biomarker of atherosclerotic stress; a secondary signal from inflammation due to tobacco smoking; a contributor to microvascular injury; and a reflection of complex and systemic inflammatory responses, involving cytokines, cell-adhesion molecules, T-lymphocytes, and C-reactive protein38, among others. But if the observational evidence is causal, then this suggests that longer telomeres and more WBCs have, perhaps, independent and opposing direct effects on CHD, even if longer telomeres increase WBC count, as suggested by the data here. Whether there are independent, direct effects of more WBCs and longer telomeres on CHD is a question that could be investigated with multivariable MR – once full summary statistics are available for telomere length (again from a large GWA study).
Lastly, chronic inflammation, independent of shorter telomere length, can lead to premature senescence. This may enhance the effects of shorter telomeres, which also cause senescence when short enough37,39. Moreover, since (a) longer telomeres in WBCs likely reflect longer telomeres in hematopoietic stem cells (HSCs) and (b) endothelial progenitor cells (EPCs) originate from HSCs and are involved in the repair mechanisms for vascular atherosclerosis37, shortened telomeres in WBCs may reflect a compromised capacity to repair injured vasculature. Therefore, longer telomeres may be protective against CHD by not interfering with the generation of EPCs and their involvement in vasculature repair. To the extent that more WBCs reflect EPC health and integrity, more WBCs may be a protective marker. To the extent that more WBCs reflect increased inflammation, more WBCs appear to contribute to CHD.
In conclusion, similar to how germline mutations in telomere biology genes can lead to BMF, the present findings from two-sample MR provide evidence that genetically influenced common variation in telomere length impacts hematologic traits in the population. Future MR studies should explore whether blood-cell traits also impact telomere length, as similar to endogenous oxidative damage in Fanconi anemia possibly influencing telomere length, oxidative damage related to immune cell responses might influence telomeres in the population.
## Methods
### Conceptual approach
Non-experimental studies are prone to confounding and reverse causation. MR gets around these issues, using an instrumental-variables framework, by exploiting the random assortment of alleles, genotype assignment at conception, and pleiotropy (genes influencing more than one trait)25,40,41. Using genetic variants instrumentally avoids most environmental sources of confounding, and genotype assignment at conception avoids most sources of reverse causation, since genotypes temporally precede the observational variables of interest.
Two-sample MR is a version of the procedure that uses summary statistics from two genome-wide association (GWA) studies42,43,44,45,46,47. In two-sample MR, the IVW method is the standard approach (Fig. 1 contains an example).
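For illustration, the fixed-effect IVW estimator combines per-SNP Wald ratios, weighting each by the precision of its SNP-outcome association. The sketch below uses synthetic numbers (not data from this study) and is a simplified stand-in for the TwoSampleMR implementation:

```python
import numpy as np

def ivw_estimate(beta_zx, beta_zy, se_zy):
    """Fixed-effect inverse-variance-weighted estimate of the causal effect.

    beta_zx: SNP-exposure associations; beta_zy, se_zy: SNP-outcome
    associations and their standard errors (from two separate GWA studies).
    """
    beta_zx = np.asarray(beta_zx, float)
    beta_zy = np.asarray(beta_zy, float)
    se_zy = np.asarray(se_zy, float)
    w = beta_zx ** 2 / se_zy ** 2      # inverse-variance weights
    wald = beta_zy / beta_zx           # per-SNP ratio estimates
    est = np.sum(w * wald) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Synthetic check: every SNP implies the same causal effect of 0.5
est, se = ivw_estimate([0.10, 0.20, 0.30], [0.05, 0.10, 0.15], [0.01] * 3)
```

With no heterogeneity across SNPs, the estimate recovers the common ratio (0.5 here) exactly.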
### Mendelian randomization assumptions
MR relies on the validity of three assumptions48. In the context of the present analysis, these assumptions are as follows: (1) the SNPs acting as the instrumental variables for telomere length are strongly associated with telomere length; (2) the telomere-length-associated SNPs are independent of confounders of telomere length and the outcomes of interest; and (3) the telomere length SNPs are associated with the outcomes of interest only through telomere length (no horizontal pleiotropy; the SNPs are not associated with the outcomes independent of telomere length44,48).
### Instrument construction
For the telomere length instruments ($${\widehat{\beta }}_{ZX}$$ in Fig. 1), SNPs associated at genome-wide significance (P < 5 × 10⁻⁸) with a standard deviation (SD) in telomere length, whose summary statistics (effect estimates and standard errors) were concatenated and reported by Haycock et al.20, were selected. The original telomere-length GWA study was a meta-analysis performed across six cohorts on 9190 individuals (men and women aged 18–95) of European ancestry3. (Detailed descriptions of the six cohorts are available elsewhere49,50,51,52,53,54, but for ease of reference, Mangino et al.3 included the following six cohorts in their meta-analysis of telomere length: the Framingham Heart Study, Family Heart Study, Cardiovascular Health Study, Bogalusa Heart Study, HyperGEN, and TwinsUK). Mangino et al.3 adjusted for age, age², sex, and smoking history and checked for non-European ancestry with principal components analysis. From the meta-analysis, sixteen SNPs were available for this MR analysis (Supplementary Table 19). Those that were independent (not in linkage disequilibrium, LD; r² < 0.001 at a clumping distance of 10,000 kilobases, with reference to the 1000 Genomes Project, https://www.internationalgenome.org/) were kept, and those that did not fit these criteria were dropped. The corresponding effect estimates and standard errors for the retained SNPs were then obtained from the blood-trait GWA studies ($${\widehat{\beta }}_{ZY}$$ in Fig. 1). The blood-trait GWA studies were performed by Astle et al.21 on a population of ~ 173,000 individuals of European ancestry, largely from the UK (a meta-analysis of the UK Biobank and Interval studies). They adjusted for principal components and study center and excluded those with blood cancers or major blood disorders.
When a SNP was not available in the blood-trait GWA studies, a “proxy” SNP in LD with it at r² ≥ 0.80 (assessed using the 1000 Genomes Project) was chosen. If no “proxy” SNP was available, the SNP was removed from the analysis. SNP-exposure and SNP-outcome associations were harmonized with the “harmonise_data” function within the MR-Base “TwoSampleMR” package within R42,55. Harmonized SNP-exposure and SNP-outcome associations were combined with the IVW method (Fig. 1).
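The core of harmonization is aligning each SNP-outcome association to the exposure's effect allele. The sketch below, with hypothetical alleles, illustrates only the sign-flipping logic; real harmonization routines (e.g., in TwoSampleMR) also handle strand flips and ambiguous palindromic SNPs, which are omitted here:

```python
def harmonize(exp, out):
    """Align an outcome association to the exposure's effect allele.

    exp/out: dicts with 'ea' (effect allele), 'oa' (other allele), 'beta'.
    Returns the outcome beta on the exposure's effect-allele scale,
    or None when the two records do not share the same allele pair.
    """
    if (out['ea'], out['oa']) == (exp['ea'], exp['oa']):
        return out['beta']       # already aligned
    if (out['ea'], out['oa']) == (exp['oa'], exp['ea']):
        return -out['beta']      # alleles swapped: flip the sign
    return None                  # allele mismatch: drop the SNP

aligned = harmonize({'ea': 'A', 'oa': 'G', 'beta': 0.10},
                    {'ea': 'G', 'oa': 'A', 'beta': 0.20})
```

In the example, the outcome study reports its effect for the opposite allele, so the beta is flipped to −0.20 before the two sources are combined.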
For all tests, RadialMR regression56 was run to detect SNP outliers. Outlier SNPs were removed. (Different numbers of telomere-length SNPs were used for the various blood traits, depending on outlier removal and on whether a SNP or its “proxy” was available in the outcome dataset.) All instrumental variables included in this analysis have Cochran’s Q-statistic P-values indicating no evidence for heterogeneity between SNPs57 (heterogeneity statistics are provided in Supplementary Tables 10–18).
The selected SNPs correspond to independent genomic regions and account for 1% to 2% of the variance in leukocyte telomere length (R²), which corresponds to F-statistics between 14 and 17. F-statistics are used to gauge whether the IVW results suffer from reduced statistical power to reject the null hypothesis. This could happen if the telomere-length instruments explained a limited proportion of the variance in leukocyte telomere length. F-statistics > 10 are conventionally considered suitable58,59. The F-statistics and the R² values used to calculate them are presented in Table 1.
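The reported F-statistics can be related to R² and sample size through the usual approximation F = R²(n − k − 1) / ((1 − R²)k). The sketch below is illustrative arithmetic only; k = 12 is an assumed instrument count (the actual number varied by outcome), not a value taken from Table 1:

```python
def approx_f_stat(r2, n, k):
    """Approximate instrument-strength F-statistic for k SNPs jointly
    explaining a fraction r2 of the exposure variance in n samples."""
    return (r2 * (n - k - 1)) / ((1.0 - r2) * k)

# R^2 = 2%, n = 9190 (telomere GWA sample size), assumed k = 12 SNPs
f = approx_f_stat(0.02, 9190, 12)
```

With these inputs the formula gives F ≈ 15.6, which lies in the 14–17 range reported above, though the exact per-outcome values depend on each model's actual SNP count and R².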
In addition, I² statistics, which are useful for assessing potential attenuation bias in one of the sensitivity estimators (MR-Egger regression), are provided (also in Table 1). I² statistics < 90% can indicate dilution in the MR-Egger estimate, which can mean that the results from the MR-Egger intercept test (see below) are potentially inaccurate. Simulation extrapolation (SIMEX), a correction procedure that adjusts the MR-Egger estimate for potential regression dilution towards the null, is recommended for I² statistics < 90%60. The SIMEX results are reported in Supplementary Tables 10–19.
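Cochran's Q and Higgins-style I² statistics share the same inverse-variance-weighted construction. The sketch below is a generic version with synthetic estimates, not the exact I²GX computation used for MR-Egger (which applies this form to the SNP-exposure associations):

```python
import numpy as np

def cochran_q_and_i2(estimates, ses):
    """Cochran's Q for between-estimate heterogeneity and the derived I^2."""
    b = np.asarray(estimates, float)
    w = 1.0 / np.asarray(ses, float) ** 2
    mu = np.sum(w * b) / np.sum(w)            # inverse-variance-weighted mean
    q = float(np.sum(w * (b - mu) ** 2))
    df = len(b) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Three synthetic estimates with equal precision
q, i2 = cochran_q_and_i2([0.8, 1.0, 1.2], [0.1, 0.1, 0.1])
```

Here Q = 8 on 2 degrees of freedom, giving I² = (8 − 2)/8 = 0.75, i.e., 75% of the variability is attributed to heterogeneity rather than chance.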
### Sensitivity analyses
The IVW estimator can be biased if any of the instrumental SNPs violate the MR assumption about the genetic instrument not having a direct effect on the outcome independent of the exposure61. To assess possible violations of MR assumption (3), three sensitivity estimators—MR-Egger regression, the weighted median, and the weighted mode—were run and their results compared with those of the IVW. The sensitivity estimators make different assumptions about the underlying nature of pleiotropy. Thus, if the directions and magnitudes of their effects comport with those of the IVW, this is a qualitative screen against pleiotropy (likewise, heterogeneity in their effects suggests pleiotropy)62. Comparing the IVW and sensitivity estimators is a form of triangulation and knowledge synthesis—i.e., the judgement process for whether the results are consistent with causality is like what investigators do when they perform a systematic review of studies with different methods, strengths, and limitations.
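As a concrete example of one sensitivity estimator: the weighted median returns a consistent estimate when SNPs carrying at least half of the total weight are valid instruments. The sketch below uses synthetic per-SNP ratio estimates and is simplified relative to published implementations, which also bootstrap a standard error:

```python
import numpy as np

def weighted_median(ratio_estimates, weights):
    """Weighted median of per-SNP ratio estimates."""
    order = np.argsort(ratio_estimates)
    b = np.asarray(ratio_estimates, float)[order]
    w = np.asarray(weights, float)[order]
    # cumulative weight fraction evaluated at the midpoint of each estimate
    p = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    # interpolate to the 50th weighted percentile
    return float(np.interp(0.5, p, b))

# Three concordant SNPs plus one heavily down-weighted outlier
wm = weighted_median([1.0, 2.0, 3.0, 10.0], [1.0, 1.0, 1.0, 0.01])
```

The outlier at 10 barely moves the estimate (≈ 2.0), illustrating the robustness that makes the comparison with the IVW informative.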
In-depth explanations of the MR estimators and their assumptions have been covered elsewhere61,63,64. The results for the IVW and sensitivity estimators are reported in Table 1, except for the results of the MR-Egger intercept tests. MR-Egger regression provides both an effect estimate and a test for directional pleiotropy (the MR-Egger intercept test). The MR-Egger intercept test for pleiotropy is interpreted differently than the estimators providing tests for associations between telomere length and the blood-cell traits. When the MR-Egger intercept does not differ from zero (P > 0.05), this is evidence against pleiotropy. The MR-Egger intercept results are reported in Supplementary Tables 10–18.
### Number of tests
In total, nine MR tests were run (detailed characteristics for the individual SNPs used in each model are provided in Supplementary Tables 1–9). To account for multiple testing across analyses, a Bonferroni correction was used to establish a P-value threshold for strong evidence (P < 0.006; false-positive rate = 0.05/9 outcomes). This correction is overly conservative given the correlation between the blood traits.
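The threshold follows directly from the Bonferroni formula, dividing the desired family-wise false-positive rate by the number of tests:

```python
alpha = 0.05      # desired family-wise false-positive rate
n_tests = 9       # one MR test per blood-cell trait
threshold = alpha / n_tests   # 0.0555.../10 per test
```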
### Statistical software
SIMEX corrections were performed in Stata SE/16.065. All other described analyses were performed in R version 3.5.2 with the “TwoSampleMR” package42.
## Data availability
All data sources used for SNP-exposure and SNP-outcome associations are publicly available. The summary data for the telomere length instruments are available in Haycock et al.20. The nine hematological outcome GWA studies used for these analyses are accessible within MR-Base: https://www.mrbase.org/42.
## Abbreviations
MR:
Mendelian randomization
RBC:
Red blood cell
WBC:
White blood cell
Ret:
Reticulocyte
MCH:
Mean corpuscular hemoglobin
MCV:
Mean cell (corpuscular) volume
MCHC:
Mean corpuscular hemoglobin concentration
RDW:
Red blood cell distribution width
Hct:
Hematocrit
Hgb:
Hemoglobin
GWA:
Genome-wide association
HSC:
Hematopoietic stem cell
BMF:
Bone marrow failure
IBMFS:
Inherited bone marrow failure syndromes
## References
1. Hastie, N. D. et al. Telomere reduction in human colorectal carcinoma and with ageing. Nature 346, 866–868 (1990).
2. Vaziri, H. et al. Evidence for a mitotic clock in human hematopoietic stem cells: loss of telomeric DNA with age. Proc. Natl. Acad. Sci. U.S.A 91, 9857–9860 (1994).
3. Mangino, M. et al. Genome-wide meta-analysis points to CTC1 and ZNF676 as genes regulating telomere homeostasis in humans. Hum. Mol. Genet. 21, 5385–5394 (2012).
4. Savage, S. A. & Bertuch, A. A. The genetics and clinical manifestations of telomere biology disorders. Genet. Med. 12, 753–764 (2010).
5. Diez Roux, A. V. et al. Race/ethnicity and telomere length in the multi-ethnic study of atherosclerosis. Aging Cell 8, 251–257 (2009).
6. Kozlitina, J. & Garcia, C. K. Red blood cell size is inversely associated with leukocyte telomere length in a large multi-ethnic population. PLoS ONE 7, 1–10 (2012).
7. Joehanes, R. et al. Epigenetic signatures of cigarette smoking. Circ. Cardiovasc. Genet. 9, 436–447 (2016).
8. Xu, K. et al. Epigenome-wide DNA methylation association analysis identified novel loci in peripheral cells for alcohol consumption among European American male veterans. Alcohol. Clin. Exp. Res. 43, 2111–2121 (2019).
9. Bertuch, A. A. & Gramatges, M. M. Short telomeres: from dyskeratosis congenita to sporadic aplastic anemia and malignancy. Transl. Res. 162, 997–1003 (2013).
10. Gutmajster, E. et al. Telomere length in elderly caucasians weakly correlates with blood cell counts. Sci. World J. 23, 153608 (2013).
11. Engelhardt, M. et al. Telomerase regulation, cell cycle, and telomere stability in primitive hematopoietic cells. Blood 90, 182–193 (1997).
12. Mazidi, M., Penson, P. & Banach, M. Association between telomere length and complete blood count in US adults. Arch. Med. Sci. 13, 601–605 (2017).
13. Alter, B. P., Giri, N., Savage, S. & Rosenberg, P. S. Telomere length in inherited bone marrow failure syndromes. Haematologica 100, 49–54 (2014).
14. Maciejewski, J. P. & Risitano, A. Hematopoietic stem cells in aplastic anemia. Arch. Med. Sci. 34, 520–527 (2003).
15. Diaz De Leon, A. et al. Subclinical lung disease, macrocytosis, and premature graying in kindreds with telomerase (TERT) mutations. Chest 140, 753–763 (2011).
16. Alter, B. P. et al. Telomere length is associated with disease severity and declines with age in dyskeratosis congenita. Haematologica 97, 353–359 (2012).
17. Gadalla, S. M., Cawthon, R., Giri, N., Alter, B. P. & Savage, S. A. Telomere length in blood, buccal cells, and fibroblasts from patients with inherited bone marrow failure syndromes. Aging (Albany NY) 2, 867–874 (2010).
18. Pang, Q. & Andreassen, P. R. Fanconi anemia proteins and endogenous stresses. Mutat. Res. 668, 42–53 (2009).
19. Sarkar, J. & Liu, Y. Fanconi anemia proteins in telomere maintenance. DNA Repair 43, 107–112 (2016).
20. Haycock, P. C. et al. Association between telomere length and risk of cancer and non-neoplastic diseases: a Mendelian randomization study. JAMA Oncol. 3, 636–651 (2017).
21. Astle, W. J. et al. The allelic landscape of human blood cell trait variation and links to common complex disease. Cell 167, 1415-1429.e19 (2016).
22. Jenne, C. N., Urrutia, R. & Kubes, P. Platelets: bridging hemostasis, inflammation, and immunity. Int. J. Lab. Hematol. 35, 254–261 (2013).
23. Jensen, F. B. The dual roles of red blood cells in tissue oxygen delivery: oxygen carriers and regulators of local blood flow. J. Exp. Biol. 212, 3387–3393 (2009).
24. Varol, C., Mildner, A. & Jung, S. Macrophages: development and tissue specialization. Ann. Rev. Immunol. 33, 643–675 (2015).
25. Davey Smith, G. & Ebrahim, S. ‘Mendelian randomization’: Can genetic epidemiology contribute to understanding environmental determinants of disease?. Int. J. Epidemiol. 32, 1–22 (2003).
26. De Meyer, T. et al. Lower red blood cell counts in middle-aged subjects with shorter peripheral blood leukocyte telomere length. Aging Cell 7, 700–705 (2008).
27. Sprague, B. L. et al. Physical activity, white blood cell count, and lung cancer risk in a prospective cohort study. Cancer Epidemiol. Biomark. Prev. 17, 2714–2722 (2008).
28. Balkwill, F. & Mantovani, A. Inflammation and cancer: Back to Virchow?. Lancet 357, 539–545 (2001).
29. Lee, Y., Lee, H., Nam, C., Hwang, U. & Jee, S. White blood cell count and the risk of colon cancer. Yonsei Med. J. 47, 646–656 (2006).
30. Margolis, K. L., Rodabough, R. J., Thomson, C. A., Lopez, A. M. & McTiernan, A. Prospective study of leukocyte count as a predictor of incident breast, colorectal, endometrial, and lung cancer and mortality in postmenopausal women. Arch. Intern. Med. 167, 1837–1844 (2007).
31. Allin, K. H., Bojesen, S. E. & Nordestgaard, B. G. Inflammatory biomarkers and risk of cancer in 84,000 individuals from the general population. Int. J. Cancer 139, 1493–1500 (2016).
32. Anderson, G. L. & Neuhouser, M. L. Obesity and the risk for premenopausal and postmenopausal breast cancer. Cancer Prev. Res. 5, 515–522 (2012).
33. Coussens, L. M. & Werb, Z. Inflammation and cancer. Nature 420, 860–867 (2010).
34. Anderson, E. L. et al. Education, intelligence and Alzheimer’s disease: evidence from a multivariable two-sample Mendelian randomization study. bioRxiv https://doi.org/10.1093/ije/dyz280/5719343 (2018).
35. Vanderweele, T. J., Tchetgen, E. J. T. & Kraft, P. Methodological challenges in Mendelian randomization. Epidemiology 25, 427–435 (2014).
36. Codd, V. et al. Identification of seven loci affecting mean telomere length and their association with disease. Nat. Genet. 45, 422–427 (2013).
37. Yeh, J. K. & Wang, C. Y. Telomeres and telomerase in cardiovascular diseases. Genes (Basel) 7, 58 (2016).
38. Hoffman, M., Blum, A., Baruch, R., Kaplan, E. & Benjamin, M. Leukocytes and coronary heart disease. Atherosclerosis 172, 1–6 (2004).
39. Blasco, M. A. Telomere length, stem cells and aging. Nat. Chem. Biol. 3, 640–649 (2007).
40. Schooling, C. M., Freeman, G. & Cowling, B. J. Mendelian randomization and estimation of treatment efficacy for chronic diseases. Am. J. Epidemiol. 177, 1128–1133 (2013).
41. Hemani, G., Bowden, J. & Smith, G. D. Evaluating the potential role of pleiotropy in Mendelian randomization studies. Hum. Mol. Genet. 27, 195–208 (2018).
42. Hemani, G. et al. The MR-Base platform supports systematic causal inference across the human phenome. Elife 7, 1–29 (2018).
43. Burgess, S., Butterworth, A. & Thompson, S. G. Mendelian randomization analysis with multiple genetic variants using summarized data. Genet. Epidemiol. 37, 658–665 (2013).
44. Bowden, J., Smith, G. D. & Burgess, S. Mendelian randomization with invalid instruments: effect estimation and bias detection through Egger regression. Int. J. Epidemiol. 44, 512–525 (2015).
45. Johnson, T. Efficient calculation for multi-SNP genetic risk scores. in American Society of Human Genetics Annual Meeting (2012). https://doi.org/10.1038/ng.784.
46. Davey Smith, G. & Hemani, G. Mendelian randomization: genetic anchors for causal inference in epidemiological studies. Hum. Mol. Genet. 23, R89–R98 (2014).
47. Burgess, S. & Thompson, S. G. Interpreting findings from Mendelian randomization using the MR-Egger method. Eur. J. Epidemiol. 32, 377–389 (2017).
48. Didelez, V. & Sheehan, N. Mendelian randomization as an instrumental variable approach to causal inference. Stat. Methods Med. Res. 16, 309–330 (2007).
49. Hunt, S. C. et al. Leukocyte telomeres are longer in African Americans than in whites: the national heart, lung, and blood institute family heart study and the bogalusa heart study. Aging Cell 7, 451–458 (2008).
50. Feinleib, M., Kannel, W. B., Garrison, R. J., McNamara, P. M. & Castelli, W. P. The Framingham offspring study. Design and preliminary data. Prev. Med. 4, 518–525 (1975).
51. Fried, L. P. et al. The cardiovascular health study: design and rationale. Ann. Epidemiol. 1, 263–276 (1991).
52. Higgins, M. et al. NHLBI family heart study: objectives and design. Am. J. Epidemiol. 143, 1219–1228 (1996).
53. Tell, G. S. et al. Recruitment of adults 65 years and older as participants in the cardiovascular health study. Ann. Epidemiol. 3, 358–366 (1993).
54. Williams, R. R. et al. NHLBI family blood pressure program: methodology and recruitment in the HyperGEN Network. Ann. Epidemiol. 10, 389–400 (2000).
55. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ (2019).
56. Bowden, J. et al. Improving the visualization, interpretation and analysis of two-sample summary data Mendelian randomization via the radial plot and radial regression. Int. J. Epidemiol. https://doi.org/10.1093/ije/dyy101 (2018).
57. Del Greco, M. F., Minelli, C., Sheehan, N. A. & Thompson, J. R. Detecting pleiotropy in Mendelian randomisation studies with summary data and a continuous outcome. Stat. Med. 34, 2926–2940 (2015).
58. Burgess, S. & Thompson, S. G. Avoiding bias from weak instruments in mendelian randomization studies. Int. J. Epidemiol. 40, 755–764 (2011).
59. Pierce, B. L. & Burgess, S. Efficient design for Mendelian randomization studies: subsample and 2-sample instrumental variable estimators. Am. J. Epidemiol. 178, 1177–1184 (2013).
60. Spiller, W., Slichter, D., Bowden, J. & Davey Smith, G. Detecting and correcting for bias in Mendelian randomization analyses using gene-by-environment interactions. Int. J. Epidemiol. https://doi.org/10.1093/ije/dyy204 (2018).
61. Spiller, W., Davies, N. M. & Palmer, T. M. Software application profile: mrrobust—a tool for performing two-sample summary Mendelian randomization analyses. Int. J. Epidemiol. 48, 684–690 (2019).
62. Burgess, S., Bowden, J., Fall, T., Ingelsson, E. & Thompson, S. G. Sensitivity analyses for robust causal inference from Mendelian randomization analyses with multiple genetic variants. Epidemiology 28, 30–42 (2017).
63. Yarmolinsky, J. et al. Appraising the role of previously reported risk factors in epithelial ovarian cancer risk: a Mendelian randomization analysis. PLoS Med. 16, e1002893 (2019).
64. Hwang, L., Lawlor, D. A., Freathy, R. M., Evans, D. M. & Warrington, N. M. Using a two-sample Mendelian randomization design to investigate a possible causal effect of maternal lipid concentrations on offspring birth weight. Int. J. Epidemiol. 005, 1–11 (2019).
65. StataCorp. Stata Statistical Software: Release 16 (2019).
## Author information
### Contributions
C.D.A. performed the analysis, wrote, and approved the manuscript. B.B.B. provided extensive comments on the revision and approved the final manuscript.
### Corresponding author
Correspondence to Charleen D. Adams.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Adams, C.D., Boutwell, B.B. A Mendelian randomization study of telomere length and blood-cell traits. Sci Rep 10, 12223 (2020). https://doi.org/10.1038/s41598-020-68786-6
https://www.pims.math.ca/scientific-event/230322-lfantab
## L-functions in Analytic Number Theory: Alexandre Bailleul
• Date: 03/22/2023
• Time: 11:00
Lecturer(s):
Alexandre Bailleul, ENS Paris-Saclay
Location:
University of Lethbridge
Topic:
Exceptional Chebyshev's bias over finite fields
Description:
Chebyshev's bias is the surprising phenomenon that there are usually more primes of the form 4n+3 than of the form 4n+1 in initial intervals of the natural numbers. More generally, following work of Rubinstein and Sarnak, we know Chebyshev's bias favours primes that are not squares modulo a fixed integer q over primes that are squares modulo q. This phenomenon also appears over finite fields, where we look at irreducible polynomials modulo a fixed polynomial M. However, in the finite-field case, there are a few known exceptions to this phenomenon, appearing as a result of multiplicative relations between zeroes of certain L-functions. In this work, we show, improving on earlier work by Kowalski, that those exceptions are rare. This is joint work with L. Devin, D. Keliher and W. Li.
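The classical bias is easy to observe computationally. The sketch below (illustrative only, not part of the talk) counts odd primes up to 10,000 in the two residue classes modulo 4; the 3 (mod 4) class is known to stay ahead for every bound below 26,861:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = [False] * len(range(p * p, n + 1, p))
    return [p for p in range(2, n + 1) if sieve[p]]

def race_mod4(n):
    """Counts of primes <= n congruent to 3 and to 1 modulo 4."""
    ps = primes_up_to(n)
    return (sum(p % 4 == 3 for p in ps), sum(p % 4 == 1 for p in ps))

c3, c1 = race_mod4(10000)   # the 3 mod 4 class leads at this bound
```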
Other Information:
Time: 11 am Pacific/ 12 pm Mountain
http://mathoverflow.net/revisions/1884/list
Well, here's a start. Suppose we have n points, and let k = n(n-1)/2. Thus there are k distances we have to pick. Let's take all of our distances to lie in the set {k+1, k+2, ..., 2k}, so that we don't have to worry about the triangle inequality. Now, the collection of open balls depends on how many times each distance is repeated. However, translating the set by an integer doesn't affect the collection of open balls. That is, if D is the multiset of distances, and C_D is the collection of open balls induced by D, then C_{D+1} = C_D, where by D+1 I mean add 1 to each element of D. This is true because of what javier says: to find the collection of open balls at a point, we just start at that point and increase the radius of the ball by 1 at each step, writing down each open ball we get and stopping when we get the whole set.
The upshot of this is that if k+1 is not the smallest element of D, we can translate D such that k+1 is the smallest element, without affecting the open ball structure. In fact, I'm pretty sure a stronger statement is true: if D has a gap, we can slide down the upper part of D to close that gap without affecting the open ball structure. That is, if r and s are elements of D such that r < s-1 and there are no elements of D strictly between r and s, then we can translate s and everything above it down by 1. If D has r distinct values, then by doing such translations, we can get D to be a multiset with values in {k+1, ..., k+r}.
Thus, if we have a particular open ball structure C on n points, we can find a multiset D with the above properties such that C = C_D. So the number of such multisets provides an upper bound on the number of (unlabeled) metric spaces. I have no idea how good this bound is, but let's calculate it.
Let f_r(k) = # of multisets with k elements taking values from {k+1, ..., k+r}, and taking each value at least once. So we essentially have k-r free slots in D, and r different values, so the number of such multisets is the binomial coefficient B((k-r)+(r-1), r-1) = B(k-1, r-1). Then the total number of multisets is f_1(k) + ... + f_k(k) = 2^{k-1}.
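These counts are easy to verify by brute force for small k. The sketch below (illustrative, with the values relabeled to {1, ..., r} since only their number matters) checks both the binomial formula and the 2^{k-1} total:

```python
from itertools import combinations_with_replacement
from math import comb

def f(k, r):
    """Multisets of size k over r values that use every value at least once."""
    vals = tuple(range(1, r + 1))
    return sum(1 for m in combinations_with_replacement(vals, k)
               if set(m) == set(vals))

k = 6
counts = [f(k, r) for r in range(1, k + 1)]
```

For k = 6 this gives [1, 5, 10, 10, 5, 1], matching B(5, r-1), with total 2^5 = 32.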
Now, that looks pretty huge: 2^{(n+1)(n-2)/2}. On the other hand, there are 2^{2^n} collections of subsets of X, and even when you account for the fact that you have to include all the singletons and the whole set, 2^{2^n - (n+1)} is hardly an improvement. For n=3, our new upper bound is 4, and the true value is 3, since the multisets {3, 4, 5} and {3, 3, 4} give the same open ball structure.
Can anyone expand on this?
Edit: On further reflection, there are multiple different open ball structures induced by a multiset. For example, if n=4 and D = {7, 7, 7, 7, 8, 8}, then we get different structures if we assign the two distances of 8 to the same vertex or to different vertices. So it seems we should look at ordered k-tuples instead of multisets, which makes our upper bound much larger (bigger than k factorial). So maybe this is less useful than I thought. But at least after thinking about it like this, I might have simplified the problem enough that I can write a program to calculate the next few values of the sequence.
1
Well, here's a start. Suppose we have n points, and let k = n(n-1)/2. Thus there are k distances we have to pick. Let's take all of our distances to lie in the set {k+1, k+2, ..., 2k}, so that we don't have to worry about the triangle inequality. Now, the collection of open balls depends on how many times each distance is repeated. However, translating the set by an integer doesn't affect the collection of open balls. That is, if D is the multiset of distances, and CD is the collection of open balls induced by D, then CD+1 = CD, where by D+1 I mean add 1 to each element of D. This is true because of what javier says: to find the collection of open balls at a point, we just start at that point and increase the radius of the ball by 1 at each step, writing down each open ball we get and stopping when we get the whole set.
The upshot of this is that if k+1 is not the smallest element of D, we can translate D such that k+1 is the smallest element, without affecting the open ball structure. In fact, I'm pretty sure a stronger statement is true: if D has a gap, we can slide down the upper part of D to close that gap without affecting the open ball structure. That is, if r and s are elements of D such that r < s-1 and there are no elements of D strictly between r and s, then we can translate s and everything above it down by 1. If D has r distinct values, then by doing such translations, we can get D to be a multiset with values in {k+1, ..., k+r}.
Thus, if we have a particular open ball structure C on n points, we can find a multiset D with the above properties such that C = CD. So the number of such multisets provides an upper bound on the number of (unlabeled) metric spaces. I have no idea how good this bound is, but let's calculate it.
Let fr(k) = # of multisets with k elements taking values from {k+1, ..., k+r}, and taking each value at least once. So we essentially have k-r free slots in D, and r different values, so by stars and bars the number of such multisets is the binomial coefficient B((k-r)+(r-1), r-1) = B(k-1, r-1). Then the total number of multisets is f1(k) + ... + fk(k) = 2^{k-1}.
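As a sanity check on the count, here is the sum computed directly (Python; the function name is made up):

```python
from math import comb

def multiset_upper_bound(n):
    """Sum over r = 1..k of the number of k-element multisets over
    {k+1, ..., k+r} using every value at least once, where k = n(n-1)/2.
    By stars and bars each term is C(k-1, r-1), so the sum is 2**(k-1)."""
    k = n * (n - 1) // 2
    return sum(comb(k - 1, r - 1) for r in range(1, k + 1))
```

For n = 3 this gives 4, matching the bound quoted below.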
Now, that looks pretty huge: 2^{(n+1)(n-2)/2}. On the other hand, there are 2^{2^n} collections of subsets of X, and even when you account for the fact that you have to include all the singletons and the whole set, 2^{2^n - (n+1)} is hardly an improvement. For n=3, our new upper bound is 4, and the true value is 3, since the multisets {3, 4, 5} and {3, 3, 4} give the same open ball structure.
Can anyone expand on this?
https://forum.solidworks.com/thread/18730
# Not able to set toolbox folder, SW2007 SP5.
Question asked by Aleko Frankman on Oct 8, 2008
Latest reply on Oct 9, 2008 by Aleko Frankman
Hello,
I am trying to use the toolbox and I like the way it works but I need to configure where the resultant files are stored. When I try to configure the system option for the "Hole Wizard/Toolbox" folder, Solidworks pops up an error box with the following message:
Error: Could not find the Standards database 'directory_path'
The directory exists and I can even create new directories while trying to set the target directory.
I am trying to use a networked file server and I thought that might be the problem so I tried my local C: drive. I still get the same error message.
The toolbox files end up in c:\program files\common files\solidworks data. This is a problem because I am working with several remote offices. The files need to be on a shared network file server.
Any ideas?
Thanks,
Aleko
http://www.ipb.uni-bonn.de/data/rgbd-dynamic-dataset/
# Bonn RGB-D Dynamic Dataset
### The Bonn RGB-D Dynamic Dataset
Abstract: This is a dataset for RGB-D SLAM, containing highly dynamic sequences. We provide 24 dynamic sequences, where people perform different tasks, such as manipulating boxes or playing with balloons, plus 2 static sequences. For each scene we provide the ground truth pose of the sensor, recorded with an Optitrack Prime 13 motion capture system. The sequences are in the same format as the TUM RGB-D Dataset, so that the same evaluation tools can be used. Furthermore, we provide a ground truth 3D point cloud of the static environment recorded using a Leica BLK360 terrestrial laser scanner.
Related publication
If you use this dataset for your research, please cite:
Emanuele Palazzolo, Jens Behley, Philipp Lottes, Philippe Giguère, Cyrill Stachniss, “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals”, arXiv, 2019. PDF
BibTeX:
@InProceedings{palazzolo2019iros, author = {E. Palazzolo and J. Behley and P. Lottes and P. Gigu\`ere and C. Stachniss}, title = {{ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals}}, booktitle = iros, year = {2019}, url = {http://www.ipb.uni-bonn.de/pdfs/palazzolo2019iros.pdf}, codeurl = {https://github.com/PRBonn/refusion}, videourl = {https://youtu.be/1P9ZfIS5-p4}, }
Ground truth model
We provide the full ground truth point cloud of 394109339 points, as well as a subsampled section of 54676774 points, more convenient for evaluation. The point clouds are in PLY ASCII format. To convert a model from the reference frame of the RGB-D sensor to the one of the ground truth model, refer to the Evaluation section below.
Evaluation
Since the dataset is in the same format as the TUM RGB-D Dataset, for evaluating the trajectory error it is possible to use the same Python tools.
For evaluating the reconstructed model w.r.t. the ground truth, it is first necessary to transform them to the same coordinate frame. To convert a model from the reference frame of the sensor to the one of the ground truth, one can use the following transformation:
$\mathbf{T}_\mathrm{g}=\mathbf{T}_\mathrm{ROS}^{-1}\mathbf{T}_0\mathbf{T}_\mathrm{ROS}\mathbf{T}_\mathrm{m}$,
where $\mathbf{T}_\mathrm{m}$ is the transformation between the reference frame of the RGB-D sensor and the one of the markers used by the motion capture system, $\mathbf{T}_\mathrm{ROS}$ transforms the coordinate frame of the motion capture system to the one used to write the file groundtruth.txt in the sequences, and $\mathbf{T}_0$ is the first pose read from the file groundtruth.txt.
The value of $\mathbf{T}_\mathrm{m}$ obtained from our calibration is the following:
$\mathbf{T}_\mathrm{m} = \begin{pmatrix} 1.0157 & 0.1828 & -0.2389 & 0.0113 \\ 0.0009 & -0.8431 & -0.6413 & -0.0098 \\ -0.3009 & 0.6147 & -0.8085 & 0.0111 \\ 0 & 0 & 0 & 1.0000 \end{pmatrix}$.
$\mathbf{T}_\mathrm{ROS}$ is needed due to a bug in the ROS node that interfaces the framework to the motion capture system, and its value is:
$\mathbf{T}_\mathrm{ROS}= \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.
Finally, $\mathbf{T}_0$ has to be read from the file groundtruth.txt included in the sequence that has to be evaluated.
To simplify this process, we provide a Python script that will compute $\mathbf{T}_\mathrm{g}$ given $\mathbf{T}_0$. The script requires numpy, numpy-quaternion and numba. The three packages can be easily installed with pip.
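The transformation above can also be applied directly with a few lines of NumPy (a sketch, not the dataset's own script; the matrix values are copied from this page):

```python
import numpy as np

# Calibration between the RGB-D sensor frame and the motion-capture markers
# (T_m, values from this page).
T_m = np.array([
    [ 1.0157,  0.1828, -0.2389,  0.0113],
    [ 0.0009, -0.8431, -0.6413, -0.0098],
    [-0.3009,  0.6147, -0.8085,  0.0111],
    [ 0.0,     0.0,     0.0,     1.0000],
])

# Frame fix for the motion-capture ROS node (T_ROS, also from this page).
T_ROS = np.array([
    [-1.0, 0.0, 0.0, 0.0],
    [ 0.0, 0.0, 1.0, 0.0],
    [ 0.0, 1.0, 0.0, 0.0],
    [ 0.0, 0.0, 0.0, 1.0],
])

def model_to_groundtruth(T_0):
    """Compute T_g = T_ROS^{-1} @ T_0 @ T_ROS @ T_m for a 4x4 first pose T_0."""
    return np.linalg.inv(T_ROS) @ T_0 @ T_ROS @ T_m
```

With $\mathbf{T}_0$ read from groundtruth.txt, applying the returned $\mathbf{T}_\mathrm{g}$ to the reconstructed model brings it into the ground truth frame.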
RGB-D Sequences
The format of the RGB-D sequences is the same as the TUM RGB-D Dataset and it is described here. The depth images are already registered w.r.t. the corresponding RGB images. The calibration of the RGB camera is the following:
fx = 542.822841
fy = 542.576870
cx = 315.593520
cy = 237.756098
d0 = 0.039903
d1 = -0.099343
d2 = -0.000730
d3 = -0.000144
d4 = 0.000000
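For convenience, the calibration above can be assembled into the usual 3x3 intrinsic matrix, with the distortion coefficients kept in a separate vector (a sketch; the OpenCV-style ordering of d0..d4 is our assumption, since the page does not name a convention):

```python
import numpy as np

# RGB camera intrinsics from the calibration listed above.
fx, fy = 542.822841, 542.576870
cx, cy = 315.593520, 237.756098

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Distortion coefficients d0..d4, assumed to follow OpenCV's
# (k1, k2, p1, p2, k3) ordering.
dist_coeffs = np.array([0.039903, -0.099343, -0.000730, -0.000144, 0.000000])
```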
All Sequences (total size: 16.4 GB)
- rgbd_bonn_balloon (243.8 MB)
- rgbd_bonn_balloon2 (267.0 MB)
- rgbd_bonn_balloon_tracking (325.9 MB)
- rgbd_bonn_balloon_tracking2 (236.4 MB)
- rgbd_bonn_crowd (515.9 MB)
- rgbd_bonn_crowd2 (498.4 MB)
- rgbd_bonn_crowd3 (482.0 MB)
- rgbd_bonn_kidnapping_box (619.0 MB)
- rgbd_bonn_kidnapping_box2 (723.9 MB)
- rgbd_bonn_moving_nonobstructing_box (436.2 MB)
- rgbd_bonn_moving_nonobstructing_box2 (513.1 MB)
- rgbd_bonn_moving_obstructing_box (320.8 MB)
- rgbd_bonn_moving_obstructing_box2 (422.1 MB)
- rgbd_bonn_person_tracking (329.5 MB)
- rgbd_bonn_person_tracking2 (324.3 MB)
- rgbd_bonn_placing_nonobstructing_box (400.8 MB)
- rgbd_bonn_placing_nonobstructing_box2 (372.8 MB)
- rgbd_bonn_placing_nonobstructing_box3 (369.8 MB)
- rgbd_bonn_placing_obstructing_box (508.9 MB)
- rgbd_bonn_removing_nonobstructing_box (271.7 MB)
- rgbd_bonn_removing_nonobstructing_box2 (501.3 MB)
- rgbd_bonn_removing_obstructing_box (509.9 MB)
- rgbd_bonn_synchronous (182.8 MB)
- rgbd_bonn_synchronous2 (203.5 MB)
- rgbd_bonn_static (5.8 GB)
http://mathonline.wikidot.com/the-limit-of-a-function
The Limit of a Function
This page is intended to be a part of the Real Analysis section of Math Online. Similar topics can also be found in the Calculus section of the site.
Definition: Let $f : A \to \mathbb{R}$ be a function and let $c$ be a cluster point of $A$. Then we say $L \in \mathbb{R}$ is the limit of $f$ at $c$ written $\lim_{x \to c} f(x) = L$ if $\forall \epsilon > 0$ there exists a $\delta > 0$ such that $\forall x \in A$ with $0 < \mid x - c \mid < \delta$ we have that $\mid f(x) - L \mid < \epsilon$. Equivalently this definition can be rephrased as $\lim_{x \to c} f(x) = L$ if $\forall \epsilon > 0 \: \exists \delta > 0$ such that if $x \neq c$ and $x \in V_{\delta} (c) \cap A$ then $f(x) \in V_{\epsilon} (L)$.
We should mention that the definition of $L$ being the limit of a function $f$ at the point $x = c$ does not require that $c$ be in the domain $A$ of $f$. Instead, the definition of the limit of a function only requires that $c$ be a cluster point of the domain $A$, that is, for every $\delta > 0$, the $\delta$-neighbourhood $V_{\delta}(c)$ contains at least one point of $A$ that differs from $c$. This ensures that for very small $\delta$ there are points in the domain of $f$ for which the limit can be determined.
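As a concrete illustration of the definition (a standard textbook example, not from this page), consider $f(x) = 3x$ at any $c \in \mathbb{R}$:

```latex
\text{Claim: } \lim_{x \to c} 3x = 3c.
\text{Given } \epsilon > 0, \text{ choose } \delta = \epsilon/3.
\text{Then } 0 < |x - c| < \delta \implies |3x - 3c| = 3|x - c| < 3\delta = \epsilon.
```

Note that the choice of $\delta$ depends only on $\epsilon$ here; in general it may depend on $c$ as well.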
The following three pages regard the limit of a function. The first page regards the uniqueness of the limit of a function at a cluster point $c$ of $A$, that is, if $\lim_{x \to c} f(x) = L$ and $\lim_{x \to c} f(x) = M$ then $L = M$. The second page regards a criterion in terms of sequences for which a function has limit $L$ at $c$, while the third page regards criteria in terms of sequences for which a function does NOT have limit $L$ at $c$.
http://www.physicsforums.com/showthread.php?p=3607440
## Chebyshev's theorem
Chebyshev's theorem: If μ and σ are the mean and standard deviation of the random variable X, then for any positive constant k,the probability that X will take on a value within k standard deviations of the mean is at least [1-(1/k²)],that is,
P(|X-μ|<kσ) ≥ 1-1/k², σ≠0.
(i) Given Chebyshev's theorem, prove this theorem using the classical definition of variance.
(ii) Give an example of how this theorem can be used to calculate a probability.
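For part (ii), here is a quick numerical illustration (a Python sketch, not a proof; the choice of distribution is ours):

```python
import random

# Check P(|X - mu| < k*sigma) >= 1 - 1/k^2 empirically for X ~ Uniform(0, 1).
random.seed(0)
samples = [random.uniform(0, 1) for _ in range(100_000)]
mu = 0.5
sigma = (1 / 12) ** 0.5          # standard deviation of Uniform(0, 1)

k = 2
within = sum(abs(x - mu) < k * sigma for x in samples) / len(samples)
bound = 1 - 1 / k ** 2           # Chebyshev lower bound = 0.75
# Here k*sigma ≈ 0.577 exceeds the maximum possible deviation 0.5,
# so every sample falls within, and 1.0 >= 0.75 as the theorem guarantees.
```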
Quote by risha Chebyshev's theorem: If μ and σ are the mean and standard deviation of the random variable X, then for any positive constant k,the probability that X will take on a value within k standard deviations of the mean is at least [1-(1/k²)],that is, P(|X-μ|
Are you asking us to answer this question for you? If this is a homework question, it should be posted in that forum with an attempt at a solution. In any case, we want to see your attempt at an answer.
https://www.scielosp.org/article/rpsp/2006.v20n6/377-384/
INVESTIGACIÓN ORIGINAL ORIGINAL RESEARCH
Burden of diarrhea among children in Honduras, 2000-2004: estimates of the role of rotavirus
La diarrea infantil en Honduras, 2000-2004: estimados del papel desempeñado por el rotavirus
José Orlando Solórzano GirónI; Ida Berenice MolinaI; Reina M. Turcios-RuizII; Claudia E. Quiroz MejiaI; Luis Miguel AmendolaIII; Lucia Helena de OliveiraIV; Jon K. AndrusIV; Paul W. StuppII; Joseph S. BreseeII; Roger I. GlassII
ISecretaría de Salud, Tegucigalpa, Honduras
IICenters for Disease Control and Prevention, Atlanta, Georgia, United States of America. Send correspondence to: Reina M. Turcios-Ruiz, Centers for Disease Control and Prevention, 4770 Buford Highway, Mail Stop K-23, Atlanta, GA 30341, United States of America; telephone: 770-488-6219; fax: 770-488-6242; e-mail: RTurcios@cdc.gov
IIIPan American Health Organization, Tegucigalpa, Honduras
IVPan American Health Organization, Washington, D.C., United States of America
ABSTRACT
OBJECTIVES: To estimate the annual burden of diarrhea and of diarrhea that is associated with rotavirus (RV) in children who are treated at public clinics and hospitals in Honduras.
METHODS: Data were collected from computerized records of all children < 5 years old treated for diarrhea at clinics and hospitals operated by the Secretary of Health for the period of 2000 through 2004. A review of studies of RV in Honduras and neighboring countries provided estimates of detection rates of RV among children treated for acute diarrhea as outpatients or as inpatients. From these data, we estimated the annual number of cases of diarrhea and of rotavirus-related diarrhea in Honduras, the cumulative incidence of diarrhea and of rotavirus-related diarrhea for a child from birth to age 5 years, and the number of fatalities due to RV among children hospitalized for diarrhea.
RESULTS: From 2000 through 2004, a mean of 222 000 clinic visits, 4 390 hospitalizations, and 162 in-hospital deaths due to diarrhea were recorded annually among children < 5 years of age in the public health facilities in Honduras. From our review of scientific literature on Honduras and neighboring countries, an estimated 30% of outpatients and 43% of inpatients who were treated for diarrhea would be expected to have RV. Consequently, we estimated that 66 600 outpatient visits, 1 888 hospitalizations, and 70 in-hospital deaths among children < 5 years in Honduras could be attributed to RV each year. Therefore, a child in the first five years of life has a respective risk for consultation, hospitalization, and in-hospital death of 1:1, 1:46, and 1:1 235 for diarrhea. For an episode associated with RV, the respective risks are 1:3, 1:106, and 1:2 857. These values likely underestimate the true burden of diarrhea in Honduras, since some 51% of children with acute diarrhea do not receive formal care for the illness, 70% do not receive oral rehydration solution, and 80% of diarrheal deaths occur outside of hospitals.
CONCLUSIONS: Diarrhea is a major cause of illness among children < 5 years old in Honduras, and RV is likely the most common cause. Our preliminary estimates need to be refined so that health planners in Honduras can make decisions on the future use of rotavirus vaccines. A program of hospital-based surveillance for rotavirus in Honduras has been established to address this need.
Key words: Diarrhea; rotavirus infections; health care costs; child, preschool; infant; infant, newborn; Honduras.
RESUMEN
OBJETIVOS: Estimar la carga anual por diarrea y por diarrea asociada con la infección por rotavirus (RV) en niños atendidos en clínicas y hospitales públicos de Honduras.
MÉTODOS: Los datos se obtuvieron a partir de los registros computarizados de todos los niños menores de 5 años atendidos por diarrea en clínicas y hospitales operados por la Secretaría de Salud de Honduras durante el período 2000-2004. Una revisión de los estudios realizados sobre RV en Honduras y los países vecinos ofreció estimados de las tasas de detección de RV en niños tratados por diarrea aguda hospitalizados o de forma ambulatoria. Con estos datos se estimó el número anual de casos de diarrea y de diarrea asociada con la infección por RV en Honduras, la incidencia acumulativa de diarrea y de diarrea asociada con la infección por RV en niños menores de 5 años y el número de muertes debido a RV en niños hospitalizados por diarrea.
RESULTADOS: Entre los años 2000 y 2004 se registraron medias anuales de 222 000 visitas médicas, 4 390 hospitalizaciones y 162 muertes hospitalarias por diarrea en niños menores de 5 años en instalaciones sanitarias públicas de Honduras. A partir de la revisión de la literatura científica relativa a Honduras y los países vecinos se estimó que 30% de los casos de diarrea atendidos ambulatoriamente y 43% de los hospitalizados podrían deberse a RV. En consecuencia, se estimó que 66 600 visitas médicas ambulatorias, 1 888 hospitalizaciones y 70 muertes hospitalarias de niños menores de 5 años pueden atribuirse a la infección por RV anualmente en Honduras. Por lo tanto, los riesgos de un niño en sus primeros 5 años de vida de asistir a una consulta, de ser hospitalizado y de morir en un hospital por diarrea son de 1:1, 1:46 y 1:1 235, respectivamente. Los riesgos asociados con la infección por RV son de 1:3, 1:106 y 1:2 857, respectivamente. Posiblemente, estos valores subestiman la carga real por diarrea en Honduras, ya que alrededor de 51% de los niños con diarrea aguda no reciben atención médica formal por esa enfermedad, 70% no reciben sales de rehidratación oral y 80% de las muertes por diarrea ocurren fuera de los hospitales.
CONCLUSIONES: La diarrea es una importante causa de enfermedad en niños menores de 5 años en Honduras y la infección por RV es posiblemente su causa más frecuente. Estos estimados preliminares deben precisarse más para que los encargados de la planificación sanitaria en Honduras puedan tomar decisiones acerca de la aplicación de vacunas contra el RV en el futuro. En Honduras se estableció un programa basado en hospitales para la vigilancia de la infección por RV y para responder a esa necesidad.
Palabras clave: Diarrea, infecciones por rotavirus, costos de la atención en salud, preescolar, lactante, recién nacido, Honduras.
Diarrhea is a leading cause of illness and death among children in developing countries, and rotavirus (RV) is the most common etiologic agent (1). Consequently, the recent licensure of two new vaccines raises the prospect that this major childhood illness could soon be preventable (2). One RV vaccine, Rotarix (GlaxoSmithKline Biologicals, Rixensart, Belgium), which is administered in three doses (3), was first licensed in more than 10 Latin American countries during 2004 or 2005.1 In addition, Rotarix will soon become part of Brazil's routine national immunization program (4). A second vaccine, RotaTeq (Merck and Company, Whitehouse Station, New Jersey, United States of America), which is administered in two doses (5), has been licensed in Europe, Mexico, and the United States of America, and it could soon be available for more widespread use (6, 7). The availability of these potentially life-saving vaccines has raised the need for countries to fully assess the burden of the disease in their own settings so that they can determine the value of the vaccine there, as well as the cost at which vaccine introduction would be within their reach.
RV vaccines will likely be introduced in the Americas before distribution in other regions of the world (8). Rotarix is already sold in the private markets of some Latin American countries for as much as US$ 50 per dose (9), bringing the total cost of a full course of this vaccine to US$ 150 per child. However, the final cost of the vaccines has not been determined, and the true impact of the vaccines on severe and fatal disease will not be appreciated until the vaccine enters routine childhood immunization programs in these countries. Honduras is one of the six poorest countries in Latin America, with a per capita income below US$ 1 000 annually, but investment in health care is a national priority, and vaccine coverage rates are high (> 90%) (10). Honduras is also an early adopter of new vaccines, and it was one of the first in Latin America to successfully introduce a pentavalent vaccine against diphtheria, tetanus, pertussis, hepatitis B, and Haemophilus influenzae type b (Salvador Garcia, Pan American Health Organization, Washington, D.C., personal communication, 22 January 2004). The successful introduction of new vaccines, high immunization rates, and low per capita income make Honduras eligible for financial support for new vaccine introduction through the Global Alliance for Vaccines and Immunizations (GAVI) and the Vaccine Fund (11). GAVI is a partnership of both national and international public and private organizations, including UNICEF, the World Health Organization (WHO), the World Bank, pharmaceutical companies that manufacture vaccines, national ministries of health, nongovernmental organizations, and private donors, that aims to increase access to vaccines in developing countries. The Vaccine Fund is the financing resource created by the GAVI partners to support GAVI's goal.
If there were a substantial burden of RV disease in Honduras, the country would be a good candidate for the early introduction of an RV vaccine, provided that the cost could be met with local resources. However, solid evidence from Honduras is needed in order to document the burden of RV disease and to estimate the cost-effectiveness of a vaccination program for the country.
In Honduras, Government hospitals and clinics provide most treatment for diarrheal diseases (10). The introduction of a vaccine could therefore have a measurable impact on outcomes that are routinely monitored by the Office of the Secretary of Health (Secretaría de Salud) (OSH).
To assess the burden of RV disease in Honduras, we reviewed surveillance data compiled by the OSH on hospitalizations, clinic visits, and deaths due to diarrhea. We linked this information with data on the rates of RV detection that might be expected among children with diarrhea seen in these settings. We concluded by estimating the burden of RV disease, in anticipation of more comprehensive data that will be coming in one to two years from a national sentinel hospital surveillance program that has been established at six hospitals throughout the country.
METHODS
To estimate the burden of childhood diarrhea leading to clinic visits, hospitalizations, and deaths, we reviewed the computerized records of the OSH on hospitalizations and clinic visits for diarrhea among children < 5 years old for the years 2000 through 2004. Using that data, we calculated the mean number of medical consultations, hospitalizations, and in-hospital deaths annually. We collected data on diarrhea-related events from four sources (Table 1). Visits for acute gastroenteritis to all public clinics (N ≈ 1 200) and all public hospitals (N = 28) were obtained from the OSH, which funds and provides care to approximately 60% of the population of Honduras (10). The OSH has traditionally divided public health infrastructure in the eighteen political divisions (departments) of the country into eight health regions, each encompassing several departments, and one metropolitan region, which is the capital and largest city, Tegucigalpa (10). Each public clinic and hospital regularly reports the number of visits and hospitalizations for diarrhea by age group to the OSH. Deaths occurring in hospitals are recorded in discharge records, which are regularly reviewed and compiled.
Additional mortality data were obtained from the 2001 National Survey of Epidemiology and Family Health (NSEFH 2001) (Encuesta Nacional de Epidemiología y Salud Familiar) (12). As part of NSEFH 2001, a total of 3 936 mothers of children < 5 years of age were interviewed between 21 February and 19 August of 2001 about their children's health, immunization history, and symptoms and treatment for diarrheal episodes in the preceding 15 days. The women were selected systematically: one woman of childbearing age with children < 5 years old living in the household was selected from 12 000 households in 400 randomly selected census tracts (30 households per tract). The selected census tracts were located throughout Honduras, except for a sparsely populated, remote eastern department (Gracias a Dios) and the department (Islas de la Bahia) that is composed of three small islands in the Caribbean Sea. Responses were weighted by the number of children < 5 years old in the household. If it was found that a child had died, the circumstances surrounding the death were explored. Among the questions were whether a death certificate had been obtained and whether the death had been registered with civil authorities.
To determine the likelihood that a diarrhea event was caused by RV, we developed estimates from two studies that examined RV diarrhea in Honduras, and we compared these results with findings from other studies in Latin America. One community-based study in Honduras followed 268 children < 5 years of age for diarrhea episodes over the course of a year. RV was detected in 15% of children who developed diarrhea, and it was observed most frequently from November to March in children ? 13 months of age (13). In the second Honduran study (14), children with diarrhea who were seen in an emergency room over a 24-month period were enrolled in a trial evaluating Saccharomyces boulardii for the treatment of acute diarrhea. RV was detected in 43% of samples tested from 521 children < 5 years old. A recent review of selected studies of RV in Latin America yielded a median detection rate of 30% among children in the outpatient setting and of 38% among children who were hospitalized with more severe disease (15). From these data we extrapolated RV detection rates of 30% for outpatients (15) and of 43% for inpatients (13). We multiplied these figures by the mean annual number of clinic visits and hospitalizations derived from the Honduran data. To calculate the deaths attributable to RV, we applied the RV estimate for severe cases leading to hospitalization (43%) to the number of diarrheal deaths each year in Honduras extrapolated by two methods. Since only approximately 15% of all deaths are reported through hospital discharge records (16, 17), we first estimated the total number of deaths due to diarrhea by dividing the number of in-hospital deaths due to diarrhea by 0.15. The second method that we used to estimate the overall burden of diarrheal disease was developed by Parashar et al. (1).
The method took the number of deaths among children < 5 years, which was estimated by multiplying the mortality rate of those < 5 years old (48/1 000 live births (18)) by the mean size of an annual birth cohort (N ≈ 200 000 (18)). This number was then multiplied by 0.17, for the 17% of those deaths attributable to diarrhea, in line with the country's per capita gross national product (1).
We calculated the cumulative risks of a diarrhea-related and of a rotavirus-related event per child from birth to age 5 years. To derive this risk, we assumed that the number of all diarrhea-related or rotavirus-related events that would occur in a single year among a group of children < 5 years old would be equal to the number of events occurring in a one-year birth cohort of children followed to age 5. We expressed each cumulative risk as the ratio of one to the average size of a one-year birth cohort divided by the number of specific events (consultations, hospitalizations, and deaths). The cumulative risk of diarrhea-related consultation, for example, was expressed as one to the quotient of the average birth cohort and the average number of diarrhea-related consultations in one year.
Finally, to examine the seasonality of diarrhea-related events, we plotted the monthly number of consultations and hospital discharges among children < 5 years of age as a function of time, and we calculated and graphed the median number of events per year. We shaded the portions of the curve that met two criteria suggestive of a "RV season": (1) where for a period of two or more months the number of consultations and hospitalizations exceeded the annual median and (2) that occurred between November and March, the period of the year when RV has been documented to circulate in Honduras and in neighboring countries (13, 1921).
RESULTS
From 2000 to 2004, OSH computerized records for public facilities identified a mean of 222 000 consultations annually for diarrhea among children < 5 years old (range: 196 100 to 247 169), 4 390 hospital discharges (range: 3 809 to 5 513), and 162 in-hospital deaths (range: 38 to 182) (Table 2). Studies in Latin America and Honduras reported RV detection rates of 30% for outpatients (15) and 43% for inpatients (14). We therefore estimated that 66 600 of the consultations, 1 888 of the hospitalizations, and 70 of the deaths each year would be related to RV. We lacked data to correct for diarrhea treatment at private and semiprivate hospitals, clinics, pharmacies, and doctors' offices. Nonetheless, for an average birth cohort of approximately 200 000 (range: 195 000 to 210 000 between 2000 and 2002 (18)), every child will require a consultation for diarrhea, 1:46 will be hospitalized, and 1:1 235 will die in a public hospital by their fifth birthday. Rotavirus-related risks would be 1:3 for consultation, 1:106 for hospitalization, and 1:2 857 for an in-hospital death.
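The RV estimates above are the detection rates applied to the mean annual event counts; a quick arithmetic check, for illustration:

```python
# Mean annual diarrhea-related events among children < 5 years (Table 2),
# multiplied by RV detection rates of 30% (outpatient) and 43%
# (inpatient, also applied to in-hospital deaths).
consultations, hospitalizations, deaths = 222_000, 4_390, 162

rv_consultations    = round(consultations * 0.30)     # 66 600
rv_hospitalizations = round(hospitalizations * 0.43)  # 1 888
rv_deaths           = round(deaths * 0.43)            # 70

cohort = 200_000
print(round(cohort / rv_consultations))     # 3    -> risk 1:3
print(round(cohort / rv_hospitalizations))  # 106  -> risk 1:106
print(round(cohort / rv_deaths))            # 2857 -> risk 1:2 857
```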
We estimated the total number of diarrheal deaths, and subsequently rotavirus-related deaths, by two methods (Table 2). Since in-hospital deaths were estimated to represent only 15% of all deaths in Honduras (16), we directly extrapolated total deaths due to diarrhea from the total number of in-hospital deaths by dividing the mean number of in-hospital deaths per year (n = 162) by 0.15, resulting in 1 080 deaths annually, for a risk of 1:185 for a diarrhea-related death by age 5 years. When we used an alternative method based on national mortality statistics for children < 5 years old and estimators presented in a worldwide assessment for the fraction of diarrhea cases attributable to RV, we arrived at 1 632 deaths and an approximate risk of 1:123 for diarrhea-related death by age 5. For rotavirus-related deaths, given a 43% detection rate, the annual numbers of deaths from RV were estimated to be 464 with the first method and 701 with the second method. Therefore, the risks of a child dying of rotavirus-related diarrhea by age 5 years would be 1:431 with the first method and 1:285 with the second method.
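Both extrapolation methods reduce to a few lines of arithmetic; the following sketch reproduces the figures above:

```python
cohort = 200_000          # mean annual birth cohort (18)
in_hospital_deaths = 162  # mean annual in-hospital diarrheal deaths

# Method 1: in-hospital deaths represent ~15% of all deaths (16).
total1 = in_hospital_deaths / 0.15      # 1 080 diarrheal deaths/year
# Method 2 (Parashar et al. (1)): under-5 mortality rate x cohort size x
# fraction of under-5 deaths attributable to diarrhea.
total2 = (48 / 1_000) * cohort * 0.17   # 1 632 diarrheal deaths/year

rv1, rv2 = total1 * 0.43, total2 * 0.43  # ~464 and ~701 RV deaths/year

print(round(cohort / total1), round(cohort / total2))  # 185, 123 -> 1:185, 1:123
print(round(cohort / rv1), round(cohort / rv2))        # 431, 285 -> 1:431, 1:285
```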
Rates of diarrhea consultations were plotted for the eight health regions of the country (Figure 1). The lowest rates were in the central and northwestern regions. The highest rates were in the Atlantic coast and in the western health regions. The regions with high rates maintained their higher rates throughout the period studied.
The NSEFH 2001 survey yielded additional information on the prevalence of recent diarrheal illness and the number of children who died of diarrhea. Among the 3 936 children < 5 years old whose health was surveyed through interviews with their mothers, a weighted 22% had acute diarrhea in the 15 days prior to the interview. (The methods of the weighting process are described elsewhere (12).) Of this 22%, 49% were brought for consultation to a physician or clinic, 30% received oral rehydration solution, and 2% required hospitalization (12). Diarrhea and respiratory illnesses were the leading causes of death in the postneonatal period. Of 281 children who died from causes other than accidents, 50 (18%) died of diarrhea. Nearly half of the 50 (n = 23) died during the RV season (November to March), with 12 (24%) in the month of December alone. Most (80%) deaths occurred outside of hospitals (34 at home, 10 in the hospital, and 6 elsewhere). Although mothers recalled evidence of severe disease with dehydration (sunken eyes, depressed fontanelle, loss of skin turgor, and/or oliguria) in 47 of the children who died of diarrhea, 15% (n = 7) were not brought to a health care facility for evaluation and care. Only 16% (n = 8) of the children with a diarrhea-related death (n = 50) had been issued a death certificate.
Examination of the temporal trends of diarrhea-related events demonstrated that the monthly number of hospital discharges had two peaks, one in the early months of the year (January to April) and one in mid-year (June to August) (Figure 2). The findings of a community-based study in Honduras (13) suggest that the peak between January and April is likely attributable to RV, and could be termed the "RV season," while the peak between June and August is likely due to bacterial pathogens, as indicated by diarrhea surveillance among children in a neighboring country (20). This two-peak profile was more evident with hospital discharges than with consultations. The June–August peak for consultations was significantly larger than that in January–April, indicating that bacterial diarrheas remain a significant problem. However, for hospitalizations, the peaks were generally comparable, and in one case (the early peak of 2004) the early-year, January–April peak exceeded the mid-year, June–August peak. The number of in-hospital deaths per month was small and incompletely recorded, limiting our ability to plot these data. However, a curve with early- and mid-year peaks was observed, which lends additional evidence for the seasonal pattern of severe diarrhea in Honduras (data not shown).
DISCUSSION
Diarrhea is an important cause of illness and death in Honduras among children < 5 years of age treated in public clinics and hospitals, causing more than 200 000 consultations, 4 000 hospitalizations, and 1 000 deaths each year. This study provides baseline estimates of the burden of RV disease occurring each year: nearly 70 000 consultations, almost 2 000 hospitalizations, and between 464 and 701 deaths among children < 5 years old. The peaks in diarrhea-related consultations, hospitalizations, and deaths occurring between January and April suggest that RV is an important cause of severe disease early in the year (the "RV season"). Furthermore, while the mid-year and early-year peaks of diarrhea hospitalizations are similar in size, medical consultations are more numerous mid-year, suggesting that RV illnesses are more severe and more likely to result in hospitalization.
We were unable to assess the number of diarrheal illnesses treated at home or in private facilities. However, NSEFH 2001 data indicated that fewer than half of children with diarrhea were brought for consultation, and only one quarter received oral rehydration solution, highlighting that improvements in home and outpatient treatment could be instituted to prevent severe illness. Among fatal cases of diarrhea, 80% of the children died outside of hospitals, and 15% of children with diarrhea and signs of dehydration did not receive care at any facility. Furthermore, only 16% of the deaths were reported to civil authorities and issued death certificates.
The risk of RV-related events for a child in Honduras differs from the estimates published for several other countries in the Americas (Table 3). Without correction for cases occurring outside of public hospitals in Honduras, the risk of hospitalization for a Honduran child was lower than for a child in Argentina (23), El Salvador (20), Peru (22), the United States (25), or Venezuela (24). However, the risks for consultation and for death were higher in Honduras than in the other countries. All these other countries have per capita gross national incomes that are from 2 to 40 times the level in Honduras (26). The lower risk for hospitalization but higher risk for consultation and death for a child born in Honduras may reflect incomplete reporting of children visiting facilities other than Government ones, or problems of access to health care services. This supports the finding by Parashar et al. (1) that RV illness has worse outcomes in poorer countries. Our estimates of the need for a clinic visit due to RV diarrhea in Honduras were comparable to the worldwide estimate of 1:5. However, our estimates were higher than the worldwide estimates for RV-related hospitalizations of 1:65 and for deaths of 1:293.
Our estimates of the burden of disease due to diarrheal illness have important limitations. In particular, our estimates of the burden of RV are conservative, constrained by available data that are incomplete. We have not corrected for the proportion of children cared for outside of Government facilities. The lack of care at public health facilities for children with fatal illness, as indicated by the NSEFH 2001 findings, suggests that a segment of the population is not represented in the data flowing from public clinics and hospitals. However, our survey did not explore facility access.
The scarcity of information on RV in Honduras underscores the need for better data. The need is magnified because many countries of Latin America will likely consider the need for RV immunization in the near future, and the effectiveness of the vaccine cannot be measured without good baseline data and a mechanism to measure impact. A key factor in considering vaccine introduction in Honduras is the policy that most hospital and clinic care is provided at public facilities and paid for by the Government. Consequently, any decrease in severe RV illness leading to a reduced need for medical care will yield direct economic benefits for the OSH. A safe and effective RV vaccine would prevent severe disease, including the proportion that now occurs outside public facilities. Based on the World Health Organization's generic protocols for hospital-based surveillance to estimate the burden of rotavirus gastroenteritis in children (27), a sentinel hospital surveillance system has been established in Honduras to assess the prevalence of RV among children < 5 years of age admitted to six hospitals throughout the country. These data, along with more targeted information on the economic cost of the disease, should improve the ability of local health policymakers to determine if introduction of an RV vaccine is advisable, and to assess the price at which introduction of an RV vaccine would be cost-effective and sustainable in the nation's program of childhood immunizations.
Acknowledgment. The findings and conclusions in this report are those of the authors, and they do not necessarily represent the views of the Centers for Disease Control and Prevention. The work presented in this paper was funded in part by unrestricted donations from the Rotavirus Vaccine Program of the Program for Appropriate Technologies in Health (PATH).
REFERENCES
1. Parashar UD, Hummelman EG, Bresee JS, Miller MA, Glass RI. Global illness and deaths caused by rotavirus disease in children. Emerg Infect Dis. 2003;9(5):565–72.
2. Glass RI, Bresee JS, Turcios R, Fischer TK, Parashar UD, Steele AD. Rotavirus vaccines: targeting the developing world. J Infect Dis. 2005;192(Suppl 1):S160–6.
3. Vesikari T, Matson DO, Dennehy P, Van Damme P, Santosham M, Rodriguez Z, et al. Safety and efficacy of a pentavalent human-bovine (WC3) reassortant rotavirus vaccine. N Engl J Med. 2006;354(1):23–33.
4. Dias Lopes A. Vacina do rotavírus entra no calendário oficial em dezembro. Estado de São Paulo. 2005 September 14:A17.
5. Ruiz-Palacios GM, Pérez-Schael I, Velázquez FR, Abate H, Breuer T, Costa Clemens SA, et al. Safety and efficacy of an attenuated vaccine against severe rotavirus gastroenteritis. N Engl J Med. 2006;354(1):11–22.
6. Braine T. Rotavirus vaccine introduction in Mexico sets precedent. Bull World Health Organ. 2005;83(3):167.
7. United States of America, Food and Drug Administration. FDA approves new vaccine to prevent rotavirus gastroenteritis in infants [news release]. Available from: http://www.fda.gov/bbs/topics/news/2006/NEW01307.html [Web page]. Accessed 9 February 2006.
8. Pan American Health Organization. Regional meeting for the Americas assesses progress against rotavirus. Rev Panam Salud Publica. 2004;15(1):66–70.
9. Palazzo M. Mamás felices con vacuna contra el rotavirus. Available from: http://www.lun.com/sociedad/Salud/detalle_noticia.asp?cuerpo=701&seccion=800&subseccion=901&idnoticia=C386138614295833 [Web page]. Accessed 20 September 2005.
10. Honduras, Secretaría de Salud, Departamento de Estadística, Unidad de Planeamiento y Evaluación de la Gestión. Salud en cifras: 1997–2001. Tegucigalpa: Secretaría de Salud; 2002.
11. The Global Alliance for Vaccines & Immunizations. 75 countries eligible for support. Available from: http://www.vaccinealliance.org/support_to_country/index.php [Web page]. Accessed 31 August 2004.
12. Honduras, Secretaría de Salud. Encuesta Nacional de Epidemiología y Salud Familiar. Informe final. Tegucigalpa: Secretaría de Salud; 2001.
13. Figueroa M, Padilla N, Gutierrez H. Rotavirus en las diarreas infantiles de Honduras. Rev Med Hondur. 1992;60(1):14–20.
14. Melgar-Cano R, Moncada W. Evaluación terapéutica de Saccharomyces boulardii en pacientes con diarrea líquida aguda: estudio de casos y controles. Publ Cient Postgrad Med. 2003;8:13.
15. Kane EM, Turcios RM, Arvay ML, Garcia S, Bresee JS, Glass RI. The epidemiology of rotavirus diarrhea in Latin America. Anticipating rotavirus vaccines. Rev Panam Salud Publica. 2004;16(6):371–7.
16. Pan American Health Organization. Regional Core Health Data Initiative. Country health profile 2002: Honduras. Available from: http://www.paho.org/English/DD/AIS/cp_340.htm [Web site]. Accessed 30 August 2004.
17. Pan American Health Organization, Health Analysis and Information Systems Area. Regional Core Health Data Initiative; Technical Health Information System. Estimated under-5 mortality, Honduras, 2000–2004. Available from: http://www.paho.org/English/SHA/coredata/tabulator/newTabulator.htm [Web site]. Accessed 30 August 2005.
18. World Bank. Database of gender statistics. Honduras, population age 0 male and female, 2000–2002. Available from: http://devdata.worldbank.org/genderstats/query/default.htm [Web site]. Accessed 31 August 2004.
19. Mata L, Simhon A, Urrutia JJ, Kronmal RA. Natural history of rotavirus infection in the children of Santa Maria Cauque. Prog Food Nutr Sci. 1983;7(3–4):167–77.
20. Guardado JA, Clara WA, Turcios RM, Fuentes RA, Valencia D, Sandoval R, et al. Rotavirus in El Salvador: an outbreak, surveillance and estimates of disease burden, 2000–2002. Pediatr Infect Dis J. 2004;23(10 Suppl):S156–60.
21. Espinoza F, Paniagua M, Hallander H, Svensson L, Strannegard O. Rotavirus infections in young Nicaraguan children. Pediatr Infect Dis J. 1997;16(6):564–71.
22. Ehrenkranz P, Lanata CF, Penny ME, Salazar-Lindo E, Glass RI. Rotavirus diarrhea disease burden in Peru: the need for a rotavirus vaccine and its potential cost savings. Rev Panam Salud Publica. 2001;10(4):240–8.
23. Gómez JA, Nates S, De Castagnaro NR, Espul C, Borsa A, Glass RI. Anticipating rotavirus vaccines: review of epidemiologic studies of rotavirus diarrhea in Argentina. Rev Panam Salud Publica. 1998;3(2):69–78.
24. Salinas B, Gonzalez G, Gonzalez R, Escalona M, Materan M, Schael IP. Epidemiologic and clinical characteristics of rotavirus disease during five years of surveillance in Venezuela. Pediatr Infect Dis J. 2004;23(10 Suppl):S161–7.
25. Tucker AW, Haddix AC, Bresee JS, Holman RC, Parashar UD, Glass RI. Cost-effectiveness analysis of a rotavirus immunization program for the United States. JAMA. 1998;279(17):1371–6.
26. World Bank. World Development Indicators Database: GNI per capita, 2004, Atlas method and PPP. Available from: http://www.worldbank.org/data/databytopic/GNIPC.pdf [Web site]. Accessed 9 August 2005.
27. World Health Organization. Generic protocols for (i) hospital-based surveillance to estimate the burden of rotavirus gastroenteritis in children and (ii) a community-based survey on utilization of health care services for gastroenteritis in children. Geneva: WHO; 2002.
Manuscript received 5 December 2005.
Revised version accepted for publication 26 June 2006.
1 Vesikari T. RIX4414: a new attenuated human rotavirus vaccine [conference presentation abstract]. Available from: http://www.kenes.com/espid2005/program/abstracts/487.doc.
Organización Panamericana de la Salud Washington - Washington - United States
MathSciNet bibliographic data MR1286929 (95h:57002) 57M15 (57M25). Taniyama, Kouki. Cobordism, homotopy and homology of graphs in ${\bf R}^3$. Topology 33 (1994), no. 3, 509–523.
Driving a 12V door strike from 3V3 BeagleBone?
I'm trying to control a 12V electronic door strike lock from my 3V3 BeagleBone.
Here's a diagram of my circuit:
My circuit probably needs an NPN transistor of some sort, but I'm not sure which one would work. Also, because the strike lock is a solenoid, do I need any diodes/other protection in my circuit to prevent over-voltage/other bad things?
Thanks. Your help is much appreciated!
• I don't know what your power source for the Beaglebone is, but I'd probably use a separate supply for the solenoid. The BB has/needs information level power, not do-mechanical-things power. Why force that through it? Plus, you may be able to avoid needing the step up circuit. Nor does the solenoid need highly regulated power at a precise tolerance. Yes, a reverse connected protection diode across the solenoid to deal with the jolt when power is removed and its magnetic field collapses is always a Good Idea. (Better qualified persons will answer your which transistor question.) Jun 27, 2012 at 3:18
• Do you have 12V available? GPIO1 via 1 kΩ resistor to transistor base. Emitter to ground. Collector to strike terminal 1 (either). 12V+ to strike terminal 2. 1N400x diode across strike coil, cathode on the +12V side. Transistor BC337-40. Many others work, but if you have none, that's as good as most. Available from Digikey and elsewhere. Jun 27, 2012 at 4:56
The BC337-40 suggested by Russell is a good choice; it has a high $H_{FE}$ and can handle up to 500 mA of collector current.
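As a rough check of the suggested parts (resistor and transistor values come from the comments above; the solenoid current is an assumed figure, since the strike's rating isn't given):

```python
# Base-drive sanity check for the BC337-40 low-side switch.
V_GPIO = 3.3     # BeagleBone logic-high level (V)
V_BE   = 0.7     # typical base-emitter drop when saturated (V)
R_BASE = 1_000   # suggested series base resistor (ohms)
I_LOAD = 0.3     # ASSUMED strike solenoid current (A); must stay < 0.5 A

i_base = (V_GPIO - V_BE) / R_BASE  # ~2.6 mA of base drive from the GPIO
forced_beta = I_LOAD / i_base      # ~115

# The BC337-40's minimum hFE (around 250) comfortably exceeds the forced
# beta, so the transistor saturates and switches the 12 V coil cleanly.
print(f"{i_base * 1000:.1f} mA, forced beta {forced_beta:.0f}")
```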
# Expected value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable.
The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.
The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as E or ${\displaystyle \mathbb {E} .}$[1][2][3]
## History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished.[4] This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.
He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[5]
In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability.
In the foreword to his treatise, Huygens wrote:
It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs.
— Edwards (2002)
During his visit to France in 1655, Huygens learned about de Méré's Problem. From his correspondence with Carcavine a year later (in 1656), he realized his method was essentially the same as Pascal's. Therefore, he knew about Pascal's priority in this subject before his book went to press in 1657.[6]
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.[7]
### Etymology
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:[8]
That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2.
More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:[9]
… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.
## Notations
The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901.[10] The symbol has become popular since then for English writers. In German, E stands for "Erwartungswert", in Spanish for "Esperanza matemática", and in French for "Espérance mathématique".[11]
When "E" is used to denote expected value, authors use a variety of stylization: the expectation operator can be stylized as E (upright), E (italic), or ${\displaystyle \mathbb {E} }$ (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used.
Another popular notation is μX, whereas ⟨X⟩, Xav, and ${\displaystyle {\overline {X}}}$ are commonly used in physics,[12] and M(X) in Russian-language literature.
## Definition
As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language.
Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]i = E[Xi]. Similarly, one may define the expected value of a random matrix X with components Xij by E[X]ij = E[Xij].
### Random variables with finitely many outcomes
Consider a random variable X with a finite list x1, ..., xk of possible outcomes, each of which (respectively) has probability p1, ..., pk of occurring. The expectation of X is defined as[13]
${\displaystyle \operatorname {E} [X]=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.}$
Since the probabilities must satisfy p1 + ⋅⋅⋅ + pk = 1, it is natural to interpret E[X] as a weighted average of the xi values, with weights given by their probabilities pi.
In the special case that all possible outcomes are equiprobable (that is, p1 = ⋅⋅⋅ = pk), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.
An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.
#### Examples
• Let ${\displaystyle X}$ represent the outcome of a roll of a fair six-sided die. More specifically, ${\displaystyle X}$ will be the number of pips showing on the top face of the die after the toss. The possible values for ${\displaystyle X}$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of ${\displaystyle X}$ is
${\displaystyle \operatorname {E} [X]=1\cdot {\frac {1}{6}}+2\cdot {\frac {1}{6}}+3\cdot {\frac {1}{6}}+4\cdot {\frac {1}{6}}+5\cdot {\frac {1}{6}}+6\cdot {\frac {1}{6}}=3.5.}$
If one rolls the die ${\displaystyle n}$ times and computes the average (arithmetic mean) of the results, then as ${\displaystyle n}$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
• The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable ${\displaystyle X}$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
${\displaystyle \operatorname {E} [\,{\text{gain from }}\$1{\text{ bet}}\,]=-\$1\cdot {\frac {37}{38}}+\$35\cdot {\frac {1}{38}}=-\${\frac {1}{19}}.}$
That is, the expected value to be won from a $1 bet is −$1/19. Thus, in 190 bets, the net loss will probably be about $10.
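Both examples can be verified with exact rational arithmetic; a small sketch:

```python
from fractions import Fraction

def expectation(outcomes):
    """Weighted average over (value, probability) pairs."""
    return sum(Fraction(v) * p for v, p in outcomes)

# Fair six-sided die: E[X] = (1 + 2 + ... + 6) / 6 = 3.5
die = [(i, Fraction(1, 6)) for i in range(1, 7)]
print(expectation(die))        # 7/2

# $1 straight-up roulette bet: -$1 w.p. 37/38, +$35 w.p. 1/38
roulette = [(-1, Fraction(37, 38)), (35, Fraction(1, 38))]
print(expectation(roulette))   # -1/19
```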
### Random variables with countably many outcomes
Informally, the expectation of a random variable with a countable set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that
${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i},}$
where x1, x2, ... are the possible outcomes of the random variable X and p1, p2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.[14]
However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely.
For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands.[15] In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation.[15]
#### Examples
• Suppose ${\displaystyle x_{i}=i}$ and ${\displaystyle p_{i}={\tfrac {c}{i2^{i}}}}$ for ${\displaystyle i=1,2,3,\ldots ,}$ where ${\displaystyle c={\tfrac {1}{\ln 2}}}$ is the scaling factor which makes the probabilities sum to 1. Then, using the direct definition for non-negative random variables, we have
${\displaystyle \operatorname {E} [X]\,=\sum _{i}x_{i}p_{i}=1({\tfrac {c}{2}})+2({\tfrac {c}{8}})+3({\tfrac {c}{24}})+\cdots \,=\,{\tfrac {c}{2}}+{\tfrac {c}{4}}+{\tfrac {c}{8}}+\cdots \,=\,c\,=\,{\tfrac {1}{\ln 2}}.}$
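The series above can be checked numerically. This sketch (Python; the truncation level N is an illustrative choice) sums the first 60 terms, which suffices because the tail decays like 2⁻ⁱ:

```python
import math

c = 1 / math.log(2)                  # scaling factor, c = 1/ln 2
N = 60                               # tail beyond 2^-60 is negligible

probs = [c / (i * 2**i) for i in range(1, N + 1)]
expectation = sum(i * p for i, p in zip(range(1, N + 1), probs))

print(round(sum(probs), 6))     # the probabilities sum to 1
print(round(expectation, 4))    # ≈ 1.4427 = 1/ln 2
```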
### Random variables with density
Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral[16]
${\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }xf(x)\,dx.}$
A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting.[17] For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors.
Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x² + π²)⁻¹. It is straightforward to compute in this case that
${\displaystyle \int _{a}^{b}xf(x)\,dx=\int _{a}^{b}{\frac {x}{x^{2}+\pi ^{2}}}\,dx={\frac {1}{2}}\ln {\frac {b^{2}+\pi ^{2}}{a^{2}+\pi ^{2}}}.}$
The limit of this expression as a → −∞ and b → ∞ does not exist: if the limits are taken so that a = −b, then the limit is zero, while if the constraint 2a = −b is taken, then the limit is ln(2).
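The order-dependence of the limit can be made concrete by evaluating the closed-form antiderivative above for the two ways of letting the endpoints grow. A small sketch (Python; the helper name is illustrative):

```python
import math

def partial_integral(a, b):
    """Closed form of the integral of x/(x² + π²) from a to b, as in the text."""
    return 0.5 * math.log((b**2 + math.pi**2) / (a**2 + math.pi**2))

for b in (10.0, 1e3, 1e6):
    # symmetric limits a = -b vs. the constraint 2a = -b
    print(partial_integral(-b, b), partial_integral(-b / 2, b))
# the first column stays 0, while the second tends to ln 2 ≈ 0.6931
```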
To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise.[18] However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X.
### Arbitrary real-valued random variables
All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral[19]
${\displaystyle \operatorname {E} [X]=\int _{\Omega }X\,d\operatorname {P} .}$
Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values.[20] Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical with the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied:
${\displaystyle {\text{P}}(X\in A)=\int _{A}f(x)\,dx,}$
for any Borel set A, in which the integral is Lebesgue.
• the cumulative distribution function of X is absolutely continuous.
• for any Borel set A of real numbers with Lebesgue measure equal to zero, the probability of X being valued in A is also equal to zero
• for any positive number ε there is a positive number δ such that: if A is a Borel set with Lebesgue measure less than δ, then the probability of X being valued in A is less than ε.
These conditions are all equivalent, although this is nontrivial to establish.[21] In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration,[22] combined with the law of the unconscious statistician,[23] it follows that
${\displaystyle \operatorname {E} [X]\equiv \int _{\Omega }X\,d\operatorname {P} =\int _{\mathbb {R} }xf(x)\,dx}$
for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable.
### Infinite expected values
Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes ${\displaystyle x_{i}=2^{i}}$, with associated probabilities ${\displaystyle p_{i}=2^{-i}}$, for ${\displaystyle i}$ ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has
${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots .}$
It is natural to say that the expected value equals +∞.
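A short sketch (Python) makes the divergence explicit: every term of the series contributes exactly 1, so the truncated expectation grows without bound:

```python
def truncated_expectation(n_terms):
    """Partial sums of the series 2^i · 2^(-i): each term contributes exactly 1."""
    return sum((2**i) * (2.0 ** -i) for i in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, truncated_expectation(n))   # the partial sum equals n
```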
There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral.[20] The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X⁺ = max(X, 0) and X⁻ = −min(X, 0). These are nonnegative random variables, and it can be directly checked that X = X⁺ − X⁻. Since E[X⁺] and E[X⁻] are both then defined as either nonnegative numbers or +∞, it is then natural to define:
${\displaystyle \operatorname {E} [X]={\begin{cases}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]&{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\+\infty &{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\-\infty &{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty ;\\{\text{undefined}}&{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty .\end{cases}}}$
According to this definition, E[X] exists and is finite if and only if E[X⁺] and E[X⁻] are both finite. Due to the formula |X| = X⁺ + X⁻, this is the case if and only if E|X| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations.
• In the case of the St. Petersburg paradox, one has X⁻ = 0 and so E[X] = +∞ as desired.
• Suppose the random variable X takes values 1, −2, 3, −4, ... with respective probabilities 6π⁻², 6(2π)⁻², 6(3π)⁻², 6(4π)⁻², .... Then it follows that X⁺ takes value 2k − 1 with probability 6((2k − 1)π)⁻² for each positive integer k, and takes value 0 with remaining probability. Similarly, X⁻ takes value 2k with probability 6(2kπ)⁻² for each positive integer k and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X⁺] = ∞ and E[X⁻] = ∞ (see Harmonic series). Hence, in this case the expectation of X is undefined.
• Similarly, the Cauchy distribution, as discussed above, has undefined expectation.
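The decomposition into positive and negative parts used above is elementary and can be sketched directly (Python; the helper names are illustrative):

```python
def pos_part(x):
    """X⁺ = max(X, 0)."""
    return max(x, 0.0)

def neg_part(x):
    """X⁻ = −min(X, 0)."""
    return -min(x, 0.0)

for x in (3.5, -2.0, 0.0):
    assert pos_part(x) - neg_part(x) == x        # X = X⁺ − X⁻
    assert pos_part(x) + neg_part(x) == abs(x)   # |X| = X⁺ + X⁻
print("decomposition identities verified")
```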
## Expected values of common distributions
The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.
Distribution Notation Mean E(X)
Bernoulli[24] ${\displaystyle X\sim b(1,p)}$ ${\displaystyle 0\cdot (1-p)+1\cdot p=p}$
Binomial[25] ${\displaystyle X\sim B(n,p)}$ ${\displaystyle \sum _{i=0}^{n}i{n \choose i}p^{i}(1-p)^{n-i}=np}$
Poisson[26] ${\displaystyle X\sim \mathrm {Po} (\lambda )}$ ${\displaystyle \sum _{i=0}^{\infty }{\frac {ie^{-\lambda }\lambda ^{i}}{i!}}=\lambda }$
Geometric[27] ${\displaystyle X\sim \mathrm {Geometric} (p)}$ ${\displaystyle \sum _{i=1}^{\infty }ip(1-p)^{i-1}={\frac {1}{p}}}$
Uniform[28] ${\displaystyle X\sim U(a,b)}$ ${\displaystyle \int _{a}^{b}{\frac {x}{b-a}}\,dx={\frac {a+b}{2}}}$
Exponential[29] ${\displaystyle X\sim \exp(\lambda )}$ ${\displaystyle \int _{0}^{\infty }\lambda xe^{-\lambda x}\,dx={\frac {1}{\lambda }}}$
Normal[30] ${\displaystyle X\sim N(\mu ,\sigma ^{2})}$ ${\displaystyle {\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\int _{-\infty }^{\infty }xe^{-(x-\mu )^{2}/2\sigma ^{2}}\,dx=\mu }$
Standard Normal[31] ${\displaystyle X\sim N(0,1)}$ ${\displaystyle {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }xe^{-x^{2}/2}\,dx=0}$
Pareto[32] ${\displaystyle X\sim \mathrm {Par} (\alpha ,k)}$ ${\displaystyle \int _{k}^{\infty }\alpha k^{\alpha }x^{-\alpha }\,dx={\begin{cases}{\frac {\alpha k}{\alpha -1}}&\alpha >1\\\infty &0\leq \alpha \leq 1.\end{cases}}}$
Cauchy[33] ${\displaystyle X\sim \mathrm {Cauchy} (x_{0},\gamma )}$ ${\displaystyle {\frac {1}{\pi }}\int _{-\infty }^{\infty }{\frac {\gamma x}{(x-x_{0})^{2}+\gamma ^{2}}}\,dx}$ is undefined
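As a quick sanity check of a few rows of the table, one can compare sample means with the stated expected values. A Monte Carlo sketch (Python, standard library; the parameters and sample size are illustrative choices):

```python
import random

random.seed(1)
n = 200_000

exp_mean = sum(random.expovariate(2.0) for _ in range(n)) / n    # 1/λ = 0.5
uni_mean = sum(random.uniform(-1.0, 3.0) for _ in range(n)) / n  # (a+b)/2 = 1.0
nrm_mean = sum(random.gauss(7.0, 2.0) for _ in range(n)) / n     # μ = 7.0

print(round(exp_mean, 3), round(uni_mean, 3), round(nrm_mean, 3))
```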
## Properties
The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like ${\displaystyle X\geq 0}$ is true almost surely when the probability measure attributes zero mass to the complementary event ${\displaystyle \left\{X<0\right\}}$ .
• Non-negativity: If ${\displaystyle X\geq 0}$ (a.s.), then ${\displaystyle \operatorname {E} [X]\geq 0}$ .
• Linearity of expectation:[34] The expected value operator (or expectation operator) ${\displaystyle \operatorname {E} [\cdot ]}$ is linear in the sense that, for any random variables ${\displaystyle X}$ and ${\displaystyle Y}$ , and a constant ${\displaystyle a}$ ,
{\displaystyle {\begin{aligned}\operatorname {E} [X+Y]&=\operatorname {E} [X]+\operatorname {E} [Y],\\\operatorname {E} [aX]&=a\operatorname {E} [X],\end{aligned}}}
whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for ${\displaystyle N}$ random variables ${\displaystyle X_{i}}$ and constants ${\displaystyle a_{i}(1\leq i\leq N)}$ , we have ${\textstyle \operatorname {E} \left[\sum _{i=1}^{N}a_{i}X_{i}\right]=\sum _{i=1}^{N}a_{i}\operatorname {E} [X_{i}]}$ . If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space.
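Since linearity requires no independence, it can be checked exactly on a small discrete space. In this sketch (Python, using exact rational arithmetic), Y is a deterministic function of X and hence strongly dependent on it:

```python
from fractions import Fraction

p = Fraction(1, 6)                 # fair die
outcomes = range(1, 7)

E_X = sum(x * p for x in outcomes)                    # 7/2
E_Y = sum((x % 2) * p for x in outcomes)              # 1/2; Y = X mod 2 depends on X
E_combo = sum((3 * x + x % 2) * p for x in outcomes)  # E[3X + Y]

assert E_combo == 3 * E_X + E_Y    # linearity holds with no independence assumption
print(E_combo)                      # 11
```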
• Monotonicity: If ${\displaystyle X\leq Y}$ (a.s.), and both ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {E} [Y]}$ exist, then ${\displaystyle \operatorname {E} [X]\leq \operatorname {E} [Y]}$ .
Proof follows from the linearity and the non-negativity property for ${\displaystyle Z=Y-X}$ , since ${\displaystyle Z\geq 0}$ (a.s.).
• Non-degeneracy: If ${\displaystyle \operatorname {E} [|X|]=0}$ , then ${\displaystyle X=0}$ (a.s.).
• If ${\displaystyle X=Y}$ (a.s.), then ${\displaystyle \operatorname {E} [X]=\operatorname {E} [Y]}$ . In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
• If ${\displaystyle X=c}$ (a.s.) for some real number c, then ${\displaystyle \operatorname {E} [X]=c}$ . In particular, for a random variable ${\displaystyle X}$ with well-defined expectation, ${\displaystyle \operatorname {E} [\operatorname {E} [X]]=\operatorname {E} [X]}$ . A well-defined expectation implies that there is one number, or rather, one constant that defines the expected value. It thus follows that the expectation of this constant is just the original expected value.
• As a consequence of the formula |X| = X⁺ + X⁻ as discussed above, together with the triangle inequality, it follows that for any random variable ${\displaystyle X}$ with well-defined expectation, one has ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$ .
• Let 1A denote the indicator function of an event A, then E[1A] is given by the probability of A. This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above.
• Formulas in terms of CDF: If ${\displaystyle F(x)}$ is the cumulative distribution function of a random variable X, then
${\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }x\,dF(x),}$
where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue-Stieltjes. As a consequence of integration by parts as applied to this representation of E[X], it can be proved that
${\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }(1-F(x))\,dx-\int _{-\infty }^{0}F(x)\,dx,}$
with the integrals taken in the sense of Lebesgue.[35] As a special case, for any random variable X valued in the nonnegative integers {0, 1, 2, 3, ...}, one has
${\displaystyle \operatorname {E} [X]=\sum _{n=0}^{\infty }\operatorname {P} (X>n),}$
where P denotes the underlying probability measure.
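For a concrete instance of the tail-sum formula, take X geometric on {1, 2, 3, ...} with success probability p, so that P(X > n) = (1 − p)ⁿ and the sum is a geometric series equal to 1/p = E[X]. A truncated check (Python; the truncation level is an illustrative choice):

```python
p = 0.3                   # geometric on {1, 2, 3, ...}, with E[X] = 1/p
N = 500                   # (1 - p)^500 is far below machine precision

tail_sum = sum((1 - p) ** n for n in range(N))   # Σ P(X > n) over n = 0, 1, ...
print(round(tail_sum, 6))                        # ≈ 3.333333 = 1/p
```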
• Non-multiplicativity: In general, the expected value is not multiplicative, i.e. ${\displaystyle \operatorname {E} [XY]}$ is not necessarily equal to ${\displaystyle \operatorname {E} [X]\cdot \operatorname {E} [Y]}$ . If ${\displaystyle X}$ and ${\displaystyle Y}$ are independent, then one can show that ${\displaystyle \operatorname {E} [XY]=\operatorname {E} [X]\operatorname {E} [Y]}$ . If the random variables are dependent, then generally ${\displaystyle \operatorname {E} [XY]\neq \operatorname {E} [X]\operatorname {E} [Y]}$ , although in special cases of dependency the equality may hold.
• Law of the unconscious statistician: The expected value of a measurable function of ${\displaystyle X}$ , ${\displaystyle g(X)}$ , given that ${\displaystyle X}$ has a probability density function ${\displaystyle f(x)}$ , is given by the inner product of ${\displaystyle f}$ and ${\displaystyle g}$ :[34]
${\displaystyle \operatorname {E} [g(X)]=\int _{\mathbb {R} }g(x)f(x)\,dx.}$
This formula also holds in the multidimensional case, when ${\displaystyle g}$ is a function of several random variables, and ${\displaystyle f}$ is their joint density.[34][36]
### Inequalities
Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that[37]
${\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.}$
If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |X − E[X]|² to obtain Chebyshev's inequality
${\displaystyle \operatorname {P} (|X-{\text{E}}[X]|\geq a)\leq {\frac {\operatorname {Var} [X]}{a^{2}}},}$
where Var is the variance.[37] These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%.[38] The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.[39]
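The die example can be computed exactly with rational arithmetic. A sketch (Python): the variance of a fair die is 35/12, and every face lies within 5/2 of the mean 7/2, so Chebyshev with a = 5/2 guarantees at least 1 − (35/12)/(5/2)² = 8/15 ≈ 53.3%:

```python
from fractions import Fraction

p = Fraction(1, 6)
mean = sum(x * p for x in range(1, 7))               # 7/2
var = sum((x - mean) ** 2 * p for x in range(1, 7))  # 35/12

a = Fraction(5, 2)             # |X - 7/2| < 5/2 covers every face 1..6
bound = var / a**2             # Chebyshev: P(|X - E[X]| ≥ 5/2) ≤ 7/15

print(float(1 - bound))        # ≥ 53.3% guaranteed; the true probability is 1
```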
The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory.
• Jensen's inequality: Let f: ℝ → ℝ be a convex function and X a random variable with finite expectation. Then[40]
${\displaystyle f(\operatorname {E} (X))\leq \operatorname {E} (f(X)).}$
Part of the assertion is that the negative part of f(X) has finite expectation, so that the right-hand side is well-defined (possibly infinite). Convexity of f can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that f(x) = |x|^{t/s} for positive numbers s < t, one obtains the Lyapunov inequality[41]
${\displaystyle \left(\operatorname {E} |X|^{s}\right)^{1/s}\leq \left(\operatorname {E} |X|^{t}\right)^{1/t}.}$
This can also be proved by the Hölder inequality.[40] In measure theory, this is particularly notable for proving the inclusion Ls ⊂ Lt of Lp spaces, in the special case of probability spaces.
• Hölder's inequality: if p > 1 and q > 1 are numbers satisfying p^{−1} + q^{−1} = 1, then
${\displaystyle \operatorname {E} |XY|\leq (\operatorname {E} |X|^{p})^{1/p}(\operatorname {E} |Y|^{q})^{1/q}.}$
for any random variables X and Y.[40] The special case of p = q = 2 is called the Cauchy–Schwarz inequality, and is particularly well-known.[40]
• Minkowski inequality: given any number p ≥ 1, for any random variables X and Y with E|X|^p and E|Y|^p both finite, it follows that E|X + Y|^p is also finite and[42]
${\displaystyle {\Bigl (}\operatorname {E} |X+Y|^{p}{\Bigr )}^{1/p}\leq {\Bigl (}\operatorname {E} |X|^{p}{\Bigr )}^{1/p}+{\Bigl (}\operatorname {E} |Y|^{p}{\Bigr )}^{1/p}.}$
The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces.
### Expectations under convergence of random variables
In general, it is not the case that ${\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]}$ even if ${\displaystyle X_{n}\to X}$ pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let ${\displaystyle U}$ be a random variable distributed uniformly on ${\displaystyle [0,1]}$ . For ${\displaystyle n\geq 1,}$ define a sequence of random variables
${\displaystyle X_{n}=n\cdot \mathbf {1} \left\{U\in \left(0,{\tfrac {1}{n}}\right)\right\},}$
with ${\displaystyle {\mathbf {1} }\{A\}}$ being the indicator function of the event ${\displaystyle A}$ . Then, it follows that ${\displaystyle X_{n}\to 0}$ pointwise. But, ${\displaystyle \operatorname {E} [X_{n}]=n\cdot \operatorname {P} \left(U\in \left[0,{\tfrac {1}{n}}\right]\right)=n\cdot {\tfrac {1}{n}}=1}$ for each ${\displaystyle n}$ . Hence, ${\displaystyle \lim _{n\to \infty }\operatorname {E} [X_{n}]=1\neq 0=\operatorname {E} \left[\lim _{n\to \infty }X_{n}\right].}$
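The counterexample can be illustrated by simulation: Monte Carlo estimates of E[Xₙ] stay near 1 for every n, even though each realization of Xₙ is eventually 0. A sketch (Python; the sample size is an illustrative choice):

```python
import random

random.seed(2)
m = 200_000   # Monte Carlo sample size

def estimated_E_Xn(n):
    """Monte Carlo estimate of E[X_n], where X_n = n·1{U ∈ (0, 1/n)}, U ~ U[0, 1]."""
    return sum(n if 0 < random.random() < 1 / n else 0 for _ in range(m)) / m

for n in (1, 10, 100):
    print(n, round(estimated_E_Xn(n), 3))   # every estimate stays near 1
```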
Analogously, for a general sequence of random variables ${\displaystyle \{Y_{n}:n\geq 0\}}$ , the expected value operator is not ${\displaystyle \sigma }$ -additive, i.e.
${\displaystyle \operatorname {E} \left[\sum _{n=0}^{\infty }Y_{n}\right]\neq \sum _{n=0}^{\infty }\operatorname {E} [Y_{n}].}$
An example is easily obtained by setting ${\displaystyle Y_{0}=X_{1}}$ and ${\displaystyle Y_{n}=X_{n+1}-X_{n}}$ for ${\displaystyle n\geq 1}$ , where ${\displaystyle X_{n}}$ is as in the previous example.
A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.
• Monotone convergence theorem: Let ${\displaystyle \{X_{n}:n\geq 0\}}$ be a sequence of random variables, with ${\displaystyle 0\leq X_{n}\leq X_{n+1}}$ (a.s) for each ${\displaystyle n\geq 0}$ . Furthermore, let ${\displaystyle X_{n}\to X}$ pointwise. Then, the monotone convergence theorem states that ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X].}$
Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let ${\displaystyle \{X_{i}\}_{i=0}^{\infty }}$ be non-negative random variables. It follows from monotone convergence theorem that
${\displaystyle \operatorname {E} \left[\sum _{i=0}^{\infty }X_{i}\right]=\sum _{i=0}^{\infty }\operatorname {E} [X_{i}].}$
• Fatou's lemma: Let ${\displaystyle \{X_{n}\geq 0:n\geq 0\}}$ be a sequence of non-negative random variables. Fatou's lemma states that
${\displaystyle \operatorname {E} [\liminf _{n}X_{n}]\leq \liminf _{n}\operatorname {E} [X_{n}].}$
Corollary. Let ${\displaystyle X_{n}\geq 0}$ with ${\displaystyle \operatorname {E} [X_{n}]\leq C}$ for all ${\displaystyle n\geq 0}$ . If ${\displaystyle X_{n}\to X}$ (a.s), then ${\displaystyle \operatorname {E} [X]\leq C.}$
Proof is by observing that ${\textstyle X=\liminf _{n}X_{n}}$ (a.s.) and applying Fatou's lemma.
• Dominated convergence theorem: Let ${\displaystyle \{X_{n}:n\geq 0\}}$ be a sequence of random variables. Suppose that ${\displaystyle X_{n}\to X}$ pointwise (a.s.), ${\displaystyle |X_{n}|\leq Y\leq +\infty }$ (a.s.), and ${\displaystyle \operatorname {E} [Y]<\infty }$ . Then, according to the dominated convergence theorem,
• ${\displaystyle \operatorname {E} |X|\leq \operatorname {E} [Y]<\infty }$ ;
• ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X]}$
• ${\displaystyle \lim _{n}\operatorname {E} |X_{n}-X|=0.}$
• Uniform integrability: In some cases, the equality ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [\lim _{n}X_{n}]}$ holds when the sequence ${\displaystyle \{X_{n}\}}$ is uniformly integrable.
### Relationship with characteristic function
The probability density function ${\displaystyle f_{X}}$ of a scalar random variable ${\displaystyle X}$ is related to its characteristic function ${\displaystyle \varphi _{X}}$ by the inversion formula:
${\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,\mathrm {d} t.}$
For the expected value of ${\displaystyle g(X)}$ (where ${\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}$ is a Borel function), we can use this inversion formula to obtain
${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,\mathrm {d} t\right]\,\mathrm {d} x.}$
If ${\displaystyle \operatorname {E} [g(X)]}$ is finite, changing the order of integration, we get, in accordance with Fubini–Tonelli theorem,
${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,\mathrm {d} t,}$
where
${\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,\mathrm {d} x}$
is the Fourier transform of ${\displaystyle g(x).}$ The expression for ${\displaystyle \operatorname {E} [g(X)]}$ also follows directly from Plancherel theorem.
## Uses and applications
The expectation of a random variable plays an important role in a variety of contexts. For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable. In such settings, a desirable criterion for a "good" estimator is that it is unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.
It is possible to construct an expected value equal to the probability of an event, by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
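The shrinking variance of the empirical mean can be observed directly. In this sketch (Python; the distribution parameters and sample sizes are illustrative choices), the spread of repeated sample-mean estimates narrows as the sample size grows:

```python
import random

random.seed(3)

def sample_mean(n):
    """Mean of n draws from N(5, 2²); the true expected value is 5."""
    return sum(random.gauss(5.0, 2.0) for _ in range(n)) / n

spreads = {}
for n in (10, 100, 10_000):
    estimates = [sample_mean(n) for _ in range(50)]
    spreads[n] = max(estimates) - min(estimates)
    print(n, round(spreads[n], 4))   # the spread narrows as n grows
```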
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. ${\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}]}$ , where ${\displaystyle {\mathbf {1} }_{\mathcal {A}}}$ is the indicator function of the set ${\displaystyle {\mathcal {A}}}$ .
The mass of a probability distribution is balanced at the expected value; here, a Beta(α,β) distribution with expected value α/(α+β).
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
${\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.}$
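For instance, for X uniform on (0, 1) one has E[X] = 1/2 and E[X²] = 1/3, so Var(X) = 1/3 − 1/4 = 1/12. A Monte Carlo check of the computational formula (Python; the sample size is an illustrative choice):

```python
import random

random.seed(4)
n = 200_000
xs = [random.random() for _ in range(n)]   # X ~ U(0, 1)

mean = sum(xs) / n
second_moment = sum(x * x for x in xs) / n
var = second_moment - mean ** 2            # E[X²] − (E[X])²

print(round(var, 4))   # ≈ 1/12 ≈ 0.0833
```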
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator ${\displaystyle {\hat {A}}}$ operating on a quantum state vector ${\displaystyle |\psi \rangle }$ is written as ${\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle }$ . The uncertainty in ${\displaystyle {\hat {A}}}$ can be calculated by the formula ${\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}}$ .
## References
1. ^ "Expectation | Mean | Average". www.probabilitycourse.com. Retrieved 2020-09-11.
2. ^ Hansen, Bruce. "PROBABILITY AND STATISTICS FOR ECONOMISTS" (PDF). Retrieved 2021-07-20.
3. ^ Wasserman, Larry (December 2010). All of Statistics: a concise course in statistical inference. Springer texts in statistics. p. 47. ISBN 9781441923226.
4. ^ History of Probability and Statistics and Their Applications before 1750. Wiley Series in Probability and Statistics. 1990. doi:10.1002/0471725161. ISBN 9780471725169.
5. ^ Ore, Oystein (1960). "Ore, Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286. JSTOR 2309286.
6. ^ Mckay, Cain (2019). Probability and Statistics. p. 257. ISBN 9781839473302.
7. ^ George Mackey (July 1980). "HARMONIC ANALYSIS AS THE EXPLOITATION OF SYMMETRY - A HISTORICAL SURVEY". Bulletin of the American Mathematical Society. New Series. 3 (1): 549.
8. ^ Huygens, Christian. "The Value of Chances in Games of Fortune. English Translation" (PDF).
9. ^ Laplace, Pierre Simon, marquis de (1952) [1951]. A philosophical essay on probabilities. Dover Publications. OCLC 475539.
10. ^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
11. ^
12. ^ Feller 1968, p. 221.
13. ^ Billingsley 1995, p. 76.
14. ^ Ross 2019, Section 2.4.1.
15. ^ a b Feller 1968, Section IX.2.
16. ^ Papoulis & Pillai 2002, Section 5-3; Ross 2019, Section 2.4.2.
17. ^ Feller 1971, Section I.2.
18. ^ Feller 1971, p. 5.
19. ^ Billingsley 1995, p. 273.
20. ^ a b Billingsley 1995, Section 15.
21. ^ Billingsley 1995, Theorems 31.7 and 31.8 and p. 422.
22. ^ Billingsley 1995, Theorem 16.13.
23. ^ Billingsley 1995, Theorem 16.11.
24. ^ Casella & Berger 2001, p. 89; Ross 2019, Example 2.16.
25. ^ Casella & Berger 2001, Example 2.2.3; Ross 2019, Example 2.17.
26. ^ Billingsley 1995, Example 21.4; Casella & Berger 2001, p. 92; Ross 2019, Example 2.19.
27. ^ Casella & Berger 2001, p. 97; Ross 2019, Example 2.18.
28. ^ Casella & Berger 2001, p. 99; Ross 2019, Example 2.20.
29. ^ Billingsley 1995, Example 21.3; Casella & Berger 2001, Example 2.2.2; Ross 2019, Example 2.21.
30. ^ Casella & Berger 2001, p. 103; Ross 2019, Example 2.22.
31. ^ Billingsley 1995, Example 21.1; Casella & Berger 2001, p. 103.
32. ^ Johnson, Kotz & Balakrishnan 1994, Chapter 20.
33. ^ Feller 1971, Section II.4.
34. ^ a b c Weisstein, Eric W. "Expectation Value". mathworld.wolfram.com. Retrieved 2020-09-11.
35. ^ Feller 1971, Section V.6.
36. ^ Papoulis & Pillai 2002, Section 6-4.
37. ^ a b Feller 1968, Section IX.6; Feller 1971, Section V.7; Papoulis & Pillai 2002, Section 5-4; Ross 2019, Section 2.8.
38. ^ Feller 1968, Section IX.6.
39. ^ Feller 1968, Section IX.7.
40. ^ a b c d Feller 1971, Section V.8.
41. ^ Billingsley 1995, pp. 81, 277.
42. ^ Billingsley 1995, Section 19.
https://web2.0calc.com/questions/squares-answer-asap-with-explanation
# Squares: Answer ASAP with explanation
Some perfect squares (such as 121) have a digit sum $$(1 + 2 + 1 = 4)$$ that is equal to the square of the digit sum of their square root: $$\sqrt{121}=11$$ and $$(1 + 1)^2 = 4$$.
What is the smallest perfect square greater than 100 that does not have this property?
hellospeedmind Sep 30, 2018
#1
It looks like the answer is 196 = 14^2: its digit sum is 1 + 9 + 6 = 16, but (1 + 4)^2 = 25. (The squares 121, 144, and 169 all have the property.)
Guest Sep 30, 2018
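A quick brute-force check (Python, added here as an editorial aside rather than part of the original thread) confirms that 196 is the first perfect square above 100 failing the property:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def has_property(root):
    """Does root**2 have a digit sum equal to (digit sum of root)**2?"""
    return digit_sum(root * root) == digit_sum(root) ** 2

root = 11  # 11**2 = 121 is the first perfect square above 100
while has_property(root):
    root += 1
print(root * root)  # 196
```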
http://mathhelpforum.com/algebra/145699-use-technique-completing-square.html
# Math Help - Use the technique of completing the square
1. ## Use the technique of completing the square
Hi guys could someone help with this question?
Use the technique of completing the square to transform the quadratic equation into the form (x + c)^2 = a.
6x^2 + 36x + 18 = 0
2. $6x^2+36x+18=0$
$6x^2+36x+54-36=0$
$6x^2+36x+54=36$
$6(x^2+6x+9)=36$
$6(x+3)^2=36$
$(x+3)^2=6$
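As a numerical sanity check (an editorial addition, not from the thread), the completed-square form $(x+3)^2=6$ has exactly the same roots as the original quadratic:

```python
import math

# Roots of 6x^2 + 36x + 18 = 0 from the quadratic formula
a, b, c = 6.0, 36.0, 18.0
disc = math.sqrt(b * b - 4.0 * a * c)
roots_original = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])

# Roots of the completed-square form (x + 3)^2 = 6, i.e. x = -3 +/- sqrt(6)
roots_completed = sorted([-3.0 - math.sqrt(6.0), -3.0 + math.sqrt(6.0)])

for r1, r2 in zip(roots_original, roots_completed):
    assert abs(r1 - r2) < 1e-12
print(roots_completed)
```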
https://xilinx.github.io/Vitis_Libraries/quantitative_finance/2020.1/index.html
# Vitis Quantitative Finance Library
The Vitis Quantitative Finance Library is a Vitis library aimed at providing a comprehensive FPGA acceleration library for quantitative finance. It is an open-source library that can be used in a variety of financial applications, such as modeling, trading, evaluation, and risk management.
The Vitis Quantitative Finance Library provides extensive APIs at three levels of abstraction:
• L1, the basic functions heavily used in higher level implementations. It includes statistical functions such as Random Number Generation (RNG), numerical methods, e.g., Monte Carlo Simulation, and linear algebra functions such as Singular Value Decomposition (SVD), and tridiagonal and pentadiagonal matrix solvers.
• L2, the APIs provided at the level of pricing engines. Various pricing engines are provided to evaluate different financial derivatives, including equity products, interest-rate products, foreign exchange (FX) products, and credit products. At this level, each pricing engine API can be seen as a kernel. The customers may write their own CPU code to call different pricing engines under the framework of OpenCL.
• L3, the software level APIs. APIs of this level hide the details of data transfer, kernel related resources configuration, and task scheduling in OpenCL. Software application programmers may quickly use L3 high-level APIs to run various pricing options without touching the dependency of OpenCL tasks and hardware configurations.
## Library Contents
| Library Class | Description | Layer |
|---|---|---|
| MT19937 | Random number generator | L1 |
| MT2203 | Random number generator | L1 |
| MT19937IcnRng | Random number generator | L1 |
| MT2203IcnRng | Random number generator | L1 |
| MT19937BoxMullerNormalRng | Produces a normal distribution from a uniform one | L1 |
| MultiVariateNormalRng | Random number generator | L1 |
| SobolRsg | Quasi-random number generator | L1 |
| SobolRsg1D | Quasi-random number generator | L1 |
| BrownianBridge | Brownian bridge transformation using inverse simulation | L1 |
| TrinomialTree | Lattice-based trinomial tree structure | L1 |
| TreeLattice | Generalized structure compatible with different models and instruments | L1 |
| Fdm1dMesher | Discretization for the finite difference method | L1 |
| OrnsteinUhlenbeckProcess | A simple stochastic process | L1 |
| StochasticProcess1D | 1-dimensional stochastic process derived by RNG | L1 |
| HWModel | Hull-White model for tree engine | L1 |
| G2Model | Two-additive-factor Gaussian model for tree engine | L1 |
| ECIRModel | Extended Cox-Ingersoll-Ross model | L1 |
| CIRModel | Cox-Ingersoll-Ross model for tree engine | L1 |
| VModel | Vasicek model for tree engine | L1 |
| HestonModel | Heston process | L1 |
| BKModel | Black-Karasinski model for tree engine | L1 |
| BSModel | Black-Scholes process | L1 |
| XoShiRo128PlusPlus | XoShiRo128PlusPlus random number generator | L1 |
| XoShiRo128Plus | XoShiRo128Plus random number generator | L1 |
| XoShiRo128StarStar | XoShiRo128StarStar random number generator | L1 |
| BicubicSplineInterpolation | Bicubic spline interpolation | L1 |
| CubicInterpolation | Cubic interpolation | L1 |
| BinomialDistribution | Binomial distribution | L1 |
| CPICapFloorEngine | Pricing consumer price index (CPI) using cap/floor methods | L2 |
| DiscountingBondEngine | Engine used to price a discounting bond | L2 |
| InflationCapFloorEngine | Pricing inflation using cap/floor methods | L2 |
| FdHullWhiteEngine | Bermudan swaption pricing engine using finite-difference methods based on the Hull-White model | L2 |
| FdG2SwaptionEngine | Bermudan swaption pricing engine using finite-difference methods based on the two-additive-factor Gaussian model | L2 |
| DeviceManager | Used to enumerate available Xilinx devices | L3 |
| Device | A class representing an individual accelerator card | L3 |
| Trace | Used to control debug trace output | L3 |
| Library Function | Description | Layer |
|---|---|---|
| svd | Singular value decomposition using the Jacobi method | L1 |
| mcSimulation | Monte Carlo framework implementation | L1 |
| pentadiagCr | Solver for pentadiagonal systems of equations using PCR | L1 |
| boxMullerTransform | Box-Muller transform from uniform random numbers to normal random numbers | L1 |
| inverseCumulativeNormalPPND7 | Inverse cumulative transform from random numbers to normal random numbers | L1 |
| inverseCumulativeNormalAcklam | Inverse cumulative normal using Acklam's approximation to transform uniform random numbers to normal random numbers | L1 |
| trsvCore | Solver for tridiagonal systems of equations using PCR | L1 |
| PCA | Principal component analysis library implementation | L1 |
| bernoulliPMF | Probability mass function for the Bernoulli distribution | L1 |
| bernoulliCDF | Cumulative distribution function for the Bernoulli distribution | L1 |
| covCoreMatrix | Calculates the covariance of the input matrix | L1 |
| covCoreStrm | Calculates the covariance of the input matrix | L1 |
| covReHardThreshold | Hard-thresholding covariance regularization | L1 |
| covReSoftThreshold | Soft-thresholding covariance regularization | L1 |
| covReBand | Banding covariance regularization | L1 |
| covReTaper | Tapering covariance regularization | L1 |
| gammaCDF | Cumulative distribution function for the gamma distribution | L1 |
| linearImpl | 1D linear interpolation | L1 |
| normalPDF | Probability density function for the normal distribution | L1 |
| normalCDF | Cumulative distribution function for the normal distribution | L1 |
| normalICDF | Inverse cumulative distribution function for the normal distribution | L1 |
| logNormalPDF | Probability density function for the log-normal distribution | L1 |
| logNormalCDF | Cumulative distribution function for the log-normal distribution | L1 |
| logNormalICDF | Inverse cumulative distribution function for the log-normal distribution | L1 |
| poissonPMF | Probability mass function for the Poisson distribution | L1 |
| poissonCDF | Cumulative distribution function for the Poisson distribution | L1 |
| poissonICDF | Inverse cumulative distribution function for the Poisson distribution | L1 |
| binomialTreeEngine | Binomial tree engine using CRR | L2 |
| cfBSMEngine | Single option price plus associated Greeks | L2 |
| FdDouglas | Top-level callable function to perform the Douglas ADI method | L2 |
| hcfEngine | Engine for the Heston closed-form solution | L2 |
| M76Engine | Engine for the Merton jump diffusion model | L2 |
| MCEuropeanEngine | Monte Carlo simulation of European-style options | L2 |
| MCEuropeanPriBypassEngine | Path pricer bypass variant | L2 |
| MCEuropeanHestonEngine | Monte Carlo simulation of European-style options using the Heston model | L2 |
| MCmultiAssetEuropeanHestonEngine | Monte Carlo simulation of European-style options for multiple underlying assets | L2 |
| MCAmericanEnginePreSamples | PreSample kernel: samples a number of paths and stores them in external memory | L2 |
| MCAmericanEngineCalibrate | Calibrate kernel: reads the sampled price data from external memory and uses it to calculate the coefficients | L2 |
| MCAmericanEnginePricing | Pricing kernel | L2 |
| MCAmericanEngine | Calibration process and pricing process all in one kernel | L2 |
| MCAsianGeometricAPEngine | Asian average price engine using the Monte Carlo method based on the Black-Scholes model: geometric average version | L2 |
| MCAsianArithmeticAPEngine | Asian average price engine using the Monte Carlo method based on the Black-Scholes model: arithmetic average version | L2 |
| MCAsianArithmeticASEngine | Asian arithmetic average strike engine using the Monte Carlo method based on the Black-Scholes model | L2 |
| MCBarrierNoBiasEngine | Barrier option pricing engine using Monte Carlo simulation | L2 |
| MCBarrierEngine | Barrier option pricing engine using Monte Carlo simulation | L2 |
| MCCliquetEngine | Cliquet option pricing engine using Monte Carlo simulation | L2 |
| MCDigitalEngine | Digital option pricing engine using Monte Carlo simulation | L2 |
| MCEuropeanHestonGreeksEngine | European option Greeks calculating engine using the Monte Carlo method based on the Heston valuation model | L2 |
| MCHullWhiteCapFloorEngine | Cap/floor pricing engine using Monte Carlo simulation | L2 |
| McmcCore | Uses multiple Markov chains to allow drawing samples from multi-mode target distribution functions | L2 |
| treeSwaptionEngine | Tree swaption pricing engine using a trinomial tree based on the 1D lattice method | L2 |
| treeSwapEngine | Tree swap pricing engine using a trinomial tree based on the 1D lattice method | L2 |
| treeCapFloorEngine | Tree cap/floor engine using a trinomial tree based on the 1D lattice method | L2 |
| treeCallableEngine | Tree callable fixed-rate bond pricing engine using a trinomial tree based on the 1D lattice method | L2 |
| hjmEngine | Full implementation of the Heath-Jarrow-Morton framework pricing engine with Monte Carlo | L2 |
| hjmMcEngine | Monte Carlo-only implementation of the Heath-Jarrow-Morton framework pricing engine | L2 |
| hjmPcaEngine | PCA-only implementation of the Heath-Jarrow-Morton framework | L2 |
| lmmEngine | LIBOR Market Model (BGM) framework implementation | L2 |
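For reference, the boxMullerTransform entry above implements the standard Box-Muller mapping from a pair of uniform variates to a pair of independent standard normals. A minimal Python sketch of the underlying math (an illustration only, not the library's HLS API):

```python
import math
import random

def box_muller(u1, u2):
    """Map two uniform (0, 1) samples to two independent N(0, 1) samples."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

# Sanity check: sample mean and variance should be near 0 and 1.
random.seed(0)
samples = [z for _ in range(20000)
           for z in box_muller(random.random() or 1e-12, random.random())]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
print("mean ~ %.3f, variance ~ %.3f" % (mean, var))  # close to 0 and 1
```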
## Shell Environment
Set up the build environment using the Vitis and XRT scripts, and set PLATFORM_REPO_PATHS to the installation folder of the platform files.
source <install path>/Vitis/2019.2/settings64.sh
source /opt/xilinx/xrt/setup.sh
export PLATFORM_REPO_PATHS=/opt/xilinx/platforms
## Design Flows
Recommended design flows are categorized by the target level:
• L1
• L2
• L3
The common tool and library prerequisites that apply across all design flows are documented in the requirements section above.
### L1
L1 provides the low-level primitives used to build kernels.
The recommended flow to evaluate and test L1 components, described as follows, uses the Vivado HLS tool. A top-level C/C++ testbench (typically main.cpp or tb.cpp) prepares the input data, passes it to the design under test (typically dut.cpp, which makes the L1-level library calls), then performs any output data post-processing and validation checks.
A Makefile is used to drive this flow with available steps including CSIM (high level simulation), CSYNTH (high level synthesis to RTL), COSIM (cosimulation between software testbench and generated RTL), VIVADO_SYN (synthesis by Vivado), and VIVADO_IMPL (implementation by Vivado). The flow is launched from the shell by calling make with variables set as in the example below:
# entering specific unit test project
cd L1/tests/specific_algorithm/
# Only run C++ simulation on U250
make run CSIM=1 CSYNTH=0 COSIM=0 VIVADO_SYN=0 VIVADO_IMPL=0 DEVICE=u250_xdma_201830_1
As well as verifying functional correctness, the reports generated from this flow give an indication of logic utilization, timing performance, latency, and throughput. The output files of interest are located as in the examples below, where the file names correlate with the source code, i.e. the callable functions within the design under test:
Simulation Log: <library_root>/L1/tests/bk_model/prj/solution1/csim/report/dut_csim.log
Synthesis Report: <library_root>/L1/tests/bk_model/prj/solution1/syn/report/dut_csynth.rpt
### L2
L2 provides the pricing engine APIs presented as kernels.
The available flow for L2, based around the Vitis tool, facilitates the generation and packaging of pricing engine kernels along with the required host application for configuration and control. In addition to supporting FPGA platform targets, emulation options are available for preliminary investigations or where dedicated access to a hardware platform may not be available. Two emulation options are provided: software emulation performs a high-level simulation of the pricing engine, while hardware emulation performs a cycle-accurate simulation of the generated RTL for the kernel. This flow is makefile driven from the console, where the target is selected as a command-line parameter as in the examples below:
cd L2/tests/GarmanKohlhagenEngine
# build and run one of the following using U250 platform
# * software emulation
make run TARGET=sw_emu DEVICE=u250_xdma_201830_1
# * hardware emulation
make run TARGET=hw_emu DEVICE=u250_xdma_201830_1
# * actual deployment on physical platform
make run TARGET=hw DEVICE=u250_xdma_201830_1
# delete all xclbin and host binary
make cleanall
The outputs of this flow are packaged kernel binaries (xclbin files) that can be downloaded to the FPGA platform, and host executables to configure and coordinate data transfers. The output files of interest are located as in the examples below, where the file names correlate with the source code:
Host Executable: L2/tests/GarmanKohlhagenEngine/bin_#DEVICE/gk_test.exe
Kernel Packaged Binary: L2/tests/GarmanKohlhagenEngine/xclbin_#DEVICE_#TARGET/gk_kernel.xclbin #ARGS
This flow can be used to verify functional correctness in hardware and enable real world performance to be measured.
### L3
L3 provides the high level software APIs to deploy and run pricing engine kernels whilst abstracting the low level details of data transfer, kernel related resources configuration, and task scheduling.
The flow for L3 is the only one where access to an FPGA platform is required.
A prerequisite of this flow is that the packaged pricing engine kernel binaries (xclbin files) for the target FPGA platform have been made available for download or have been custom built using the L2 flow described above.
This flow is makefile driven from the console and initially generates a shared object (L3/src/output/libxilinxfintech.so).
cd L3/src
source env.sh
make
The shared object file is written to the example location as shown below:
Library: L3/src/output/libxilinxfintech.so
User applications can subsequently be built against this library, as in the example provided:
cd L3/examples/MonteCarlo
make all
cd output
# manual step to copy or create symlinks to xclbin files in current directory
./mc_example
https://www.ttp.kit.edu/preprints/2000/ttp00-03
# TTP00-03 Measuring $F_L(x,Q^2)/F_2(x,Q^2)$ from Azimuthal Asymmetries in Deep Inelastic Scattering
We demonstrate that the angular distribution of hadrons produced in semi-inclusive deep inelastic final states is related to the inclusive longitudinal structure function. This relation could provide a new method of accessing $F_L(x,Q^2)$ in deep inelastic scattering measurements.
T. Gehrmann, Phys. Lett. B480 (2000) 77-79.
http://www.oalib.com/relative/3451172
Physics, 1998, DOI: 10.1103/PhysRevE.58.7146 Abstract: We study the crossover between classical and nonclassical critical behaviors. The critical crossover limit is driven by the Ginzburg number G. The corresponding scaling functions are universal with respect to any possible microscopic mechanism which can vary G, such as changing the range or the strength of the interactions. The critical crossover describes the unique flow from the unstable Gaussian to the stable nonclassical fixed point. The scaling functions are related to the continuum renormalization-group functions. We show these features explicitly in the large-N limit of the O(N) phi^4 model. We also show that the effective susceptibility exponent is nonmonotonic in the low-temperature phase of the three-dimensional Ising model.
Physics, 2007, DOI: 10.1103/PhysRevA.79.032328 Abstract: We explore the quantum-classical crossover in the behaviour of a quantum field mode. The quantum behaviour of a two-state system - a qubit - coupled to the field is used as a probe. Collapse and revival of the qubit inversion form the signature for quantum behaviour of the field and continuous Rabi oscillations form the signature for classical behaviour of the field. We demonstrate both limits in a single model for the full coupled system, for states with the same average field strength, and so for qubits with the same Rabi frequency.
Physics, 1999, DOI: 10.1103/PhysRevLett.85.3153 Abstract: This paper is devoted to the study of the classical-to-quantum crossover of the shot noise value in chaotic systems. This crossover is determined by the ratio of the particle dwell time in the system, $\tau_d$, to the characteristic time for diffraction $t_E \simeq \lambda^{-1} |\ln \hbar|$, where $\lambda$ is the Lyapunov exponent. The shot noise vanishes in the limit $t_E \gg \tau_d$, while it reaches its universal quantum value in the opposite limit. Thus, the Lyapunov exponent of chaotic mesoscopic systems may be found by shot noise measurements.
Mathematics, 2005, Abstract: We construct Barnes' type Changhee q-zeta function.
Takashi Nakamura Mathematics, 2013, Abstract: In this paper, we give Hurwitz zeta distributions with $0 < \sigma \ne 1$ by using the Gamma function. During the proof process, we show that the Hurwitz zeta function $\zeta (\sigma,a)$ does not vanish for all $0 <\sigma <1$ if and only if $a \ge 1/2$. Next we define Euler-Zagier-Hurwitz type double zeta distributions not only in the region of absolute convergence but also outside the region of absolute convergence. Moreover, we show that the Euler-Zagier-Hurwitz type double zeta function $\zeta_2 (\sigma_1,\sigma_2\,;a)$ does not vanish when $0<\sigma_1<1$, $\sigma_2>1$ and $1<\sigma_1+\sigma_2<2$ if and only if $a \ge 1/2$.
Physics, 2006, DOI: 10.1103/PhysRevB.76.024520 Abstract: We consider superfluid turbulence near absolute zero of temperature generated by classical means, e.g. towed grid or rotation but not by counterflow. We argue that such turbulence consists of a {\em polarized} tangle of mutually interacting vortex filaments with quantized vorticity. For this system we predict and describe a bottleneck accumulation of the energy spectrum at the classical-quantum crossover scale $\ell$. Demanding the same energy flux through scales, the value of the energy at the crossover scale should exceed the Kolmogorov-41 spectrum by a large factor $\ln^{10/3} (\ell/a_0)$ ($\ell$ is the mean intervortex distance and $a_0$ is the vortex core radius) for the classical and quantum spectra to be matched in value. One of the important consequences of the bottleneck is that it causes the mean vortex line density to be considerably higher than that based on K41 alone, and this should be taken into account in (re)interpretation of new (and old) experiments as well as in further theoretical studies.
Mauro Spreafico Mathematics, 2006, Abstract: We study the spectral functions, and in particular the zeta function, associated to a class of sequences of complex numbers, called of spectral type. We investigate the decomposability of the zeta function associated to a double sequence with respect to some simple sequence, and we provide a technique for obtaining the first terms in the Laurent expansion at zero of the zeta function associated to a double sequence. We particularize this technique to the case of sums of sequences of spectral type, and we give two applications: the first concerning some special functions appearing in number theory, and the second the functional determinant of the Laplace operator on a product space.
Physics, 1998, DOI: 10.1103/PhysRevE.58.R4060 Abstract: We present an accurate numerical determination of the crossover from classical to Ising-like critical behavior upon approach of the critical point in three-dimensional systems. The possibility to vary the Ginzburg number in our simulations allows us to cover the entire crossover region. We employ these results to scrutinize several semi-phenomenological crossover scaling functions that are widely used for the analysis of experimental results. In addition we present strong evidence that the exponent relations do not hold between effective exponents.
Physics, 2004, DOI: 10.1103/PhysRevLett.94.116803 Abstract: The reduction of quantum scattering leads to the suppression of shot noise. In the present paper, we analyze the crossover from the quantum transport regime with universal shot noise, to the classical regime where noise vanishes. By making use of the stochastic path integral approach, we find the statistics of transport and the transmission properties of a chaotic cavity as a function of a system parameter controlling the crossover. We identify three different scenarios of the crossover.
Oliver Knill Mathematics, 2013, Abstract: We study the entire function zeta(n,s) which is the sum of l to the power -s, where l runs over the positive eigenvalues of the Laplacian of the circular graph C(n) with n vertices. We prove that the roots of zeta(n,s) converge for n to infinity to the line Re(s)=1/2 in the sense that for every compact subset K in the complement of this line, and large enough n, no root of the zeta function zeta(n,s) is in K. To prove this, we look at the Dirac zeta function, which uses the positive eigenvalues of the Dirac operator D=d+d^* of the circular graph, the square root of the Laplacian. We extend a Newton-Coates-Rolle type analysis for Riemann sums and use a derivative which has similarities with the Schwarzian derivative. As the zeta functions zeta(n,s) of the circular graphs are entire functions, the result does not say anything about the roots of the classical Riemann zeta function zeta(s), which is also the Dirac zeta function for the circle. Only for Re(s)>1, the values of zeta(n,s) converge suitably scaled to zeta(s). We also give a new solution to the discrete Basel problem which is giving expressions like zeta_n(2) = (n^2-1)/12 or zeta_n(4) = (n^2-1)(n^2+11)/45 which allows to re-derive the values of the classical Basel problem zeta(2) = pi^2/6 or zeta(4)=pi^4/90 in the continuum limit.
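The discrete Basel value zeta_n(2) = (n^2-1)/12 quoted in the last abstract is easy to verify numerically. The sketch below (an editorial addition) assumes the Dirac normalisation, in which the positive Dirac eigenvalues of the circular graph C(n) are 2 sin(pi k/n) for k = 1..n-1; the zeta_n(4) value depends on the normalisation convention, so only the s = 2 identity is checked here:

```python
import math

def dirac_zeta(n, s):
    """Sum of lambda**(-s) over the positive Dirac eigenvalues
    lambda_k = 2*sin(pi*k/n), k = 1..n-1, of the circular graph C(n)."""
    return sum((2.0 * math.sin(math.pi * k / n)) ** (-s) for k in range(1, n))

# Discrete Basel identity: zeta_n(2) = (n^2 - 1)/12.
for n in (3, 10, 100, 1001):
    exact = (n * n - 1) / 12.0
    assert abs(dirac_zeta(n, 2) - exact) / exact < 1e-9

# Suitably rescaled, it recovers the classical Basel value zeta(2) = pi^2/6.
n = 4000
print(2.0 * math.pi ** 2 / n ** 2 * dirac_zeta(n, 2))  # close to pi^2/6 ~ 1.6449
```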
https://eventuallyalmosteverywhere.wordpress.com/category/school-and-olympiad/british-maths-olympiad/
# BMO1 2017 – Questions 5 and 6
The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here and video solutions here. My comments on the first four questions are in the previous post.
Overall, I didn’t think any of the questions on this paper were unusually difficult by the standard of BMO1, but I found everything slightly more time-consuming than typical. I thought Question 5 was a great problem; I tried lots of things unsuccessfully at first, and so wanted to discuss it in slightly more technical language. For Question 6 I made a decisive mistake, which I’ll explain, and which cost a lot of time. In general, the back end of the paper was a little fiddlier than normal and required longer written solutions, so many students may have had less time than expected to attack these questions after working through the details earlier in the paper.
Question Five
As I said before, I thought this question was quite challenging. Not because the solution is particularly exotic or complicated, but because there were so many possible things that might have worked. In my opinion it would not have been out of place at the start of an IMO paper, because it’s perfectly possible to have enough good ideas that eliminating the ones that don’t work takes an hour, or hours. Even though it slightly spoils the flow of the solution, I’m particularly trying to emphasise the tangents that didn’t work, mostly for reassurance to anyone who spent a long time struggling.
I was thinking about this question in terms of a 2Nx2N board, where N is even, and for the given question equal to 100. I spent a while thinking that the bound was 8N-4, corresponding to taking the middle two rows and the middle two columns, but not the 2×2 square which is their intersection. If you think of a comb as a ‘handle’ of 1xN cells, with an extra N/2 alternating cells (say, ‘teeth’) bolted on, then it’s clear this construction works because there’s never space to fit in a handle, let alone the teeth.
I couldn’t prove that this was optimal though. A standard way to prove a given bound K was optimal would be to produce a tiling on the board with K combs, where every cell is included in exactly one comb. But this is clearly not possible in this situation, since the number of cells in a comb (which is 150) does not divide the total number of cells on the board.
Indeed, there is a general observation: if you take a comb and a copy of it rotated by 180 degrees, the teeth of the second comb mesh perfectly with the teeth of the first to generate a 3xN unit. I wasted a moderate amount of time pursuing this route.
[Note, it will be obvious in a minute why I’m writing ‘shaded’ instead of ‘coloured’.]
But in motivating the construction, I was merely trying to shade cells so that they intersected every possible 1xN handle, and maybe I could prove that it was optimal for this. In fact, I can’t prove it’s optimal because it isn’t optimal – indeed it’s clear that a handle through one of the middle rows intersects plenty of shaded cells, not just one. However, with this smaller problem in mind, it didn’t take long to come up with an alternative proposal, namely splitting the board into equal quarters, and shading the diagonals of each quarter, as shown.
It seems clear that you can’t fit in a 1xN handle without hitting a shaded cell, and in any sensible tiling with 1xN handles each handle contains exactly one shaded cell, so this shading (with 4N shaded cells) is optimal for handles. But is it optimal for a comb itself?
Consider a shading which works, so that all combs include a shaded cell. It’s clear that a comb is contained within a 2xN block, and in such a 2xN block, there are four possible combs, as shown.
You need to cover all these combs with some shading somewhere. But if you put the shaded cell on a tooth of comb A, then you haven’t covered comb B. And if you put the shaded cell on the handle of comb A, then you haven’t covered one of comb C and comb D. You can phrase this via a colouring argument too. If you use four colours with period 2×2, as shown
then any comb involves exactly three colours, and so one of them misses out the colour of the shaded cell. (I hope it’s clear what I mean, even with the confusing distinction between ‘shaded’ and ‘coloured’ cells.)
Certainly we have shown that any 2xN block must include at least two shaded cells. And that’s pretty much it. We have a tiling with 2N copies of a 2xN block, with at least two shaded cells in each, which adds up to at least 4N shaded cells overall.
Looking back on the method, we can identify another way to waste time. Tiling a board, eg a chessboard with dominos is a classic motif, which often relies on clever colouring. So it’s perhaps lucky that I didn’t spot this colouring observation earlier. Because the argument described really does use the local properties of how the combs denoted A-D overlap. An attempt at a global argument might start as follows: we can identify 2N combs which don’t use colour 1, and tile this subset of the grid with them, so we need to shade at least 2N cells from colours {2,3,4}. Similarly for sets of colours {1,3,4}, {1,2,4}, and {1,2,3}. But if we reduce the problem to this, then using roughly 2N/3 of each colour fits this global requirement, leading to a bound of 8N/3, which isn’t strong enough. [1]
Question Six
A word of warning. Sometimes it’s useful to generalise in problems. In Q5, I was thinking in terms of N, and the only property of N I used was that it’s even. In Q4, we ignored 2017 and came back to it at the end, using only the fact that it’s odd. By contrast, in Q2, the values did turn out to be important for matching the proof bounds with a construction.
You have to guess whether 300 is important or not here. Let’s see.
I have a natural first question to ask myself about the setup, but some notation is useful. Let $a_1,a_2,\ldots,a_{300}$ be the ordering of the cards. We require that $\frac{a_1+\ldots+a_n}{n}$ is an integer for every $1\le n\le 300$. Maybe the values of these integers will be important, so hold that thought, but for now, replace with the divisibility statement that $n | a_1+\ldots+a_n$.
I don’t think it’s worth playing with small examples until I have a better idea whether the answer is 5 or 295. So the natural first question is: “what does it mean to have $(a_1,\ldots,a_{n-1})$ such that you can’t pick a suitable $a_n$?”
It means that there is no integer k in $\{1,\ldots,300\}\backslash\{a_1,\ldots,a_{n-1}\}$ such that $n\,\big|\,(a_1+\ldots+a_{n-1})+k$, which for now we write as
$k\equiv -(a_1+\ldots+a_{n-1})\,\mod n.$
Consider the congruence class of $-(a_1+\ldots+a_{n-1})$ modulo n. There are either $\lfloor \frac{300}{n}\rfloor$ or $\lceil \frac{300}{n}\rceil$ integers under consideration in this congruence class. If no such k exists, then all of the relevant integers in this congruence class must appear amongst $\{a_1,\ldots,a_{n-1}\}$. At this stage, we’re trying to get a feel for when this could happen, so lower bounds on n are most relevant. Therefore, if we get stuck when trying to find $a_n$, we have
$\lfloor \frac{300}{n} \rfloor\text{ or }\lceil \frac{300}{n}\rceil \le n-1,$ (*)
which is summarised more succinctly as
$\lfloor \frac{300}{n} \rfloor \le n-1.$ (**)
[Note, with this sort of bounding argument, I find it helpful to add intermediate steps like (*) in rough. The chance of getting the wrong direction, or the wrong choice of $\pm 1$ is quite high here. Of course, you don’t need to include the middle step in a final write-up.]
We can check that (**) is false when $n\le 17$ and true when $n\ge 18$. Indeed, both versions of (*) are true when $n\ge 18$.
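Since the floor function makes the boundary case easy to misjudge, it’s worth confirming the threshold numerically. A quick sketch in Python (purely a private check, not part of a written solution):

```python
# Check for which n in 1..300 the bound floor(300/n) <= n - 1 holds.
holds = [n for n in range(1, 301) if 300 // n <= n - 1]

assert all(300 // n > n - 1 for n in range(1, 18))  # (**) fails for n <= 17
assert holds == list(range(18, 301))                # and holds for all n >= 18
```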
So we know the minimum failure length is at least 17. But is there a failing sequence of length 17? At a meta-level, it feels like there should be. That was a very natural bounding argument for 17 (which recall corresponds to $n=18$), and it’s easy to believe that might be part of an official solution. If we achieve equality throughout the argument, that’s most of the way to a construction as well. It won’t be so easy to turn this argument into a construction for $n\ge 19$ because there won’t be equality anywhere.
We have to hope there is a construction for $n=18$. What follows is a description of a process to derive (or fail to derive) such a construction. In a solution, one would not need to give this backstory.
Anyway, in such a construction, let $\alpha\in\{1,2,\ldots,18\}$ describe the congruence class modulo 18 which is exhausted by $\{a_1,\ldots,a_{17}\}$. I’m going to hope that $\alpha=18$ because then the calculations will be easier since everything’s a multiple of 18. We haven’t yet used the fact that, for the sequence actually to fail at the 18th term, we need $\alpha\equiv-(a_1+\ldots+a_{17})$ modulo 18. We definitely have to use that. There are 16 multiples of 18 (ie relevant integers in the congruence class), so exactly one of the terms so far, say $a_j$, is not a multiple of 18. But then
$0 \equiv 0+\ldots+0+a_j+0+\ldots+0,$
which can’t happen. With a bit of experimentation, we find a similar problem making a construction using the other congruence classes with 16 elements, namely $\alpha\in \{13,14,\ldots,18\}$.
So we have to tackle a different class. If $\alpha\le 12$ then our sequence must be
$\alpha,18+\alpha,2\times 18 +\alpha, \ldots, 16\times 18 + \alpha,$
in some order. In fact, let’s add extra notation, so our sequence is
$(a_1,\ldots,a_{17}) = (18\lambda_1+ \alpha,\ldots,18\lambda_{17}+\alpha),$
where $(\lambda_1,\ldots,\lambda_{17})$ is a permutation of {0,…,16}. And so we require
$k \,\big|\, 18(\lambda_1+\ldots+\lambda_k) + k\alpha,$ (%)
for $1\le k\le 17$. But clearly we can lop off that $k\alpha$, and could ignore the 18. Can we find a permutation $\lambda$ such that
$k \,\big|\, \lambda_1+\ldots+\lambda_k.$
This was where I wasted a long time. I played around with lots of examples and kept getting stuck. Building it up one term at a time, I would typically get stuck around k=9,10. I also observed that in all the attempted constructions, the values of $\frac{\lambda_1+\ldots+\lambda_k}{k}$ were around 8 and 9 when I got stuck.
I became convinced this subproblem wasn’t possible, and decided that would be enough to show that n=18 wasn’t a possible failure point. I was trying to show the subproblem was impossible via a parity argument (how must the $a_i$s alternate odd/even to ensure all the even partial sums are even?), but parity wasn’t the obstruction. Then I came up with a valid argument. We must have
$\lambda_1+\ldots+\lambda_{17}=136= 16\times 8 + 8\quad\text{and}\quad 16\,\big|\,\lambda_1+\ldots+\lambda_{16},$
which means $\lambda_1+\ldots+\lambda_{16}$ must be 128 = 15×8 + 8, ie $\lambda_{17}=8$. But then we also have $15\,\big|\, \lambda_1+\ldots+\lambda_{15}$, which forces $\lambda_{16}=8$ also. Which isn’t possible.
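The modular arithmetic here is easy to slip on, so here is a sanity check of the two forcing steps (136 and 128 are the relevant partial sums of {0,…,16}):

```python
total = sum(range(17))        # lambda_1 + ... + lambda_17 = 0 + 1 + ... + 16
assert total == 136
assert total % 16 == 8        # 16 | (total - lambda_17) forces lambda_17 = 8
assert (total - 8) % 15 == 8  # then 15 | (sum of first 15) forces lambda_16 = 8 too
```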
As if this hadn’t wasted enough time, I then tried to come up with a construction for n=19, for which there are lots more variables, and this took a lot more time, and seemed to be suffering from similar problems, just in a more complicated way. So I became convinced I must have made a mistake, because I was forced down routes that were way too complicated for a 3.5 hour exam. Then I found it…
What did I do wrong? I’ll just say directly. I threw away the 18 after (%). This made the statement stronger. (And in fact false.) Suppose instead I’d thrown away a factor of 9 (or no factors at all, but it’s the residual 2 that’s important). Then I would be trying to solve
$k\,\big|\,2(\lambda_1+\ldots+\lambda_k).$
And now if you experiment, you will notice that taking $\lambda_1=0,\lambda_2=1,\lambda_3=2,\ldots$ seems to work fine. And of course, we can confirm this, using the triangle number formula for the second time in the paper!
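To spell out the confirmation: with $\lambda_k=k-1$ the partial sum is the triangle number $T_{k-1}$, and $2T_{k-1}=k(k-1)$ is always a multiple of k. A quick check in Python (not needed in a written solution, of course):

```python
partial = 0
for k in range(1, 18):
    partial += k - 1                   # lambda_k = k - 1
    assert 2 * partial == k * (k - 1)  # partial sum is T_{k-1}
    assert (2 * partial) % k == 0      # so k | 2(lambda_1 + ... + lambda_k)
```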
This had wasted a lot of time, but once that thought is present, we’re done, because we can go straight back and exhibit the sequence
$(a_1,\ldots,a_{17}) = (1, 18+1,2\times 18 +1,\ldots, 16\times 18 +1).$
Then the sum so far is congruent to -1 modulo 18, but we have exhausted all the available integers which would allow the sum of the first 18 terms to be a multiple of 18. This confirms that the answer to the question as stated is 17.
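This final sequence is also easy to verify by machine; a short Python check that the 17 cards 1, 19, 37, …, 289 really do form a stuck position:

```python
seq = [18 * j + 1 for j in range(17)]  # the cards 1, 19, ..., 289

# every partial average is an integer
assert all(sum(seq[:n]) % n == 0 for n in range(1, 18))

# but no unused card makes the first 18 terms sum to a multiple of 18:
# we would need a card congruent to 1 mod 18, and all of those are used
remaining = set(range(1, 301)) - set(seq)
assert not any((sum(seq) + k) % 18 == 0 for k in remaining)
```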
At the start, I said that we should be cautious about generalising. In the end, this was wise advice. We definitely used the fact that 18 was even in the stage I over-reduced the first time. We also used the fact that there was at least one value of $\alpha$ with an ‘extra’ member of the congruence class. So I’m pretty sure this proof wouldn’t have worked with 288 = 16×18 cards.
Footnotes
[1] – If shading were a weighted (or continuous or whatever you prefer) property, ie that each cell has a quantity of shading given by a non-negative real number, and we merely demand that the total shading per comb is at least one, then the bound 8N/3 is in fact correct for the total shading. We could look at a 2xN block, and give 1/3 shading to one cell of each colour in the block. Alternatively, we could be very straightforward and apply 2/(3N) shading to every cell in the grid. The fact that shading has to be (in this language) zero or one, imposes meaningful extra constraints which involve the shape of the comb.
# BMO1 2017 – Questions 1-4
The first round of the British Mathematical Olympiad was sat yesterday. The questions can be found here. I recorded some thoughts on the questions while I was in Cyprus, hence the nice Mediterranean sunset above. I hope this might be useful to current or future contestants, as a supplement to the concise official solutions available. It goes without saying that while these commentaries may be interesting at a general level, they will be much more educational to students who have at least digested and played around with the questions, so consider trying the paper first. Video solutions are available here. These have more in common with this blog post than the official solutions, though inevitably some of the methods are slightly different, and the written word has some merits and demerits over the spoken word for clarity and brevity.
The copyright for these questions lies with BMOS, and are reproduced here with permission. Any errors or omissions are obviously my own.
I found the paper overall quite a bit harder than in recent years, or at least harder to finish quickly. I’ve therefore postponed discussion of the final two problems to a second post, to follow shortly.
Question One
A recurring theme of Q1 from BMO1 in recent years has been: “it’s possible to do this problem by a long, and extremely careful direct calculation, but additional insight into the setup makes life substantially easier.”
This is the best example yet. It really is possible to evaluate Helen’s sum and Phil’s sum, and compare them directly. But it’s easy to make a mistake in recording all the remainders when the divisor is small, and it’s easy to make a mistake in summation when the divisor is large, and so it really is better to have a think for alternative approaches. Making a mistake in a very calculation-heavy approach is generally penalised heavily. And this makes sense intellectually, since the only way for someone to fix an erroneous calculation is to repeat it themselves, whereas small conceptual or calculation errors in a less onerous solution are more easily isolated and fixed by a reader. Of course, it also makes sense to discourage such attempts, which aren’t really related to enriching mathematics, which is the whole point of the exercise!
Considering small divisors (or even smaller versions of 365 and 366) is sometimes helpful, but here I think a ‘typical’ divisor is more useful. But first, some notation will make any informal observation much easier to turn into a formal statement. Corresponding to Helen and Phil, let h(n) be the remainder when 365 is divided by n, and p(n) the remainder when 366 is divided by n. I would urge students to avoid the use of ‘mod’ in this question, partly because working modulo many different bases is annoying notationally, partly because the sum is not taken modulo anything, and partly because the temptation to use mod incorrectly as an operator is huge here [1].
Anyway, a typical value might be n=68, and we observe that 68 x 5 + 25 = 365, and so h(68)=25 and p(68)=26. Indeed, for most values of n, we will have p(n)=h(n)+1. This is useful because
$p(1)+p(2)+\ldots+p(366) - \left(h(1)+h(2)+\ldots+h(365)\right)$
$= \left(p(1)-h(1)\right) + \ldots+\left(p(365)-h(365)\right) + p(366),$
and now we know that most of the bracketed terms are equal to one. We just need to handle the rest. The only time it doesn’t hold that p(n)=h(n)+1 is when 366 is actually a multiple of n. In this case, p(n)=0 and h(n)=n-1. We know that 366 = 2 x 3 x 61, and so its divisors in the range 1 to 365 are 1, 2, 3, 6, 61, 122, 183.
Then, in the big expression above, seven of the 365 bracketed terms are not equal to 1. So 358 of them are equal to one. The remaining ones are equal to 0, -1, -2, -5, -60, -121, -182 respectively. There are shortcuts to calculate the sum of these, but it’s probably safer to do it by hand, obtaining -371. Overall, since p(366)=0, we have
$p(1)+p(2)+\ldots+p(366) - \left(h(1)+h(2)+\ldots+h(365)\right)$
$= -371 + 358 + 0 = -13.$
So, possibly counter-intuitively, Helen has the larger sum, with difference 13, and we didn’t have to do a giant calculation…
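If you want reassurance without re-checking the divisor bookkeeping, the whole question is also a two-liner to brute-force (in a way that would not, of course, have been available in the exam). With h and p as above:

```python
helen = sum(365 % n for n in range(1, 366))  # h(1) + ... + h(365)
phil = sum(366 % n for n in range(1, 367))   # p(1) + ... + p(366)
assert helen - phil == 13                    # Helen's sum is larger by 13
```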
Question Two
Suppose each person chooses which days to go swimming ‘at random’, without worrying about how to define this. Is this likely to generate a maximum or minimum value of n? I hope it’s intuitively clear that this probably won’t generate an extreme value. By picking at random we are throwing away lots of opportunity to force valuable overlaps or non-overlaps. In other words, we should start thinking about ways to set up the swimming itinerary with lots of symmetry and structure, and probably we’ll eventually get a maximum or a minimum. At a more general level, with a problem like this, one can start playing around with proof methods immediately, or one can start by constructing lots of symmetric and extreme-looking examples, and see what happens. I favour the latter approach, at least initially. You have to trust that at least one of the extreme examples will be guess-able.
The most obvious extreme example is that everyone swims on the first 75 days, and no-one swims on the final 25 days. This leads to n=75. But we’re clearly ‘wasting’ opportunities in both directions, because there are never exactly five people swimming. I tried a few more things, and found myself simultaneously attacking maximum and minimum, which is clearly bad, so focused on minimum. Just as a starting point, let’s aim for something small, say n=4. The obstacle is that if you demand at most four swimmers on 96 days, then even with six swimmers on the remaining four days, you don’t end up with enough swimming having taken place!
Maybe you move straight from this observation to a proof, or maybe you move straight to a construction. Either way, I think it’s worth saying that the proof and the construction come together. My construction is that everyone swims on the first 25 days, then on days 26-50 everyone except A and B swim, on days 51-75 everyone except C and D swim, and on days 76-100 everyone except E and F swim. This exactly adds up. And if you went for the proof first, you might have argued that the total number of swim days is 6×75 = 450, but is at most 6n + 4(100-n). This leads immediately to $n\ge 25$, and I just gave the construction. Note that if you came from this proof first, you can find the construction because your proof shows that to be exact you need 25 days with six swimmers, and 75 days with four swimmers, and it’s natural to try to make this split evenly. Anyway, this clears up the minimum.
[Less experienced contestants might wonder why I was worried about generating a construction despite having a proof. Remember we are trying to find the minimum. I could equally have a proof for $n\ge 10$ which would be totally valid. But this wouldn’t show that the minimum was n=10, because that isn’t in fact possible (as we’ve seen), hence it’s the construction that confirms that n=25 is the true minimum.]
It’s tempting to go back to the drawing board for the maximum, but it’s always worth checking whether you can directly adjust the proof you’ve already given. And here you can! We argued that
$450\le 6n + 4(100-n)$
to prove the minimum. But equally, we know that on the n days we have at least five swimmers, and on the remaining days, we have between zero and four swimmers, so
$450 \ge 5n + 0\times (100-n),$ (*)
which gives $n\le 90$. If we have a construction that attains this bound then we are done. Why have I phrased (*) with the slightly childish multiple of zero? Because it’s a reminder that for a construction to attain this bound, we really do need the 90 days to have exactly five swimmers, and the remaining ten days to have no swimmers. So it’s clear what to do. Split the first 90 days into six groups of 15 days. One swimmer skips each group. No-one swims in the final ten days, perhaps because of a jellyfish infestation. So we’re done, and $25\le n\le 90$.
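Both constructions are easy to sanity-check by listing the number of swimmers on each day (six swimmers each swimming 75 of the 100 days gives 450 swims in total):

```python
# Minimum, n = 25: all six swim on days 1-25, then each 25-day block loses a pair.
min_counts = [6] * 25 + [4] * 75
assert sum(min_counts) == 450
assert sum(1 for c in min_counts if c >= 5) == 25

# Maximum, n = 90: one swimmer rests in each of six 15-day blocks, then 10 empty days.
max_counts = [5] * 90 + [0] * 10
assert sum(max_counts) == 450
assert sum(1 for c in max_counts if c >= 5) == 90
```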
At a general level, it’s worth noting that in the story presented, we found an example for the minimum which we turned into a proof, and then a proof for the maximum, which we then analysed to produce a construction.
Note that similar bounding arguments would apply if we fiddled with the numbers 5, 75 and 100. But constructions matching the bounds might not then be possible because the splits wouldn’t work so nicely. This would make everything more complicated, but probably not more interesting.
Question Three
It’s understandable that lots of students attempting this paper might feel ill-at-ease with conventional Euclidean geometry problems. A good first rule of thumb here, as in many settings, is “don’t panic!”, and a more specific second rule of thumb is “even if you think you can calculate, try to find geometric insight first.”
Here, it really does look like you can calculate. A configuration based on a given isosceles triangle and a length condition and a perpendicular line is open to several coordinate approaches, and certainly some sensible trigonometry. It’s also very open to organised labelling of the diagram. You have three equal lengths, and a right-angle, as shown.
The key step is this. Drop the perpendicular from A to BC, and call its foot D. That alone really is the key step, as it reduces both parts of the question to an easy comparison. It’s clear that the line AD splits the triangle into two congruent parts, and thus equal areas and perimeters. So it is enough to show that triangle BMN has the same area as triangle ABD, and that their outer-perimeters (ie the part of its perimeter which is also the perimeter of ABC) are the same.
But they’re congruent, so both of these statements are true, and the problem is solved.
My solution could be as short as two or three lines, so for the purposes of this post all that remains is to justify why you might think of the key step. Here are a few possible entry routes:
• You might notice that line AD induces the required property for triangle ABD.
• You might try to find a triangle congruent to AMN, and come up with D that way.
• There’s already a perpendicular in the question so experimenting with another one is natural, especially since the perpendicular from A has straightforward properties.
• AMN is a right angle, and so constructing D gives a cyclic quadrilateral. We didn’t use that directly in the proof above, but constructing cyclic quadrilaterals is usually a good idea.
• If you were trying a calculation approach, you probably introduced the length AD, or at least the midpoint D as an intermediate step.
On the video, Mary Teresa proposes a number of elegant synthetic solutions with a few more steps. You might find it a useful exercise to try to come up with some motivating reasons like the bullet points above to justify her suggestion to reflect A in M as a first step.
Question Four
I wasn’t paying enough attention initially, and I calculated $a_2=0\text{ or }2$. This made life much much more complicated. As with IMO 2017 Q1, if trying to deduce general behaviour from small examples, it’s essential to calculate the small examples correctly!
Once you engage your brain properly, you find that $a_2=0 \text{ or }3$, and of course $a_2=0$ is not allowed, since it must be positive. So $a_2=3$, and a similar calculation suggests $a_3=1\text{ or }6$. It’s clear that the set of values for $a_{k+1}$ depends only on $a_k$, so if you take $a_3=1$, then you’re back to the situation you started with at the beginning. If you choose to continue the exploration with $a_3=6$, you will find $a_4=2\text{ or }10$, at which point you must be triggered by the possibility that triangle numbers play a role here.
As so often with a play-around with small values, you need to turn a useful observation into a concrete statement, which could then be applied to the problem statement. It looks like in any legal sequence, every term will be a triangle number, so we only need to clarify which triangle number. An example of a suitable statement might be:
Claim: If $a_n=T_k=\frac{k(k+1)}{2}$, the k-th triangle number, then $a_{n+1}=T_{k-1}\text{ or }T_{k+1}$.
There are three stages. 1) Checking the claim is true; 2) checking the claim is maximally relevant; 3) proving it. In this case, proving it is the easiest bit. It’s a quick exercise, and I’m omitting it. Of course, we can’t prove any statement which isn’t true, and here we need to make some quick adjustment to account for the case k=1, for which we are forced to take $a_{n+1}=T_{k+1}$.
The second stage really concerns the question “but what if $a_n\ne T_k$?” While there are deductions one could make, the key is that if $a_1$ is a triangle number, the claim we’ve just made shows that $a_n$ is always a triangle number, so this question is irrelevant. Indeed the claim further shows that $a_{2017}\le T_{2017}$, and also that $a_{2017}=T_k$ for some odd value of k. To be fully rigorous you should probably describe a sequence which attains each odd value of k, but this is really an exercise in notation [2], and it’s very obvious they are all attainable.
In any case, the set of possible values is $\{T_1,T_3,\ldots,T_{2017}\}$, which has size 1009.
Final two questions
These are discussed in a subsequent post.
Footnotes
[1] – mod n is not an operator, meaning you shouldn’t think of it as ‘sending integers to other integers’, or ‘taking any integer, to an integer in {0,1,…,n-1}’. Statements like 19 mod 5 = 4 are useful at the very start of an introduction to modular arithmetic, but why choose 4? Sometimes it’s more useful to consider -1 instead, and we want statements like $a^p\equiv a$ modulo p to make sense even when $a\ge p$. 19 = 4 modulo 5 doesn’t place any greater emphasis on the 4 than the 19. This makes it more like a conventional equals sign, which is of course appropriate.
[2] – Taking $a_n=T_n$ for $1\le n\le k$, and thereafter $a_n=T_k$ if n is odd, and $a_n=T_{k+1}$ if n is even will certainly work, as will many other examples, some perhaps easier to describe than this one, though make sure you don’t accidentally try to use $T_0$!
# Characterising fixed points in geometry problems
There’s a risk that this blog is going to become entirely devoted to Euclidean geometry, but for now I’ll take that risk. I saw the following question on a recent olympiad in Germany, and I enjoyed it as a problem, and set it on a training sheet for discussion with the ten British students currently in contention for our 2017 IMO team.
Given a triangle ABC for which $AB\ne AC$. Prove there exists a point $D\ne A$ on the circumcircle satisfying the following property: for any points M,N outside the circumcircle on rays AB, AC respectively, satisfying BM=CN, the circumcircle of AMN passes through D.
Proving the existence of a fixed point/line/circle which has a common property with respect to some other variable points/lines/circles is a common style of problem. There are a couple of alternative approaches, but mostly what makes this style of problem enjoyable is the challenge of characterising what the fixed point should be. Sometimes an accurate diagram will give us everything we need, but sometimes we need to be clever, and I want to discuss a few general techniques through the context of this particular question. I don’t want to make another apologia for geometry as in the previous post, but if you’re looking for the ‘aha moment’, it’ll probably come from settling on the right characterisation.
At this point, if you want to enjoy the challenge of the question yourself, don’t read on!
Reverse reconstruction via likely proof method
At some point, once we’ve characterised D in terms of ABC, we’ll have to prove it lies on the circumcircle of any AMN. What properties do we need it to have? Well certainly we need the angle relation BDC = A, but because MDAN will be cyclic too, we also need the angle relation MDN = A. After subtracting, we require angles MDB = NDC.
Depending on your configuration knowledge, this is all quite suggestive. At the very least, when you have equal angles and equal lengths, you might speculate that the corresponding triangles are congruent. Here that would imply BD=CD, which characterises D as lying on the perpendicular bisector of BC. D is also on the circumcircle, so in fact it’s also on the angle bisector of BAC, here the external angle bisector. This is a very common configuration (normally using the internal bisector) in this level of problem, and if you see this coming up without prompting, it suggests you’re doing something right.
So that’s the conjecture for D. And we came up with the conjecture based on a likely proof strategy, so to prove it, we really just need to reverse the steps of the previous two paragraphs. We now know BD=CD. We also know angles ABD = ACD, so taking the complementary angles (ie the obtuse bit in the diagram) we have angles DBM = DCN, so we indeed have congruent triangles. So we can read off angles MDB = NDC just as in our motivation, and recover that MDAN is cyclic.
Whatever other methods there are to characterise point D (to follow), all methods will probably conclude with an argument like the one in this previous paragraph, to demonstrate that D does have the required property.
Limits
We have one degree of freedom in choosing M and N. Remember that initially we don’t know what the target point D is. If we can’t see it immediately from drawing a diagram corresponding to general M and N, it’s worth checking some special cases. What special cases might be most relevant depends entirely on the given problem. The two I’m going to mention here both correspond to some limiting configuration. The second of these is probably more straightforward, and was my route to determining D. The first was proposed by one of my students.
First, we conjecture that maybe the condition that M and N lie outside the circumcircle isn’t especially important, but has been added to prevent candidates worrying about diagram dependency. The conclusion might well hold without this extra stipulation. Remember at this stage we’re still just trying to characterise D, so even if we have to break the rules to find it, this won’t damage the solution, since we won’t be including our method for finding D in our written-up solution!
Anyway, WLOG AC < AB. If we take N very close to A, then CN is close to b, so the distances BM and MA are b and b+c respectively. The circumcircle of AMN is almost tangent to line AC. At this point we stop talking about ‘very close’ and ‘almost tangent’ and just assume that N=A, so that the circle AMN really is the circle through M, tangent to AC at A. We need to establish where this intersects the circumcircle for a second time.
To be clear, I found what follows moderately tricky, and this argument took a while to find and was not my first attempt at all. First we do some straightforward angle-chasing, writing A,B,C for the measures of the angles in triangle ABC. Then the angle BDC is also A and angle BDA is 180-C. We also have the tangency relation from which the alternate segment theorem gives angle MDA = A. Then BDM = BDA – MDA = 180 – C – A = B. So we know the lengths and angles in the configuration BDAM.
At this point, I had to use trigonometry. There were a couple of more complicated options, but the following works. In triangle BDM, a length b is subtended by angle B, as is the case for the original triangle ABC. By the extended sine rule, BDM then has the same circumradius as ABC. But now the length BD is subtended by angle DMB in one of these circumcircles, and by DAB in the other. Therefore these angles are either equal or supplementary (ie they sum to 180). Clearly it must be the latter, from which we obtain that angles DMA = MAD = 90 – A/2. In other words, D lies on the external angle bisector of A, which is the characterisation we want.
Again to clarify, I don’t think this was a particularly easy or particularly natural argument for this exact problem, but it definitely works, and the idea of getting a circle tangent to a line as a limit when the points of intersection converge is a useful one. As ever, when an argument uses the sine rule, you can turn it into a synthetic argument with enough extra points, but of the options I can currently think of, I think this trig is the cleanest.
My original construction was this. Let M and N be very very far down the rays. This means triangle AMN is large and approximately isosceles. This means that the line joining A to the circumcentre of AMN is almost the internal angle bisector of MAN, which is, of course, also the angle bisector of BAC. Also, because triangle AMN is very large, its circumcircle looks, locally, like a line, and has to be perpendicular to the circumradius at A. In other words, the circumcircle of AMN is, near A, approximately the line perpendicular to the internal angle bisector of BAC, ie the external angle bisector of BAC. My ‘aha moment’ factor on this problem was therefore quite high.
Direct arguments
A direct argument for this problem might consider a pair of points (M,N) and (M’,N’), and show directly that the circumcircles of ABC, AMN and AM’N’ concur at a second point, ie are coaxal. It seems unlikely to me that an argument along these lines wouldn’t involve some characterisation of the point of concurrency along the way.
Do bear in mind, however, that such an approach runs the risk of cluttering the diagram. Points M and N really weren’t very important in anything that’s happened so far, so having two pairs doesn’t add extra insight in any of the previous methods. If this would have been your first reaction, ask yourself whether it would have been as straightforward or natural to find a description of D which led to a clean argument.
Another direct argument
Finally, a really neat observation, that enables you to solve the problem without characterising D. We saw that triangles DBM and DCN were congruent, and so we can obtain one from the other by rotating around D. We say D is the centre of the spiral similarity (here in fact with homothety factor 1 ie a spiral congruence) sending BM to CN. Note that in this sort of transformation, the direction of these segments matters. A different spiral similarity sends BM to NC.
But let’s take any M,N and view D as this spiral centre. The transformation therefore maps line AB to AC and preserves lengths. So in fact we’ve characterised D without reference to M and N ! Since everything we’ve said is reversible, this means as M and N vary, the point we seek, namely D, is constant.
This is only interesting as a proof variation if we can prove that D is the spiral centre without reference to one of the earlier arguments. But we can! In general a point D is the centre of spiral similarity mapping BM to CN iff it is also the centre of spiral similarity mapping BC to MN. And we can find the latter centre of spiral similarity using properties of the configuration. A is the intersection of MB and CN, so we know precisely that the spiral centre is the second intersection point of the two circumcircles, exactly as D is defined in the question.
(However, while this is cute, it’s somehow a shame not to characterise D as part of a solution…)
# BMO2 2017
The second round of the British Mathematical Olympiad was taken yesterday by about 100 invited participants, and about the same number of open entries. To qualify at all for this stage is worth celebrating. For the majority of the contestants, this might be the hardest exam they have ever sat, indeed relative to current age and experience it might well be the hardest exam they ever sit. And so I thought it was particularly worth writing about this year’s set of questions. Because at least in my opinion, the gap between finding every question very intimidating, and solving two or three is smaller, and more down to mindset, than one might suspect.
A key over-arching point at this kind of competition is the following: the questions have been carefully chosen, and carefully checked, to make sure they can be solved, checked and written up by school students in an hour. That’s not to say that many, or indeed any, will take that little time, but in principle it’s possible. That’s also not to say that there aren’t valid but more complicated routes to solutions, but in general people often spend a lot more time writing than they should, and a bit less time thinking. Small insights along the lines of “what’s really going on here?” often get you a lot further into the problem than complicated substitutions or lengthy calculations at this level.
So if some of the arguments below feel slick, then I guess that’s intentional. When I received the paper and had a glance in my office, I was only looking for slick observations, partly because I didn’t have time for detailed analysis, but also because I was confident that there were slick observations to be made, and I felt it was just my task to find them.
Anyway, these are the questions: (note that the copyright to these is held by BMOS – reproduced here with permission.)
Question One
I immediately tried the example where the perpendicular sides are parallel to the coordinate axes, and found that I could generate all multiples of 3 in this way. This seemed a plausible candidate for an answer, so I started trying to find a proof. I observed that if you have lots of integer points on one of the equal sides, you have lots of integer points on the corresponding side, and these exactly match up, and then you also have lots of integer points on the hypotenuse too. In my first example, these exactly matched up too, so I became confident I was right.
Then I tried another example ( (0,0), (1,1), (-1,1) ) which has four integer points, and could easily be generalised to give any multiple of four as the number of integer points. But I was convinced that this matching up approach had to be the right thing, and so I continued, trusting that I’d see where this alternate option came in during the proof.
Good setup makes life easy. The apex of the isosceles triangle might as well be at the origin, and then your other vertices can be $(m,n), (n,-m)$ or similar. Since integral points are preserved under the rotation which takes one equal side to the other, the example I had does generalise, but we really need to start enumerating. The number of integer points on the side from (0,0) to (m,n) is G+1, where G is the greatest common divisor of m and n. But thinking about the hypotenuse as a vector (if you prefer, translate it so one vertex is at the origin), the number of integral points on this line segment must be $\mathrm{gcd}(m+n,m-n) +1$.
To me, this felt highly promising, because this is a classic trope in olympiad problem-setting. Even without this experience, we can check that this gcd is always either G or 2G: it equals G when m and n have different parities (ie one odd, one even), and 2G when m and n are both odd. When both are even, either case can occur, and what decides it is whether m/G and n/G are both odd.
So we’re done. Being careful not to double-count the vertices, we have 3G integral points when this gcd is G, and 4G integral points when it is 2G, which exactly fits the pair of examples I had. But remember that we already had a pair of constructions, so (after adjusting the hypothesis to allow the second example!) all we had to prove was that the number of integral points is divisible by at least one of 3 and 4. And we’ve just done that. Counting how many integers less than 2017 have this property can be done easily, checking that we don’t double-count multiples of 12, and that we don’t accidentally include or fail to include zero as appropriate, which would be an annoying way to perhaps lose a mark after totally finishing the real content of the problem.
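None of this needs code, of course, but a quick Python sketch makes a nice sanity check of the dichotomy. (The precise condition on m/G and n/G in the both-even case is my formulation rather than anything stated in the problem.)

```python
from math import gcd

def boundary_points(m, n):
    # Lattice points on the boundary of the triangle with vertices
    # (0,0), (m,n) and (n,-m): summing gcd(|dx|,|dy|) over the three
    # edges counts every boundary point, each vertex exactly once.
    return gcd(m, n) + gcd(abs(m - n), m + n) + gcd(n, m)

for m in range(1, 40):
    for n in range(1, 40):
        g = gcd(m, n)
        # 4G when m/G and n/G are both odd, otherwise 3G
        if (m // g) % 2 == 1 and (n // g) % 2 == 1:
            assert boundary_points(m, n) == 4 * g
        else:
            assert boundary_points(m, n) == 3 * g
```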
Question Two
(Keen observers will note that this problem first appeared on the shortlist for IMO 2006 in Slovenia.)
As n increases, obviously $\frac{1}{n}$ decreases, but the bracketed expression increases. Which of these effects is more substantial? Well $\lfloor \frac{n}{k}\rfloor$ is the number of multiples of k which are at most n, and so as a function of n, this increases precisely when n is a multiple of k. So, we expect the bracketed expression to increase substantially when n has lots of factors, and to increase less substantially when n has few factors. An extreme case of the former might be when n is a large factorial, and certainly the extreme case of the latter is n a prime.
It felt easier to test a calculation on the prime case first, even though this was more likely to lead to an answer for b). When n moves from p-1 to p, the bracketed expression goes up by exactly two, as the first floor increases, and there is a new final term. So, we start with a fraction, and then increase the numerator by two and the denominator by one. Provided the fraction was initially greater than two, it stays greater than two, but decreases. This is the case here (for reasons we’ll come back to shortly), and so we’ve done part b). The answer is yes.
Then I tried to do the calculation when n was a large factorial, and I found I really needed to know the approximate value of the bracketed expression, at least for this value of n. And I do know that when n is large, the bracketed expression should be approximately $n\log n$, with a further correction of size at most n to account for the floor functions, but I wasn’t sure whether I was allowed to know that.
But surely you don’t need to engage with exactly how large the correction due to the floors is in various cases? This seemed potentially interesting (we are after all just counting factors), but also way too complicated. An even softer version of what I’ve just said is that the harmonic sum (the sum of the first n reciprocals) diverges, and so the bracketed expression, which is roughly n times the harmonic sum, grows faster than n. So in fact we have all the ingredients we need. The bracketed expression grows faster than n (you might want to formalise this by dividing by n before analysing the floors), and so the $a_n$s get arbitrarily large. Therefore, there must certainly be an infinite number of points of increase.
Remark: a few people have commented to me that part a) can be done easily by treating the case $n=2^k-1$, possibly after some combinatorial rewriting of the bracketed expression. I agree that this works fine. Possibly this is one of the best examples of the difference between doing a problem leisurely as a postgraduate, and actually under exam pressure as a teenager. Thinking about the softest possible properties of a sequence (roughly how quickly does it grow, in this case) is a natural first thing to do in all circumstances, especially if you are both lazy and used to talking about asymptotics, and certainly if you don’t have paper.
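Since we’re being lazy anyway, here’s a quick Python sketch of the sequence (assuming, as the discussion suggests, that $a_n=\frac{1}{n}\left(\lfloor n/1\rfloor+\dots+\lfloor n/n\rfloor\right)$), confirming both behaviours on small cases.

```python
def a(n):
    # a_n = (1/n) * (floor(n/1) + floor(n/2) + ... + floor(n/n))
    return sum(n // k for k in range(1, n + 1)) / n

# At a large enough prime p the numerator rises by exactly 2 while the
# denominator rises by 1, so a_p < a_{p-1}: points of decrease.
assert a(11) < a(10) and a(13) < a(12) and a(17) < a(16)

# The bracketed sum grows like n log n, so a_n tends to infinity, and
# hence there must also be infinitely many points of increase.
assert a(12) > a(11)
assert a(100) < a(1000) < a(10000)
```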
Question 3
I only drew a very rough diagram for this question, and it caused no problems whatsoever, because there aren’t really that many points, and it’s fairly easy to remember what their properties are. Even in the most crude diagram, we see R and S lie on AC and AD respectively, and so the conclusion about parallel lines is really about similarity of triangles ARS and ACD. This will follow either from some equal angles, or by comparing ratios of lengths.
Since angle bisectors by definition involve equal angles, the first attack point seems promising. But actually the ratios-of-lengths approach is better, provided we know the angle bisector theorem, which is literally about ratios of lengths in the angle bisector diagram. Indeed
$\frac{AR}{RC}=\frac{AQ}{CQ},\quad \frac{AS}{SD}=\frac{AP}{PD},$ (1)
and so it only remains to show that these quantities are in fact all equal. Note that there’s some anti-symmetry here – none of these expressions use B at all! We could for example note that AP/PD = BP/PC, from which
$\left(\frac{AS}{SD}\right)^2 = \frac{AP.BP}{PC.PD},$ (2)
and correspondingly for R and Q, and work with symmetric expressions. I was pretty sure that there was a fairly well-known result that in a cyclic quadrilateral, where P is the intersection of the diagonals
$\frac{AP}{PC} = \frac{AD.AB}{DC.BC},$ (3)
(I was initially wondering whether there was a square on the LHS, but an example diagram makes the given expression look correct.)
There will be a corresponding result for Q, and then we would be almost done by decomposing (2) slightly differently, and once we’d proved (3) of course. But doing this will turn out to be much longer than necessary. The important message from (3) is that in a very simple diagram (only five points), we have a result which is true, but which is not just similar triangles. There are two pairs of similar triangles in the diagram, but they aren’t in the right places to get this result. What you do have is some pairs of triangles with one pair of equal angles, and one pair of complementary angles (that is, $\theta$ in one, and $180-\theta$ in the other). This is a glaring invitation to use the sine rule, since the sines of complementary angles are equal.
But, this is also the easiest way to prove the angle bisector theorem. So maybe we should just try this approach directly on the original ratio-of-lengths statement that we decided at (1) was enough, namely $\frac{AQ}{CQ}=\frac{AP}{PD}$. And actually it drops out rapidly. Using natural but informal language referencing my diagram
$\frac{AP}{PD} = \frac{\sin(\mathrm{Green})}{\sin(\mathrm{Pink})},\quad\text{and}\quad \frac{AQ}{CQ}= \frac{\sin(\mathrm{Green})}{\sin(180-\mathrm{Pink})}$
and we are done. But whatever your motivation for moving to the sine rule, this is crucial. Unless you construct quite a few extra cyclic quadrilaterals, doing this with similar triangles and circle theorems alone is going to be challenging.
Remark: If you haven’t seen the angle bisector theorem before, that’s fine. Both equalities in (1) are a direct statement of the theorem. It’s not an intimidating statement, and it would be a good exercise to prove either of these statements in (1). Some of the methods just described will be useful here too!
Question 4
You might as well start by playing around with methodical strategies. My first try involved testing 000, 111, … , 999. After this, you know which integers appear as digits. Note that at this stage, it’s not the same as the original game with only three digits, because we can test using digits which we know are wrong, so that answers are less ambiguous. If the three digits are different, we can identify the first digit in two tests, and then the second in a further test, and so identify the third by elimination. If only two integers appear as digits, we identify each digit separately, again in three tests overall. If only one integer appears, then we are already done. So this is thirteen tests, and I was fairly convinced that this wasn’t optimal, partly because it felt like testing 999 was a waste. But even with lots of case tries I couldn’t do better. So I figured I’d try to prove some bound, and see where I got.
A crucial observation is the following: when you run a test, the outcome eliminates some possibilities. One of the outcomes eliminates at least half the codes, and the other outcome eliminates at most half the codes. So, imagining you get unlucky every time, after k tests, you might have at least $1000\times 2^{-k}$ possible codes remaining. Since $2^9<1000$, we know that we need at least 10 tests.

For this bound to be tight, each test really does need to split the options roughly in two. But this certainly isn’t the case for the first test, which splits the options into 729 (no digit agreements) and 271 (at least one agreement). Suppose the first test reduces it to 729 options; then since $2^9<729$ too, by the same argument as above we still need 10 further tests. We now know we need at least 11 tests, and so the original guess of 13 is starting to come back into play.

We now have to make a meta-mathematical decision about what to do next. We could look at how many options might be left after the second test, which has quite a large number of cases (depending on how much overlap there is between the first test number and the second test number). It’s probably going to be less than 512 in at least one of the cases, so this won’t get us to a bound of 12 unless we then consider the third test too. This feels like a poor route to take for now, as the tree of options has branching at rate 3 (or 4 if you count obviously silly things) per turn, so gets unwieldy quickly. Another thought is that this power of two argument is strong when the set of remaining options is small, so it’s easier for a test to split the field roughly in two.
Now go back to our proposed original strategy. When does the strategy work faster than planned? It works faster than planned if we find all the digits early (eg if they are all 6 or less). So the worst case scenario is if we find the correct set of digits fairly late. But the fact that we were choosing numbers of the form aaa is irrelevant, as the digits are independent (consider adding 3 to the middle digit modulo 10 at all times in any strategy – it still works!).
This is key. For $k\le 9$, after k tests, it is possible that we fail every test, which means that at least $(10-k)$ options remain for each digit, and so at least $(10-k)^3$ options in total. [(*) Note that it might actually be even worse if eg we get a ‘close’ on exactly one test, but we are aiming for a lower bound, so at this stage considering an outcome sequence which is tractable is more important than getting the absolute worst case outcome sequence if it’s more complicated.] Bearing in mind that I’d already tried finishing from the case of reduction to three possibilities, and I’d tried hard to sneak through in one fewer test, and failed, it seemed sensible to try k=7.
After 7 tests, we have at least 27 options remaining, which by the powers-of-two argument requires at least 5 further tests to separate. So 12 in total, which is annoying, because now I need to decide whether this is really the answer and come up with a better construction, or enhance the proof.
Clearly though, before aiming for either of these things, I should actually try some other values of k, since this takes basically no time at all. And k=6 leaves 64 options, from which the power of two argument is tight; and k=5 leaves 125, which is less tight. So attacking k=6 is clearly best. We just need to check that the 7th move can’t split the options exactly into 32 + 32. Note that in the example, where we try previously unseen digits in every position, we split into 27 + 37 [think about (*) again now!]. Obviously, if we have more than four options left for any digit, we are done as then we have strictly more than 4x4x4=64 options. So it remains to check the counts if we try previously unseen digits in zero, one or two positions. Zero is silly (gives no information), and one and two can be calculated, and don’t give 32 + 32.
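The sweep over k really does take no time at all, and is easy to automate. This is just the arithmetic of the argument above, assuming a pool of m remaining codes needs at least $\lceil\log_2 m\rceil$ further yes/no tests:

```python
from math import ceil, log2

# If the first k tests all fail, at least (10-k)^3 codes remain, and a
# pool of m codes needs at least ceil(log2(m)) further binary tests.
bounds = {k: k + ceil(log2((10 - k) ** 3)) for k in range(10)}

# The bound peaks at 12, attained for every k from 3 to 7.
assert max(bounds.values()) == 12
assert [k for k in bounds if bounds[k] == 12] == [3, 4, 5, 6, 7]
```

The refinement of the k=6 case (showing the 7th test can’t split 64 options as 32 + 32) is then what pushes this to 13, matching the original construction.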
So this is a slightly fiddly end to the solution, and relies upon having good control over what you’re trying to do, and what tools you currently have. The trick to solving this is resisting calculations and case divisions that are very complicated. In the argument I’ve proposed, the only real case division is right at the end, by which point we are just doing an enumeration in a handful of cases, which is not really that bad.
# BMO1 2016 – the non-geometry
Here’s a link to yesterday’s BMO1 paper, and the video solutions for all the problems. I gave the video solution to the geometric Q5, and discuss aspects of this at some length in the previous post.
In these videos, for obvious educational reasons, there’s a requirement to avoid referencing theory and ideas that aren’t standard on the school curriculum or relatively obvious directly from first principles. Here, I’ve written down some of my own thoughts on the other problems in a way that might add further value for those students who already have some experience of olympiads and these types of problems. In particular, on problems you can do, it’s worth asking what you can learn from how you did them that might be applicable generally, and obviously for some of the harder problems, it’s worth knowing about solutions that do use a little bit of theory. Anyway, I hope it’s of interest to someone.
Obviously we aren’t going to write out the whole list, but there’s a trade-off in time between coming up with neat ideas involving symmetry, and just listing and counting things. Any idea is going to formalise somehow the intuitive statement ‘roughly half the digits are odd’. The neat ideas involve formalising the statement ‘if we add leading zeros, then roughly half the digits are odd’. The level of roughness required is less in the first statement than the second statement.
Then there’s the trade-off. Trying to come up with the perfect general statement that is useful and true might lead to something like the following:
‘If we write the numbers from 0000 to N, with leading zeros, and all digits of N+1 are even, then half the total digits, ie 2N of them, are odd.’
This is false, and maybe the first three such things you try along these lines are also false. What you really want to do is control the numbers from 0000 to 1999, for which an argument by matching is clear, and gives you 2000 x 4 / 2 = 4000 odd digits. You can exploit the symmetry by matching k with 1999-k, or do it directly first with the units, then with the tens and so on.
The rest (that is, 2000 to 2016) can be treated by listing and counting. Of course, the question wants an actual answer, so we should be wary of getting it wrong by plus or minus one in some step. A classic error of this kind is that the number of integers between 2000 and 2016 inclusive is 17, not 16. I don’t know why the memory is so vivid, but I recall being upset in Year 2 about erring on a problem of this kind involving fences and fenceposts.
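For the paranoid, a brute force (assuming the question asks how many odd digits appear when 1, 2, …, 2016 are written out, consistent with ‘roughly half the digits are odd’) guards against exactly this sort of off-by-one:

```python
# Assumed reading of the question: how many odd digits appear when the
# integers 1, 2, ..., 2016 are written out in full?
total = sum(1 for n in range(1, 2017) for d in str(n) if int(d) % 2 == 1)

# 0000..1999 with leading zeros: 8000 digits, exactly half odd, and
# leading zeros are even so stripping them changes nothing; 2000..2016
# contributes 15 more by direct listing.
assert total == 4000 + 15
```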
As with so many new types of equation, the recipe is to reduce to a type of equation you already know how to solve. Here, because {x} has a different form on different ranges, it makes sense to consider the three ranges
$y\in[0,1/25],\, y\in[1/25,1/8],\, y\in [1/8,\infty),$
as for each of these ranges, we can rewrite $5y\{8y\}\{25y\}$ in terms of standard functions without this bracket notation. On each range we can solve the corresponding equation. We then have to check that each solution does actually lie in the appropriate range, and in two cases it does, and in one case it doesn’t.
Adding an appropriately-chosen value to each side allows you to factorise the quadratics. This might be very useful. But is it an invitation to do number theory and look at coprime factors and so on, or is a softer approach more helpful?
The general idea is that the set of values taken by any quadratic sequence with integer coefficients and leading coefficient one looks from a distance like the set of squares, or the set $\{m(m+1), \,m\in\mathbb{N}\}$, which you might think of as ‘half-squares’ or ‘double triangle numbers’ as you wish. And by ‘from a distance’, I mean ‘up to an additive constant’. If you care about limiting behaviour, then of course this additive constant might well not matter, but if you care about all solutions, you probably do care. To see why this holds, note that
$n^2+2n = (n+1)^2 - 1,$
so indeed up to an additive constant, the quadratic on the LHS gives the squares, and similarly
$n^2 - 7n = (n-4)(n-3)-12,$
and so on. To solve the equation $n^2=m^2+6$, over the integers, one can factorise, but another approach is to argue that the distance between adjacent squares is much more than 6 in the majority of cases, which leaves only a handful of candidates for n and m to check.
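A brute force over a small range (which is all the gap argument leaves to check) confirms the toy example has no solutions:

```python
# Toy example: n^2 = m^2 + 6. For n >= 4, adjacent squares differ by
# 2n - 1 > 6, so only small cases need checking, and none of them work.
assert not any(n * n == m * m + 6
               for n in range(-10, 11) for m in range(-10, 11))
```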
The same applies to this question. Adding on 9 gives
$n^2-6n+9 = m^2 + m -1,$
which is of course the same as
$(n-3)^2 = m(m+1)-1.$
Now, since we know that adjacent squares and ‘half-squares’ are more than one apart in all but a couple of cases, we know why there should only be a small number of solutions. I would call a method of this kind square-sandwiching, but I don’t see much evidence from Google that this term is generally used, except on this blog.
Of course, we have to be formal in an actual solution, and the easiest way to achieve this is to sandwich $m(m+1)-1$ between adjacent squares $m^2$ and $(m+1)^2$, since it is very much clear-cut that the only squares which differ by one are zero and one itself.
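The problem statement itself isn’t reproduced above, but undoing the ‘adding on 9’ step suggests the original equation was $n^2-6n=m^2+m-10$; on that assumption, a brute force confirms the sandwiching leaves only a couple of solutions:

```python
# Reconstructed equation (my assumption, from undoing the '+9' step):
# n^2 - 6n = m^2 + m - 10 over the positive integers.
solutions = [(n, m) for n in range(1, 1000) for m in range(1, 1000)
             if n * n - 6 * n == m * m + m - 10]

# m(m+1) - 1 lies strictly between m^2 and (m+1)^2 for m >= 2, so it
# is a square only for m = 1, giving (n-3)^2 = 1, ie n = 2 or 4.
assert solutions == [(2, 1), (4, 1)]
```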
I really don’t have much to say about this. It’s not on the school curriculum so the official solutions are not allowed to say this, but you have to use that all integers except those which are 2 modulo 4 can be written as a difference of two squares. The easiest way to show this is by explicitly writing down the appropriate squares, treating the cases of odds and multiples of four separately.
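The explicit constructions alluded to are the standard ones; a short Python sketch records them, and checks by brute force that numbers which are 2 modulo 4 really have no representation:

```python
def as_difference_of_squares(n):
    # Standard constructions for n >= 1 (b = 0 is permitted):
    # odd n = ((n+1)/2)^2 - ((n-1)/2)^2, and 4k = (k+1)^2 - (k-1)^2.
    if n % 2 == 1:
        return ((n + 1) // 2, (n - 1) // 2)
    if n % 4 == 0:
        k = n // 4
        return (k + 1, abs(k - 1))
    return None  # (a-b)(a+b) has factors of equal parity, never 2 mod 4

for n in range(1, 200):
    rep = as_difference_of_squares(n)
    if n % 4 == 2:
        # confirm there genuinely is no representation
        assert all(a * a - b * b != n for a in range(n + 1) for b in range(a))
    else:
        a, b = rep
        assert a * a - b * b == n
```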
So you lose if after your turn the running total is 2 modulo 4. At this point, the combinatorics isn’t too hard, though as in Q1 one has to be mindful that making an odd number of small mistakes will lead to the wrong answer! As in all such problems, it’s best to try and give a concrete strategy for Naomi. And it’s best if there’s something inherent in the strategy which makes it clear that it’s actually possible to implement. (Eg, if you claim she should choose a particular number, ideally it’s obvious that number is available to choose.)
One strategy might be: Naomi starts by choosing a multiple of four. Then there are an even number of multiples of four remaining, and Naomi’s strategy is:
• whenever Tom chooses a multiple of four, Naomi may choose another multiple of four;
• whenever Tom chooses a number which is one (respectively three) modulo 4, Naomi may choose another which is three (respectively one) modulo 4.
Note that Naomi may always choose another multiple of four precisely because we’ve also specified the second condition. If sometimes Tom chooses an odd number and Naomi responds with a multiple of four out of an idle and illogical sense of caprice, then the first bullet point would not be true. One can avoid this problem by being more specific about exactly what the algorithm is, though there’s a danger that statements like ‘whenever Tom chooses k, Naomi should choose 100-k’ can introduce problems about avoiding the case k=50.
I started this at the train station in Balatonfured with no paper and so I decided to focus on the case of just m, m+1 and n, n+2. This wasn’t a good idea in my opinion because it was awkward but guessable, and so didn’t give too much insight into actual methods. Also, it didn’t feel like inducting on the size of the sequences in question was likely to be successful.
If we know about the Chinese Remainder Theorem, we should know that we definitely want to use it here in some form. Here are some clearly-written notes about CRT with exercises and hard problems which a) I think are good; b) cite this blog in the abstract. (I make no comment on correlation or causality between a) and b)…)
CRT is about solutions to sets of congruence equations modulo various bases. There are two aspects to this, and it feels to me like a theorem where students often remember one aspect, and forget the other one, in some order. Firstly, the theorem says that subject to conditions on the values modulo any non-coprime bases, there exist solutions. In many constructive problems, especially when the congruences are not explicit, this is useful enough by itself.
But secondly, the theorem tells us what all the solutions are. There are two stages to this: finding the smallest solution, then finding all the solutions. Three comments: 1) the second of these is easy – we just add on all multiples of the LCM of the bases; 2) we don’t need to find the smallest solution – any solution will do; 3) if you understand CRT, you might well comment that the previous two comments are essentially the same. Anyway, finding the smallest solution, or any solution is often hard. When you give students an exercise sheet on CRT, finding an integer which is 3 mod 5, 1 mod 7 and 12 mod 13 is the hard part. Even if you’re given the recipe for the algorithm, it’s the kind of computation that’s more appealing if you are an actual computer.
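That said, if you do happen to be an actual computer, the computation is short. Here’s a Python sketch (using pow(M, -1, m) for modular inverses, which needs Python 3.8+) solving that exact exercise:

```python
from math import gcd

def crt(congruences):
    # Smallest non-negative x with x = r (mod m) for each (r, m),
    # assuming pairwise coprime moduli.
    x, M = 0, 1
    for r, m in congruences:
        assert gcd(M, m) == 1
        # bump x by a multiple of M so that its residue mod m is fixed
        x += ((r - x) * pow(M, -1, m) % m) * M
        M *= m
    return x

x = crt([(3, 5), (1, 7), (12, 13)])
assert x == 428 and x % 5 == 3 and x % 7 == 1 and x % 13 == 12
# every solution is then 428 + 455k, since lcm(5, 7, 13) = 455
```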
Ok, so returning to this problem, the key step is to phrase everything in a way which makes the application of CRT easy. We observe that taking n=2m satisfies the statement – the only problem of course is that 2m is not odd. But CRT then tells us what all solutions for n are, and it’s clear that 2m is the smallest, so we only need to add on the LCM (which is odd) to obtain the smallest odd solution.
# BMO1 2016 Q5 – from areas to angles
For the second year in a row Question 5 has been a geometry problem; and for the second year in a row I presented the video solution; and for the second year in a row I received the question(s) while I was abroad. You can see the video solutions for all the questions here (for now). I had a think about Q5 and Q6 on the train back from a day out at Lake Balaton in Western Hungary, so in keeping with last year’s corresponding post, here are some photos from those sunnier days.
I didn’t enjoy this year’s geometry quite as much as last year’s, but I still want to say some things about it. At the time of writing, I don’t know who proposed Q5, but in contrast to most geometry problems, where you can see how the question might have emerged by tweaking a standard configuration, I don’t have a good intuition for what’s really going on here. I can, however, at least offer some insight into why the ‘official’ solution I give on the video has the form that it does.
The configuration given is very classical, with only five points, and lots of equal angles. The target statement is also about angles, indeed we have to show that a particular angle is a right-angle. So we might suspect that the model approach might well involve showing some other tangency relation, where one of the lines AC and BC is a radius and the other a tangent to a relevant circle. I think it’s worth emphasising that throughout mathematics, the method of solving a problem is likely to involve similar objects to the statement of the problem itself. And especially so in competition problems – it seemed entirely reasonable that the setter might have found a configuration with two corresponding tangency relations and constructed a problem by essentially only telling us the details of one of the relations.
There’s the temptation to draw lots of extra points or lots of extra lines to try and fit the given configuration into a larger configuration with more symmetry, or more suggestive similarity [1]. But, at least for my taste, you can often make a lot of progress just by thinking about what properties you want the extra lines and points to have, rather than actually drawing them. Be that as it may, for this question, I couldn’t initially find anything suitable along these lines [2]. So we have to think about the condition.
But then the condition we’ve been given involves areas, which feels at least two steps away from giving us lots of information about angles. It doesn’t feel likely that we are going to be able to read off some tangency conditions immediately from the area equality we’ve been given. So before thinking about the condition too carefully, it makes sense to return to the configuration and think in very loose terms about how we might prove the result.
How do we actually prove that an angle is a right-angle? (*) I was trying to find some tangency condition, but it’s also obviously the angle subtended by a diameter of a circle. You could aim for the Pythagoras relation on a triangle which includes the proposed right-angle, or possibly it might be easier to know one angle and two side-lengths in such a triangle, and conclude with some light trigonometry? We’ve been given a condition in terms of areas, so perhaps we can use the fact that the area of a right-angled triangle is half the product of the shorter side-lengths? Getting more exotic, if the configuration is suited to description via vectors, then a dot product might be useful, but probably this configuration isn’t.
The conclusion should be that it’s not obvious what sort of geometry we’re going to need to do to solve the problem. Maybe everything will come out from similar triangles with enough imagination, but maybe it won’t. So that’s why in the video, I split the analysis into an analysis of the configuration itself, and then an analysis of the area condition. What really happens is that we play with the area condition until we get literally anything that looks at all like one of the approaches discussed in paragraph (*). To increase our chances, we need to know as much about the configuration as possible, so any deductions from the areas are strong.
The configuration doesn’t have many points, so there’s not much ambiguity about what we could do. There are two tangents to the circle. We treat APC with equal tangents and the alternate segment theorem to show the triangle is isosceles and that the base angles are equal to the angle at B in ABC. Then point Q is ideally defined in terms of ABC to use power of a point, and add some further equal angles into the diagram. (Though it turns out we don’t need the extra equal angle except through power of a point.)
So we have some equal angles, and also some length relations. One of the length relations is straightforward (AP=CP) and the other less so (power of a point $CQ^2 = AQ\cdot BQ$). The really key observation is that the angle-chasing has identified
$\angle PAQ = 180^\circ - \hat{C},$
which gives us an alternative goal: maybe it will be easier to show that PAQ is a right-angle.
Anyway, that pretty much drinks the configuration dry, and we have to use the area condition. I want to emphasise how crucial this phase is for this type of geometry problem. Thinking about how to prove the goal, and getting a flavour for the type of relation that comes out of the configuration is great, but now we need to watch like a hawk when we play with the area condition for relations which look similar to what we have, and where we might be going, as that’s very likely to be the key to the problem.
We remarked earlier that we’re aiming for angles, and are given areas. A natural middle ground is lengths. All the more so since the configuration doesn’t have many points, and so several of the triangles listed as having the same area also have the same or similar bases. You might have noticed that ABC and BCQ share height above line AQ, from which we deduce AB=BQ. It’s crucial then to identify that this is useful because it supports the power of a point result from the configuration itself. It’s also crucial to identify that we are doing a good job of relating lots of lengths in the diagram. We have two pairs of equal lengths, and (through Power of a Point) a third length which differs from one of them by a factor of $\sqrt{2}$.
If we make that meta-mathematical step, we are almost home. We have a relation between a triple of lengths, and between a pair of lengths. These segments make up the perimeter of triangle APQ. So if we can relate one set of lengths and the other set of lengths, then we’ll know the ratios of the side lengths of APQ. And this is excellent, since much earlier we proposed Pythagoras as a possible method for establishing that an angle is a right-angle, and this is exactly the information we’d need for that approach.
Can we relate the two sets of lengths? We might guess yes, that with a different comparison of triangles areas (since we haven’t yet used the area of APC) we can find a further relation. Indeed, comparing APC and APQ gives CQ = 2PC by an identical argument about heights above lines.
Now we know all the ratios, it really is just a quick calculation…
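Since the write-up stops here, let me sketch that quick calculation, with the caveat that I am assuming the diagram places C between P and Q, and B between A and Q. Writing PC = t, the relations collected above are AP = PC = t, CQ = 2PC = 2t (so PQ = 3t), AB = BQ, and $CQ^2 = AQ\cdot BQ$:

```latex
\begin{align*}
AQ &= AB + BQ = 2BQ \quad\Rightarrow\quad CQ^2 = AQ\cdot BQ = \tfrac{1}{2}AQ^2,\\
AQ^2 &= 2CQ^2 = 8t^2,\\
AP^2 + AQ^2 &= t^2 + 8t^2 = 9t^2 = PQ^2,
\end{align*}
```

so by the converse of Pythagoras, $\angle PAQ$ is a right angle, which was exactly the alternative goal identified earlier.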
[1] – I discussed the notion of adding extra points when the scripts for the recording were being shared around. It was mentioned that for some people, the requirement to add extra points (or whatever) marks a hard division between ‘problems they can do’ and ‘problem they can’t do’. While I didn’t necessarily follow this practice while I was a contestant myself, these days the first thing I do when I see any angles or an angle condition in a problem is to think about whether there’s a simple way to alter the configuration so the condition is more natural. Obviously this doesn’t always work (see [2]), but it’s on my list of ‘things to try during initial thinking’, and certainly comes a long way before approaches like ‘place in a Cartesian coordinate system’.
[2] – Well, I could actually find something suitable, but I couldn’t initially turn it into a solution. The most natural thing is to reflect P in AC to get P’, and Q in BC to get Q’. The area conditions [AP’C]=[ABC]=[BCQ’] continue to hold, but now P’ and B are on the same side of AC, hence P’B || AC. Similarly AQ’ || BC. I see no reason not to carry across the equal length deductions from the original diagram, and we need to note that angles P’AC, ACP’, CBA are equal and angles Q’AB and BAC are equal. In the new diagram, there are many things it would suffice to prove, including that CP’Q’ are collinear. Note that unless you draw the diagram deliberately badly, it’s especially easy accidentally to assume that CP’Q’ are collinear while playing around, so I wasted quite a bit of time. Later, while writing up this post, I could finish it [3].
[3] – In the double-reflected diagram, BCQ’ is similar to P’BA, and since Q’C=2P’C = P’A, and Q’B=AB, you can even deduce that the scale factor is $\sqrt{2}$. There now seemed to be two options:
• focus on AP’BC, where we now know three of the lengths, and three of the angles are equal, so we can solve for the measure of this angle. I had to use a level of trigonometry rather more exotic than the Pythagoras of the original solution, so this doesn’t really serve the purpose.
• Since BCQ’ is similar to P’BA and ABQ’ similar to CP’A, we actually have Q’BCA similar to AP’BC. In particular, $\angle CBP' = \angle ACB$, and thus both are 90. Note that for this, we only needed the angle deductions in the original configuration, and the pair of equal lengths.
• There are other ways to hack this final stage, including showing that BP’ meets AQ’ at the latter’s midpoint, to give CP’Q’ collinear.
# Lagrange multipliers Part One: A much simpler setting
I am currently in northern Hungary for our annual winter school for some of the strongest young school-aged mathematicians in the UK and Hungary. We’ve had a mixture of lectures, problem-solving sessions and the chance to enjoy a more authentic version of winter than is currently on offer in balmy Oxford.
One of my favourite aspects of this event is the chance it affords for the students and the staff to see a slightly different mathematical culture. It goes without saying that Hungary has a deep tradition in mathematics, and the roots start at school. The British students observe fairly rapidly that their counterparts have a much richer diet of geometry, and methods in combinatorics at school, which is certainly an excellent grounding for use in maths competitions. By contrast, our familiarity with calculus is substantially more developed – by the time students who study further maths leave school, they can differentiate almost anything.
But the prevailing attitude in olympiad circles is that calculus is unrigorous and hence an illegal method. The more developed summary is that calculus methods are hard, or at least technical. This is true, and no-one wants to spoil a measured development of analysis from first principles, but since some of the British students asked, it seemed worth giving a short exposition of why calculus can be made rigorous. They are mainly interested in the multivariate case, and the underlying problem is that the approach suggested by the curriculum doesn’t generalise well at all to the multivariate setting. Because it’s much easier to imagine functions of one variable, we’ll develop the machinery in this single-variable setting first, in this post.
Finding minima – the A-level approach
Whether in an applied or an abstract setting, the main use of calculus at school is to find where functions attain their maximum or minimum. The method can be summarised quickly: differentiate, find where the derivative is zero, and check the second-derivative at that value to determine that the stationary point has the form we want.
Finding maxima and finding minima are a symmetric problem, so throughout, we talk about finding minima. It’s instructive to think of some functions where the approach outlined above fails.
In the top left, there clearly is a minimum, but the function is not differentiable at the relevant point. We can probably assert this without defining differentiability formally: there isn’t a well-defined local tangent at the minimum, so we can’t specify the gradient of the tangent. In the top right, there’s a jump, so depending on the value the function takes at the jump point, maybe there is a minimum. But in either case, the derivative doesn’t exist at the jump point, so our calculus approach will fail.
In the middle left, calculus will tell us that the stationary point in the middle is a ‘minimum’, but it isn’t the minimal value taken by the function. Indeed the function doesn’t have a minimum, because it seems to go off to $-\infty$ in both directions. In the middle right, the asymptote provides a lower bound on the values taken by the function, but this bound is never actually achieved. Indeed, we wouldn’t make any progress by calculus, since there are no stationary points.
At the bottom, the functions are only defined on some interval. In both cases, the minimal value is attained at one of the endpoints of the interval, even though the second function has a point which calculus would identify as a minimum.
The underlying problem in any calculus argument is that the derivative, if it exists, only tells us about the local behaviour of the function. At best, it tells us that a point is a local minimum. This is at least a necessary condition to be a global minimum, which is what we actually care about. But this is a change of emphasis from the A-level approach, for which having zero derivative and appropriately-signed second-derivative is treated as a sufficient condition to be a global minimum.
Fortunately, the A-level approach is actually valid. It can be shown that if a function is differentiable everywhere, and it only has one stationary point, where the second-derivative exists and is positive, then this is in fact the global minimum. The first problem is that this is really quite challenging to show – since in general the derivative might not be continuous, although it might have many of the useful properties of a continuous function. Showing all of this really does require setting everything up carefully with proper definitions. The second problem is that this approach does not generalise well to multivariate settings.
Finding minima – an alternative recipe
What we do is narrow down the properties which the global minimum must satisfy. Here are some options:
0) There is no global minimum. For example, the functions pictured in the middle row satisfy this.
Otherwise, say the global minimum is attained at x. It doesn’t matter if it is attained at several points. At least one of the following options must apply to each such x.
1) $f'(x)=0$,
2) $f'(x)$ is not defined,
3) x lies on the boundary of the domain where f is defined.
We’ll come back to why this is true. But with this decomposition, the key to identifying a global minimum via calculus is to eliminate options 0), 2) and 3). Hopefully we can eliminate 2) immediately. If we know we can differentiate our function everywhere, then 2) couldn’t possibly hold for any value of x. Sometimes we will be thinking about functions defined everywhere, in which case 3) won’t matter. Even if our function is defined on some interval, this only means we have to check two extra values, and this isn’t such hard work.
It’s worth emphasising why if x is a local minimum not on the boundary and f'(x) exists, then f'(x)=0. We show that if $f'(x)\ne 0$, then x can’t be a local minimum. Suppose f'(x)>0. Then both the formal definition of derivative, and the geometric interpretation in terms of the gradient of a tangent which locally approximates the function, give that, when h is small,
$f(x-h) = f(x)-h f'(x) +o(h),$
where this ‘little o’ notation indicates that for small enough h, the final term is much smaller than the second term. So for small enough h, $f(x-h)<f(x)$, and so we don’t have a local minimum.
The key is eliminating option 0). Once we know that there definitely is a global minimum, we are in a good position to identify it using calculus and a bit of quick checking. But how would we eliminate option 0)?
Existence of global minima
This is the point where I’m in greatest danger of spoiling first-year undergraduate course content, so I’ll be careful.
As we saw in the middle row, when functions are defined on the whole real line, there’s the danger that they can diverge to $\pm \infty$, or approach some bounding value while never actually attaining it. So life gets much easier if you work with functions defined on a closed interval. We also saw what can go wrong if there are jumps, so we will assume the function is continuous, meaning that it has no jumps, or that as y gets close to x, f(y) gets close to f(x). If a function can be differentiated everywhere, then it is continuous, because we’ve seen that once a function has a jump (see caveat 2) then it certainly isn’t possible to define the derivative at the jump point.
It’s a true result that a continuous function defined on a closed interval is bounded and attains its bounds. The main idea is that if the function took arbitrarily large values, then because the interval is finite it would also take arbitrarily large values near some point, which would make it hard to be continuous at that point. You can apply a similar argument to show that the function can’t approach a threshold without attaining it somewhere. So how do you prove that this point exists? Well, you probably need to set up some formal definitions of all the properties under discussion, and manipulate them carefully. Which is fine. If you’re still at school, then you can either enjoy thinking about this yourself, or wait until analysis courses at university.
My personal opinion is that this is almost as intuitive as the assertion that if a continuous function takes both positive and negative values, then it has a zero somewhere in between. I feel if you’re happy citing the latter, then you can also cite the behaviour of continuous functions on closed intervals.
Caveat 2) It’s not true to say that if a function doesn’t have jumps then it is continuous. There are other kinds of discontinuity, but in most contexts these are worse than having a jump, so it’s not disastrous in most circumstances to have this as your prime model of non-continuity.
Worked example
Question 1 of this year’s BMO2 was a geometric inequality. I’ve chosen to look at this partly because it’s the first question I’ve set to make it onto BMO, but mainly because it’s quite hard to find olympiad problems which come down to inequalities in a single variable.
Anyway, there are many ways to parameterise and reparameterise the problem, but one method reduces, after some sensible application of Pythagoras, to showing
$f(x)=x+ \frac{1}{4x} + \frac{1}{4x+\frac{1}{x}+4}\ge \frac{9}{8},$ (*)
for all positive x.
There are simpler ways to address this than calculus, especially if you establish or guess that the equality case is x=1/2. Adding one to both sides is probably a useful start.
But if you did want to use calculus, you should argue as follows. (*) is certainly true when $x\ge \frac{9}{8}$ and also when $x\le \frac{2}{9}$. The function f(x) is continuous, and so on the interval $[\frac{2}{9},\frac{9}{8}]$ it has a minimum somewhere. We can differentiate, and fortunately the derivative factorises (this might be a clue that there’s probably a better method…) as
$(1-\frac{1}{4x^2}) \left[ 1 - \frac{4}{(4x+\frac{1}{x}+4)^2} \right].$
If x is positive, the second bracket can’t be zero, so the only stationary point is found at x=1/2. We can easily check that $f(\frac12)=\frac98$, and we have already seen that $f(\frac29),f(\frac98)>\frac98$. We know f attains its minimum on $[\frac29,\frac98]$, and so this minimal value must be $\frac98$, as we want.
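This isn’t part of the olympiad argument, but if you don’t trust your differentiation, the claim in (*) is the sort of thing a computer will happily sanity-check numerically:

```python
def f(x):
    # f(x) = x + 1/(4x) + 1/(4x + 1/x + 4), exactly as in (*)
    return x + 1 / (4 * x) + 1 / (4 * x + 1 / x + 4)

# equality case at x = 1/2
assert abs(f(0.5) - 9 / 8) < 1e-12

# f(x) >= 9/8 across a fine sample of the interval [2/9, 9/8]
xs = [2 / 9 + i * (9 / 8 - 2 / 9) / 1000 for i in range(1001)]
assert all(f(x) >= 9 / 8 - 1e-12 for x in xs)
```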
Overall, the moral of this approach is that even if we know how to turn the handle both for the calculation, and for the justification, it probably would be easier to use a softer approach if possible.
Next stage
For the next stage, we assess how much of this carries across to the multivariate setting, including Lagrange multipliers to find minima of a function subject to a constraint.
# Pencils, Simson’s Line and BMO1 2015 Q5
When on olympiad duty, I normally allow myself to be drawn away from Euclidean geometry in favour of the other areas, which I feel are closer to home in terms of the type of structures and arguments I am required to deal with in research. For various reasons, I nonetheless ended up choosing to present the solution to the harder geometry on the first round of this year’s British Mathematical Olympiad a couple of weeks ago. The paper was taken a week ago, so I’m now allowed to write about it, and Oxford term finished yesterday so I now have time to write up the notes I made about it during a quick trip to Spain. Here’s three gratuitous photos to remind us all what a blue sky looks like:
And here’s the statement of the problem:
and you can find the video of the solution I presented here (at least for now). Thanks to the AV unit at the University of Bath, not just as a formality, but because they are excellent – I had no right to end up looking even remotely polished.
As so often with geometry problems, the key here is to find an entry point into the problem. There are a lot of points and a lot of information (and we could add extra points if we wanted to), but we don’t expect that we’ll need to use absolutely all the information simultaneously. The main reason I’m going to the trouble to write this blog post is that I found an unusually large number of such entry points for this problem. I think finding the entry points is what students usually find hardest, and while I don’t have a definitive way to teach people how to find these, perhaps seeing a few, with a bit of reverse reconstruction of my thought process might be helpful or interesting?
If you haven’t looked at the problem before, you will lose this chance if you read what follows. Nonetheless, some of you might want to anyway, and some of you might have looked at the problem but forgotten it, or not have a diagram to hand, so here’s my whiteboard diagram:
Splitting into stages
A natural first question is: “how am I supposed to show that four points are collinear?” Typically it’s interesting enough to show that three points are collinear. So maybe our strategy will be to pick three of the points, show they are collinear, then show some other three points are collinear, then patch together. In my ‘official solution’ I made the visual observation that it looks like the four points P,Q,R,S are not just collinear, but lie on a line parallel to FE. This is good, because it suggests an alternative, namely split the points P,Q,R,S into three segments, and show each of them is parallel to FE. We can reduce our argument by 1/3 since PQ and RS are symmetric in terms of the statement.
So in our reduced diagram for RS, we need an entry point. It doesn’t look like A is important at all. What can we say about the remaining seven points? Well, it looks like we’ve got a pencil of three lines through C, and two triangles each constructed by taking one point on each of these lines. Furthermore, two pairs of sides of the triangles are parallel. Is this enough to prove that the third side is parallel?
Well, yes it is. I claim that this is the natural way to think about this section of the diagram. The reason I avoided it in the solution is that it requires a few more lines of written deduction than we might have expected. The key point is that saying BF parallel to DR is the same as saying BFC and DRC are similar. And the same applies to BE parallel to DS being the same as saying BEC similar to DSC.
We now have control of a lot of angles in the diagram, and by being careful we could do an angle chase to show that <FEB = <RSD or similar, but this is annoying to write down on a whiteboard. We also know that similarity gives rise to constant ratios of lengths. And this is (at least in terms of total equation length) probably the easiest way to proceed. FC/RC = BC/DC by the first similarity relation, and EC/SC=BC/DC by the second similarity relation, so FC/RC = EC/SC and we can reverse the argument to conclude FE || RS.
So, while I’m happy with the cyclic quadrilaterals argument in the video (and it works in an almost identical fashion for the middle section QR too), spotting this pencil of lines configuration was key. Why did I spot it? I mean, once A is eliminated, there were only the seven points in the pencil left, but we had to (actively) make the observation that it was a pencil. Well, this is where it becomes hard to say. Perhaps it was the fact that I was working out of a tiny notebook so felt inclined to think about it abstractly before writing down any angle relations (obviously there are lots)? Perhaps it was because I just knew that pencils of lines and sets of parallel lines go together nicely?
While I have said I am not a geometry expert, I am aware of Desargues’ Theorem, of which this analysis is a special case, or at least of the ingredients. This is not an exercise in showing off that I know heavy projective machinery to throw at non-technical problems, but rather that knowing the ingredients of a theorem is enough to remind you that there are relations to be found, which is certainly a meta-analytic property that exists much more widely in mathematics and beyond.
Direct enlargement
If I’d drawn my board diagram even more carefully, it might have looked like FE was in fact the enlargement of the line P,Q,R,S from D by a factor of 2. This is the sort of thing that might have been just an accidental consequence of the diagram, but it’s still worth a try. In particular, we only really need four points in our reduced diagram here, eg D,E,F,R, though we keep in mind that we may need to recall some property of the line FR, which is really the line FC.
Let’s define R’ to be the enlargement of R from D by a factor 2. That is, we look along the ray DR, and place the point R’ twice as far from D as R. We want to show that R’ lies on FE. This would mean that FR is the perpendicular bisector of DR’ in the triangle FDR’, and would further require that FR is the angle bisector of <DFR’, which we note is <DFE. At this stage our diagram is small enough that I can literally draw it convincingly on a post-it note, even including P and P’ for good measure:
So all we have to do is check that FC (which is the same as FR) is actually the angle bisector of DFE, and for this we should go back to a more classical diagram (maybe without P,Q,R,S) and argue by angle-chasing. Then, we can reverse the argument described in the previous paragraph. Q also fits this analysis, but P and S are a little different, since these lie on the external angle bisectors. This isn’t qualitatively harder to deal with, but it’s worth emphasising that this might be harder to see!
I’ve described coming at this approach from the observation of the enlargement with a factor of 2. But it’s plausible that one might have seen the original diagram and said “R is the foot of the perpendicular from D onto the angle bisector of DFE”, and then come up with everything useful from there. I’m not claiming that this observation is either especially natural nor especially difficult, but it’s the right way to think about point R for this argument.
Simson Lines
The result about the Simson Line says that whenever P is a point on the circumcircle of a triangle ABC, the feet of the perpendiculars from P to the sides of the triangle (some of which will need to be extended) are collinear. This line is called the Simson line. The converse is also true, and it is little extra effort to show that the reflections of P in the sides are collinear (ie they lie on the Simson line enlarged from P by a factor of 2), and that this line passes through the orthocentre H of ABC.
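None of this is needed for the problem, but both claims are easy to check numerically with complex numbers, together with the standard fact that a triangle inscribed in the unit circle has orthocentre at $a+b+c$:

```python
import math, cmath, random

def foot(p, a, b):
    # foot of the perpendicular from p onto line ab (complex coordinates)
    d = b - a
    t = ((p - a) * d.conjugate()).real / abs(d) ** 2
    return a + t * d

def collinear(u, v, w, eps=1e-9):
    # u, v, w collinear iff (v-u) and (w-u) are real multiples of each other
    return abs(((v - u) * (w - u).conjugate()).imag) < eps

random.seed(0)
# triangle ABC and point P, all on the unit circle (their common circumcircle)
A, B, C, P = [cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(4)]
F_AB, F_BC, F_CA = foot(P, A, B), foot(P, B, C), foot(P, C, A)
assert collinear(F_AB, F_BC, F_CA)            # the Simson line

# reflections of P in the sides: the Simson line enlarged from P by a factor of 2,
# and the enlarged line passes through the orthocentre H = A + B + C
H = A + B + C
R_AB, R_BC = 2 * F_AB - P, 2 * F_BC - P
assert collinear(R_AB, R_BC, H)
```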
It turns out that this can be used to solve the problem quite easily. I don’t want to emphasise how to do this. I want to emphasise again that the similarity of the statement of the theorem to the statement of this particular problem is the important bit. Both involve dropping perpendiculars from a single point onto other lines. So even if it hadn’t worked easily in this case, it would still have been a sensible thing to try if one knew (and, crucially, remembered) the Simson line result.
I was working on this script during an evening in Barcelona, and tapas culture lends itself very well to brief solutions. Whether it was exactly between the arrival of cerveza and the arrival of morcilla or otherwise, this was the extent of my notes on this approach to the problem:
And this makes sense. No computation or technical wizardry is required. Once you’ve identified the relevant reference triangle (here HEC), and have an argument to check that the point playing the role of P (here D) is indeed on the circumcircle (it’s very clear here), you are done. But it’s worth ending by reinforcing the point I was trying to make, that considering the Simson line is an excellent entry point to this problem because of the qualitative similarities in the statements. Dealing with the details is sometimes hard and sometimes not, and in this case it wasn’t, but that isn’t normally the main challenge.
# Generating Functions for the IMO
The background to this post is that these days I find myself using generating functions all the time, especially for describing the stationary states of various coalescence-like processes. I remember meeting them vaguely while preparing for the IMO as a student. However, a full working understanding must have eluded me at the time, as for Q5 on IMO 2008 in Madrid I had written down in big boxes the two statements involving generating functions that immediately implied the answer, but failed to finish it off. The aim of this post is to help this year’s team avoid that particular pitfall.
What are they?
I’m going to define some things in a way which will be most relevant to the type of problems you are meeting now. Start with a sequence $(a_0,a_1,a_2,\ldots)$. Typically these will be the sizes of various combinatorial sets. Eg a_n = number of partitions of [n] with some property. Define the generating function of the sequence to be:
$f(x)=\sum_{k\geq 0}a_k x^k=a_0+a_1x+a_2x^2+\ldots.$
If the sequence is finite, then this generating function is a polynomial. In general it is a power series. As you may know, some power series can be rather complicated, in terms of where they are defined. Eg
$1+x+x^2+x^3+\ldots=\frac{1}{1-x},$
only when |x|<1. For other values of x, the LHS diverges. Defining f over C is fine too. This sort of thing is generally NOT important for applications of generating functions to combinatorics. To borrow a phrase from Wilf, a generating function is ‘a convenient clothesline’ on which to hang a sequence of numbers.
We need a notation to get back from the generating function to the coefficients. Write $[x^k]g(x)$ to denote the coefficient of $x^k$ in the power series g(x). So, if $g(x)=3x^3-5x^2+7$, then $[x^2]g(x)=-5$. It hopefully should never be relevant unless you read some other notes on the topic, but there is also the notation $[\alpha x^2]g(x):=\frac{[x^2]g(x)}{\alpha}$, which does make sense after a while.
How might they be useful?
Example: binomial coefficients $a_k=\binom{n}{k}$ appear, as the name suggests, as coefficients of
$f_n(x)=(1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k.$
Immediate consequence: it’s trivial to work out $\sum_{k=0}^n \binom{n}{k}$ and $\sum_{k=0}^n(-1)^k \binom{n}{k}$ by substituting $x=\pm 1$ into f_n.
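Both substitutions are easy to confirm by machine for a small case (n=7 below is arbitrary):

```python
from math import comb

n = 7
# f_n(1) = 2^n and f_n(-1) = 0 recover the two standard binomial sums
assert sum(comb(n, k) for k in range(n + 1)) == (1 + 1) ** n == 2 ** n
assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == (1 + (-1)) ** n == 0
```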
Less obvious consequence. By considering choosing n from a red balls and b blue balls, one can verify
$\binom{a+b}{n}=\sum_{k=0}^n \binom{a}{k}\binom{b}{n-k}.$
We can rewrite the RHS as
$\sum_{k+l=n}\binom{a}{k}\binom{b}{l}.$
Think how we calculate the coefficient of $x^n$ in the product $f(x)g(x)$, and it is now clear that $\binom{a+b}{n}=[x^n](1+x)^{a+b}$, while
$\sum_{k+l=n}\binom{a}{k}\binom{b}{l}=[x^n](1+x)^a(1+x)^b,$
so the result again follows. This provides a good slogan for generating functions: they often replicate arguments via bijections, even if you can’t find the bijection.
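The Vandermonde identity above can also be verified directly by multiplying out the two coefficient lists (a=5, b=6 below are arbitrary choices):

```python
from math import comb

def poly_mul(a, b):
    # coefficient list of the product of two polynomials
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

a, b = 5, 6
fa = [comb(a, k) for k in range(a + 1)]   # coefficients of (1+x)^a
fb = [comb(b, k) for k in range(b + 1)]   # coefficients of (1+x)^b
prod = poly_mul(fa, fb)
# [x^n](1+x)^a (1+x)^b = binom(a+b, n) for every n
assert prod == [comb(a + b, n) for n in range(a + b + 1)]
```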
Useful for? – Multinomial sums
The reason why the previous argument for binomial coefficients worked nicely is because we were interested in the coefficients, but had a neat expression for the generating function as a polynomial. In particular, we had an expression
$\sum_{k+l=n}a_k b_l.$
This is always a clue that generating functions might be useful. This is sometimes called a convolution.
Exercise: prove that in general, if f(x) is the generating function of (a_k) and g(x) the generating function of (b_l), then f(x)g(x) is the generating function of $\sum_{k+l=n}a_kb_l$.
Even more usefully, this works in the multinomial case:
$\sum_{k_1+\ldots+k_m=n}a^{(1)}_{k_1}\ldots a^{(m)}_{k_m}.$
In many applications, these $a^{(i)}$s will all be the same. If each part is positive, that is $a_0=0$ and so $f(0)=0$, then we don’t even have to specify how many k_i’s there are to be considered: if we want the sum to be n, then at most n parts can appear. So:
$\sum_{m\geq 1}\sum_{k_1+\ldots+k_m=n}a_{k_1}\ldots a_{k_m}=[x^n]\left(f(x)+f(x)^2+f(x)^3+\ldots\right)=[x^n]\frac{f(x)}{1-f(x)},$
and the geometric sum of powers of f makes sense as a formal power series precisely because f(0)=0.
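For a fixed number of parts m, the identity $[x^n]f(x)^m=\sum_{k_1+\ldots+k_m=n}a_{k_1}\cdots a_{k_m}$ can be checked by machine on an arbitrary example (the sequence a below is made up, and m=3):

```python
from itertools import product

def poly_mul(a, b, N):
    # product of two coefficient lists, truncated at degree N
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

N, m = 8, 3
a = [0, 1, 1, 2, 3, 1, 0, 0, 0]       # an arbitrary sequence a_0, ..., a_8
fm = a[:]
for _ in range(m - 1):
    fm = poly_mul(fm, a, N)           # coefficients of f(x)^m

for n in range(N + 1):
    brute = sum(a[k1] * a[k2] * a[k3]
                for k1, k2, k3 in product(range(n + 1), repeat=3)
                if k1 + k2 + k3 == n)
    assert fm[n] == brute
```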
Useful when? – You recognise the generating function!
In some cases, you can identify the generating function as a ‘standard’ function, eg the geometric series. In that case, manipulating the generating functions is likely to be promising. Here is a list of some useful power series you might spot.
$1+x+x^2+\ldots=\frac{1}{1-x},\quad |x|<1$
$1+2x+3x^2+\ldots=\frac{1}{(1-x)^2},\quad |x|<1$
$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots$
$\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}\pm\ldots$
Exercise: if you know what differentiation means, show that if f(x) is the gen fn of (a_k), then xf'(x) is the gen fn of ka_k.
Technicalities: some of these identities are defined only for certain values of x. This may be a problem if they are defined at, say, only a single point, but in general this shouldn’t be the case. In addition, you don’t need to worry about differentiability. You can define differentiation of power series by $x^n\mapsto nx^{n-1}$, and sort out convergence later if necessary.
Useful for? – Recurrent definitions
The Fibonacci numbers are defined by:
$F_0=F_1=1,\quad F_{n+1}=F_n+F_{n-1},\quad n\geq 1.$
Let F(x) be the generating function of the sequence F_n. So, for $n\geq 2$,
$[x^n]F(x)=[x^{n-1}]F(x)+[x^{n-2}]F(x)=[x^n](xF(x)+x^2F(x)),$
and F(0)=1, so we can conclude that:
$F(x)=1+(x+x^2)F(x)\quad\Rightarrow\quad F(x)=\frac{1}{1-x-x^2}.$
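One way to check this by machine: compute the power-series reciprocal of $1-x-x^2$ directly from the relation $p(x)\cdot\frac{1}{p(x)}=1$, and compare with the recurrence.

```python
def reciprocal_series(p, N):
    # coefficients of 1/p(x) up to x^N, assuming p[0] == 1
    inv = [1] + [0] * N
    for n in range(1, N + 1):
        inv[n] = -sum(p[k] * inv[n - k] for k in range(1, min(n, len(p) - 1) + 1))
    return inv

# F(x) = 1 / (1 - x - x^2): expand and compare with the recurrence
coeffs = reciprocal_series([1, -1, -1], 10)
F = [1, 1]
while len(F) <= 10:
    F.append(F[-1] + F[-2])
assert coeffs == F[:11]
```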
Exercise: Find a closed form for the generating function of the Catalan numbers, defined recursively by:
$C_0=1,\quad C_n=C_0C_{n-1}+C_1C_{n-2}+\ldots+C_{n-1}C_0,\quad n\geq 1.$
Can you now find the coefficients explicitly for this generating function?
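If you want to check an answer to the final part (spoiler warning): the generating function satisfies $C(x)=1+xC(x)^2$, giving $C(x)=\frac{1-\sqrt{1-4x}}{2x}$, whose coefficients turn out to be $\frac{1}{n+1}\binom{2n}{n}$. This closed form can be compared with the recurrence numerically:

```python
from math import comb

# Catalan numbers directly from the recurrence (with C_0 = 1)
C = [1]
for n in range(1, 12):
    C.append(sum(C[i] * C[n - 1 - i] for i in range(n)))

# closed form that drops out of the generating function
assert all(C[n] == comb(2 * n, n) // (n + 1) for n in range(12))
```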
Useful for? – Partitions
Partitions can be an absolute nightmare to work with because of the lack of explicit formulae. Often any attempt at a calculation turns into a massive IEP bash. This prompts a search for bijective or bare-hands arguments, but generating functions can be useful too.
For now (*), let’s assume a partition of [n] means a sequence of positive integers $a_1\geq a_2\geq\ldots\geq a_k$ such that $a_1+\ldots+a_k=n$. Let p(n) be the number of partitions of [n].
(* there are other definitions, in terms of a partition of the set [n] into k disjoint but unlabelled sets. Be careful about definitions, but the methods often extend to whatever framework is required. *)
Exercise: Show that the generating function of p(n) is:
$\left(\frac{1}{1-x}\right)\left(\frac{1}{1-x^2}\right)\left(\frac{1}{1-x^3}\right)\ldots$
Note that if we are interested only in partitions of [n], then we don’t need to consider any terms with exponent greater than n, so if we wanted we could take a finite product instead.
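Multiplying out this truncated product is a pleasant exercise in itself; each factor $\frac{1}{1-x^k}$ corresponds to an in-place update of the coefficient list, and the output matches the standard values of p(n):

```python
N = 10
# multiply out (1/(1-x))(1/(1-x^2))...(1/(1-x^N)), truncated at degree N
p = [1] + [0] * N
for k in range(1, N + 1):
    # multiplying by the truncated 1/(1-x^k) is exactly this update
    for n in range(k, N + 1):
        p[n] += p[n - k]

# p(0), ..., p(10): the familiar partition numbers
assert p == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```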
Example: the mint group will remember this problem from the first session in Cambridge:
Show that the number of partitions of [n] with distinct parts is equal to the number of partitions of [n] with odd parts.
Rather than the fiddly bijection argument found in the session, we can now treat this as a simple calculation. The generating function for distinct parts is given by:
$(1+x)(1+x^2)(1+x^3)\ldots,$
while the generating function for odd parts is given by:
$\left(\frac{1}{1-x}\right)\left(\frac{1}{1-x^3}\right)\left(\frac{1}{1-x^5}\right)\ldots.$
Writing the former as
$\left(\frac{1-x^2}{1-x}\right)\left(\frac{1-x^4}{1-x^2}\right)\left(\frac{1-x^6}{1-x^3}\right)\ldots$
shows that these are equal and the result follows.
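The same telescoping can be confirmed by machine, multiplying out both truncated products and comparing coefficient lists:

```python
N = 30

def mul(a, b):
    # truncated product of two power series given as coefficient lists
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(N - i, len(b) - 1) + 1):
                c[i + j] += ai * b[j]
    return c

distinct = [1] + [0] * N
for k in range(1, N + 1):
    factor = [0] * (N + 1); factor[0] = 1; factor[k] = 1   # 1 + x^k
    distinct = mul(distinct, factor)

odd = [1] + [0] * N
for k in range(1, N + 1, 2):
    geom = [1 if i % k == 0 else 0 for i in range(N + 1)]  # truncated 1/(1-x^k)
    odd = mul(odd, geom)

assert distinct == odd
```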
Other things – Multivariate Generating Functions
If you want to track a sequence in two variables, say $a_{m,n}$, then you can encode this with the bivariate generating function
$f(x,y):=\sum_{m,n\geq 0}a_{m,n}x^my^n.$
The coefficients are then extracted by $[x^ay^b]$ and so on. There’s some interesting stuff on counting lattice paths with this method.
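A classical instance of this (my choice of example, not spelled out above): $f(x,y)=\frac{1}{1-x-y}$ has $[x^my^n]f(x,y)=\binom{m+n}{m}$, the number of up-and-right lattice paths from (0,0) to (m,n). The relation $(1-x-y)f=1$ translates into a two-dimensional recurrence on the coefficients:

```python
from math import comb

# coefficients of 1/(1 - x - y): c[m][n] = c[m-1][n] + c[m][n-1], c[0][0] = 1
M = 8
c = [[0] * (M + 1) for _ in range(M + 1)]
for m in range(M + 1):
    for n in range(M + 1):
        if m == n == 0:
            c[m][n] = 1
        else:
            c[m][n] = (c[m - 1][n] if m else 0) + (c[m][n - 1] if n else 0)

# [x^m y^n] 1/(1-x-y) = binom(m+n, m): up-and-right paths from (0,0) to (m,n)
assert all(c[m][n] == comb(m + n, m) for m in range(M + 1) for n in range(M + 1))
```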
Sums over arithmetic progressions via roots of unity
Note that we can extract both $\sum a_n$ and $\sum (-1)^na_n$ by judicious choice of x in f(x). By taking half the sum or half the difference, we can obtain
$a_0+a_2+a_4+\ldots=\frac12(f(1)+f(-1)),\quad a_1+a_3+a_5+\ldots=\frac12(f(1)-f(-1)).$
Can we do this in general? Yes actually. If you want $a_0+a_k+a_{2k}+\ldots$, this is given by:
$a_0+a_k+a_{2k}+\ldots=\frac{1}{k}\left(f(1)+f(w)+\ldots+f(w^{k-1})\right),$
where $w=e^{2\pi i/k}$ is a $k$th root of unity. Exercise: Prove this.
For greater clarity, first try the case k=4, and consider the real and imaginary parts of the power series evaluated at $\pm 1$ and $\pm i$.
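Once proved, the filter is easy to test numerically, say with k=3 and $f(x)=(1+x)^n$, whose coefficients are the binomial coefficients:

```python
import cmath
from math import comb

n, k = 10, 3
w = cmath.exp(2j * cmath.pi / k)                   # primitive k-th root of unity
f = lambda x: (1 + x) ** n                         # generating function of binom(n, j)

filtered = sum(f(w ** j) for j in range(k)) / k    # (1/k)(f(1) + f(w) + ... + f(w^{k-1}))
direct = sum(comb(n, j) for j in range(0, n + 1, k))
assert abs(filtered - direct) < 1e-9               # picks out a_0 + a_k + a_{2k} + ...
```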
# Bijections, Prufer Codes and Cayley’s Formula
I’m currently at the training camp in Cambridge for this year’s UK IMO squad. This afternoon I gave a talk to some of the less experienced students about combinatorics. My aim was to cover as many useful tricks for calculating the sizes of combinatorial sets as I could in an hour and a half. We started by discussing binomial coefficients, which pleasingly turned out to be revision for the majority. But my next goal was to demonstrate that we are much more interested in the fact that we can calculate these if we want than in the actual expression for their values.
Put another way, my argument was that the interpretation of $\binom{n}{m}$ as the number of ways to choose m objects from a collection of n, or the number of up-and-right paths from (0,0) to (m,n-m) is more useful than the fact that $\binom{n}{m}=\frac{n!}{m!(n-m)!}$. The opening gambit was to prove the fundamental result underlying the famous construction of Pascal’s triangle that
$\binom{n+1}{m+1}=\binom{n}{m}+\binom{n}{m+1}.$
This is not a hard result to prove by manipulating factorials, but it is a very easy result to prove in the path-counting setting, for example.
So it turned out that the goal of my session, as further supported by some unsubtly motivated problems from the collection, was to convince the students to use bijections as much as possible. That is, if you have to count something awkward, show that counting the awkward thing is equivalent to counting something more manageable, then count that instead. For many simpler questions, this equivalence is often drawn implicitly using words (“each of the n objects can be in any subset of the collection of bags so we multiply…” etc), but it is always worth having in mind the formal bijective approach. Apart from anything else, asking the question “is this bijection so obvious I don’t need to prove it” is often a good starting-point for assessing whether the argument is in fact correct!
Anyway, I really wanted to show my favourite bijection argument, but there wasn’t time, and I didn’t want to spoil other lecturers’ thunder by defining a graph and a tree and so forth. The exploration process encoding of trees is a strong contender, but today I want to define quickly the Prufer coding for trees, and use it to prove a famous result I’ve been using a lot recently, Cayley’s formula for the number of spanning trees on the complete graph with n vertices, $n^{n-2}$.
We are going to count rooted trees instead. Since each tree on n vertices gives rise to n rooted trees (one for each choice of root), it is equivalent to show that there are $n^{n-1}$ rooted trees on n vertices. The description of the Prufer code is relatively simple. Take a rooted tree with vertices labelled by [n]. A leaf is a vertex with degree 1, other than the root. Find the leaf with the largest label. Write down the label of the single vertex to which this leaf is connected, then delete the leaf. Now repeat the procedure, writing down the label of the vertex connected to the leaf now with the largest label, until there are only two vertices remaining, when you delete the non-root vertex, and write down the label of the root. We get a string of (n-1) labels. We want to show that this mapping is a bijection from the set of rooted trees with vertices labelled by [n] to $[n]^{n-1}$.
Let’s record informally how we would recover a tree from the Prufer code. First, observe that the label of any vertex which is not a leaf must appear in the code. Why? Well, the root label appears right at the end, if not earlier, and every non-root vertex must be deleted. But a vertex cannot be deleted until it has degree one, so the neighbours further from the root (that is, its descendants) must be removed first, and each such removal writes down the vertex’s label, so by construction the label appears. So we know what the root is, and what the leaves are, straight away.
In fact we can say slightly more than this. The number of times the root label appears is the degree of the root, while the number of times any other label appears is the degree of the corresponding vertex minus one. Call this sequence the Prufer degrees.
So we construct the tree backwards from the leaves towards the root. We add edges one at a time, with the k-th edge joining the vertex with the k-th label to some other vertex. For k=1, this other vertex is the leaf with maximum label. In general, let $G_k$ be the graph formed after the addition of k-1 edges, so $G_1$ is empty, and $G_n$ is the full tree. Define $T_k$ to be the set of vertices such that their degree in $G_k$ is exactly one less than their Prufer degree. Note that $T_1$ is therefore the set of leaves suggested by the Prufer code. So we form $G_{k+1}$ by adding an edge between the vertex with label appearing at position k in the Prufer sequence and the vertex of $T_k$ with maximum label.
Proving that this is indeed the inverse is a bit fiddly, more because of notation than any actual mathematics. You probably want to show injectivity by an extremal argument, taking the closest vertex to the root that is different in two trees with the same Prufer code. I hope it isn’t a complete cop out to swerve around presenting this in full technical detail, as I feel I’ve achieved my main goal of explaining why bijection arguments can reduce a counting problem that was genuinely challenging to an exercise in choosing sensible notation for proving a fairly natural bijection.
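The bijection can also be checked by machine. Below is a sketch (my own, not from the post) of the conventional unrooted Prufer decoding — the smallest-leaf rule with codes of length n-2, rather than the rooted largest-leaf variant described above — which is enough to verify Cayley's formula for small n:

```python
from itertools import product

def prufer_decode(code, n):
    """Decode a sequence in [n]^(n-2) into a labelled tree on vertices 1..n,
    using the standard smallest-leaf rule. Returns the tree as a set of edges."""
    degree = [1] * (n + 1)          # each vertex appears deg-1 times in the code
    for x in code:
        degree[x] += 1
    edges = []
    avail = set(range(1, n + 1))
    for x in code:
        # attach the smallest remaining leaf to the current code symbol
        leaf = min(v for v in avail if degree[v] == 1)
        edges.append((leaf, x))
        avail.remove(leaf)
        degree[x] -= 1
    u, v = sorted(avail)            # two vertices remain; join them
    edges.append((u, v))
    return frozenset(frozenset(e) for e in edges)

# Decoding every code in [n]^(n-2) yields n^(n-2) distinct trees (Cayley)
n = 5
trees = {prufer_decode(c, n) for c in product(range(1, n + 1), repeat=n - 2)}
assert len(trees) == n ** (n - 2)   # 125 labelled trees on 5 vertices
```

Since every code decodes to a distinct tree and every tree arises, the enumeration recovers $n^{n-2}$ exactly.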
https://www.gamedev.net/forums/topic/152115-drawprimitiveupcrashing/
# DrawPrimitiveUP...crashing
## Recommended Posts
Strange this problem. When i call the code i am about to post here BEFORE i draw anything else, it crashes. i tried to read the return flag, but it doesnt give. but if i call this stuff AFTER my main objects are rendered (terrain and skybox), then things are fine...although the colors seem to be off.
m_Indicator[0].x = m_Position.x;
m_Indicator[0].y = m_Position.y;
m_Indicator[0].z = m_Position.z;
m_Indicator[0].color = 0xffffff00;
D3DXVECTOR3 end = m_Position;
end += m_Direction * 5.0f;
// 2nd vertex
m_Indicator[1].x = end.x;
m_Indicator[1].y = end.y;
m_Indicator[1].z = end.z;
m_Indicator[1].color = 0xffffff00;
// Render the single line primitive.
g_pd3dDevice->DrawPrimitiveUP( D3DPT_LINESTRIP, 1, m_Indicator, m_iSizeOfInfo );
Im drawing a single line, so the 2nd parameter is 1. the vertex format has D3DFVF and DIFFUSE. any help will help - i am lost at this point and I hate to just rearrange the render calling to where it will work...its just weird how rearranging the rendering calls will make/break a crash!
##### Share on other sites
check debug output, it sounds like there is a messed up state, the debug output will probably tell you whats wrong
##### Share on other sites
I cannot imagine anyone, in any conceivable walk of life, ever in a million years wanting to use DrawPrimitiveUP.
##### Share on other sites
quote:
Original post by GekkoCube
Im drawing a single line, so the 2nd parameter is 1.
the vertex format has D3DFVF and DIFFUSE.
What is D3DFVF? you need XYZ and Diffuse..maybe a typo..
What Version of DirectX are you using? I don't see where you set the shader (DX8) or FVF (DX9).. If you are drawing D3DXMeshs, then the drawsubset would be setting the FVF, and that is why you would be able to draw but it's messed up cause the FVF is wrong and maybe has a different stride.
##### Share on other sites
Do not EVER EVER EVER use DrawPrimitiveUP.
DrawIndexedPrimitive or bust.
##### Share on other sites
Ok, i am guessing that DrawPrimitiveUP is not very efficient or something....and Im guessing DrawIndexedPrimitive is.?
Anyways, yes, that D3DFVF was a typo. I have a vertex format with XYZ and DIFFUSE.
i think i am drawing d3dxmeshes because i am drawing X file, copied from the tiger.x demo.
but i dont think this is the issue for this problem.
and what exactly is the stride?
any code anybody would like to see for this?
##### Share on other sites
quote:
Original post by GekkoCube
Ok, i am guessing that DrawPrimitiveUP is not very efficient or something....and Im guessing DrawIndexedPrimitive is.?
To quote Richard Huddy (ex-NVIDIA, now ATI):
There are almost zero cases when avoiding VB’s make sense. If you think you have one of those cases then:
a) You haven’t really thought hard
b) Consult a doctor – it’s as bad as that
##### Share on other sites
So DrawIndexPrimitive uses vertex buffers?
I will use it.
Also, what do you mean by debug output?
Do you mean from the error message that windows gives me, or do you mean my own debug statements?
Also, what do you mean by "messed up states" ?
do you mean renderstates or what?
My problem i am trying to find is this:
My line strips (a single line segment) is being colored by something that I dont know what. also, when i move (rotate) that line, well, it changes colors (but not any single shade, but rather multiple colors). so this makes me believe a renderstate or something is off. i tried materials and turning off textures before i draw the line. also, when i first start, the line is red. when i move, it appears rainbow-like (not a pretty rainbow).
any help, tips, ideas...i need. thanks.
http://manueldoncel.com/index.php/en/programming/allaboutjava/44-example-of-decoupling-code
## Example of coupled and decoupled code
In every programming book, you can find that two of the patterns that all well-known authors really recommend are dependency injection and decoupled code.
Because you can find these definitions in any programming book and on the internet, I would like to give you a small, easy example of a piece of coupled code, and the same code decoupled.
Imagine you try to calculate the average (mean) of the X coordinate of a list of Points. But you also want to filter out the points at the x = y = 0 coordinates.
Well, I think that most novice (and maybe not so novice) programmers would do something like this
/**
* Do the average of the X coord
* @param points
* @return
*/
public static double calculateAverageCoupled(Collection<Point> points) {
double average = 0;
double count = 0;
for(Point p: points) {
if(p.getX() != 0 && p.getY() != 0) {
average += p.getX();
count++;
}
}
return count != 0 ? average/count : 0;
}
In this method you are doing the filter and the average in the same operation. This is maybe better for performance, but the code is coupled: you are doing two different operations in the same iteration, the filter and the accumulation of the average.
What happens if you want to get the list of points you used for the average?
You have to go through the list again and re-do the filter. If an average only understands numbers, why do we have to pass Points? What happens if you want to change the filter condition, or make a new one?
Then, this is a way to do it in a decoupled way:
/**
* Do the average of some double values
* @param points
* @return
*/
public static double calculateAverageDecoupled(Collection<Point> points) {
Collection<Point> validPoints = new ArrayList<Point>(getValidPoints(points));
Collection<Double> validXCoords = getXCoords(validPoints);
return average(validXCoords);
}
public static Collection<Point> getValidPoints(Collection<Point> points) {
Collection<Point> validPoints = new ArrayList<Point>();
for(Point point : points) {
if(point.getX() != 0 && point.getY() != 0){
validPoints.add(point);
}
}
return validPoints;
}
public static Collection<Double> getXCoords(Collection<Point> validPoints) {
Collection<Double> validXCoords = new ArrayList<Double>(validPoints.size());
for(Point p : validPoints) {
validXCoords.add(p.getX());
}
return validXCoords;
}
As you can see, here we have the code completely decoupled. First we do the filter, then we get a list of doubles with the x coordinates of those filtered points, and then we can calculate the average using that list of doubles. Because we are using a list of doubles (which is what an average is expected to understand) we can use the Apache math library, for example, to do the calculation.
One of the advantages of decoupling code is that it is easier to test; moreover, it makes the application more flexible. But it doesn't come for free: as we said, we degrade performance, and we also need more memory.
Here is the same example, but using Guava library for the filter and transformation:
/**
* Do the average of some double values
* @param points
* @return
*/
public static double calculateAverageDecoupled(Collection<Point> points) {
Collection<Point> validPoints = new ArrayList<Point>(getValidPoints(points));
Collection<Double> validXCoords = getXCoords(validPoints);
return average(validXCoords);
}
public static Collection<Point> getValidPoints(Collection<Point> points) {
Predicate<Point> isValid = new Predicate<Point>() {
@Override
public boolean apply(Point point) {
return point.getX() != 0 && point.getY() != 0;
}
};
Collection<Point> validPoints = Lists.newArrayList(Iterables.filter(points, isValid));
return validPoints;
}
public static Collection<Double> getXCoords(Collection<Point> validPoints) {
Function<Point, Double> getXCoord = new Function<Point, Double>() {
@Override
public Double apply(Point point) {
return point.getX();
}
};
Collection<Double> validXCoords = Collections2.transform(validPoints, getXCoord);
return validXCoords;
}
As you can see, we don't need comments in the code, because it is self-explanatory. And we follow the KISS principle.
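The same decoupling pattern translates directly to languages with built-in higher-order functions. A minimal sketch in Python (my addition, not from the article — the names are my own):

```python
def valid_points(points):
    # keep points where neither coordinate is zero (same filter as above)
    return [p for p in points if p[0] != 0 and p[1] != 0]

def x_coords(points):
    # extract just the x coordinate from each (x, y) pair
    return [p[0] for p in points]

def average(values):
    # an average only needs to understand numbers
    return sum(values) / len(values) if values else 0.0

def calculate_average_decoupled(points):
    # each step is a separate, independently testable operation
    return average(x_coords(valid_points(points)))
```

Each function can be tested and swapped in isolation, which is exactly the flexibility the article argues for.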
http://mickopedia.org/mickify?topic=Wikipedia:Arbitration_Committee_Elections_December_2015
# Mickopedia:Arbitration Committee Elections December 2015
2015 Arbitration Committee Elections
Status
• The December 2015 Arbitration Committee Election results have been posted.
• Please offer your feedback on the election process.
The thirteenth annual election for the English Mickopedia's Arbitration Committee took place in November and December 2015. This election, by practice on Wikimedia projects, was organized by community volunteers, independent of the feckin' Arbitration Committee itself.
## Election process
### Timeline
1. Nomination period (from Sunday 00:00, 8 November until Tuesday 23:59, 17 November, UTC) → interested editors are invited to submit a feckin' candidate statement. Sure this is it. An editor is eligible to stand as a holy candidate who:
(i) has a bleedin' registered account and has made at least 500 mainspace edits before 1 November 2015,
(ii) is in good standin' and not subject to active blocks or site-bans,
(iii) meets the feckin' Wikimedia Foundation's criteria for access to non-public data, is willin' to sign the oul' Foundation's non-public information confidentiality agreement, and
(iv) has disclosed any previous or alternate accounts in their election statements (legitimate accounts which have been declared to the bleedin' Arbitration Committee before the close of nominations do not need to be publicly disclosed).
2. Votin' period (from Monday 00:00, 23 November until Sunday 23:59, 6 December, UTC) → eligible voters can vote on the candidates, usin' the bleedin' SecurePoll system. C'mere til I tell yiz. An editor is eligible to vote who:
(i) has registered an account before Wednesday 00:00, 28 October 2015
(ii) has made at least 150 mainspace edits before Sunday 00:00, 1 November 2015 and,
(iii) is not blocked from the English Mickopedia at the feckin' time of their vote.
3. Scrutineerin' period (immediately followin' the oul' votin' period) → scrutineers, consistin' of stewards whose main wikis are not the feckin' English Mickopedia, will check the bleedin' votes (e.g. for duplicate, missin', and ineligible votes), and compile a bleedin' tally of the oul' results. The instructions for scrutineers are outlined here.
### Results
Followin' the feckin' votin' period, the oul' scrutineers examined the bleedin' votes, and released a tally of the oul' results. Whisht now. The tally ranks candidates by their performance accordin' to the bleedin' criteria for success in this election, defined as the oul' number of votes cast in support of the feckin' candidate divided by the feckin' total number of votes cast both for and against (commonly described as "support over support plus oppose" or "S/(S+O)"), enda story. "Neutral" preferences are not counted in this metric. A total of 2846 ballots were cast (includin' duplicates) and 2674 votes were determined to be valid.
Candidate Support Neutral[note 1] Oppose Net[note 2] Percentage [note 3] Result
Casliber (talk · contribs) 1118 1249 307 811 78.46% Two-year term
Opabinia regalis (talk · contribs) 1121 1164 389 732 74.24% Two-year term
Keilana (talk · contribs) 1118 1149 407 711 73.31% Two-year term
Drmies (talk · contribs) 1038 1197 439 599 70.28% Two-year term
Callanecc (talk · contribs) 848 1429 397 451 68.11% Two-year term
Kelapstick (talk · contribs) 738 1562 374 364 66.37% Two-year term
GorillaWarfare (talk · contribs) 1111 987 576 535 65.86% Two-year term
Kirill Lokshin (talk · contribs) 965 1196 513 452 65.29% Two-year term
Gamaliel (talk · contribs) 902 1219 553 349 61.99% One-year term
Thryduulf (talk · contribs) 764 1395 515 249 59.73%
LFaraone (talk · contribs) 657 1505 512 145 56.20%
Hawkeye7 (talk · contribs) 783 1268 623 160 55.69%
Kudpung (talk · contribs) 692 1373 609 83 53.19%
Rich Farmbrough (talk · contribs) 822 1125 727 95 53.07%
Kevin Gorman (talk · contribs) 663 1289 722 −59 47.87%
Timtrent (talk · contribs) (withdrawn) 385 1855 434 −49 47.01%
NE Ent (talk · contribs) 553 1446 675 −122 45.03%
Hullaballoo Wolfowitz (talk · contribs) 588 1320 766 −178 43.43%
MarkBernstein (talk · contribs) 582 1242 850 −268 40.64%
Wildthing61476 (talk · contribs) 403 1599 672 −269 37.49%
Mahensingha (talk · contribs) 349 1428 897 −548 28.01%
1. ^ All voters were required to register an oul' preference of either "Support", "Neutral", or "Oppose" for each candidate. Be the hokey here's a quare wan. The "Neutral" column is simply the oul' total votes for which voters did not select the feckin' Support or Oppose option.
2. ^ Net = Support − Oppose
3. ^ Percentage = (Support / (Support + Oppose)) * 100 (rounded to 2 decimal places)
Results certified by
1. einsbor talk 19:38, 9 December 2015 (UTC)
2. Mardetanha talk 19:39, 9 December 2015 (UTC)
3. Shanmugamp7 (talk) 23:53, 9 December 2015 (UTC)
## Vacant seats
For 2015, six current arbitrators will remain on the feckin' committee. The committee will continue to have 15 seats, leavin' eight vacant seats with two-year terms and one vacant seat with a holy one-year term to be filled in this election. Arra' would ye listen to this. In the oul' event that any of the six arbitrators with unexpired terms resign or otherwise leave the committee before the feckin' start of votin' on 23 November, the oul' seat they vacate will be filled, and will be for a feckin' one-year term. Sufferin' Jaysus listen to this. Seats will be filled based on support percentage, as calculated by ${\displaystyle {\frac {\text{support}}{\text{support + oppose}}}}$, bejaysus. There will be an neutral option; choosin' this option will not affect the feckin' support percentage for the bleedin' candidate, and will be treated as though you did not vote in the oul' election with respect to that candidate. Story? The minimum support percentage is 50%. If there are more vacancies than candidates with 50% support, those seats will remain vacant.
## Guides
### For candidates
Nominations for candidates will open at 00:00 UTC, 8 November and will close at 23:59 UTC, 17 November. Durin' this time, any editor in good standin' who meets the feckin' criteria stated in the bleedin' "Timeline" section above may nominate themselves by followin' the oul' instructions to create a bleedin' candidate statement on the bleedin' candidates page. Be the hokey here's a quare wan. Once a holy candidate has made their statement, they may proceed to answer individual questions as they wish (see the oul' questions page for details and instructions). Candidates may continue to answer questions until the feckin' end of the oul' votin' period (23:59 UTC, 6 December).
### For voters
Once candidates have nominated themselves, voters are invited to review and discuss them. Voters may ask questions throughout the feckin' election.
To facilitate their discussions and judgements, voters are encouraged to familiarise themselves with the feckin' candidates. Jesus, Mary and holy Saint Joseph. This can be done through readin' the feckin' candidate statements, the feckin' answers to the feckin' questions put to each candidate (linked from their candidate statements), and the oul' discussion of each candidate (a centralised collection of which will be made available at the oul' discussion page), the cute hoor. In addition, a summary guide to candidates will be made available, and augmented by an oul' set of personal guides by individual voters.
Votin' will run for 14 days, from 00:00 UTC, 23 November to 23:59 UTC, 6 December. Here's another quare one for ye. The process will be conducted usin' the SecurePoll extension which ensures that individual voter's decisions will not be publicly viewable (although technical information about voters, such as their IP address and user-agents, will be visible to the WMF-identified election administrators and scrutineers).
Voters will be invited to choose one of three options for each candidate: "Support", "Oppose" or "Neutral"; and the bleedin' number of "Support", "Oppose" or "Neutral" preferences an oul' voter can express is otherwise limited only by the feckin' number of candidates. Me head is hurtin' with all this raidin'. After voters have entered their choices for all of the candidates and submitted their votes, they may revisit and change their decisions, but attemptin' to do so will require expressin' preferences for all candidates from scratch. Because of the feckin' risk of server lag, voters are advised to cast their vote at the bleedin' latest an hour before the close of votin' to ensure their vote will be counted.
http://scalability.org/2009/04/sgi-is-done-and-sold-heard-this-rumor-yesterday/
# SGI is done and sold (heard this rumor yesterday)
[updated] see bottom:
This said, read the press release. Specifically the portion that indicates that
Rackable has signed an Asset Purchase Agreement to acquire substantially all the assets of SGI, and to assume certain liabilities relating to the assets, pursuant to Chapter 11 of the U.S. Bankruptcy Code, under which SGI filed its petition in New York on April 1, 2009.
Yes, this is right (and assuming it is not an April Fools joke), this means SGI filed for bankruptcy this morning. They had a looming $5M payment due last Friday to Morgan Stanley. I was searching for information as to whether or not they had made that payment, as I thought that MS might force them into a chapter 7, and sell off their assets.

Well, it looks like they hit chapter 11 (second time), and did an asset sale.

So, if this is all real, then SGI is done. It is now part of Rackable. They had ~1500 people going into this. I'll be surprised if they have more than a hundred or two make the move.

Continuing:

Completion of the transaction is subject to a number of closing conditions, including the approval of the Bankruptcy Court, and other uncertainties. Subject to such conditions and uncertainties, the transaction is expected to close within approximately 60 days. It is expected that SGI's business operations will continue during the pre-closing period. SGI's international operations would be part of the sale, but would not be part of the bankruptcy process.

This was done as a very fast sale. I knew they were in trouble, and it looks to me like this was negotiated quickly (it doesn't look pre-packaged) so as to avoid the chapter 7 filing. Still, Morgan Stanley and other creditors could object, and if so, throw a monkey wrench into this. In which case, chapter 7 is far more likely.

My sympathies go out to all my former colleagues still employed (as of yesterday) at SGI. Hopefully business schools will pick up the sad tale of this company's decline in a section on "how not to run a company, and lose great momentum".

[update 1]: Wow … I called this some time ago. Spot on analysis. Read the form 8k for the filing.

The filing of the bankruptcy petitions described above constitutes an event of default under the Senior Secured Credit Agreement, dated October 17, 2006

Ok, we knew that part … this is where it gets interesting

, as amended, (the "Credit Agreement") with Morgan Stanley Senior Funding, Inc. as administrative agent and revolving agent, Morgan Stanley & Co., Incorporated as collateral agent, and the other Lenders and Credit Parties thereto (as defined therein) and resulted in the acceleration of all amounts due under the Credit Agreement.

This is almost exactly what I said could happen here. That is, if they can't achieve specific indicated performance, the people who loaned them money may say "all done." They will default if they cannot get their earnings up above the levels they agreed to. Which they haven't been able to in recent history.

I am betting that on Friday the 27th of March, in the afternoon, 16 days after I wrote the previous bit, Morgan Stanley called SGI and said "its time to pay". SGI said "hang on a moment", and Morgan Stanley said "no." I bet that MS gave them until COB on the 3rd to get their affairs in order and signaled an intent to declare them in default. This is speculation. It is also now history.

This is what else is in the 8k, and why I speculated like this …

The ability of the creditors to seek remedies to enforce their rights under the Credit Agreement is automatically stayed as a result of the filing of Chapter 11 cases, and the creditors' rights of enforcement are subject to the applicable provisions of the Bankruptcy Code. The automatic stay invoked by filing the Chapter 11 cases effectively precludes any actions against SGI resulting from such acceleration.

This is effectively SGI reminding its creditor that any remedies (asset seizure, sale, chapter 7 forcing) are automatically stayed during bankruptcy proceedings. But … the creditors can object to the sale, and demand a better price. It would not surprise me if they do something like this, as it looks like SGI owes them about $142M and change.
As of March 31, 2009, under the Credit Agreement, the total principal amount of the outstanding obligations under the term loan was approximately $141.7 million and the total principal amount of the outstanding obligations under the revolving loan was approximately$20.7 million.
They had to pay MS $20.7M on Friday. They couldn't make the payment. MS told them they were in default, and gave them a very short window to correct the default. And SGI went this route.
This story is not over. It's not Rackable walking away with the assets on the cheap.
This one could get quite interesting. And not in a good way.
Hmmm… I wonder who owns my patent now. Maybe I should have the company file to purchase that.
[update 2] John (both of them) are covering this at InsideHPC. Two articles. First was the announcement, and my sympathies to John L as this is/was his employer. It sucks when they implode on you. Second was a similar analysis to what I did in the past with SGI acquiring the LNXI assets.
I wrote previously
Moreover, if you look at what SGI did relative to LNXI, they basically purchased LNXI???s assets, which had secured its debt.
… which is almost exactly what happened to SGI today. Their assets were purchased by Rackable (tentatively), but this was done to escape debt service obligations via a fast chapter 11.
More to the point, I had pointed out in this article that
Onerous T&C are great to force vendors to deliver what they promise. It will also effectively force all the innovative vendors, the ones that take risks, out. Well, they can take the risks, and do well if they succeed, but there is risk. The T&C we have seen suggest that not only do the purchasers want a low price, they want no risk. Drive enough small companies out of the market, and you are going to get a uniformly bland bit of me-too clusters. Which is where things are largely headed.
But I am digressing.
This was discussing the problem in terms of the T&C imposed by the government. They are quite onerous.
As are some, quite frankly, from universities. We have walked away from business with ridiculous T&C’s. We won’t chase bad business. SGI did. As did LNXI. Well SGI had other issues I won’t get into, but thats for a different post.
With this in mind, a commenter at InsideHPC.com pointed out:
Wonder if we finally can call the acquisition strategy of the DoD Mod program the harbinger of death to HPC big iron (LNXI, SGI). I can't help but wonder if Cray Henry will finally get that they are not helping the industry.
Heh. Spot on.
All Government HPC purchases should be on the open market (not under GSA contract) and done by industry standard T&C.
That is, unless you want exactly one vendor doing HPC, for whom HPC is but rounding error in their business, and a fast business decision can effectively turn off HPC at that organization.
Which is where we are (rapidly) headed.
https://socratic.org/questions/the-altitude-of-an-equilateral-triangle-is-18-inches-what-is-the-length-of-a-sid
# The altitude of an equilateral triangle is 18 inches. What is the length of a side?
The side $= 12\sqrt{3}$ inches.
#### Explanation:
The altitude of the triangle $= 18$ inches.
Let us denote the side of the triangle as $a$.
The formula for calculating the altitude of an equilateral triangle is :
Altitude $= \frac{\sqrt{3}}{2} \times a$ (side)
$18 = \frac{\sqrt{3}}{2} \times a$
$a = 18 \times \frac{2}{\sqrt{3}}$
$a = \frac{36}{\sqrt{3}}$
$a = \frac{36 \times \sqrt{3}}{\sqrt{3} \times \sqrt{3}}$
$a = \frac{36 \sqrt{3}}{3}$
$a = 12 \sqrt{3}$ inches.
#### Second approach:
Length of the side of the triangle is $12 \sqrt{3}$ inches.
#### Explanation:
If the length of the side of a triangle is $a$ and $h$ be its height as shown below.
As the perpendicular from vertex divides base equally, we can find height $h$ using Pythagoras theorem
Hence $h = \sqrt{{a}^{2} - {\left(\frac{a}{2}\right)}^{2}} = \sqrt{{a}^{2} - {a}^{2} / 4} = \sqrt{3 {a}^{2} / 4} = \frac{\sqrt{3}}{2} a$
But as height is $18$ inches, $\frac{\sqrt{3}}{2} a = 18$ or
$a = 18 \times \frac{2}{\sqrt{3}} = 36 \frac{\sqrt{3}}{3} = 12 \sqrt{3}$
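Both derivations can be sanity-checked numerically. A quick Python check (the variable names are ours):

```python
import math

a = 12 * math.sqrt(3)              # claimed side length
h = math.sqrt(3) / 2 * a           # altitude formula for an equilateral triangle
# Same thing via the Pythagorean route used above: h = sqrt(a^2 - (a/2)^2)
h_pyth = math.sqrt(a**2 - (a / 2) ** 2)
# Both give the stated altitude of 18 inches.
```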
http://courses.csail.mit.edu/6.856/17/Notes/n21-randomizedrounding.html
## Approximation Algorithms
• MAXCUT, MAXSAT, MAX-CLIQUE, MIN-VERTEX-COVER are all NP-complete.
• Approximation Algorithms provides a new dimension to view the hardness of these problems.
• Definition of approximation algorithms.
• For maximization problems:
Suppose $\mathrm{OPT}$ = $\max \Phi(\mathbf{x})$ subject to $\mathbf{x} \in \mathcal{C}$.
An $\alpha$-approximation algorithm returns $\mathbf{\tilde{x}}$ such that $\Phi(\mathbf{\tilde{x}}) \ge \alpha \cdot \mathrm{OPT}$, where $\alpha \le 1$.
• For minimization problems:
Suppose $\mathrm{OPT}$ = $\min \Phi(\mathbf{x})$ subject to $\mathbf{x} \in \mathcal{C}$.
An $\alpha$-approximation algorithm returns $\mathbf{\tilde{x}}$ such that $\Phi(\mathbf{\tilde{x}}) \le \alpha \cdot \mathrm{OPT}$, where $\alpha \ge 1$.
### The Probabilistic Method---Value of Random Answers
Idea: to show an object with certain properties exists
• generate a random object
• prove it has properties with nonzero probability
• often, “certain properties” means “good solution to our problem”
• random routing as example
#### Max-Cut
• Definition
• Exact computation : NP-complete
• Approximation algorithms
• factor 2
• “expected performance,” so doesn't really fit our RP/ZPP framework
• but does show such a cut exists
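The factor-2 guarantee is easy to see empirically: put each vertex on a uniformly random side, and each edge is cut with probability $1/2$, so the expected cut is $|E|/2 \ge \mathrm{OPT}/2$. A minimal sketch (the graph and trial count are our own choices):

```python
import random

def random_cut(n, edges, seed=None):
    """Put each of n vertices on a uniformly random side; return (sides, cut size)."""
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# Each edge is cut with probability 1/2, so E[cut] = |E|/2 = 2.5 here.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
avg = sum(random_cut(4, edges, seed=s)[1] for s in range(2000)) / 2000
```

Averaging over many random cuts, `avg` approaches $|E|/2$, which certifies that a cut of at least half the edges exists.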
#### Set balancing
• Given a family $\mathcal{S}$ of subsets $S_1, \ldots, S_n \subseteq [n]$
• Given a coloring $\varphi: [n] \to \{0,1\}$, define $\mathrm{bias}_{\varphi}(S_i) = |\#\{\varphi^{-1}(1) \cap S_i\} - \#\{\varphi^{-1}(0) \cap S_i\}|$.
• Minimize over coloring $\varphi$, the max bias, i.e. $\max_i \mathrm{bias}_{\varphi}(S_i)$
• Claim: a max bias of $4\sqrt{n\ln n}$ is always achievable.
• Note that for a uniform random coloring, expected bias of $S_i$ is $0$.
• Using Chernoff bound, $\Pr[\mathrm{bias}_{\varphi}(S_i) > t] < 2e^{-t^2/(2|S_i|)} \le 2e^{-t^2/(2n)}$.
• require $< 1/n$ to apply union bound \begin{align*} e^{-t^2/2n} &< 1/2n\\ t^2/2n &>\ln 2n\\ t &> \sqrt{2n \ln 2n} \end{align*}
• so there exists a coloring with max bias at most $4\sqrt{n\ln n}$
• Spencer---10 lectures on the probabilistic method:
• Six Standard Deviations Suffice
• There exists a coloring with max bias at most $6\sqrt{n}$
• But nonconstructive
• Recent works have given algorithms to find such a coloring; via convex programming and rounding
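A small simulation illustrates the union-bound calculation above (instance size and trial count are arbitrary choices): for random sets, a uniformly random coloring almost always keeps every set's bias below $\sqrt{2n\ln 2n}$.

```python
import math
import random

def max_bias(sets, coloring):
    """Max over sets of |#ones - #zeros| among the set's elements."""
    return max(abs(sum(1 if coloring[v] else -1 for v in s)) for s in sets)

rng = random.Random(0)
n = 50
# n random subsets of [n], each element included with probability 1/2.
sets = [[v for v in range(n) if rng.random() < 0.5] for _ in range(n)]
bound = math.sqrt(2 * n * math.log(2 * n))   # the t from the union-bound step
trials = 200
good = sum(
    max_bias(sets, [rng.randrange(2) for _ in range(n)]) <= bound
    for _ in range(trials)
)
frac = good / trials   # fraction of random colorings within the bound
```

Here `frac` comes out near 1, i.e. a single random coloring already achieves the claimed bias with high probability, which is stronger than the mere existence the probabilistic method needs.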
### MAX SAT
Sometimes, it's hard to get hands on a good probability distribution of random answers.
Define.
• literals $x_1, \lnot x_1, x_2, \lnot x_2, \ldots, x_n, \lnot x_n$.
• clauses $\mathcal{C} = \{C_1, \ldots, C_m\}$. Clause $C_i$ looks like $(x_1 \vee \lnot x_2 \vee x_3 \vee x_4)$.
• NP-complete
Random assignment
• achieve $1-2^{-k}$ for MAX-$k$-SAT.
• very nice for large $k$, but only $1/2$ for $k=1$
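The $1-2^{-k}$ guarantee for a uniformly random assignment can be checked empirically. A sketch (the clause generation is our own; each clause uses $k$ distinct variables, so the per-clause probability is exactly $1-2^{-k}$):

```python
import random

def satisfied(clauses, assignment):
    """Count clauses with at least one true literal.
    A literal is (var, is_positive); a clause is a list of literals."""
    return sum(
        any(assignment[v] == pos for v, pos in clause) for clause in clauses
    )

rng = random.Random(0)
n_vars, k, m = 20, 3, 200
clauses = [
    [(v, rng.randrange(2) == 1) for v in rng.sample(range(n_vars), k)]
    for _ in range(m)
]
# Uniform random assignment: each k-clause is satisfied with prob 1 - 2^{-k},
# so the expected number satisfied is (1 - 2^{-k}) * m = 175 here.
trials = 500
avg = sum(
    satisfied(clauses, [rng.randrange(2) for _ in range(n_vars)])
    for _ in range(trials)
) / trials
```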
LP: $\max \sum_{j=1}^{m} z_j$ \begin{align*} &\sum_{i \in C_j^+} y_i + \sum_{i \in C_j^-} (1-y_i) ~\ge~ z_j\\ &y_i, z_j ~\in~ [0,1] \qquad \text{(relaxation of \{0,1\})} \end{align*}
• Note that $\mathrm{OPT}_{LP}(\mathcal{C}) \ge \mathrm{OPT}(\mathcal{C})$
Randomized Rounding
• Suppose optimal LP solution $p_1, \ldots, p_n$.
• $p_i$ close to $1$ suggests that $x_i$ must probably be $1$. Similarly close to $0$ suggests that $x_i$ must probably be $0$.
• Set $x_i$ to be $1$ with probability $p_i$ and $0$ with probability $1 - p_i$.
Analysis
• For simplicity, suppose $C_j$ has all positive variables. The probability that clause $C_j$ fails is \begin{align*} \Pr[C_j \text{ is not satisfied}] &~=~ \prod_{i \in C_j} (1 - p_i)\\ &~\le~ \left ( \frac{\sum_{i \in C_j} (1-p_i)}{k}\right )^k \qquad \text{(AM-GM)}\\ &~\le~ \left ( 1 - \frac{\hat{z}_j}{k}\right )^k \end{align*}
• Expected number of clauses satisfied is at least $\sum_{j=1}^{m} 1 - \left ( 1 - \frac{\hat{z}_j}{k}\right )^k$.
• We want to compare this to the objective of the LP.
• Lemma: $1 - \left ( 1 - \frac{\hat{z}_j}{k}\right )^k \ge \left ( 1 - \left ( 1 - \frac{1}{k}\right )^k \right ) \hat{z}_j$.
• $\beta_k = 1-(1-1/k)^k$, taking values $1, 3/4, .704, \ldots \to (1-1/e)$.
• Lemma: a $k$-literal clause is satisfied with probability at least $\beta_k \hat{z}_j$.
• Result: $(1-1/e)$ approximation (convergence of $(1-1/k)^k$)
• much better for small $k$: i.e. $1$-approx for $k=1$
LP good for small clauses, random for large.
• Better: try both methods.
• $n_1,n_2$ number in both methods
• Show $(n_1+n_2)/2 \ge (3/4)\sum \zhat_j$
• $n_1 \ge \sum_{C_j \in S^k} (1-2^{-k})\hat{z}_j$, since $\hat{z}_j \le 1$.
• $n_2 \ge \sum \beta_k \hat{z}_j$
• $n_1+n_2 \ge \sum (1-2^{-k}+\beta_k) \hat{z}_j \ge \sum \frac{3}{2}\hat{z}_j$
### Set Cover
Definition
• Given a collection of subsets $\mathcal{S} = \{S_1, \ldots, S_n\}$, where $S_i \subseteq [m]$.
• Find smallest collection of subsets $I \subseteq [n]$, such that $\bigcup_{i \in I} S_i = [m]$.
• Weighted version can also be defined.
• Remark: $n$ can be much larger than $m$.
LP: $\min \sum_{i=1}^{n} y_i$ \begin{align*} \text{for all } v \in [m] \qquad \sum_{i : v \in S_i} y_i \ge 1 & \\ \text{for all } i \in [n] \qquad 0 \le y_i \le 1 & \quad \text{(relaxation of \{0,1\})} \end{align*}
Candidate Randomized Rounding:
• Let optimal solution be $(p_1, \ldots, p_n)$.
• Include set $S_i$ with probability $p_i$.
• Suppose $v \in [m]$ occurs in sets $S_1, S_2, \ldots, S_k$.
• We have that $p_1 + \cdots + p_k \ge 1$.
• Probability that $v$ is covered is at least $1 - (1-p_1) \cdots (1-p_k) \ge 1 - e^{-(p_1 + \cdots + p_k)} \ge 1 - 1/e$.
• Risks some element from not being covered!
• It would have been sufficient if we succeeded with some decent probability. But in fact, with high probability might miss a constant fraction of $[m]$.
• Suppose each $j \in [m]$ appears in $2$ sets, and the optimal LP solution is $p_{i_1} = p_{i_2} = 1/2$. Then, $j$ is not covered probability $1/4$. There might be $m/10$ such elements, in which case, $m/40$ elements will remain uncovered with probability $1 - 2^{-\Omega(m)}$.
Actual Rounding:
• While some elements are uncovered:
$\triangleright$ Include set $S_i$ with probability $p_i$.
• Lemma: Probability that this runs for $\ln m + k$ iterations is at most $e^{-k}$.
• Since for any element $v$, the probability that $v$ is not covered in $t$ iterations is at most $(1/e)^t$. Hence, in $\ln m + k$ iterations, each element is covered with probability $1 - e^{-k}/m$.
• A union bound then gives that all elements are covered except with probability at most $e^{-k}$.
Expected set size:
• Each iteration includes an expected $\mathrm{OPT}_{LP}(\mathcal{S})$ sets, since $\sum_i p_i = \mathrm{OPT}_{LP}(\mathcal{S})$.
• Hence, the expected number of sets included after $t$ iterations is at most $t \cdot \mathrm{OPT}_{LP}(\mathcal{S})$.
• With probability $1/2$ (Markov), the size is at most $2 t \cdot \mathrm{OPT}_{LP}(\mathcal{S})$.
• For $t = \ln m + 3$, by a union bound, it follows that all elements are covered and at most $2 t \cdot \mathrm{OPT}_{LP}(\mathcal{S})$ sets are included.
• We have $(2 \ln m + 6)$ approximation.
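The repeated-rounding loop is easy to sketch. In practice the fractional solution comes from an LP solver; to keep this example self-contained we hand-build a feasible fractional solution instead, setting $y_i = \max_{v\in S_i} 1/\mathrm{freq}(v)$, which satisfies every covering constraint ($\sum_{i \ni v} y_i \ge \mathrm{freq}(v)\cdot\frac{1}{\mathrm{freq}(v)} = 1$) and stands in for the LP optimum:

```python
import random

def round_set_cover(sets, y, m, rng):
    """Repeated randomized rounding: keep sampling each set with prob y[i]
    until every element of [m] is covered; return the chosen set indices."""
    chosen, covered = set(), set()
    while len(covered) < m:
        for i, s in enumerate(sets):
            if rng.random() < y[i]:
                chosen.add(i)
                covered.update(s)
    return chosen

rng = random.Random(0)
m, n = 30, 12
sets = [rng.sample(range(m), 8) for _ in range(n)]
# Ensure every element is in some set, so the instance is feasible.
for v in range(m):
    if not any(v in s for s in sets):
        sets[v % n].append(v)
# Feasible fractional solution without an LP solver: y_i = max_{v in S_i} 1/freq(v).
freq = [sum(v in s for s in sets) for v in range(m)]
y = [max(1 / freq[v] for v in s) for s in sets]
cover = round_set_cover(sets, y, m, rng)
```

Each pass through the loop covers any fixed element with probability at least $1-1/e$, which is exactly why $O(\ln m)$ passes suffice with high probability.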
### Wiring
Problem formulation
• $\sqrt{n} \times \sqrt{n}$ gate array
• Manhattan wiring
• boundaries between gates
• fixed width boundary means limit on number of crossing wires
• optimization vs. feasibility: minimize max crossing number
• focus on single-bend wiring. two choices for route.
• Generalizes if you know about multicommodity max-flow
Linear Programs, integer linear programs
• Black box
• Good to know, since great solvers exist in practice
• Solution techniques in other courses
• LP is polytime, ILP is NP-hard
• LP gives hints---rounding.
IP formulation
• $x_{i0}$ means $x_i$ starts horizontal, $x_{i1}$ vertical
• $T_{b0} = \{i \mid \text{net } i \text{ goes through } b \text{ if } x_{i0} = 1\}$
• $T_{b1} = \{i \mid \text{net } i \text{ goes through } b \text{ if } x_{i1} = 1\}$
LP: $\min w$ \begin{align*} \text{for every } i : \quad & x_{i0}+x_{i1} ~=~ 1\\ \text{for every edge } b : \quad &\sum_{i \in T_{b0}} x_{i0} + \sum_{i \in T_{b1}} x_{i1} \le w \end{align*} Rounding
• Solution $\hat{x}_{i0}$, $\hat{x}_{i1}$, value $\hat{w}$.
• After rounding, each boundary's crossing count is a sum of independent indicator (Poisson binomial) variables with mean at most $\hat{w}$.
• For $\delta< 1$ (good approx) $\Pr[\ge (1+\delta)\hat{w}] \le e^{-\delta^2\hat{w} /4}$
• need $2n$ boundaries, so aim for prob. bound $1/2n^2$.
• solve, $\delta=\sqrt{(4\ln 2n^2)/\hat{w}}$.
• So absolute error $\sqrt{8\hat{w}\ln n}$
• Good ($o(1)$-error) if $\hat{w} \gg 8\ln n$
• Bad ($O(\ln n)$ error) if $\hat{w}=2$ (invoke the other Chernoff bound)
• General rule: randomized rounding good if target logarithmic, not if constant
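The rule of thumb can be read directly off $\delta=\sqrt{(4\ln 2n^2)/\hat{w}}$; a small calculation (the value of $n$ is an arbitrary choice):

```python
import math

def overflow_fraction(w_hat, n):
    """delta solving e^{-delta^2 w/4} = 1/(2 n^2): the relative overflow
    guaranteed w.h.p. by the Chernoff bound in the notes."""
    return math.sqrt(4 * math.log(2 * n * n) / w_hat)

n = 1000
small = overflow_fraction(2, n)        # constant target: delta > 1, bound useless
large = overflow_fraction(10000, n)    # large target: tiny relative error
```

For $\hat{w}=2$ the "overflow" exceeds the target itself, while for $\hat{w}\gg \ln n$ the relative error is negligible, matching the good/bad cases above.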
Generalize
• Multicommodity flow generalization
• Same rounding works
https://megalytic.com/blog/translating-business-goals-to-analytics-goals
# Translating Business Goals to Analytics Goals
Analytics goals are the cornerstone of actionable measurement. But, as a business or an agency, how do you get there? How do you take your business goals and translate them into analytics goals that can be used to produce actionable insights that improve your business?
This post provides a framework you can use to translate your company’s business goals into analytics goals.
The first step to creating meaningful analytics goals is to really identify the core objectives of your business. Start by creating a list of your organization’s high-level action items or desired outcomes. Don’t worry too much about how you are going to measure them right now. Just make a list and get everything down on paper. It’s not a bad idea to talk to your boss or other people on your management team to get their input, as well. Tell them you are working to align what the company tracks in analytics to the goals of the business. Ask them to identify what they believe are the three most important objectives for the business over the next 6-12 months.
What are some of the business objectives you might end up with after this exercise?
1. Increase shareholder value.
2. Grow our online community.
3. Increase revenue from online sales.
4. Grow business from repeat customers.
5. Better leverage social media.
The next step is to break these down into measurable business goals.
Business objectives are often qualitative, like “Better leverage social media”. Measurable goals are quantifiable, like “Increase the Twitter retweets and Facebook likes for our blog content”. That’s quantifiable because we can measure the number of retweets each month and report if it is increasing or decreasing.
To create analytics goals, we must first break down qualitative business objectives into measurable goals. To do that, look over each business objective and, if necessary, rephrase it using terms that are quantifiable.
Once you have a list of measurable goals, review it and think about which of the items on the list can be measured using web analytics. For example, “Increase revenue from online sales” is a good candidate for translating into an analytics goal. Online sales can be measured with Google Analytics Ecommerce Tracking.
On the other hand, “Increase shareholder value” is not a great candidate. We cannot measure shareholder value with web analytics. In this situation, we can break down this goal by listing some drivers of “shareholder value” that can be measured using web analytics.
1. Increase revenue per online customer.
2. Reduce the cost of acquiring new online customers.
3. Increase growth of weekly online subscription signups.
Once you have broken down the objectives into measurable goals, you should produce a final list. In our case, the list looks like this:
1. Increase revenue per online customer.
2. Reduce the cost of acquiring new online customers.
3. Increase growth of weekly online subscription signups.
4. Grow the number of weekly users interacting with our blog content (online community).
5. Increase revenue from online sales.
6. Increase revenue (grow business) coming from repeat customers.
7. Increase traffic (better leverage) from social media.
In order to create analytics goals, we need to look at each of the business goals and identify what part of it is measurable using web analytics.
If you look at each goal, you will notice that there is usually something to measure, and an action specific to the measurement that defines the goal. To produce the analytics goal, we focus first on what is being measured.
For example, consider #3, “Increase growth of weekly online subscription signups.” Here, we want to measure “online subscription signups.” We will worry about the “increase growth” part of this business goal, when we look at actionable insights.
To create the analytics goal for “online subscription signups,” have a look at your website and go through the subscription signup process. Most likely, there will be a “Thank You” page that you can use for defining your goal.
#### Creating a goal from a Thank You page
For example, when you fill out the subscription page for the ClickZ newsletter and click submit, you see this:
At the top, I have circled the URL of this Thank You page. If you were creating the analytics goal for measuring subscriptions to the ClickZ newsletter, that would be the URL that you used to define your goal.
Once you know the URL, go into Google Analytics, under Admin > Goals and click “+ NEW GOAL”. The screen will look like this:
Enter a name for the goal – like “New Subscription” and click the option labeled “Destination.” That’s the type of goal used for a “Thank You” page. Then click “Next step”.
On this page, enter the page path of the URL of the Thank You page, as shown below.
As you can see on this page, there are some additional options you can take advantage of when defining a goal. You can assign a value; and you can create a funnel. We are not going to use these options in this discussion, but if you are interested in learning more about them, see Google’s documentation for Set up, edit, and share Goals.
#### What if there is no Thank You page?
More and more, the traditional Thank You page is being replaced by a window overlay, which does not have a dedicated URL. Below is an example of the confirmation that appears when you sign up for the Megalytic newsletter.
You will notice the URL, circled in red, is just the Megalytic home page. We cannot use this for our signup goal, because then every visit to the home page would register as a signup.
In this situation, you need to use either a virtual page view, or an event to track the sign-up. In either case, you need to add special tracking code to your web site.
Events are usually a better bet because they do not artificially inflate your pageview count in Google Analytics. However, if you want to create a goal funnel, then you will need to use a virtual pageview.
Here is the JavaScript code that we inserted into the Megalytic website to fire an event that tracks the signup.
ga('send', 'event', 'newsletter', 'subscription');
The category for this event is ‘newsletter’ and the action for this event is ‘subscription’. We use these when setting up the goal from this event in Google Analytics.
To do that, go to Admin > Goals, as shown above, only instead of selecting a “Destination” type goal, make it an “Event” type goal. You will then see a page like this, where you can enter the category and action that define the goal.
The Label and Value fields are optional, and we are not using them in the definition of this event, so we just left them blank.
### Goal Reports
Once you have a goal set up in Google Analytics, you can view the standard report for the goal by opening Conversions > Goals > Overview. When you first open the report, it looks like this.
The labeled items point out some important aspects of this report to notice.
1. Here, you can select the segment (group of sessions) to apply to the goal report. We are looking at all the sessions.
2. Here, you can select which Goal you want to look at in the report.
3. Here, you can select which metric to view. Usually, you want to look at either Completions (the total number of goal hits); or the Conversion Rate (the percentage of visitors who completed the goal).
4. Here, you can see the overall Conversion Rate during the time period.
At this point, we have created an analytics goal and seen how to access a report. Let’s take a step back and remember the original business goal for which we created this analytics goal.
“Increase growth of weekly online subscription signups”
The report shows us the total number of goals completions each week, but it does not really provide any insight into the business goal, which is how to increase them. To achieve that, we need to think about actionable insights.
### Actionable Insights
Earlier, we noted that for a measurable business goal, there is usually something to measure and a desired action. For the goal we are considering in this example, the desired action is to increase weekly online subscriptions.
What we want to discover, using analytics, is some action that the business can take to cause weekly online subscriptions to increase. We don’t want to just guess at it, but to back it up with data. That’s what we mean by an actionable insight.
A tried and true approach to discovering actionable insights with goals is to use segments. The idea is to look for traffic segments that produce a higher conversion rate than average, and focus on creating more traffic from that segment.
To select a segment, click on the “+ Add Segment” button at the top of the report, and select one of the options. We are going to look at the built-in segments for “Paid Traffic” and “Non-Paid” traffic. The modified report below shows the results of comparing these two segments.
You can see that Paid Traffic has a conversion rate of 3.10%, whereas Non-Paid has a conversion rate of 2.11%.
Now, we are starting to get an actionable insight. To increase weekly subscriptions, we can recommend increasing the amount of paid traffic.
But, we can do even better using segments. We can focus in on the various types of paid traffic and see which is best. To do that, you will need to create advanced segments to isolate your specific sources of paid traffic.
We’ve created advanced segments for Paid Google Search, Paid Facebook, and Paid Twitter. Applying these segments to the Goal Report give us the results shown below.
Here, we can see that the conversion rate for Paid Google Search is much higher than the average, and also higher than either Facebook or Twitter.
Now, that’s an actionable insight! We can formulate this insight as a business recommendation.
http://www.whxb.pku.edu.cn/CN/Y1989/V5/I05/565
### 一些烷氯代烃标准蒸发焓的测定
1. 中国科学院化学研究所
• 收稿日期:1988-04-26 修回日期:1988-11-21 发布日期:1989-10-15
• 通讯作者: 安绪武
### ENTHALPIES OF VAPORIZATION OF SOME MULTICHLORO-ALKANES
An Xuwu; Hu Hui
1. Institute of Chemistry; Academia Sinica; Beijing
• Received:1988-04-26 Revised:1988-11-21 Published:1989-10-15
• Contact: An Xuwu
Abstract:
By using an LKB 8721-3 vaporization calorimeter, the standard enthalpies of vaporization of some multichloro-alkanes have been determined as the following: 1,2-dichloroethane, 35.12±0.05; 1,1-dichloroethane, 30.57±0.05; 1,2-dichloropropane, 36.14±0.05; 1,3-dichloropropane, 40.61±0.10; 1,2,3-trichloropropane, 47.75±0.10; 1,4-dichlorobutane, 46.36±0.03; 1,2-dichlorobutane, 40.16±0.12 kJ mol^(-1), respectively. A linear equation, ΔH_v^0 = 21.33 + 0.1589 t_b, can be used to fit the experimental data of dichloroalkanes, where t_b is the normal boiling point of the compound. A comparison of the ΔH_v^0 of dichloroalkanes and their molecular structures shows that (1) when a Cl atom on the primary carbon isomerizes onto the secondary carbon, the ΔH_v^0 of the isomer will decrease; (2) when the number of carbon atoms linked between the two chlorine atoms increases, the ΔH_v^0 of the isomer will increase in the order: 2,2- < 1,1- < 1,2- < 1,3- < 1,4- ≈ 1,5-dichloroalkanes. This can be considered a result of the Cl…Cl interaction in the molecules, which decreases the dipole moment of the C—Cl bonds, or the formal charge on the Cl atoms, and thus the intermolecular electrostatic interaction energy. The decrease in the latter is estimated from the difference between the value calculated from the equation ΔH_v^0(a,b-Cl_2C_nH_2n) = ΔH_v^0(a-ClC_nH_(2n+1)) + ΔH_v^0(b-ClC_nH_(2n+1)) − ΔH_v^0(C_nH_(2n+2)) and the experimental result, as the following: for 1,1- ≈ 2,2-dichloroalkanes, ~7; 1,2-dichloroalkanes, ~4; 1,3-dichloroalkanes, ~1.5; 1,4- ≈ 1,5-dichloroalkanes, ~0 kJ mol^(-1), respectively.
http://www.zora.uzh.ch/49244/
# First observation of Bs -> D_{s2}^{*+} X mu nu decays
## Citations
20 citations in Web of Science®
17 citations in Scopus®
## Altmetrics
Item Type: Journal Article, refereed, original work
Faculty: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 28 March 2011
Deposited: 29 Aug 2011 08:27
Last Modified: 05 Apr 2016 14:59
Publisher: Elsevier
ISSN: 0370-2693
Additional Information: Publisher DOI. An embargo period may apply.
DOI: https://doi.org/10.1016/j.physletb.2011.02.039
arXiv: http://arxiv.org/abs/1102.0348
Preview
Content: Accepted Version
Filetype: PDF
Size: 2MB
View at publisher
Preview
Content: Published Version
Filetype: PDF
Size: 417kB
https://en.wikipedia.org/wiki/Rubik%27s_family_cubes_of_all_sizes
# Rubik's family cubes of all sizes
The original Rubik's cube was a mechanical 3×3×3 cube puzzle invented in 1974 by the Hungarian sculptor and professor of architecture Ernő Rubik. Extensions of the Rubik's cube have been around for a long time and come in both hardware and software forms. The major extensions have been the availability of cubes of larger size and of more complex cubes with marked centres. The properties of Rubik's family cubes of any size, together with some special attention to software cubes, are the main focus of this article. Many properties are mathematical in nature and are functions of the cube size variable.
## Definitions
In the main, the terminology used here is in agreement with what is in general use. Elsewhere, some terms are used with different meanings. To avoid misconceptions, the meaning of most terms in use in this article is defined below.
**Cube size.** The standard Rubik's cube is often referred to as a 3×3×3 cube. That cube will be referred to as a size 3 cube and in general an ${\displaystyle n\times n\times n}$ cube will be referred to as a size ${\displaystyle n}$ cube. Cubes that have similar rotational properties to the standard Rubik's size 3 cube and obey generalized rules for a size ${\displaystyle n}$ cube are considered to be members of the Rubik cube family. Cubes of size 2 and above that meet this condition are available.

Individual cube elements will be referred to as cubies (others sometimes refer to them as "cubelets"). There are three types of cubies: corner cubies (three coloured surfaces), edge cubies (two coloured surfaces) and centre cubies (one coloured surface). The absolute centre cubies for odd size cubes sit on the central axes of the six faces and their relative positions never change.

A cubicle is the compartment in which a cubie resides. For a permutation (defined below), cubicles are considered to occupy fixed positions in the space occupied by the cube object but their contents (cubies) may shift position. A facelet is a visible coloured surface of a cubie (corner cubies have three facelets, edge cubies have two and centre cubies have one).

Two cube styles are referred to herein: firstly a standard cube with unmarked centres and secondly a cube with marked centres.

A particular arrangement of the cubies will be referred to as a cube state. What looks the same is considered to be the same (unless specific mention to the contrary is made). Each state has equal probability of being produced after a genuine random scrambling sequence. A rotation of the whole cube does not change the state considered herein. In other texts the various states are often referred to as permutations or arrangements.

A cube layer is a one-cubie-thick slice of the cube perpendicular to its axis of rotation. Outer layers (faces) contain more cubies than inner layers.
For a cube of size ${\displaystyle n}$ there will be ${\displaystyle n}$ layers along any given axis.

**Cube face.** The meaning of a cube face depends on the context in which it is used. It usually means one of the six three-dimensional outer layers but can also refer to just the outside layer's surface which is perpendicular to its axis of rotation. The faces are usually designated as up (U), down (D), front (F), back (B), left (L) and right (R).

**Set and scrambled states.** The set (or solved) state of the cube is one for which a uniform colour appears on each of the six faces. For cubes with marked centres the set state is characterised by a unique arrangement of all centre cubies and the labelling of those cubies must reflect that. The scrambled state is the starting point for unscrambling the cube. It arises when a cube in the set or any other state is subject to a large number of randomly chosen layer rotations.

**Axes.** There are three mutually perpendicular axes of rotation for the cube. One set of axes, defined in terms of the D, U, B, F, L and R faces, can be considered to have a fixed orientation in space. Think of these axes as belonging to a cube-shaped container in which the cube object can be positioned in any of 24 orientations. One axis can be drawn through the centres of the D and U faces (the DU axis); the others are the BF and LR axes. Another set of axes can be defined for the cube object itself. These axes relate to the face colours, the most common being white, red, orange, yellow, green and blue; the axes are usually white-blue, red-orange and yellow-green. For odd size cubes these axes are always fixed relative to the internal frame of the cube object. For even size cubes these axes remain fixed relative to the internal frame of the cube object after initial selections. The origin for the axes is the centre of the cube object. The only way that cube state can be changed is by rotations of cube layers about their axes of rotation.
All changes of state involve rotation steps that can be considered as a sequence of single layer quarter turns.

**Orbit.** For a basic quarter turn of a cube layer, for cubes of all sizes, sets-of-four cubies move in separate four-cubicle trajectories. When all the possible trajectories for a given cubie type are considered for the whole cube, all the possible movement positions are referred to as being in a given orbit. The size 3 cube has two orbits: one in which the eight corner cubies are constrained to move and one in which the 12 edge cubies are constrained to move. Transfer of cubies between these orbits is impossible. For cubes of size 4 and above an edge cubie orbit likewise comprises 12 cubies, but the term complementary orbit is used to describe a pair of orbits between which edge cubies can move; a pair of complementary edge cubie orbits contains a total of 24 cubies. Cubes of size 4 and above include centre cubie orbits that contain 24 cubies. Transfer of cubies between one such orbit and another is not possible (applies to cubes of size 5 and above).

**Move.** A move is a quarter turn rotation of a layer, or a sequence of such quarter turns, that a person would apply as a single step. A clockwise quarter turn of an outer layer is usually expressed as U, D, F, B, L or R. In other respects the notation used varies among authors.

**Algorithm.** An algorithm defines a sequence of layer rotations to transform a given state to another (usually less scrambled) state. Usually an algorithm is expressed as a printable character sequence according to some move notation. An algorithm can be considered to be a "smart" move: all algorithms are moves but few moves are considered to be algorithms.

**Permutation.** A permutation of the cube as used herein means the act of permuting (i.e. rearranging) the positions of cubies.
A permutation is an all-inclusive term which includes a sequence of quarter turns of any length. Even the solving of the cube from a scrambled state represents a permutation. The term "permutation" is used extensively by mathematicians who use Group Theory to quantify the process involved in a rearrangement of cubies. The term "permutation" is also often used to mean the state of the cube that results after it is rearranged, but that meaning will not be used herein; in such cases the term "cube state" will be used. That allows the term "permutation" to be used when the permutation results in no change of state - an area of special interest for Rubik’s family cube permutations. A cube permutation can be represented by a number of swaps of two cubies. If that number is even the permutation has even parity, and if the number is odd the permutation has odd parity.
## Cube types
### Hardware cubes
Hardware (physical) cubes are based on the original size 3 cube invented by Erno Rubik in 1974. These cubes usually use coloured stickers on the facelets for cubie identification. The size 3 standard Rubik’s cube gained peak interest in the 1980s and was closely followed by the size 4 (Rubik's Revenge) cube. Other, usually more recently available, hardware forms of the cube come in size 2 (Pocket Cube), size 5 (Professor's Cube), size 6 (V-Cube 6) and size 7 (V-Cube 7). Lesser known hardware cubes of larger sizes have also been produced. Currently, the largest hardware cube made is size 33[1], and the largest mass-produced cube is size 17[2].
### Software cubes
In parallel with the hardware form of the cube there are many software forms available that obey the same rules as the hardware forms. Software cube emulators are not subject to the physical constraints that impose a size limit on the hardware forms. Hence the only really big cubes available are those in software form. Also, unlike the hardware forms, a range of cube sizes can easily be accommodated by a single program. The design characteristics of programs that allow users to unscramble cubes vary considerably; features such as the ability to save a partially unscrambled state are often available.
Software cubes were in use in the 1980s when monochrome monitors were in common use. The lack of colour meant a different way of face identification was required. A program that retained 1980s monochrome capability (using numerals 1 to 6 to identify facelets) for cubes in the size 2 to 11 range was produced in 1991 (together with colour capability in the size 2 to 15 range). More recently developed software cubes use coloured facelets as for hardware cubes.
The most common, but by no means universal, approach is to emulate the cube by providing a "three-dimensional" display of the cube that makes it look like a real hardware cube. A drawback of the "three-dimensional" display is that, without some extra enhancements, the state of parts of the cube for any given view is hidden.
Other interactive software approaches that do not emulate a three-dimensional cube are also used by some programmers. Generally, an aim of such approaches is to allow the state of all cubies to be in view all the time, but this has the disadvantage (for some viewers) that the display does not appear like a real-world cube. A conventional two-dimensional (unfolded) display with all cube elements appearing of equal size is one approach. Another form of display where all cube elements do not appear of equal size is also in use. The upper cube size limit for software cubes is limited by the monitor’s available pixels and what viewers find acceptable which, in turn, is a function of their visual acuity. For cubes of large size it can be advantageous to allow a portion of the cube to be scrolled out of view.
All emulators provide a means for the user to change the cube state step-by-step and unscramble the cube. Most emulators use mouse movements to control the rotation of the cube elements, others use keyboard commands, and some use a combination of both.
Software cubes present some major capabilities that are not possible with hardware cubes. Instant return to the set state is always available. If the program allows a partially unscrambled state to be saved then, by regularly updating the saved state, users need not despair if they do something that leaves their cube in a mess. They can return to their previously recorded state and continue from there. The bigger the cube the more helpful such a capability becomes.
Some freeware large cube (size greater than 10) implementations are available.
## Cube design variants
While there are multiple variants in use only two will be considered here:
• Standard cubes with unmarked centres.
• Cubes with marked centres.
### Standard cubes with unmarked centres
A 2-layer (size 2) cube has corner cubies only.
Cubes of size 2 and size 3 have single solutions, meaning that each cube element has only one correct location for a solved cube.
Centre cubies differ from the corner and edge cubies in that their orientation or position has multiple possibilities. For cubes of odd size, there will be a centre cubie that is centrally located on the cube face and that cubie has only one correct location for a solved cube. However, multiple locations of all other centre cubies apply for a solved cube. The centre cubies (other than the single central one for cubes of odd size) form sets-of-four on each face and sets-of-24 for the whole cube for the various orbits. These centre cubies have four possible final positions (their orientation changes with position but cannot be changed independently) that would satisfy the solved state.
### Cubes with marked centres
Typically, hardware cubes with marked centres use images or logos on the faces to designate what centre cubie(s) orientation is required for a solved cube. Such cubes are also referred to as "supercubes" and the application of markings of this type is generally restricted to just cubes of very small size.
Solving a cube with marked centres is considerably more complex than that for standard cubes. The use of jig-saw style image marking on cubes of large size would make a difficult task even more difficult. Two possibilities in current use on software cubes are the use of a numerical graphic in the "1" to "4" range and the use of a corner marking graphic.
There is a direct correspondence between numerical and corner marking. A top left corner quadrant marking is equivalent to a numerical marking 1, second quadrant to 2, third quadrant to 3, and fourth quadrant to 4. The following image illustrates these different forms of marking.
Because transfer of cubies between orbits is impossible, the same 1-2-3-4 markings can be used for each orbit. With the exception of the absolute centre cubies for cubes of odd size, there are 24 centre cubies (4 per face) in each orbit. If ${\displaystyle n}$ is cube size, there will be ${\displaystyle {\frac {\left(n-2\right)^{2}-a}{4}}}$ orbits where ${\displaystyle a}$ is zero if ${\displaystyle n}$ is even or ${\displaystyle a}$ is one if ${\displaystyle n}$ is odd.
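The orbit-count formula is easily evaluated in code. The following Python sketch (the function name is illustrative, not from any cube program) applies it directly:

```python
# Number of 24-cubie centre orbits for a size-n cube, using the formula
# ((n - 2)^2 - a) / 4, where a = 1 for odd n (the absolute centre cubies
# are excluded) and a = 0 for even n.
def centre_orbits(n: int) -> int:
    if n < 2:
        raise ValueError("cube size must be at least 2")
    a = n % 2
    return ((n - 2) ** 2 - a) // 4

# A size-4 cube has a single centre orbit; a size-5 cube has two.
```

The integer division is exact here: for even n, (n − 2)² is divisible by 4, and for odd n, (n − 2)² − 1 is a product of two consecutive even numbers.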
Numerical marking is typically applicable for cubes up to about size 32. Corner marking while less user-friendly can allow the marked centres range to be extended beyond the numerical marking limit.
Except for absolute centre marking for cubes of odd size, numerical marking would provide the best means of centre cubie marking for hardware cubes as their size range is limited. Rotation of the numbers would mean a small inconvenience relative to the non-rotated numbers that can be used for software cubes. The big advantage of numbers is that they reduce the complexity of solving the last cube face when markings are in use (e.g. if the set-of-four sequence is 1-3-4-2 (even parity, needs two swaps to become the required 1-2-3-4) then the algorithm requirement is clear). Algorithms have been defined in[3] and are, of course, equally applicable to hardware cubes.
## Rules for Rubik’s family cubes
A cube is solvable if the set state has existed some time in the past and if no tampering with the cube has occurred (e.g. by rearrangement of stickers on hardware cubes or by doing the equivalent on software cubes). Rules for the standard size 3 Rubik's cube[4][5] and for the complete Rubik's cube family[6] have been documented. Those rules limit what arrangements are possible and mean that, of the possible unrestricted cubie arrangements, the number that are unreachable far outnumber those that are reachable.
Cubes of all sizes have three mutually perpendicular axes about which one or more layers can be rotated. All moves for the cube can be considered as a sequence of quarter-turn rotations about these axes. The movement possibilities give rise to a set of rules (or laws) which in most cases can be expressed in analytical terms.
For a cube of size ${\displaystyle n}$:
| Quantity | Value |
| --- | --- |
| Number of corner cubies | ${\displaystyle 8}$ |
| Number of edge cubies | ${\displaystyle 12\left(n-2\right)}$ |
| Number of centre cubies | ${\displaystyle 6\left(n-2\right)^{2}}$ |
| Number of facelets | ${\displaystyle 6n^{2}}$ |
| Total number of cubies | ${\displaystyle 6\left(n-1\right)^{2}+2}$ |
| Increase in total number of cubies for unit increase in cube size from ${\displaystyle n}$ to ${\displaystyle \left(n+1\right)}$ | ${\displaystyle 12n-6}$ |
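These counts can be cross-checked with a short function; the Python sketch below (names are illustrative) also verifies that the component counts sum to the closed-form total:

```python
def cubie_counts(n: int) -> dict:
    """Cubie counts for a size-n Rubik's family cube (n >= 2)."""
    corners = 8
    edges = 12 * (n - 2)
    centres = 6 * (n - 2) ** 2
    total = corners + edges + centres
    # The closed form 6(n-1)^2 + 2 agrees with the component sum:
    # 8 + 12(n-2) + 6(n-2)^2 = 6n^2 - 12n + 8 = 6(n-1)^2 + 2.
    assert total == 6 * (n - 1) ** 2 + 2
    return {"corners": corners, "edges": edges, "centres": centres,
            "facelets": 6 * n * n, "total": total}
```

For the size 3 cube this gives 8 corners, 12 edges and 6 centres (26 cubies in all), matching the familiar hardware cube.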
Every cube move can be considered as a permutation. The relationship between the cube state after a move with that before a move can be expressed mathematically using Group Theory[7][8][9] to quantify permutations. Since every move can be considered as a sequence of quarter turn rotations, it is appropriate to examine what is involved in a quarter turn rotation. Except for the absolute centre cubie for odd size cubes, during the quarter turn the cubies move in separate four-cubicle trajectories (also referred to as a 4-cycle movement since four quarter turns will restore the cubies in the specified trajectory to their original positions). A quarter turn of a 4-cubie set can be represented by three swaps (e.g. swap 1-2, then swap 1-3, then swap 1-4, where swap 1-2 means the contents of cubicle 1 are swapped with the contents of cubicle 2).
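The equivalence between one 4-cycle and three swaps can be checked mechanically. A minimal Python sketch (function names are illustrative):

```python
# A quarter turn moves each 4-cubie set one step around its trajectory:
# the contents of cubicles [1, 2, 3, 4] become [4, 1, 2, 3].
def four_cycle(cubicles):
    return [cubicles[-1]] + cubicles[:-1]

# The same rearrangement produced by three swaps: 1-2, then 1-3, then 1-4
# (0-based indices 0-1, 0-2, 0-3 below).
def three_swaps(cubicles):
    c = list(cubicles)
    for i in (1, 2, 3):
        c[0], c[i] = c[i], c[0]
    return c

assert three_swaps(["a", "b", "c", "d"]) == four_cycle(["a", "b", "c", "d"])
```

Since three is odd, each 4-cycle contributes odd parity, which is the basis of the parity table that follows.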
The parity[10] of a permutation refers to whether that permutation is even or odd. An even permutation is one that can be represented by an even number of swaps while an odd permutation is one that can be represented by an odd number of swaps. An odd permutation followed by an odd permutation will represent an overall even permutation (adding two odd numbers always returns an even number). Since a quarter turn is made up of a number of 4-cycles each involving three swaps, if the number of 4-cycles is odd, overall parity of the quarter turn permutation will be odd and vice versa.
Quarter turn permutation parity for a size ${\displaystyle n}$ cube is given in the following table.
| Cube size (odd or even) | Layer type | Number of 4-cycle movements | Overall parity |
| --- | --- | --- | --- |
| odd | inner | ${\displaystyle n-1}$ | even |
| odd | outer | ${\displaystyle {\frac {\left(n-2\right)^{2}-1}{4}}+\left(n-1\right)}$ | even[a] |
| even | inner | ${\displaystyle n-1}$ | odd |
| even | outer | ${\displaystyle \left({\frac {n}{2}}-1\right)^{2}+\left(n-1\right)}$ | even if ${\displaystyle {\frac {n}{2}}}$ is even; odd if ${\displaystyle {\frac {n}{2}}}$ is odd[b] |
1. ^ ${\displaystyle \left(n-2\right)^{2}-1}$ equals ${\displaystyle \left(n-1\right)\left(n-3\right)}$, a product of two consecutive even numbers, which is always divisible by 8; hence the first term is even and, with ${\displaystyle \left(n-1\right)}$ also even, the total number of 4-cycles is even.
2. ^ ${\displaystyle \left({\frac {n}{2}}-1\right)^{2}}$ is odd if ${\displaystyle \left({\frac {n}{2}}-1\right)}$ is odd (i.e. if ${\displaystyle {\frac {n}{2}}}$ is even); added to the odd number ${\displaystyle (n-1)}$ this gives an even total. The reverse applies if ${\displaystyle {\frac {n}{2}}}$ is odd.
Summarising the above parity results we conclude:
• All permutations for odd size cubes have even overall parity.
• All individual quarter turns for even size cubes, where half the cube size is an odd number, have odd overall parity.
• For even size cubes where half the cube size is an even number, inner layer quarter turns have odd overall parity and outer layer quarter turns have even overall parity.
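These parity rules can be condensed into a small function. The Python sketch below (names are illustrative) derives the parity of a quarter turn from the 4-cycle counts given in this section, using the fact that each 4-cycle is three swaps:

```python
# Parity of a single quarter turn of a size-n cube layer.
# The turn's parity equals the parity of its number of 4-cycles,
# since each 4-cycle contributes three (an odd number of) swaps.
def quarter_turn_parity(n: int, layer: str) -> str:
    if layer == "inner":
        cycles = n - 1
    else:  # outer layer
        if n % 2 == 1:
            cycles = ((n - 2) ** 2 - 1) // 4 + (n - 1)
        else:
            cycles = (n // 2 - 1) ** 2 + (n - 1)
    return "even" if cycles % 2 == 0 else "odd"

assert quarter_turn_parity(3, "outer") == "even"  # odd sizes: always even
assert quarter_turn_parity(4, "inner") == "odd"
assert quarter_turn_parity(4, "outer") == "even"  # n/2 = 2 is even
assert quarter_turn_parity(6, "outer") == "odd"   # n/2 = 3 is odd
```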
The above analysis considered the parity for corner (where applicable), edge and centre cubies combined. It is possible to consider these in isolation and when that is done an even combined quarter turn parity will involve a number of odd parity elements.
Standard cubes (i.e. cubes with unmarked centres) of any size greater than 3 behave exactly like the size 3 cube if only outer layer rotations are permitted. Parity rules dictate that, for cubes of odd size, the swapping of the two cubies in a single edge set requires a change in the position of centre cubies. It can be shown[6] that, for the size 4 cube the swapping and inverting of the two complementary cubies in a single edge set can be achieved without any change in the position of any other cubies. It can also be shown that, for cubes of even size 6 and above, the swapping of the two cubies in a single edge set requires a change in the position of centre cubies.
A permutation as used here takes account of the change in the positions of cubies, not any change in their orientations. For the 24 edge cubie sets (made up of 12 complementary pairs) there is no restriction on position. Orientation is set by position and cannot be changed independently of position.
Corner cubies behave the same for cubes of all sizes. They have three possible orientations made up from a combination of ${\displaystyle {\frac {1}{3}}}$ twists where a full twist (about an axis drawn from the cube corner to the cubie's internal corner) returns the corner cubie to its original orientation. If we designate a unit clockwise twist by ${\displaystyle {\frac {1}{3}}}$ and a unit counter-clockwise twist by ${\displaystyle -{\frac {1}{3}}}$, then the twist possibilities for a corner cubie relative to any given initial state (the set state for example) are 0, ${\displaystyle {\frac {1}{3}}}$ and ${\displaystyle -{\frac {1}{3}}}$. The sum of the twist increments across all corner cubies must always be an integer (0, 1 or 2).
When inner layer rotations are included for cubes of size greater than 3, some of the edge cubie movement limitations mentioned above no longer apply. These are expanded on in the Final layer problems section.
Cubie position and orientation are of special concern when unscrambling the final layer. Edge cubies must always end up in exactly the same positions they occupied in the initial set state before scrambling. If any edge cubie in a given edge set in the final layer has a wrong orientation (only applicable to cubes of size greater than 3), it must be in a wrong position and will need to be swapped with a complementary edge cubie also having wrong orientation. With everything else in place, corner cubies can be in the right position but two or more can have incorrect orientation. For standard cubes of size greater than 3, there is a negligible possibility that centre cubies (other than absolute centre cubies for cubes of odd size) will occupy the same positions they had in the initial set state (assuming the centre cubies are unmarked).
Both even and odd size cubes with either marked or unmarked centres obey the rule: "Any permutation that results in only a rearrangement of centre cubies in 24 cubie orbits must have even parity".
If permutations of facelets rather than cubies are considered then both position and orientation of cubies will be taken into account. For software cubes, the states (six colour possibilities) of the ${\displaystyle 6n^{2}}$ facelets (in a ${\displaystyle 6\times n\times n}$ array for instance) is what would allow complete information on cube state to be saved for later use.
A cube of any size that is subject to repeats of the same permutation will eventually return to the state (e.g. the set state) it occupied before the first application of the permutation.[7][8] The number of times that a permutation has to be applied to first return the cube to its initial state is referred to as the order or the cycle length of the permutation and is applicable to cubes of all sizes. An overall permutation that results in no change of state is referred to as an Identity Permutation. A program that allows permutation cycle length of any size cube to be determined is available[11] and sample cycle length results have been documented.[6] For a given permutation, cycle length can vary according to:
• Cube size.
• Initial cube state (for standard cubes with unmarked centres).
• Cube style (whether standard or marked centres are in use).
• Spatial orientation (checking all 24 of these rather than just one may give a different result).
The parity of an Identity Permutation is always even. That result for cubes of odd size is obviously true since every quarter turn has even parity. The result is less obvious for cubes of even size. For cubes of even size if the scrambling permutation relative to the previous set state is odd, then any permutation to solve the cube must also have odd parity and vice versa.
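The order (cycle length) of a repeated permutation is the least common multiple of the lengths of its disjoint cycles. The following generic Python sketch (not the cube-specific program referenced in [11]) computes it for a permutation given as a position mapping:

```python
from math import lcm

def permutation_order(perm: list[int]) -> int:
    """Order of a permutation: the number of applications needed to
    return to the starting state. perm[i] is the position the contents
    of position i move to."""
    seen = [False] * len(perm)
    cycle_lengths = []
    for start in range(len(perm)):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            cycle_lengths.append(length)
    return lcm(*cycle_lengths)

# One 4-cycle and one 2-cycle: four repeats restore the original state.
assert permutation_order([1, 2, 3, 0, 5, 4]) == 4
```

For a real cube permutation the mapping would cover all cubicle positions (and, strictly, facelets if orientation is to be tracked), but the lcm principle is the same.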
The generalized number of possible states for a size ${\displaystyle n}$ cube is considered in the Reachable states for cubes of all sizes section.
## Cube solving
### Solving by people
Cube solving involves starting with a scrambled cube and applying step-by-step layer rotations to eventually end up with a solved cube. For cubes with unmarked centres that means all faces would need to appear in uniform colour. For cubes with marked centres a unique arrangement of all centre cubies in addition to the uniform colour requirement would need to apply. Since the starting point is always different, there can never be a unique set of rotations that can be applied to solve a cube. Usually, people work through the solution with the possible use of algorithms, mainly in the latter stage of the unscrambling. In theory, it is possible for a person to write a computer program that "thinks" like a human and solves the cube without human intervention (refer to the Solving by computer program section).
The aim of most software cube emulators is to provide a means for the user to interact with the program to solve (unscramble) the cube in a similar manner to the way they would unscramble a hardware cube.
Efficient rotation sequences (algorithms) can be developed using Group Theory permutation mathematics. However, there are many references to the relevant rotation sequences required to solve cubes of small size (refer to some for size 3, 4, and 5 cubes[12][13][14][15]) and there are multiple approaches to the steps that can be used. There is no such thing as a wrong way to solve a cube. The steps involved in solving any cube of size greater than 4 are fairly straightforward extensions of those required to solve size 3 and size 4 cubes. However, there is limited availability of generalized instructions that can be applied for solving cubes of any size (particularly the large ones). Generalized guidance on one way of solving standard cubes[16] and cubes with marked centres[3] of all sizes is available.
Anybody who can solve a size 4 cube should be able to solve cubes of larger size provided they accept an increased time-to-solve penalty. Software design features, unavailable in hardware cubes, can simplify the cube solving process. For a given set of cube design features the complexity (difficulty) of solving a Rubik’s family cube increases if the number of reachable states increases. Three main properties affect that number:
1. Cube size: The number of cubies to be placed is a quadratic (second order polynomial) function of cube size and therefore has a major influence on cube solving complexity.
2. Odd or even size: Even size cubes have an additional effect to just cube size that adds complexity relative to odd size cubes. This effect is relatively small and is independent of cube size (the added contribution when cube size changes from ${\displaystyle n}$ to ${\displaystyle \left(n+1\right)}$ for ${\displaystyle n}$ odd is constant). This effect will be expanded upon when the number of reachable states is considered later.
3. Unmarked or marked centre cubies: Centre cubie marking adds complexity to cube solving.
Additional algorithms to assist users to solve the size 3[17] and to solve any size[3] cube with marked centres have been defined.
### Large cube issues
Large cube emulators claimed to cater for cubes up to and beyond size 100 are available. Irrespective of what upper size limit is claimed, available pixels (that vary according to the monitor in use) and the user's visual acuity are going to impose practical limits on the maximum cube size that a person can handle.
As indicated in the Rules for Rubik’s family cubes section, total number of cubies is ${\displaystyle \left\{6\left(n-1\right)^{2}+2\right\}}$ and the number of centre cubies is ${\displaystyle 6\left(n-2\right)^{2}}$, where ${\displaystyle n}$ is cube size. For large size cubes the number of centre cubies becomes very dominant as indicated below.
| Cube size | 4 | 8 | 16 | 32 | 64 |
| --- | --- | --- | --- | --- | --- |
| Total cubies | 56 | 296 | 1352 | 5768 | 23816 |
| Centre cubie proportion of total cubies (%) | 42.8 | 73.0 | 87.0 | 93.6 | 96.8 |
It follows that placement of centre cubies will become increasingly more significant than the placement of other cubies as cube size is increased. The time to solve a cube will rise dramatically with cube size. For example, there are about 24 times as many cubies to place in a size 16 cube than there are in a size 4 cube. If the average time to place a cubie were the same in both cases, that factor of 24 in time would also apply. The 24 factor is likely to be an under-estimation because the presence of a large number of cubies makes it more difficult (and time-consuming) to identify what belongs where.
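The proportions quoted above follow directly from the counting formulas. A small Python sketch (the function name is illustrative):

```python
def centre_proportion(n: int) -> float:
    """Percentage of a size-n cube's cubies that are centre cubies."""
    centres = 6 * (n - 2) ** 2
    total = 6 * (n - 1) ** 2 + 2
    return 100 * centres / total

# e.g. centre_proportion(4) is 24/56, about 42.9%; the proportion
# approaches 100% as cube size grows.
```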
Providing a software program that allows the state of cubes of big size to be changed is not much more difficult than doing the same thing for cubes of small size. However, solving big cubes is a much more demanding and time-consuming task than doing the same for small cubes. Therefore, it is likely that most really big software cubes that are available have never been solved.
Identifying the exact locations to look for cubies (mainly the quadruple centre cubie sets) is a major issue for big cubes. Use of a secondary marker grid[11] can ease identification. For example, a marker grid to form 4×4 segments for a size 16 cube (16 such segments per face) could be used.
A common set of six cubie colours adopted for both hardware cubes and software cubes comprises white, red, orange, yellow, green and blue. This colour set may be non-optimum for software cubes of large size where the number of pixels per cubie is small. For instance the differentiation between white and yellow may be problematic. Reducing the number of colours in the red to blue range from five to four and adding violet (the colour at the extreme of the visible spectrum) produces a colour set that may be considered more suitable for cubes of large size. Some software cube implementations allow users to change the default colour set if desired. This is a useful addition for users whose colour perception is at variance with the norm.
### Solving by computer program
Cube solving by computer program[18] (as distinct from the normal way people solve a cube) for small size (e.g. size 3) cubes has been developed, and it is equally easy to solve large size cubes by computer.
### Final layer problems
A "final layer problem" is defined here to mean a need for a rearrangement of final layer edge cubies that cannot be achieved using standard size 3 cube moves. These are often referred to as parity problems or errors, but such terminology may be misleading. If moves were limited to those available to the size 3 cube, such states would be unreachable (break parity rules). There are numerous variations in the way the final layer problems are presented and the algorithms to resolve them, but the correction requirement will be similar to that described below. The problems considered here apply equally to standard cubes and to those with marked centres but in the latter case additional final layer issues arise for aligning centre cubies. The problems for larger cubes can be considered as straightforward extensions of those that apply to the size 4 cube. Basically, two types of problem can arise:
• There is a need to flip a complementary pair or a complete set of edge cubies in the final edge set. This condition will be referred to as an OLL (orientation of last layer) requirement.
• There is a need to swap the positions of two edge cubie sets in the final layer. This condition will be referred to as a PLL (permutation of last layer) requirement.
OLL and PLL as used here can be considered to be sub-sets of the usual definitions of these terms. There are many references to moves that can be used to resolve these problems. Fewer references[6][19] demonstrate how these moves satisfy parity rules. From a parity perspective, there is a need to consider the rearrangement of centre cubies which is not readily observable in cubes with unmarked centres. Only OLL parity compliance will be illustrated here.
A typical OLL correction for a size 9 cube is shown. The cubies shown in colour are the only ones in the cube that change positions.
(Images: OLL before correction and OLL after correction for a size 9 cube.)
For the OLL correction there are ${\displaystyle n-2}$ centre cubie swaps and overall there are ${\displaystyle \left(n-1\right)}$ swaps when the edge pair is included. For odd size cubes ${\displaystyle \left(n-1\right)}$ is always even (and conforms with the universal even parity requirement for odd size cubes). For even size cubes ${\displaystyle \left(n-1\right)}$ is always odd which means in this case a parity reversal always occurs, an allowable parity condition for even size cubes.
For the complete edge set flip (a requirement that can arise only for cubes of even size), the number of swaps will be ${\displaystyle \left(n-1\right)\left({\frac {n}{2}}-1\right)}$. The overall number of swaps will be even if ${\displaystyle \left({\frac {n}{2}}-1\right)}$ is even (i.e. ${\displaystyle {\frac {n}{2}}}$ is odd). The overall number of swaps will be odd if ${\displaystyle {\frac {n}{2}}}$ is even. Hence overall parity will be even if ${\displaystyle {\frac {n}{2}}}$ is odd and odd if ${\displaystyle {\frac {n}{2}}}$ is even.
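The swap counts for both corrections, and the parity conclusions drawn from them, can be sketched in Python (function names are illustrative):

```python
# Swaps needed to flip one complementary edge pair (OLL correction):
# (n - 2) centre swaps plus 1 edge swap.
def oll_pair_flip_swaps(n: int) -> int:
    return n - 1

# Swaps needed to flip a complete edge set (even-size cubes only).
def full_edge_set_flip_swaps(n: int) -> int:
    return (n - 1) * (n // 2 - 1)

assert oll_pair_flip_swaps(9) % 2 == 0       # odd sizes: even parity
assert oll_pair_flip_swaps(4) % 2 == 1       # even sizes: parity reversal
assert full_edge_set_flip_swaps(6) % 2 == 0  # n/2 odd  -> even parity
assert full_edge_set_flip_swaps(8) % 2 == 1  # n/2 even -> odd parity
```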
The parity of a given algorithm can, of course, also be deduced from its content using the rules detailed in the Rules for Rubik’s family cubes section.
For standard cubes the rearrangement of centre cubies to resolve the OLL and PLL problems is unimportant. For cubes with marked centre cubies the effect of this rearrangement of these cubies is a serious drawback. For cubes with marked centres it is not possible (except for the size 4 cube) to align all final layer centre cubies until all edge cubies have been placed in their final positions.
### Algorithms
Instructions for people on how to solve Rubik's type cubes are normally conveyed either in purely graphical form or as sequences defined using a printable character notation. A character sequence that can be translated and applied to perform a sequence of layer rotations to transform a given state to another (usually less scrambled) state is often referred to as an algorithm. Algorithms are most commonly used when unscrambling the latter portion of the cube but can be applied more extensively if desired. Algorithms can be written down as instructions that can be memorized or looked up in a document.

The printable characters used (e.g. to indicate an anticlockwise quarter turn, a single layer quarter turn or a multiple layer quarter turn) in algorithm instructions vary among authors, as do their positions in the instructions. When people interpret the instructions, the form of presentation is insignificant. The form of presentation has significance only when computer keyboard entry is used to change the state of software cubes and automatic updating of the screen image occurs whenever a valid instruction is received. For example, if F′ is used to represent an anticlockwise quarter turn of the front face then, as the user types in F, a clockwise quarter turn will occur and a correction will be needed when the user types the ′ character. The end result will still be correct but use of −F rather than F′ would eliminate the superfluous rotation. Any text enhancements, such as superscripts or subscripts, must be avoided in the method of presenting cube rotation sequences when users communicate with software cubes via keyboard commands.

When computer keyboard entry of instructions is used, macros (which map a short input text string to a longer string) can be used[11][16][20] as algorithm shortcuts.
### Time to solve cubes
Speedcubing (or speedsolving) is the practice of solving a cube in the Rubik's cube family in the shortest time possible (which usually implies reducing the number of quarter turn moves required). It is most commonly applied to cubes of small size and there are numerous solving methods that have been documented. An international team of researchers using computer power from Google has found every way the standard size 3 Rubik's cube can be solved and has shown it is possible to complete the solution in 20 moves or less[21] for any initial scrambled state (where a move here is defined as a quarter or a half turn of a face). Generally, speed solving methods apply more to specialist cubists than typical cubists and are more complex than simple layer-by-layer type methods used by most other people.
## Reachable and unreachable states for cubes of all sizes
If a cube has at some previous time occupied the set state, then any state that can arise after legal moves is considered to be a reachable state. For small size cubes (size 2, 3 or 4) an unreachable state is one that cannot be reached by legal moves. For larger cubes there needs to be some further qualification on what is meant by an unreachable state. In this article notional movement between 24-cubie orbits for edge and for centre cubies is excluded.
### Relationship between reachable and unreachable states
If, for a cube of any size, m represents the number of reachable states, u represents the number of unreachable states and t equals their sum:
${\displaystyle t=u+m}$
${\displaystyle t=km}$ where ${\displaystyle k}$ is a positive integer
${\displaystyle u=\left(k-1\right)m}$
Both m and k are functions of cube size ${\displaystyle n}$. Values for m and k will be considered in the following sections. In other texts "reachable states" are often referred to as "permutations".
### Reachable states for cubes of all sizes
The number of reachable states is based on:
• Standard permutations and combinations mathematics.[22]
• Reduction factors that must be applied to the above to reflect movement restrictions specific to Rubik’s family cubes.
The number of different states that are reachable for cubes of any size can be simply related to the numbers that are applicable to the size 3 and size 4 cubes. Hofstadter in his 1981 paper[23] provided a full derivation of the number of states for the standard size 3 Rubik's cube. More recent information sources that adequately justify the figures for the size 3[4][5][24] and size 4[25] cubes are also available. References that indicate the number of possible states for a size ${\displaystyle n}$ cube are available.[25][26][27] The brief material provided below presents the results in the form used in one of these references[25] which covers the topic in far more detail.
For cubes with unmarked centre cubies the following positive integer constants (represented by P, Q, R and S) apply. These constants are in agreement with figures frequently quoted for the size 3 and size 4 cubes.
| Description | Constant | Formula | Value |
|---|---|---|---|
| Corner cubie possibilities for even size cubes | P | (7!)·3^6 | 3.67416000000000 × 10^6 |
| Central edge cubie possibilities for odd size cubes, multiplied by 24 | Q | 24·(12!)·2^10 | 1.17719433216000 × 10^13 |
| Edge cubie possibilities for each dual set (12 pairs) | R | 24! | 6.20448401733239 × 10^23 |
| Centre cubie possibilities for each quadruple set (6 groups of 4) | S | (24!)/(4!)^6 | 3.24667053711000 × 10^15 |

Note: ! is the factorial symbol (N! means the product 1 × 2 × ... × N).
The value of S may warrant a word of explanation as it is commonly inferred that the number of possible states for centre cubies with identifying markings for a size 4 cube is 24!. Use of that value is guaranteed to yield the wrong answer if cubes with marked centres are under consideration. The first 20 cubies can be arbitrarily placed giving rise to factor 24!/4!. However, for each possible arrangement of edge cubies, only half the 4! hypothetical arrangements for the last four are reachable.[3][25] Hence the correct value for the cube with marked centres is 24!/2. If the markings are removed, then a "permutation with some objects identical"[22] applies. For the standard cube the marked cube value needs to be divided by (4!)6/2 (the 2 divisor must also be applied here). That gives an overall S value for the size 4 cube of 24!/(4!)6. All states for 24-centre-cubie orbits for standard Rubik’s family cubes are reachable (if required, even parity is always achievable by swapping the positions of a couple of centre cubies of the same colour).
${\displaystyle m={\textrm {P}}{\textrm {Q}}^{a}{\textrm {R}}^{b}{\textrm {S}}^{c}}$
where ${\displaystyle a}$, ${\displaystyle b}$ and ${\displaystyle c}$ are positive integer variables (functions of cube size ${\displaystyle n}$) as given below.
${\displaystyle a=n{\textrm {mod}}2}$ (i.e. 0 if ${\displaystyle n}$ is even or 1 if ${\displaystyle n}$ is odd)
${\displaystyle b={\frac {n-2-a}{2}}}$
${\displaystyle c={\frac {\left(n-2\right)^{2}-a}{4}}}$
For even size cubes ${\displaystyle {\textrm {Q}}^{a}=1}$ (see exponentiation).
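The formulas above translate directly into exact integer arithmetic. The following Python sketch (the function and variable names are my own, not from the source) computes m for a cube of any size n from the constants P, Q, R and S:

```python
from math import factorial

def reachable_states(n):
    """Reachable states m for a size-n cube with unmarked centre cubies."""
    if n < 2:
        raise ValueError("cube size must be at least 2")
    P = factorial(7) * 3**6                 # corner cubie possibilities
    Q = 24 * factorial(12) * 2**10          # central edge cubies (odd sizes only)
    R = factorial(24)                       # one dual set of 24 edge cubies
    S = factorial(24) // factorial(4)**6    # one quadruple set of 24 centre cubies
    a = n % 2                               # 1 for odd n, 0 for even n
    b = (n - 2 - a) // 2
    c = ((n - 2)**2 - a) // 4
    return P * Q**a * R**b * S**c

print(reachable_states(2))   # 3674160
print(reachable_states(3))   # 43252003274489856000
```

For size 2 the result reduces to P alone, and for size 3 to P·Q, reproducing the familiar counts for the pocket cube and the standard Rubik's cube.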
For further simplification, parameter ${\displaystyle m}$ may also be expressed as ${\displaystyle m=10^{y}}$ where ${\displaystyle y={\textrm {log}}_{10}m}$. Parameter ${\displaystyle y}$ can be related to ${\displaystyle n}$ by a continuous quadratic function subject to the restriction that ${\displaystyle n}$ must be an integer greater than 1 when referring to possible states for cubes:
${\displaystyle y={\textrm {A}}n^{2}+{\textrm {B}}n+{\textrm {C}}}$
where A, B and C are constants. Constants A and B are the same for ${\displaystyle n}$ even and for ${\displaystyle n}$ odd but the value of C is different.
| Parameter | Value |
|---|---|
| A | 3.87785955497335 |
| B | -3.61508538481188 |
| C_even | -1.71610938550614 |
| C_odd | -4.41947361312695 |
| C_even - C_odd | 2.70336422762081 |
In graphical terms, when y is plotted,[25] two parabolae of exactly the same shape are involved, with "even" cube values lying on one and "odd" cube values lying on the other. The difference is imperceptible except when plotted over a small range of ${\displaystyle n}$, as indicated in the graphs reproduced below. Only Rubik’s family values for ${\displaystyle n}$ equal to 2 and 3 are included in the second graph.
Use of the log function y provides the only practical means of plotting numbers that vary over such a huge range as that for the Rubik's cube family. The difference between the curves translates as a factor of 505.08471690483 (equal to ${\displaystyle {\frac {{\textrm {R}}^{0.5}{\textrm {S}}^{0.25}}{\textrm {Q}}}}$). This is the factor that defines the effect of even size, relative to odd size, on the number of reachable states for cubes with unmarked centres.
Hence, with the logarithmic presentation the number of cube states can be expressed using just four[28] numbers (A, B and the two C values). Furthermore, the number of cube states form a restricted set of values for a more general continuous quadratic (parabolic) function for which ${\displaystyle n}$ can have non-integer and negative values. Calculating the value of m from the corresponding value of y is a straightforward process.
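The fit can be sanity-checked against the exact counts. The sketch below (variable names are mine; the constants are copied from the table above) compares the quadratic value of y with the value computed directly from the logarithms of P, Q, R and S:

```python
import math
from math import factorial

# log10 of the four constants defined earlier in the article
logP = math.log10(factorial(7) * 3**6)
logQ = math.log10(24 * factorial(12) * 2**10)
logR = math.log10(factorial(24))
logS = math.log10(factorial(24) // factorial(4)**6)

# fitted constants from the table above
A, B = 3.87785955497335, -3.61508538481188
C_EVEN, C_ODD = -1.71610938550614, -4.41947361312695

for n in range(2, 26):
    a = n % 2
    b = (n - 2 - a) // 2
    c = ((n - 2)**2 - a) // 4
    y_exact = logP + a*logQ + b*logR + c*logS   # log10 of the exact count m
    y_fit = A*n*n + B*n + (C_ODD if a else C_EVEN)
    assert abs(y_fit - y_exact) < 1e-4, n       # agreement limited only by rounding
```

Working in log space avoids the astronomically large integers entirely, which is exactly why the quadratic presentation is practical.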
Centre cubies are different from corner or edge cubies in that, unless they have indicative markings, there are multiple possibilities for their final orientation and/or locations. The number of different ways centre cubies can be arranged to yield a solved cube with unmarked centre cubies may be of interest. To calculate that, the impact of centre cubie marking needs to be assessed. Define ${\displaystyle m_{\textrm {M}}}$, ${\displaystyle {\textrm {Q}}_{\textrm {M}}}$ and ${\displaystyle {\textrm {S}}_{\textrm {M}}}$ to be the changed parameters for marked centre cubies (P and R remain unchanged).
${\displaystyle {\textrm {Q}}_{\textrm {M}}={\textrm {TQ}}}$ where ${\displaystyle {\textrm {T}}={\frac {4^{6}}{2}}=2048}$
${\displaystyle {\textrm {S}}_{\textrm {M}}={\textrm {VS}}}$ where ${\displaystyle {\textrm {V}}={\frac {(4!)^{6}}{2}}=95551488}$
${\displaystyle m_{\textrm {M}}={\textrm {P}}\left({\textrm {Q}}_{\textrm {M}}\right)^{a}{\textrm {R}}^{b}\left({\textrm {S}}_{\textrm {M}}\right)^{c}}$
${\displaystyle m_{\textrm {M}}=m_{\textrm {D}}m}$
${\displaystyle m_{\textrm {D}}={\textrm {T}}^{a}{\textrm {V}}^{c}}$
Parameter ${\displaystyle m_{\textrm {M}}}$ defines the number of reachable states for cubes with marked centres. Factor ${\displaystyle m_{\textrm {D}}}$ gives the number of different arrangements of unmarked centre cubies that will provide a solved size ${\displaystyle n}$ cube. It is also the factor by which the number of different states for a standard cube needs to be multiplied when marked centres apply.
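The multiplier depends only on the exponents a and c. A minimal sketch (the function name is mine, not from the source):

```python
from math import factorial

def marked_centre_multiplier(n):
    """m_D: number of indistinguishable centre-cubie arrangements of a solved
    standard size-n cube; equivalently the factor m_M / m for marked centres."""
    a = n % 2
    c = ((n - 2)**2 - a) // 4
    T = 4**6 // 2               # 2048: absolute centre orientations, odd sizes
    V = factorial(4)**6 // 2    # 95551488: per 24-cubie centre set
    return T**a * V**c

print(marked_centre_multiplier(3))   # 2048, the classic "picture cube" factor
print(marked_centre_multiplier(4))   # 95551488
```

For size 2 the multiplier is 1, since that cube has no centre cubies at all.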
### Unreachable states for cubes of all sizes
The number of unreachable states far exceeds the number of reachable states. There are many references to the number of unreachable states for the size 3 cube but very few for larger size cubes.
The unreachable arrangements for corner and edge cubies are the same for cubes with or without marked centres.
If a corner cubie for cubes of any size is considered, then a 1/3 twist clockwise leaving everything else unchanged will represent an unreachable state, and similarly for a 1/3 twist counter-clockwise. Hence only 1/3 of the twist possibilities are reachable.
For the central edge cubie for odd size cubes the behaviour is the same as that for the size 3 cube. Only half the conceivable positions are reachable and only half the conceivable orientations are reachable. Hence only 1/4 of the central edge cubie movement possibilities are reachable.
Edge cubies that comprise 12 complementary pairs (24 cubies total) behave as if the complementary cubies did not look the same. Any given edge cubie can move to any position in the 24-cubie orbit but for any given position there is one reachable and one unreachable orientation for that cubie. The reverse applies for the complementary edge cubie. For a given cubie (1-2) the reachable and unreachable orientations for a given face for a given orbit for a size 8 cube are illustrated below. One of the 24 reachable possibilities for a given edge cubie matches that of the set cube.
The number of unreachable states for a 24-edge-cubie set is the same as the number of reachable states (24! in each case).
In the case of the marked centre cubies only half the conceivable arrangements for each set of 24 cubies for any given orbit are reachable.[3] The same parity rules that apply for marked centre cubies also apply for the unmarked centre cubies. A quarter turn of a set-of-four centre cubies cannot be achieved without changing the arrangement elsewhere to meet the parity requirement. Because there are 95551488 ways of arranging the individual centre cubies so that the resulting arrangement appears exactly the same, parity rules can be met without any observable indication of how the parity compliance is achieved. Hence, for the normal case (24 cubies comprising four of each of six colours) there is no restriction on the achievable states for the centre cubies.
The following table uses the values noted above to represent the k component factors for the size ${\displaystyle n}$ cube. Exponents a, b and c are functions of cube size ${\displaystyle n}$ as defined above.
Reduction components for factor k (for standard cube with unmarked centres) and for ${\displaystyle k_{\textrm {M}}}$ (for cube with marked centres):

| Component | Unmarked centres' cube type | Marked centres' cube type |
|---|---|---|
| Corner cubie factor | 3 | 3 |
| Central edge cubie factor (such cubies exist only for cubes of odd size) | ${\displaystyle 2^{2a}}$ | ${\displaystyle 2^{2a}}$ |
| Complementary edge cubie factor for all 12-pair sets combined | ${\displaystyle 2^{b}}$ | ${\displaystyle 2^{b}}$ |
| Absolute centre cubie factor (such cubies exist only for cubes of odd size) | 1 | ${\displaystyle 2^{a}}$ |
| Centre cubie factor for all 24-cubie sets combined | 1 | ${\displaystyle 2^{c}}$ |
Taking the product of these factors:
For the standard size ${\displaystyle n}$ cube: ${\displaystyle k=\left(3\right)2^{2a+b}}$

For the marked centres' size ${\displaystyle n}$ cube: ${\displaystyle k_{\textrm {M}}=\left(3\right)2^{3a+b+c}}$
Some values for cubes of small size are given below.
| Cube size | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| Value of ${\displaystyle k}$ | 3 | 12 | 6 | 24 | 12 | 48 | 24 |
| Value of ${\displaystyle k_{\textrm {M}}}$ | 3 | 24 | 12 | 192 | 192 | 6144 | 12288 |
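Both expressions reduce to a few integer operations, so the tabulated values are easy to regenerate. A short sketch (names are mine):

```python
def k_factors(n):
    """Return (k, k_M) for a size-n cube: the hypothetical state total t is
    k times the reachable count m (unmarked centres) or k_M times m_M (marked)."""
    a = n % 2
    b = (n - 2 - a) // 2
    c = ((n - 2)**2 - a) // 4
    k = 3 * 2**(2*a + b)          # unmarked centres
    k_m = 3 * 2**(3*a + b + c)    # marked centres
    return k, k_m

for n in range(2, 9):
    print(n, *k_factors(n))
```

Running this reproduces the table row by row, including the rapid growth of k_M driven by the centre-cubie exponent c.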
The number of unreachable states is given by ${\displaystyle (k-1)m}$ for standard cubes and by ${\displaystyle (k_{\textrm {M}}-1)m_{\textrm {M}}}$ for cubes with marked centre cubies.
## Notes and references
1. ^ https://www.puzzlcrate.com/blog/2017/12/12/how-gregs-33x33-was-assembled-video
2. ^ https://thecubicle.us/yuxin-huanglong-17x17-p-10097.html
3. Ken Fraser, "Implementing and Solving Rubik's Family Cubes with Marked Centres". Retrieved 2017-02-24.
4. ^ a b Ryan Heise, "Rubik's Cube Theory - Laws of the cube". Retrieved 2017-02-24.
5. ^ a b Arfur Dogfrey, "The Dog School of Mathematics: 12. Rubik's Magic Cube". Retrieved 2017-02-24.
6. ^ a b c d Ken Fraser, "Rules for Rubik's Family Cubes of All Sizes". Retrieved 2017-02-24.
7. ^ a b Tom Davis, "Group Theory via Rubik’s Cube". Retrieved 2017-02-24.
8. ^ a b Tom Davis, "The Mathematics of the Rubik's Cube". Retrieved 2017-02-24.
9. ^ Arfur Dogfrey, "The Dog School of Mathematics: Introduction to Group Theory". Retrieved 2017-02-24.
10. ^ Ryan Heise, "Rubik's Cube Theory - Parity". Retrieved 2017-02-24.
11. ^ a b c Ken Fraser, "Unravelling Cubes of Size 2x2x2 and Above". Retrieved 2017-02-24.
12. ^ Peter Still, "Beginner Solution to the Rubik's Cube". Retrieved 2017-02-24.
13. ^ Jaap's Puzzle Page, "Rubik’s Revenge (solving)". Retrieved 2017-02-24.
14. ^ Chris Hardwick, "Solving the Rubik's Revenge (4x4x4)". Retrieved 2017-02-24.
15. ^ Robert Munafo, "Instructions for solving size 2, 3, 4 and 5 cubes". Retrieved 2017-02-24.
16. ^ a b Ken Fraser, "Instructions for Solving Cubes of Various Sizes". Retrieved 2017-02-24.
17. ^ Matthew Monroe, "How to handle pictures or logos on the faces". Retrieved 2017-02-24.
18. ^ Eric Dietz(deceased), "Rubik's Cube Solver". Retrieved 2017-02-24.
19. ^ Chris Hardwick, "Fix parity for 4x4x4 cube". Retrieved 2017-02-24.
20. ^ Tom Davis, "Rubik Test Release". Retrieved 2017-02-24.
21. ^ Tomas Rokicki, Herbert Kociemba, Morley Davidson and John Dethridge, "God's Number is 20". Retrieved 2017-02-24.
22. ^ a b Oliver Mason, "Some Simple Counting Rules, EE304 - Probability and Statistics". Retrieved 2017-02-24.
23. ^ Hofstadter, D.R., Metamagical Themas, "The Magic Cube's cubies twiddled by cubists and solved by cubemeisters", Scientific American, March 1981.
24. ^ Jaap's Puzzle Page, "Permutations and unreachable states for size 3x3x3 cube". Retrieved 2017-02-24.
25. Ken Fraser, "Rubik's Cube Extended: Derivation of Number of States for cubes of Any Size and Values for up to Size 25x25x25". Retrieved 2017-02-24.
26. ^ Richard Carr, "The Number Of Possible Positions Of An N x N x N Rubik Cube". Retrieved 2017-02-24.
27. ^ Chris Hardwick, "Number of combinations to the Rubik's Cube and variations". Retrieved 2017-02-24.
28. ^ Math reference, "non-integer". Retrieved 2017-02-24.
https://dataspace.princeton.edu/handle/88435/dsp015h73q0226?mode=full
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp015h73q0226
| DC Field | Value |
|---|---|
| dc.contributor.author | Reshidi, Pellumb |
| dc.contributor.other | Economics Department |
| dc.date.accessioned | 2022-06-16T20:34:37Z |
| dc.date.available | 2022-06-16T20:34:37Z |
| dc.date.created | 2022-01-01 |
| dc.date.issued | 2022 |
| dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp015h73q0226 |
dc.description.abstract: This dissertation focuses on group information acquisition and how this information disseminates within the group. Across the three chapters we derive theoretical predictions and experimentally test these predictions. In Chapter I, we analyze whether the sequencing of information affects beliefs formed within a group. We extend the DeGroot model to allow for sequential information arrival. We find that the final beliefs can be altered by varying the sequencing of information, keeping the information content unchanged. We identify the sequences that yield the highest and lowest attainable consensus, thus bounding the variation in final beliefs that can be attributed to information sequencing. With regard to information aggregation, as the number of group members grows, the sequential arrival of information compromises the group's beliefs: in all but particular cases, beliefs converge away from the truth. In Chapter II, motivated by the findings in the previous chapter, we test whether information sequencing affects beliefs formed in groups. In a lab experiment, participants estimate a parameter of interest using a common and a private signal, as well as past guesses of group members. At odds with the Bayesian model, we find that the order and timing of information affect final beliefs, even when the information content is unchanged. Although behavior is non-Bayesian, it is robustly predictable by a model relying on simple heuristics. We explore ways in which the network structure and the timing of information help alleviate correlation neglect. We highlight that the influence of private information on participants' actions is time-independent—a novel documented behavioral heuristic. Finally, in Chapter III, co-authored with Alessandro Lizzeri, Leeat Yariv, Jimmy Chan, and Wing Suen, we report results from lab experiments on information acquisition.
We consider decisions governed by individuals and groups and compare how different voting rules affect outcomes. We contrast static with dynamic information collection. Generally, outcomes approximate theoretical benchmarks, and sequential information collection is welfare enhancing. Nonetheless, several important departures emerge. Static information collection is excessive, and sequential information collection is non-stationary, producing declining decision accuracies over time. Furthermore, groups using majority rule often reach hasty and inaccurate decisions.
| DC Field | Value |
|---|---|
| dc.format.mimetype | application/pdf |
| dc.language.iso | en |
| dc.publisher | Princeton, NJ : Princeton University |
| dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu |
| dc.subject | information acquisition |
| dc.subject | networks |
| dc.subject | social learning |
| dc.subject.classification | Economics |
| dc.title | Information Acquisition and Dissemination in Groups |
http://openstudy.com/updates/55e8443de4b0227206128f5e
## tiff9702 one year ago http://student.alphaplustesting.org/TestQuestions/maths-questions/st-maths7-ex1.3-q11.png I don't get this
1. misty1212
HIU
2. misty1212
i would say those are all numbers $$\leq -2$$ right?
3. misty1212
in other words $x\leq -2$
4. misty1212
clear or no?
5. tiff9702
Yes I got that
6. misty1212
i see that it is none of your choices; no matter, add 2 to both sides to get $x+2\leq 0$
7. anonymous
|dw:1441285377290:dw|
8. misty1212
as always, it is C when in doubt, charlie out
9. misty1212
?? @jfdshsdfhjewhjewh what is that a picture of ?
10. tiff9702
Lol
https://brilliant.org/problems/spoooooooky-russian-rational-expressions-5/
# "Spoooooooky" Russian rational expressions 5
Algebra Level pending
$\mathscr{E} = \left( \dfrac{2m+1}{2m-1} - \dfrac{2m-1}{2m+1}\right) \div \dfrac{4m}{10m-5}$
Let $$m = 2016$$. If the value of $$\mathscr{E}$$ can be represented in the form $$\dfrac{a}{b}$$, where $$a$$ and $$b$$ are coprime positive integers, find $$a+b$$.
https://www.physicsforums.com/threads/need-help-with-integration.132730/
# Need help with Integration
1. Sep 19, 2006
### Song
Integral ln(x^2-x+2) dx
First I use integration by parts: let u=ln(x^2-x+2), du=(2x-1)dx/(x^2-x+2), dv=dx, v=x. Then it equals x·ln(x^2-x+2) - integral (x(2x-1))/(x^2-x+2) dx
Then by using long division I get integral (2+(x-4)/(x^2-x+2))dx..but at the end I have no idea how to integrate x/(x^2-x+2).......please help with this and thanks a lot.
2. Sep 19, 2006
### chaoseverlasting
What does ln(x^2-x+2)dx mean?
Is it ( 1/(x^2 -x +2) )dx ?
3. Sep 19, 2006
### Song
4. Sep 19, 2006
### StatusX
To integrate x/(x^2-x+2), first use substitution to turn this into the integral of C/(x^2-x+2) (ie, write the numerator as 1/2 (2x-1) + 1/2 and take u as the denominator), then write the denominator as (ax+b)^2+c^2, and finally use the fact that the integral of 1/(x^2+1) is arctan(x).
5. Sep 19, 2006
### Song
sorry, I don't get how "To integrate x/(x^2-x+2), first use substitution to turn this into the integral of C/(x^2-x+2) (ie, write the numerator as 1/2 (2x-1) + 1/2 and take u as the denominator)" works...
Do you mean let x=(1/2)(2x-1)+1/2, then it will be 1/2 integral 2x/(x^2-x+2) dx, then use U substitution u=x^2 then it becomes integral (u-sqrtu+2)^(-1) du?
I'm lost....
6. Sep 19, 2006
### StatusX
Yes, sorry, that wasn't very clear. I meant, write:
$$\int \frac{x}{x^2-x+2} dx = \int \frac{1/2(2x-1) +1/2}{x^2-x+2} dx=\frac{1}{2} \int \frac{2x-1}{x^2-x+2} dx + \frac{1}{2} \int \frac{1}{x^2-x+2} dx$$
The first term can be integrated by substitution, so you're left with the second term to integrate. That's what I meant by "turn it into the integral of C/(x^2-x+2)". Do you understand what to do from here?
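The antiderivative this split leads to, namely (1/2)ln(x^2-x+2) + (1/sqrt(7))·arctan((2x-1)/sqrt(7)) (obtained by completing the square, x^2-x+2 = (x-1/2)^2 + 7/4), can be checked numerically. A short sketch, not part of the original thread:

```python
import math

def integrand(x):
    return x / (x**2 - x + 2)

def antiderivative(x):
    # 1/2 * ln(x^2 - x + 2) from the substitution u = x^2 - x + 2, plus the
    # arctan term from completing the square: x^2 - x + 2 = (x - 1/2)^2 + 7/4
    return (0.5 * math.log(x**2 - x + 2)
            + math.atan((2*x - 1) / math.sqrt(7)) / math.sqrt(7))

# a central-difference derivative should reproduce the integrand everywhere
for x in (-3.0, 0.0, 1.5, 4.0):
    h = 1e-6
    slope = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(slope - integrand(x)) < 1e-6
```

Note that x^2-x+2 has negative discriminant, so the logarithm is defined for all real x.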
7. Sep 19, 2006
### Song
yes. Thanks so much!
https://pediatrics.aappublications.org/node/43551.full.print
Pediatrics, July 2005, Volume 116, Issue 1
Risk Factors Associated With Sudden Unexplained Infant Death: A Prospective Study of Infant Care Practices in Kentucky
1. Lisa B. E. Shields, MD*,
2. Donna M. Hunsaker, MD*,
3. Susan Muldoon, PhD§,
4. Tracey S. Corey, MD*,
5. Betty S. Spivack, MD*
1. *Office of the Chief Medical Examiner, Louisville, Kentucky
2. Department of Pathology and Laboratory Medicine, University of Louisville School of Medicine, Louisville, Kentucky
3. §Department of Epidemiology and Clinical Investigation Sciences, School of Public Health/Health Information Sciences, University of Louisville, Louisville, Kentucky
Abstract
Objective. To ascertain the prevalence of infant care practices in a metropolitan community in the United States with attention to feeding routines and modifiable risk factors associated with sudden unexplained infant death (specifically, prone sleeping position, bed sharing, and maternal smoking).
Methods. We conducted an initial face-to-face meeting followed by a telephone survey of 189 women who gave birth at a level I hospital in Kentucky between October 14 and November 10, 2002, and whose infants were placed in the well-infant nursery. The survey, composed of questions pertaining to infant care practices, was addressed to the women at 1 and 6 months postpartum.
Results. A total of 185 (93.9%) women participated in the survey at 1 month, and 147 (75.1%) mothers contributed at 6 months. The racial/ethnic composition of the study was 56.1% white, 30.2% black, and 16.4% biracial, Asian, or Hispanic. More than half of the infants (50.8%) shared the same bed with their mother at 1 month, which dramatically decreased to 17.7% at 6 months. Bed sharing was significantly more common among black families compared with white families at both 1 month (adjusted odds ratio [OR]: 5.94; 95% confidence interval [CI]: 2.71–13.02) and 6 months (adjusted OR: 5.43; 95% CI: 2.05–14.35). Compared with other races, white parents were more likely to place their infants on their back before sleep at both 1 and 6 months. Black parents were significantly less likely to place their infants on their back at 6 months compared with white parents (adjusted OR: 0.14; 95% CI: 0.06–0.33). One infant succumbed to sudden infant death syndrome at 3 months of age, and another infant died suddenly and unexpectedly at 9 months of age. Both were bed sharing specifically with 1 adult in the former and with 2 children in the latter.
Conclusions. Bed sharing and prone placement were more common among black infants. Breastfeeding was infrequent in all races. This prospective study additionally offers a unique perspective into the risk factors associated with sudden infant death syndrome and sudden unexplained infant death associated with bed sharing by examining the survey responses of 2 mothers before the death of their infants combined with a complete postmortem examination, scene analysis, and historical investigation.
• sudden infant death syndrome
• infant care practices
• bed sharing
• feeding
• smoking
Sudden infant death syndrome (SIDS) is the primary cause of postneonatal mortality in the United States and the third leading cause of infant death, following congenital anomalies and disorders related to prematurity and low birth weight.1,2 Initially defined in 1969 and with subsequent revisions in 1989, SIDS was defined as “the sudden death of an infant under one year of age, which remains unexplained after a thorough case investigation, including performance of a complete autopsy, examination of the death scene, and review of the clinical history.”3,4 One of the authors of the present study coauthored the most recent working-group definition of SIDS in 2004: “SIDS is defined as the sudden unexpected death of an infant <1 year of age, with onset of the fatal episode apparently occurring during sleep, that remains unexplained after a thorough investigation, including performance of a complete autopsy and review of the circumstances of death and the clinical history.”5 Although various epidemiologic and pathologic features have been associated with SIDS, the diagnosis remains one of exclusion. SIDS is rarely encountered during the first month of life, peaks by the second or third month, and subsequently subsides.6 This age peak has been less pronounced in the last few years, as the overall SIDS rate has fallen.5
Numerous risk factors have been suggested as increasing the risk of SIDS, including low birth weight, male gender, black race, young maternal age, and multiparity.6–9 Several modifiable behaviors have also been associated with a higher risk of SIDS: the prone sleeping position, bed sharing, lack of breastfeeding, and maternal cigarette smoking.6–9 The term “cosleeping” is defined by some to encompass a variety of sleeping environments, ranging from intimate bed sharing to the presence of the infant's crib or bassinet in the mother's bedroom.10 In our study we used the term “bed sharing” to denote an infant's sleeping in the same bed as the caregiver and, furthermore, have specified when an infant slept in the caregiver's room in a crib or bassinet. Although the causes of SIDS remain to be elucidated, many toxic, infective, metabolic, nutritional, endocrine, cardiac, respiratory, and neurologic disorders have been proposed.10 The “triple-risk model” of SIDS has been suggested, incorporating external stressors that impact a vulnerable infant during a critical stage in development.11 Apoptotic neurodegeneration may disturb nervous coordination and alter cardiorespiratory function, leading to SIDS.12
Although a myriad of risk factors have been associated with SIDS, relatively little attention has focused on the prevalence of the behaviors related to these risk factors in society. Our study aims to highlight the pervasiveness of infant care practices in a moderately sized metropolitan community in the United States. We address various aspects of daily infant care, including sleeping environment, positioning before and on wakening from sleep, feeding, and exposure to maternal cigarette smoke.
METHODS
Study Population
The study population consisted of women who had given birth to infants placed in the well-infant nursery at a level 1 metropolitan hospital in Kentucky between October 14 and November 10, 2002. All women provided written informed consent within 1 day of parturition. The participants were contacted by telephone when their infants were 1 month of age, with subsequent follow-up 6 months postpartum. A host of questions pertaining to infant care practices was addressed to the women. The women who were unable to be contacted by telephone were sent a written survey and asked to return their responses. The participants were not financially compensated for their involvement in this study. The study was approved by the University of Louisville Institutional Review Board before its initiation.
A thorough investigation was conducted into the deaths of 2 of the infants whose mothers had participated in this study. An extensive scene analysis and sudden unexplained infant death (SUID) investigation report form were completed by the Jefferson County coroner's office in cooperation with the homicide unit of the Louisville Metro Police Department. Developed at the Interagency Panel on SIDS “Workshop on Guidelines for Scene Investigation of Sudden Unexplained Infant Deaths” in 1993 and updated in 1996, this Centers for Disease Control and Prevention form documents circumstances surrounding the infant's death, the infant's medical/birthing history, pertinent maternal history during pregnancy, subjective observations by the investigator of the surroundings at the death scene, interviews with family members of the deceased, and illustrations of the infant's body.13 Postmortem examinations including metabolic, radiologic, and toxicologic studies were performed on the 2 infants who died in January and July 2003.
Statistical Analysis
Statistical analysis was performed by using SPSS 11.0 for Windows (SPSS Inc, Chicago, IL). Associations between independent and outcome variables (bed sharing, positioning of infant on back, and breastfeeding) were assessed by using the χ2 or Fisher's exact test. Logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs) for the 3 outcome variables associated with relevant factors at both 1 and 6 months. Potential risk factors for infant care practices were incorporated into the model, including race, maternal age, birth order, maternal employment, and cigarette smoking. Factors significantly associated with the outcome at P ≤ .05 in univariate analyses were introduced into logistic-regression models. Variables were removed from the multivariate model if they did not reach statistical significance.
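For the unadjusted comparisons described above, an odds ratio and a Wald-type (Woolf) 95% confidence interval can be computed directly from a 2×2 table before moving to the adjusted logistic-regression model. A minimal sketch in Python; the counts in the example are hypothetical and are not taken from the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale Wald) confidence interval from a
    2x2 table:
        a = exposed cases,   b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases.
    z = 1.96 gives a 95% interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

With these made-up counts the point estimate is OR = (10 × 40)/(20 × 5) = 4.0, and the interval brackets it; the multivariate ORs in Table 3 come from the logistic-regression model, not from this simple formula.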
RESULTS
Study Population
A total of 185 (93.9%) mothers participated in the survey 1 month postpartum, and 148 women were contacted 6 months postpartum. One infant whose mother contributed to the study at 1 month died at 3 months of age; therefore, 147 (75.1%) women were included in the 6-month survey. Four women were involved in the 6-month survey even though they had not participated in the 1-month survey. The sociodemographic features of the 189 participants in this study are highlighted in Table 1. The ages of the mothers ranged between 16 and 41 years, with an average age of 26.2 years. There was a total of 3169 live births at the hospital in 2002 with 304 (9.5%) admissions to the neonatal intensive care unit. A wide range of incomes was noted for the 3043 mothers who gave birth, with 24.1% below $15,000 per year and 6.6% above $100,000 per year.
TABLE 1.
Sociodemographic Characteristics of Mothers and Their Infants
Approximately one third of the households welcomed the infant as the first born (31.7%), second born (33.3%), or later born (34.9%). Three of the households had ≥4 children other than the newborn: 1 mother who cared for her 6 children between 1 and 9 years old, 1 mother who had 5 children between 3 and 14 years old, and 1 woman who had 4 children between 6 and 11 years old.
A total of 100 (68.0%) women had resumed employment when contacted at the 6-month follow-up. Of these 100 mothers, 13 (8.8%) had returned to work <6 weeks postpartum, 73 (49.6%) between 6 and 12 weeks postpartum, and 14 (9.5%) between 13 and 24 weeks postpartum. A total of 92 (62.5%) women worked ≤40 hours per week, and 8 (5.4%) worked >40 hours per week. Of the 100 employed women, 24 (24.0%) had placed their infants in day care.
Sleep Practices
The infants' sleep practices pertaining to sleeping position, bedding, and room environment with possible bed sharing are presented in Table 2. More than half of the infants (50.8%) shared a bed with their mother at 1 month of age, which drastically decreased to only 17.7% at 6 months of age. Compared with white infants, black infants were significantly more likely to bed share at both 1 month (adjusted OR: 5.94; 95% CI: 2.71–13.02) and 6 months (adjusted OR: 5.43; 95% CI: 2.05–14.35) (Table 3). Biracial, Asian, and Hispanic infants were also more likely to bed share at 6 months compared with white infants (adjusted OR: 4.55; 95% CI: 1.28–16.17). A total of 24 infants always bed shared at 1 month, reflecting 25.5% of those who bed shared and 13.0% of all infants. Furthermore, 21 infants shared a bed at every sleeping period with their parents at 6 months, accounting for 80.8% of the bed-sharers and 14.3% of the total number of infants.
TABLE 2.
Sleeping Practices at 1 and 6 Months of Age
TABLE 3.
Multivariate Logistic-Regression Model for Factors Associated With Bed Sharing, Positioning in Bed, and Feeding at 1 and 6 Months
The majority of infants were placed on their back at both 1 and 6 months (67.0% vs 68.0%) (Table 2). It is particularly notable that at 6 months, only 42.2% of the infants remained on their backs when they were woken by their caregiver, whereas 21.1% had turned to the prone position and 29.2% were discovered in varied positions on waking. In univariate analysis, white parents, compared with other races, were more likely to place their infants on their backs at 1 month (P ≤ .02) and 6 months (P < .0001). Placement of the infant on the back was significantly less common among black parents compared with white parents at 6 months (adjusted OR: 0.14; 95% CI: 0.06–0.33) (Table 3). Mothers who had given birth to ≥3 children were significantly less likely to place their newborn on his or her back at 1 month of age compared with women who had given birth to their first child (adjusted OR: 0.38; 95% CI: 0.17–0.85). Furthermore, women ≥30 years were more likely to position their infants on their back at 6 months compared with mothers <30 years old (P < .002), a finding marginally significant on multivariate analysis when compared with women ≤19 years (adjusted OR: 3.86; 95% CI: 0.99–15.14). Infants who shared a bed with their parents at 1 month were significantly less likely to be placed on their back before sleeping compared with those who slept in a bassinet or crib (adjusted OR: 0.43; 95% CI: 0.21–0.89).
A total of 87 (47.0%) infants slept only in a crib and/or bassinet at 1 month, which drastically increased to 119 (80.9%) at 6 months (Table 2). Ten percent of infants bed shared with their mothers at every sleeping period. At 1 month, 3 infants slept on a couch with their mother. The majority (71.3%) of infants slept in their mother's room at 1 month of age, whereas a higher percentage slept in their own room compared with their mother's at 6 months (53.7% vs 46.3%).
Smoking Habits
Of the 185 women contacted at the 1-month survey, 49 (26.5%) admitted that they smoked cigarettes. Of these 49 mothers, 47 (95.9%) smoked ≤1 pack per day, and 2 (4.1%) smoked >1 pack per day. A total of 42 (85.7%) women smoked only outside, whereas 7 (14.3%) smoked inside. Furthermore, 35 (23.8%) of the 147 women contacted at 6 months postpartum smoked cigarettes. Of these 35 women, 33 (94.3%) smoked ≤1 pack per day, and 2 (5.7%) smoked >1 pack per day. A total of 30 (85.7%) mothers stated that they smoked only outside, whereas 5 (14.3%) smoked inside.
In univariate analysis, compared with mothers who smoked, women who abstained from smoking cigarettes were more likely to breastfeed their infants at 1 month (P < .0001) and were more likely to share a bed with their infants at 6 months (P < .015). These findings were not significant in the multivariate analysis (Table 3).
Feeding Practices
A total of 135 (73.0%) infants were only fed formula at 1 month of age (Table 4). Only 25 (13.5%) infants consumed breast milk as their sole sustenance: 20 (19.0%) of the 105 white infants, 4 (7.3%) of the 55 black infants, and 1 (5.3%) of the 19 biracial infants. One hundred and twenty-nine (87.8%) infants were fed both formula and semisolids at 6 months of age. No infants were exclusively breastfed at 6 months. Feeding practices in relation to race were not statistically significant at either 1 or 6 months (data not shown). A total of 106 (56.3%) infants were fed in their mother's bed at 1 month compared with only 51 (34.7%) at 6 months. Compared with women ≤19 and ≥30 years, women between 20 and 29 years old were significantly less likely to breastfeed their infants at 1 month (adjusted OR: 0.16; 95% CI: 0.06–0.45) (Table 3).
TABLE 4.
Feeding Practices at 1 and 6 Months of Age
Deceased Infants
The preliminary 2002 state vital statistics records reported that the infant mortality rate for the county was 6.55 per 1000, with a rate for white infants of 4.85 per 1000 and for black infants of 11.12 per 1000. Two of the infants whose mothers participated in this study died within the first year of life. The first infant was delivered vaginally at 37 weeks' gestation. The white female newborn weighed 6 pounds, 4 oz (2840 g) with a length of 18 inches. Her 26-year-old mother had previously sustained 3 miscarriages and had given birth to 2 infants. She suffered from premature labor and depression during her pregnancy and was placed on the nonscheduled antidepressant fluoxetine (Prozac). The infant was evaluated at the local emergency department 6 days before her death for “viral” congestion and experienced increased fussiness and decreased appetite within the last 24 hours of her life. The mother recalled that she placed her infant in the supine position next to her on a queen-sized bed (Fig 1). Apart from a “whimper” within 1 hour of falling asleep, the mother denied any unusual circumstances and awoke to observe her daughter unresponsive and lifeless lying on her right side. A complete postmortem examination with toxicological and genetic studies revealed no cause of death. Rare thymic and pleural petechiae and pulmonary edema were noted at autopsy. The findings of this 3-month-old infant were consistent with SIDS, with bed sharing as a recognized confounding factor.
Fig 1.
Queen-sized bed that a 3-month-old infant bed shared with her mother. Note the extensive disarray of the bedroom.
The mother of the first deceased infant shared her infant care practices during the 1-month survey. The infant's mother lived with the father of the infant and her 2 other children aged 3 and 5 years. She admitted to smoking a half-pack of cigarettes per day outside. Because of reflux concerns, the mother alternated placing her daughter on her back and side before sleep. The infant slept with her mother 2 nights per week and slept in a bassinet the remainder of the week. The infant never breastfed and consumed 4 to 5 oz of formula every 3 to 5 hours at 1 month of age.
The second infant, a white infant boy, was delivered vaginally at 37 weeks' gestation with an initial weight of 5 pounds, 13 oz (2650 g) and length of 17 inches. At 9 months of age, the infant was discovered in cardiopulmonary arrest by his 10-year-old sister lying in an adult full-sized bed between his 12-year-old sister and a 13-year-old cousin (Fig 2). Historical information indicated that overlying was not occurring when the infant was discovered. The family denied any medical illnesses or other symptoms before death. The infant had been evaluated by his pediatrician for a well-infant check-up reported as normal and was immunized 3 weeks before his death. His pediatrician indicated that the infant had fallen behind on his well-infant visits and immunizations and failed to appear at 6 appointments. Physical findings included facial petechiae. Intrathoracic petechial hemorrhages, specifically involving the thymus gland, the epicardial surface of the heart, and the lungs, were not present. No anatomic or toxicological cause of death was determined after a complete postmortem examination. The authors consider this SUID undetermined as to both cause and manner. The possibility of asphyxiation via overlay in this bed-sharing scenario has not been totally excluded.
Fig 2.
Double bed with soft bedding in which a 9-month-old infant was discovered lying between his 12-year-old sister and 13-year-old cousin.
The mother of the second infant discussed her infant care practices at both 1 and 6 months. The infant lived with his parents and 4 siblings between the ages of 3 and 12 years. The mother disclosed that she smoked half a pack of cigarettes per day outside and in her bathroom at both survey periods. She placed her son on his side for sleeping at 1 month and alternated placement on the back and side at 6 months. At 1 month, the infant slept with his mother 2 nights per week and, for the remainder of the week, in a crib located in his mother's room. He slept every night in the crib, which was still present in the mother's room, at 6 months. The infant was never breastfed and consumed 4 to 5 oz of formula every 3 to 4 hours at 1 month and 8 oz of formula every 2 hours, 1 jar of semisolids per day, and mashed potatoes at 6 months.
DISCUSSION
Although the etiology of SUID remains unknown, a host of modifiable risk factors has been suggested as increasing the risk of SUID, including prone sleeping, bed sharing, and maternal smoking.5,14–17 The Back to Sleep campaign initiated in 1994 by the US Public Health Service, American Academy of Pediatrics, the SIDS Alliance, and the Association of SIDS and Infant Mortality Programs urged caregivers to place their infants on their back before sleeping, which in turn decreased the frequency of prone sleeping from 70% in 1992 to 24% in 1996 and decreased the SIDS rate by >38%.14,18,19 Willinger et al19–21 initiated the National Infant Sleep Position Study in 1992, highlighting infant sleeping position and the practice of bed sharing. Between 1994 and 1998, they reported that placement in the supine position increased from 27% to 38% for white infants and from 17% to 31% for black infants.20 Willinger et al also documented that by 1998, 17% of infants continued to be placed prone and 56% in the supine position.20 Prone placement was significantly more common among black women, mothers aged 20 to 29 years with a previous child, women living in the mid-Atlantic or southern region of the country, and for infants <8 weeks old.
In our study, black mothers and women who had given birth to ≥3 children were also significantly more likely to place their infants prone. The majority of infants were placed in the supine position at both 1 and 6 months (67% and 68%, respectively). None of the infants at 1 month was placed solely on his or her stomach at every sleeping period, whereas 11.6% of infants at 6 months were always placed prone.
Infants placed in a specific position before sleep may be found in a different position on waking. The developmental milestone of rolling over is attained by 75% of infants by 4 months of age.22 In this respect, as they grow and develop, infants roll with increasing frequency, thus increasing the likelihood that they will assume different positioning during sleep. Willinger et al19 reported that the percentage of infants placed prone was similar to that discovered prone (28% vs 24% in 1996), whereas more infants were found supine than were placed in that position (50% vs 35% in 1996). At 6 months of age in our study, 68% of the infants were placed on their back, and only 42.2% were discovered on their back. More infants were found in the prone position than were placed prone (21.1% vs 11.6%).
The supine sleeping position of an infant has been recommended internationally, with various ensuing alterations in sleeping practices. The nonprone sleeping position was recommended in Sweden in April 1992 by the Swedish Board of Health and Welfare for infants >1 month of age.23 In a 1994–1995 survey conducted by Lindgren et al23 of Swedish parents of 1028 infants, 15.3% of infants were placed in the prone position compared with 72% in 1991. Mothers who placed their infants in the prone position were more likely to participate in other behaviors associated with an increased risk of SIDS such as formula feeding their infant and smoking. The percentage of infants placed prone was significantly lower in a 1997 survey in Canterbury, New Zealand, compared with that recorded in the 1987–1990 New Zealand Cot Death Study (2.9% vs 39.7%, respectively), suggesting that the “nonprone sleeping” mission in Canterbury was effective.24
The benefits and risks of bed sharing have been debated extensively without a resolution. The American Academy of Pediatrics in 1997 strongly discouraged bed sharing on soft sleep surfaces and in situations with a caregiver who smoked or used alcohol or drugs.25 Advocates of bed sharing embrace the maternal-infant bond that is nurtured through the close proximity of mother and infant during sleep. The mother is able to respond promptly to the needs of her infant while providing protection and emotional security.26 In addition to the physiologic and psychological benefits of bed sharing, McKenna et al27 suggest that bed sharing encourages breastfeeding. Infants who routinely bed shared were twice as likely to breastfeed and for a 39% longer duration of time during the night compared with infants who slept alone. Our study shows that only 13.5% of women solely breastfed, and an additional 13.5% combined breastfeeding and formula use at 1 month postpartum. Furthermore, only 4.1% of infants consumed breast milk in combination with formula at 6 months of age.
The Willinger et al National Infant Sleep Position Study21 reported that 45% of infants had spent a portion of time on an adult bed in the preceding 2 weeks and that >90% of infants who usually slept on an adult bed shared it with their parents. The percentage of infants who shared an adult bed increased from 5.5% to 12.8% between 1993 and 2000. Bed sharing was significantly more common with mothers <18 years old, in black families, and with those living in the South. In a study of low-income, inner-city mothers by Brenner et al,28 48% of infants usually slept in a bed with a parent or other adult from 3 to 7 months of age, which only decreased by 1 percentage point from 7 to 12 months. A total of 75% of infants who bed shared from 3 to 7 months old continued this practice from 7 to 12 months. The results of our study showed that more than half of the infants shared a bed with their mother at 1 month of age, which decreased to only 17.7% at 6 months. In addition, black mothers were significantly more likely to share a bed with their infant at 1 month, and black, biracial, Asian, and Hispanic women were significantly more likely to bed share at 6 months.
Ten percent of infants in our study always shared a bed with their mother at both 1 and 6 months of age. Certain women who advocated bed sharing with every sleeping period admitted that the fear of “crib death” and the possibility of “the infant's leg getting caught in the crib” deterred them from placing their infant in a crib. Numerous individuals sharing a single bed was encountered in 3 households. In 1 household, a mother, father, 2-year-old child, and the new infant shared a bed at 1 month. At 6 months, a mother, a 6-year-old child, a 3-year-old child, and the infant shared a bed. In another case at 6 months, a mother, a 4-year-old child, a 2-year-old child, and the infant shared a bed.
The sleeping environment of the infant may contain hazards that may prove fatal.29–33 These risks include overlaying of the infant by another individual, entrapment or wedging between the mattress and another object such as a wall, head entrapment in the bed railings, and suffocation.30–32 Infants often fail to extricate themselves from perilous sleeping situations because of their poorly developed motor skills and muscle strength.29 Infants were 8.1 times more likely to die in an adult bed and 17.2 times more likely to die on a sofa or chair in the 1990s compared with the 1980s.33 Furthermore, the risk of suffocation was 40 times greater for infants in adult beds as opposed to those in cribs. In a study of 119 infant deaths by Kemp et al,31 SIDS was the diagnosis in 88 cases, accidental suffocation in 16, and undetermined in 15. Infants were discovered in the prone position in 61.1% of cases, on a sleep surface not designed for infants in 75.9%, and sharing a sleeping surface with another individual in 47.1% of cases. In a study of 697 cases of unexpected infant death between 1991 and 2000 in Kentucky by Knight et al,34 65% of the deaths were attributed to SIDS, 16% to unintentional asphyxia and overlay, and 19% to undetermined causes. A total of 36.2% of all infants had been bed sharing with children or adults at the time of death. In an attempt to decrease the risk of infant mortality resulting from bed sharing, researchers have suggested that the infant sleep in his or her crib and/or bassinet placed next to the parental bed.29 Our study showed that only half of the infants slept in a crib and/or bassinet at 1 month, which drastically increased with age. Approximately three fourths of infants slept in their parents' room, either bed sharing or in a crib and/or bassinet at 1 month, compared with less than half at 6 months.
Maternal cigarette smoking during pregnancy and smoking in the home of the infant after birth have been recognized as major risk factors for SIDS.17,35–38 In a case-control study by Klonoff-Cohen et al,36 passive smoke from the mother, father, and other live-in adults in the vicinity of an infant increased the risk of SIDS. A dose-response effect was observed with a greater risk of SIDS with increasing amounts of smoke exposure. Furthermore, bed sharing by infants with smoking mothers poses a stronger risk compared with those with mothers who abstain from smoking.17,35,38 In our study, 26.5% of mothers smoked cigarettes at 1 month postpartum, which minimally decreased at the 6-month period. Nonsmoking mothers were significantly more likely to breastfeed their infants at 1 month and bed share at 6 months.
Breastfeeding has been associated with a decreased risk of SIDS.7,8 This practice has increased in popularity throughout the 1990s. A Lindgren et al23 survey reported that a total of 724 (70.4%) infants breastfed, whereas 176 (17.1%) were solely formula fed. Breastfeeding was seldom encountered in our study. A total of 135 (73.0%) infants were fed only formula at 1 month of age, whereas 25 (13.5%) were only breastfed. Women between 20 and 29 years old were significantly less likely to breastfeed at 1 month. A common explanation given by responders for the lack of breastfeeding was overwhelming breast discomfort with the practice with a previous child. Maternal smoking was given as another reason for formula feeding. One woman assumed that smoking while breastfeeding was not recommended, and another woman stated that she had been informed by a nurse at the hospital after the birth of her infant that nicotine within breast milk is harmful to an infant and that a smoker, therefore, should not breastfeed. A total of 129 (87.8%) infants consumed both formula and semisolids at 6 months of age, whereas only 17 (11.6%) breastfed in conjunction with other food items. There was a large variation in the introduction of solid foods. One mother believed that Gerber semisolid food within a jar was “modern medicine” and needed to discuss its use with her pediatrician before allowing her infant to consume it. In contrast, 1 woman stated that her 6-month-old infant enjoyed chewing on chicken bones, and another mother reported that her 6-month-old infant devoured spaghetti and meatballs.
CONCLUSIONS
The deaths of 2 of the infants whose mothers had participated in our study provided us a unique opportunity to assess their various infant care practices before their demise. The death of the 3-month-old was consistent with SIDS, whereas the 9-month-old's death was attributed to SUID. Both fatalities were associated with bed sharing in a standard adult bed. The mothers of both infants smoked cigarettes, and neither infant was breastfed at any time in their lives. In the case of the infant who died at 3 months, she was alternately placed on her back and side before sleep at 1 month. The other infant was placed on his side at 1 month and alternately was placed and woke on his back and side at 6 months.
This study highlights major differences in child care practices of the black population compared with other races. Compared with white parents, black parents were significantly more likely to bed share at both 1 and 6 months and were significantly less likely to place their infants on their back before sleep at 6 months. These observations may be related to the marked differences in countywide infant mortality rates in that the infant mortality rate of black infants was more than twice that of white infants for the study year. Furthermore, McKenna's promotion of bed sharing as a tool to both encourage and lengthen the duration of breastfeeding may be ineffective in the high-risk black population, because they are significantly more likely to bed share compared with other races, although feeding practices by race were not statistically significant in this study.
The findings gleaned in this prospective study may impact infant care practices by highlighting the pervasiveness of specific behaviors associated with an increased risk of SUID. In this respect, special attention may be addressed to inform and educate at-risk groups in our society.
Acknowledgments
We thank Erika Kravick, RN (Director of Nursing), Stacy Williams (Marketing Manager), and the ward nurses in the postpartum unit at the level I metropolitan hospital for assistance with this study. Above all, we thank the mothers who participated in the survey for their time and invaluable contributions.
Footnotes
• Accepted February 20, 2005.
• Reprint requests to (D.M.H.) Office of the Chief Medical Examiner, Urban Government Center, 810 Barret Ave, Louisville, KY 40204. E-mail: stinknlex@aol.com
• No conflict of interest declared.
SIDS, sudden infant death syndrome; OR, odds ratio; CI, confidence interval; SUID, sudden unexplained infant death
# Stokes parameters
The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1852,[1] as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse.
## Definitions
The Poincaré sphere is the parametrisation of the last three Stokes parameters in spherical coordinates
The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations below and the figure at right.
\begin{align} S_0 &= I \\ S_1 &= p I \cos 2\psi \cos 2\chi\\ S_2 &= p I \sin 2\psi \cos 2\chi\\ S_3 &= p I \sin 2\chi \end{align}
Here $p$, $I$, $2\psi$ and $2\chi$ are the spherical coordinates of the three-dimensional vector of cartesian coordinates $(S_1, S_2, S_3)$. $I$ is the total intensity of the beam, and $p$ is the degree of polarization. The factor of two before $\psi$ represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before $\chi$ indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively.
If given the Stokes parameters one can solve for the spherical coordinates with the following equations:
\begin{align} I &= S_0 \\ p &= \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0} \\ 2\psi &= \mathrm{atan} \frac{S_2}{S_1}\\ 2\chi &= \mathrm{atan} \frac{S_3}{\sqrt{S_1^2+S_2^2}}\\ \end{align}
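These inversion formulas translate directly into code. A short Python sketch, using atan2 in place of the bare arctangent so that $2\psi$ and $2\chi$ land in the correct quadrant (a detail the formulas above leave implicit):

```python
import math

def stokes_to_spherical(S0, S1, S2, S3):
    """Recover intensity I, degree of polarization p, and the angles
    2*psi and 2*chi from a Stokes vector (S0, S1, S2, S3)."""
    I = S0
    p = math.sqrt(S1**2 + S2**2 + S3**2) / S0
    two_psi = math.atan2(S2, S1)                    # atan2 fixes the quadrant
    two_chi = math.atan2(S3, math.hypot(S1, S2))
    return I, p, two_psi, two_chi

def spherical_to_stokes(I, p, two_psi, two_chi):
    """Inverse map: rebuild (S0, S1, S2, S3) from the spherical parameters,
    following the equations above."""
    S0 = I
    S1 = p * I * math.cos(two_psi) * math.cos(two_chi)
    S2 = p * I * math.sin(two_psi) * math.cos(two_chi)
    S3 = p * I * math.sin(two_chi)
    return S0, S1, S2, S3
```

Round-tripping an arbitrary Stokes vector through `stokes_to_spherical` and back reproduces it, which is a convenient sanity check on the sign conventions.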
### Stokes vectors
The Stokes parameters are often combined into a vector, known as the Stokes vector:
$\vec S \ = \begin{pmatrix} S_0 \\ S_1 \\ S_2 \\ S_3\end{pmatrix} = \begin{pmatrix} I \\ Q \\ U \\ V\end{pmatrix}$
The Stokes vector spans the space of unpolarized, partially polarized, and fully polarized light. For comparison, the Jones vector only spans the space of fully polarized light, but is more useful for problems involving coherent light. The four Stokes parameters do not form a preferred basis of the space, but rather were chosen because they can be easily measured or calculated.
The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system.
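A small numerical example of this procedure, using the standard textbook Mueller matrix of an ideal linear polarizer with horizontal transmission axis (the matrix itself is assumed here, not given in this article):

```python
import numpy as np

# Mueller matrix of an ideal linear polarizer, horizontal transmission axis
M_hpol = 0.5 * np.array([[1, 1, 0, 0],
                         [1, 1, 0, 0],
                         [0, 0, 0, 0],
                         [0, 0, 0, 0]], dtype=float)

# Unpolarized input light of unit intensity: S = (1, 0, 0, 0)
S_in = np.array([1.0, 0.0, 0.0, 0.0])

# Mueller calculus: the output Stokes vector is the matrix-vector product
S_out = M_hpol @ S_in
# Half the intensity is transmitted and the output is fully horizontally
# polarized: S_out = (0.5, 0.5, 0, 0)
```

Sending the already horizontally polarized vector (1, 1, 0, 0) through the same matrix returns it unchanged, as expected for a polarizer aligned with the light.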
#### Examples
Below are shown some Stokes vectors for common states of polarization of light.
$\begin{pmatrix} 1 \\ 1 \\ 0 \\ 0\end{pmatrix}$ Linearly polarized (horizontal)

$\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0\end{pmatrix}$ Linearly polarized (vertical)

$\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0\end{pmatrix}$ Linearly polarized (+45°)

$\begin{pmatrix} 1 \\ 0 \\ -1 \\ 0\end{pmatrix}$ Linearly polarized (−45°)

$\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1\end{pmatrix}$ Right-hand circularly polarized

$\begin{pmatrix} 1 \\ 0 \\ 0 \\ -1\end{pmatrix}$ Left-hand circularly polarized

$\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0\end{pmatrix}$ Unpolarized
## Alternate explanation
A monochromatic plane wave is specified by its propagation vector, $\vec{k}$, and the complex amplitudes of the electric field, $E_1$ and $E_2$, in a basis $(\hat{\epsilon}_1,\hat{\epsilon}_2)$. Alternatively, one may specify the propagation vector, the phase, $\phi$, and the polarization state, $\Psi$, where $\Psi$ is the curve traced out by the electric field in a fixed plane. The most familiar polarization states are linear and circular, which are degenerate cases of the most general state, an ellipse.
One way to describe polarization is by giving the semi-major and semi-minor axes of the polarization ellipse, its orientation, and the sense of rotation (See the above figure). The Stokes parameters $I$, $Q$, $U$, and $V$, provide an alternative description of the polarization state which is experimentally convenient because each parameter corresponds to a sum or difference of measurable intensities. The next figure shows examples of the Stokes parameters in degenerate states.
### Definitions
The Stokes parameters are defined by
$\begin{matrix} I & \equiv & \langle E_x^{2} \rangle + \langle E_y^{2} \rangle \\ ~ & = & \langle E_a^{2} \rangle + \langle E_b^{2} \rangle \\ ~ & = & \langle E_l^{2} \rangle + \langle E_r^{2} \rangle, \\ Q & \equiv & \langle E_x^{2} \rangle - \langle E_y^{2} \rangle, \\ U & \equiv & \langle E_a^{2} \rangle - \langle E_b^{2} \rangle, \\ V & \equiv & \langle E_l^{2} \rangle - \langle E_r^{2} \rangle. \end{matrix}$
where the subscripts refer to three bases: the standard Cartesian basis ($\hat{x},\hat{y}$), a Cartesian basis rotated by 45° ($\hat{a},\hat{b}$), and a circular basis ($\hat{l},\hat{r}$). The circular basis is defined so that $\hat{l} = (\hat{x}+i\hat{y})/\sqrt{2}$. The next figure shows how the signs of the Stokes parameters are determined by the helicity and the orientation of the semi-major axis of the polarization ellipse.
### Representations in fixed bases
In a fixed ($\hat{x},\hat{y}$) basis, the Stokes parameters are
$\begin{matrix} I&=&|E_x|^2+|E_y|^2, \\ Q&=&|E_x|^2-|E_y|^2, \\ U&=&2\mbox{Re}(E_xE_y^*), \\ V&=&-2\mbox{Im}(E_xE_y^*), \\ \end{matrix}$
while for $(\hat{a},\hat{b})$, they are
$\begin{matrix} I&=&|E_a|^2+|E_b|^2, \\ Q&=&-2\mbox{Re}(E_a^{*}E_b), \\ U&=&|E_a|^{2}-|E_b|^{2}, \\ V&=&2\mbox{Im}(E_a^{*}E_b). \\ \end{matrix}$
and for $(\hat{l},\hat{r})$, they are
$\begin{matrix} I &=&|E_l|^2+|E_r|^2, \\ Q&=&2\mbox{Re}(E_l^*E_r), \\ U & = &-2\mbox{Im}(E_l^*E_r), \\ V & =&|E_l|^2-|E_r|^2. \\ \end{matrix}$
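These fixed-basis formulas are easy to check numerically. A minimal Python sketch (the helper function and sample amplitudes are illustrative, not part of the original text) evaluates the $(\hat{x},\hat{y})$-basis expressions for three of the example states listed earlier:

```python
import math

def stokes(Ex, Ey):
    # Stokes parameters from complex field amplitudes in the (x, y) basis
    I = abs(Ex) ** 2 + abs(Ey) ** 2
    Q = abs(Ex) ** 2 - abs(Ey) ** 2
    U = 2 * (Ex * Ey.conjugate()).real
    V = -2 * (Ex * Ey.conjugate()).imag
    return I, Q, U, V

r = 1 / math.sqrt(2)
horizontal = stokes(1 + 0j, 0j)      # linear, horizontal -> (1, 1, 0, 0)
diagonal = stokes(r + 0j, r + 0j)    # linear, +45 degrees -> (1, 0, 1, 0)
circular = stokes(r + 0j, r * 1j)    # right-hand circular -> (1, 0, 0, 1)
```

The three results reproduce the corresponding example Stokes vectors from the table above.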
## Properties
For purely monochromatic coherent radiation, one can show that
$\begin{matrix} Q^2+U^2+V^2 = I^2, \end{matrix}$
whereas for the whole (non-coherent) beam radiation, the Stokes parameters are defined as averaged quantities, and the previous equation becomes an inequality:[2]
$\begin{matrix} Q^2+U^2+V^2 \le I^2. \end{matrix}$
However, we can define a total polarization intensity $I_p$, so that
$\begin{matrix} Q^{2} + U^2 +V^2 = I_p^2, \end{matrix}$
where $I_p/I$ is the total polarization fraction.
Let us define the complex intensity of linear polarization to be
$\begin{matrix} L & \equiv & |L|e^{i2\theta} \\ & \equiv & Q +iU. \\ \end{matrix}$
Under a rotation $\theta \rightarrow \theta+\theta'$ of the polarization ellipse, it can be shown that $I$ and $V$ are invariant, but
$\begin{matrix} L & \rightarrow & e^{i2\theta'}L, \\ Q & \rightarrow & \mbox{Re}\left(e^{i2\theta'}L\right), \\ U & \rightarrow & \mbox{Im}\left(e^{i2\theta'}L\right).\\ \end{matrix}$
With these properties, the Stokes parameters may be thought of as constituting three generalized intensities:
$\begin{matrix} I & \ge & 0, \\ V & \in & \mathbb{R}, \\ L & \in & \mathbb{C}, \\ \end{matrix}$
where $I$ is the total intensity, $|V|$ is the intensity of circular polarization, and $|L|$ is the intensity of linear polarization. The total intensity of polarization is $I_p=\sqrt{|L|^2+|V|^2}$, and the orientation and sense of rotation are given by
$\begin{matrix} \theta &=& \frac{1}{2}\arg(L), \\ h &=& \operatorname{sgn}(V). \\ \end{matrix}$
Since $Q=\mbox{Re}(L)$ and $U=\mbox{Im}(L)$, we have
$\begin{matrix} |L| &=& \sqrt{Q^2+U^2}, \\ \theta &=& \frac{1}{2}\tan^{-1}(U/Q). \\ \end{matrix}$
## Relation to the polarization ellipse
In terms of the parameters of the polarization ellipse, the Stokes parameters are
$\begin{matrix} I_p & = & A^2 + B^2, \\ Q & = & (A^2-B^2)\cos(2\theta), \\ U & = & (A^2-B^2)\sin(2\theta), \\ V & = & 2ABh. \\ \end{matrix}$
Inverting the previous equation gives
$\begin{matrix} A & = & \sqrt{\frac{1}{2}(I_p+|L|)} \\ B & = & \sqrt{\frac{1}{2}(I_p-|L|)} \\ \theta & = & \frac{1}{2}\arg(L)\\ h & = & \operatorname{sgn}(V). \\ \end{matrix}$
https://math.stackexchange.com/questions/3391551/let-a-b-and-c-be-three-sets-if-a-%E2%88%88-b-and-b-%E2%8A%82-c-is-it-true-that-a-%E2%8A%82-c-if-not/3391605
# Let A, B and C be three sets. If A ∈ B and B ⊂ C, is it true that A ⊂ C?. If not, give an example.
Let $$A, B$$ and $$C$$ be three sets. If $$A ∈ B$$ and $$B ⊂ C$$, is it true that $$A ⊂ C?$$. If not, give an example.
This question is from my textbook. And the answer is:
No. Let $$A = \{1\}, B = \{\{1\}, 2\}$$ and $$C = \{\{1\}, 2, 3\}.$$ Here $$A ∈ B$$ as $$A = \{1\}$$ and $$B ⊂ C.$$ But $$A ⊄ C$$ as $$1 ∈ A$$ and $$1 ∉ C.$$ Note that an element of a set can never be a subset of itself.
I am having trouble understanding "But $$A ⊄ C$$ as $$1 ∈ A$$ and $$1 ∉ C$$." How could $$1 ∉ C$$? Clearly $$C$$ contains $$B$$, $$A$$ is an element of $$B$$, and $$A$$ contains $$1$$, so $$C$$ must contain $$1$$. I would be very grateful if you answer this.
• The statement "Note that an element of a set can never be a subset of itself" is wrong. An element of a set can be a set, and a set is always a subset of itself. – Robert Israel Oct 13 at 5:11
• An example where $A \in B$ and $B \subset C$ and $A \subset C$ is $A = \{1\}$, $B = \{\{1\}\}$, $C = \{1, \{1\}\}$. – Robert Israel Oct 13 at 5:18
• @RobertIsrael: …unless the book is using "subset" in the sense of "proper subset", which some still do. – Ilmari Karonen Oct 13 at 12:33
• I understood the question to mean "Is it necessarily true that $A\subset C$?" The single counter-example given in the book is enough to answer this question in the negative. There is no mistake in the book. – TonyK Oct 13 at 14:19
• @TonyK I was not talking about that. The question is correct so is the answer but in last phrase the book says "Note that an element ....." which seems wrong. But as Ilmari Karonen mentioned that this right if book is using subset in sense of proper subset. And he is indeed right. My book is using subset as proper set. – Shekhar Oct 13 at 15:23
This is because there is a distinction between $$1$$ and $$\{1\}$$. The former is the number one. The latter is a set containing the number one. If some set $$C$$ contains another set, let's call it $$E$$, we do not look at the members of $$E$$ when we consider the members of $$C$$. We would say the set $$E$$ is a member of $$C$$, but this does not necessarily mean that anything contained by $$E$$ is also in $$C$$.
Note the difference of $$\{1\}\in C$$ (that is true) and $$\{1\}\subset C$$ (that is not true). The set $$A=\{1\}$$ is an element of $$C$$, not a subset. For example, $$\{2\},\{3\}$$ and $$\{\{1\}\}$$ are subsets of $$C$$, but $$2,3$$ and $$\{1\}$$ are not subsets of $$C$$, but its elements.
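The distinction between membership and inclusion can be checked directly in Python (an illustrative sketch; frozenset is used because ordinary sets are not hashable and so cannot be elements of other sets):

```python
A = frozenset({1})        # the set {1}
B = frozenset({A, 2})     # the set {{1}, 2}
C = frozenset({A, 2, 3})  # the set {{1}, 2, 3}

print(A in B)    # True  -- A is an element of B
print(B < C)     # True  -- B is a proper subset of C
print(A <= C)    # False -- A is NOT a subset of C, since 1 is not an element of C
print(A in C)    # True  -- A is, however, an element of C
```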
https://stats.stackexchange.com/questions/221144/was-it-as-valid-to-perform-k-means-on-a-distance-matrix-as-on-data-matrix-text
# Was it as valid to perform k-means on a distance matrix as on data matrix (text mining data)?
(This post is a repost of a question I posted yesterday (now deleted), but I've tried to scale back volume of words and simplify what I'm asking)
I'm hoping to get some help interpreting a k-means script and its output, in the context of text analysis. I created this script after reading several articles online on text analysis; I have linked to some of them below.
Sample r script and corpus of text data I will refer to throughout this post:
library(tm) # for text mining
## make a example corpus
# make a df of documents a to i
a <- "dog dog cat carrot"
b <- "phone cat dog"
c <- "phone book dog"
d <- "cat book trees"
e <- "phone orange"
f <- "phone circles dog"
g <- "dog cat square"
h <- "dog trees cat"
i <- "phone carrot cat"
j <- c(a,b,c,d,e,f,g,h,i)
x <- data.frame(j)
# turn x into a document term matrix (dtm)
docs <- Corpus(DataframeSource(x))
dtm <- DocumentTermMatrix(docs)
# create distance matrix for clustering
m <- as.matrix(dtm)
d <- dist(m, method = "euclidean")
# kmeans clustering
kfit <- kmeans(d, 2)
#plot – need library cluster
library(cluster)
clusplot(m, kfit$cluster)
That's it for the script. Below are the output of some of the variables in the script:
Here's x, the data frame x that was transformed into a corpus:
x
j
1 dog dog cat carrot
2 phone cat dog
3 phone book dog
4 cat book trees
5 phone orange
6 phone circles dog
7 dog cat square
8 dog trees cat
9 phone carrot cat
An here's the resulting document term matrix dtm:
> inspect(dtm)
<<DocumentTermMatrix (documents: 9, terms: 9)>>
Non-/sparse entries: 26/55
Sparsity : 68%
Maximal term length: 7
Weighting : term frequency (tf)
Terms
Docs book carrot cat circles dog orange phone square trees
1 0 1 1 0 2 0 0 0 0
2 0 0 1 0 1 0 1 0 0
3 1 0 0 0 1 0 1 0 0
4 1 0 1 0 0 0 0 0 1
5 0 0 0 0 0 1 1 0 0
6 0 0 0 1 1 0 1 0 0
7 0 0 1 0 1 0 0 1 0
8 0 0 1 0 1 0 0 0 1
9 0 1 1 0 0 0 1 0 0
And here is the distance matrix d
> d
1 2 3 4 5 6 7 8
2 1.732051
3 2.236068 1.414214
4 2.645751 2.000000 2.000000
5 2.828427 1.732051 1.732051 2.236068
6 2.236068 1.414214 1.414214 2.449490 1.732051
7 1.732051 1.414214 2.000000 2.000000 2.236068 2.000000
8 1.732051 1.414214 2.000000 1.414214 2.236068 2.000000 1.414214
9 2.236068 1.414214 2.000000 2.000000 1.732051 2.000000 2.000000 2.000000
Here is the result, kfit:
> kfit
K-means clustering with 2 clusters of sizes 5, 4
Cluster means:
1 2 3 4 5 6 7 8 9
1 2.253736 1.194938 1.312096 2.137112 1.385641 1.312096 1.930056 1.930056 1.429253
2 1.527463 1.640119 2.059017 1.514991 2.384158 2.171389 1.286566 1.140119 2.059017
Clustering vector:
1 2 3 4 5 6 7 8 9
2 1 1 2 1 1 2 2 1
Within cluster sum of squares by cluster:
[1] 13.3468 12.3932
(between_SS / total_SS = 29.5 %)
Available components:
[1] "cluster" "centers" "totss" "withinss" "tot.withinss" "betweenss" "size" "iter"
[9] "ifault"
Here is the resulting plot:
1. In calculating my distance matrix d (a parameter used in kfit calculation) I did this: d <- dist(m, method = "euclidean"). Another article I encountered did this: d <- dist(t(m), method = "euclidean"). Then, separately on a SO question I posted recently someone commented "kmeans should be run on the data matrix, not on the distance matrix!". Presumably they mean kmeans() should take m instead of d as input. Of these 3 variations which/who is "right". Or, assuming all are valid in one way or another, which would be the conventional way to go in setting up an initial baseline model?
2. As I understand it, when the kmeans function is called on d, two random centroids are chosen (in this case k=2). Then R looks at each row in d and determines which documents are closest to which centroid. Based on the matrix d above, what would that actually look like? For example, if the first random centroid was 1.5 and the second was 2, how would document 4 be assigned? In the matrix d, doc4 is 2.645751 2.000000 2.000000, so (in R) mean(c(2.645751, 2.000000, 2.000000)) = 2.2, so in the first iteration of kmeans in this example doc4 is assigned to the cluster with value 2, since it's closer to that than to 1.5. After this, the mean of the cluster is recalculated as a new centroid and the docs reassigned where appropriate. Is this right, or have I completely missed the point?
3. In the kfit output above what is "cluster means"? E.g., Doc3 cluster 1 has a value of 1.312096. What is this number in this context? [edit, since looking at this again a few days after posting I can see that it's the distance of each document to the final cluster centers. So the lowest number (closest) is what determines which cluster each doc is assigned].
4. In the kfit output above, "clustering vector" looks like it's just what cluster each doc was assigned to. OK.
5. In the kfit output above, "Within cluster sum of squares by cluster". What is that? 13.3468 12.3932 (between_SS / total_SS = 29.5 %). A measure of the variance within each cluster, presumably meaning a lower number implies a stronger grouping as opposed to a more sparse one. Is that a fair statement? What about the percentage given 29.5%. What's that? Is 29.5% "good". Would a lower or higher number be preferred in any instance of kmeans? If I experimented with different numbers of k, what would I be looking for to determine if the increasing/decreasing number of clusters has helped or hindered the analysis?
6. The screenshot of the plot goes from -1 to 3. What is being measured here? As opposed to education and earnings, height and weight, what is the number 3 at the top of the scale in this context?
7. In the plot the message "These two components explain 50.96% of the point variability" I already found some detailed info here (in case anyone else comes across this post - just for completeness of understanding kmeans output wanted to add here.).
Here's some of the articles I read that helped me to create this script:
• If downvoting please leave a comment letting me know why so I can try to amend – Doug Fir Jun 29 '16 at 2:16
• Where is kfit function documentation available? I've looked inside the tm library cran.r-project.org/web/packages/tm/tm.pdf and found no kfit there. – ttnphns Jun 29 '16 at 8:12
• Hi @ttnphns kfit is a variable of kfit <- kmeans(d, 2) in the example script I made. There's no actual kfit function – Doug Fir Jun 29 '16 at 8:46
• What I've done in SPSS with your data was this. I ran K-means with inputs (a) your document term matrix tdm; (b) with your euclidean distance matrix d. SPSS's K-means treats input always as cases X variables data and clusters the cases. As initial centres, I input in both analyses the output centres of your analysis - cluster means. Results: in analysis (b), but not in (a), I got final centres identical to the input centres. That means that K-means in (b) could not further improve the cluster centres, which implies that analysis (b) coincides with the k-means analysis done by you. – ttnphns Jun 29 '16 at 8:47
• (cont.) But as said before, my analysis (b) treated its input data as data matrix, not as distance matrix. Therefore, your analysis did it so too. I conclude that your K-means function is not designed to take in distance matrices (or you failed to play such an option if it does exist), it is standard K-means requiring data matrices. It is a mistake to try to feed it with a distance matrix. Your clustering results were therefore erroneous. So was my conclusion. – ttnphns Jun 29 '16 at 8:47
To understand how the kmeans() function works, you need to read the documentation and/or inspect the underlying code. That said, even without checking, I am confident it does not accept a distance matrix. You could write your own function to do k-means clustering from a distance matrix, but it would be an awful hassle.
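To make "k-means runs on the data matrix" concrete, here is a minimal NumPy sketch of Lloyd's algorithm applied to the document-term matrix m from the question (an illustrative reimplementation, not the R kmeans() function; the resulting labels depend on the random initialization):

```python
import numpy as np

# Document-term matrix from the question (9 docs x 9 terms);
# k-means operates on these rows directly, NOT on dist(m).
m = np.array([
    [0, 1, 1, 0, 2, 0, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 0, 1, 0, 0],
], dtype=float)

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 1. assign each row to its nearest center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 2. recompute each center as the mean of its assigned rows
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

labels, centers = kmeans(m, 2)
```

Each document is assigned to the cluster whose center (a point in the 9-dimensional term space) is nearest, and the centers are then updated to the cluster means, exactly the iteration described in question 2 above, but measured against centers in term space rather than rows of a distance matrix.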
https://www.tutorialspoint.com/how-to-measure-the-binary-cross-entropy-between-the-target-and-the-input-probabilities-in-pytorch
# How to measure the Binary Cross Entropy between the target and the input probabilities in PyTorch?
We apply the BCELoss() method to compute the binary cross entropy loss between the input and target (predicted and actual) probabilities. BCELoss() is accessed from the torch.nn module. It creates a criterion that measures the binary cross entropy loss. It is a type of loss function provided by the torch.nn module.
The loss functions are used to optimize a deep neural network by minimizing the loss. Both the input and target should be torch tensors having the class probabilities. Make sure that the target is between 0 and 1. Both the input and target tensors may have any number of dimensions. BCELoss() is used for measuring the error of a reconstruction in, for example, an auto-encoder.
### Syntax
torch.nn.BCELoss()
### Steps
To compute the binary cross entropy loss, one could follow the steps given below −
• Import the required library. In all the following examples, the required Python library is torch. Make sure you have already installed it.
import torch
import torch.nn as nn
• Create the input and target tensors and print them.
input = torch.rand(3, 5)
target = torch.randn(3, 5).softmax(dim=1)
• Create a criterion to measure the binary cross entropy loss.
bce_loss = nn.BCELoss()
• Compute the binary cross entropy loss and print it.
output = bce_loss(input, target)
print('Binary Cross Entropy Loss: \n', output)
Note − In the following examples, we are using random numbers to generate input and target tensors. So, you may get different values of these tensors.
## Example 1
In the following Python program, we compute the binary cross entropy loss between the input and target probabilities.
import torch
import torch.nn as nn
input = torch.rand(6, requires_grad=True)
target = torch.rand(6)
# create a criterion to measure binary cross entropy
bce_loss = nn.BCELoss()
# compute the binary cross entropy
output = bce_loss(input, target)
output.backward()
print('input:\n ', input)
print('target:\n ', target)
print('Binary Cross Entropy Loss: \n', output)
## Output
input:
tensor([0.3440, 0.7944, 0.8919, 0.3551, 0.9817, 0.8871], requires_grad=True)
target:
tensor([0.1639, 0.4745, 0.1537, 0.5444, 0.6933, 0.1129])
Binary Cross Entropy Loss:
tensor(1.2200, grad_fn=<BinaryCrossEntropyBackward>)
Notice that both the input and target tensor elements are in between 0 and 1.
## Example 2
In this program, we compute the BCE loss between the input and target tensors. Both tensors are 2D. Notice that for the target tensor, we use the softmax() function to make its elements lie between 0 and 1.
import torch
import torch.nn as nn
input = torch.rand(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
loss = nn.BCELoss()
output = loss(input, target)
output.backward()
print("Input:\n",input)
print("Target:\n",target)
print("Binary Cross Entropy Loss:\n",output)
## Output
Input:
tensor([[0.5080, 0.5674, 0.1960, 0.7617, 0.9675],
[0.8497, 0.4167, 0.4464, 0.6646, 0.7448],
[0.4477, 0.6700, 0.0358, 0.8317, 0.9484]], requires_grad=True)
Binary Cross Entropy Loss:
tensor(1.0689, grad_fn=<BinaryCrossEntropyBackward>)
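For reference, BCELoss with its default mean reduction computes the average of $-[t_i \log p_i + (1 - t_i)\log(1 - p_i)]$ over all elements. A minimal pure-Python sketch of the same formula (the probability values are illustrative):

```python
import math

def bce(p, t):
    # binary cross entropy, averaged over elements (BCELoss's default reduction)
    # p: predicted probabilities in (0, 1); t: targets in [0, 1]
    return -sum(ti * math.log(pi) + (1 - ti) * math.log(1 - pi)
                for pi, ti in zip(p, t)) / len(p)

loss = bce([0.8, 0.3], [1.0, 0.0])   # -> about 0.2899
```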
http://openstudy.com/updates/4d7d1a58a1ea8b0b7a391c2e
## anonymous 5 years ago: Solve $2|x+1| \geq 28$
1. anonymous
2|x+1|≥28. First divide by 2 to get |x+1|≥14. Next break it into two parts: x+1≥14 or x+1≤-14. Then solve each one separately: x≥13 or x≤-15.
2. anonymous
so if I put it in interval notation would it be (-funny eight,-15,]([13,funny eight)
3. anonymous
Right: $(-\infty,-15]\cup[13,\infty)$
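The solution set can be sanity-checked by brute force in Python (illustrative only):

```python
# solutions to 2|x + 1| >= 28 among the integers -20..20
sols = [x for x in range(-20, 21) if 2 * abs(x + 1) >= 28]
# every solution lies in (-inf, -15] or [13, inf), and both endpoints are included
```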
http://math.stackexchange.com/questions/140690/dealing-with-probability-density-functions
Dealing with probability density functions
Let $X$, $Y$ be independent random variables with the common pdf \begin{eqnarray*} f(u) &=& \left\{\begin{array}{ll} u\over2 & \mbox{for } 0 < u < 2\\ 0 &\mbox{elsewhere} \end{array}\right.\\ \end{eqnarray*} Set up an explicit double integral for $P(X Y > 1)$
Let $Z$ be the maximum of $X,Y$ (That is, $Z = X$ if $X \geqslant Y$, and $Z= Y$ if $Y > X$). Find $P(Z\leqslant 1)$
Find the pdf $g(z)$ of $Z$, being sure to define $g(z)$ for all numbers $z$.
This is a problem on a practice exam I'm studying, but I really have no idea how to approach the problem.
• Sketch the plane with coordinate axes and draw on it the square region (of area $4$) over which the joint density is nonzero. Determine the joint density function $f_{X,Y}(u,v) = f_X(u)f_Y(v)$.
• Sketch the hyperbola $xy=1$ in the first quadrant and persuade yourself $P\{XY > 1\}$ is the total probability mass in one of the two regions into which the hyperbola divides the square.
• Find $P\{XY>1\}$ by integrating $f_{X,Y}(u,v)$ over the region you just identified.
$F_Z(z) = P\{Z \leq z\} = P\{X \leq z, Y \leq z\} = P\{X \leq z\}P\{Y \leq z\}$. Since $F_X(z) = \int_0^z \frac{u}{2}\,du = \frac{z^2}{4}$ for $0 \leq z \leq 2$, this gives $F_Z(z) = \frac{z^4}{16}$, so $P(Z \leq 1) = \frac{1}{16}$. Differentiating, the pdf is $g(z) = \frac{z^3}{4}$ for $0 < z < 2$ and $g(z) = 0$ elsewhere.
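The value $P(Z \leq 1) = 1/16$ can be sanity-checked by Monte Carlo simulation (a sketch; samples from the pdf $f(u) = u/2$ are drawn by inverting its CDF $F(u) = u^2/4$, i.e. $u = 2\sqrt{U}$ for uniform $U$):

```python
import random

random.seed(0)

def sample():
    # inverse-CDF sampling from f(u) = u/2 on (0, 2): F(u) = u^2/4, so u = 2*sqrt(U)
    return 2 * random.random() ** 0.5

N = 200_000
p_hat = sum(max(sample(), sample()) <= 1 for _ in range(N)) / N
# p_hat should be close to F_X(1)^2 = (1/4)^2 = 1/16 = 0.0625
```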
https://uniontestprep.com/accuplacer-test/practice-test/next-generation-advanced-algebra-and-functions/pages/1
# Question 1 Next Generation Advanced Algebra and Functions Practice Test for the ACCUPLACER® test
Which answer provides the solution to this system of equations and describes the relationship between the lines?
https://wptc.to/how-to-cntfp/latex-python-equations-8f49e5
Pure Python library for LaTeX to MathML conversion. [Python + LaTeX] Importing equations from Python to LaTeX. PyLaTeX has two quite different usages: generating full pdfs and generating LaTeX snippets. This tutorial explains how we can render the LaTex formulas or equations in Matplotlib. save. Invaluable for creating content for presentations in powerpoint and keynote. How to generate latex first. Tags. Therefore, I will introduce how to write equations in LaTeX today. Another option would be to look in "The Comprehensive LaTeX Symbol List" in the external links section below.. Greek letters []. LaTeX is a powerful tool to typeset math; Embed formulas in your text by surrounding them with dollar signs ; The equation environment is used to typeset one formula; The align environment will align formulas at the ampersand & symbol; Single formulas must be seperated with two backslashes \\; Use the matrix environment to typeset matrices; Scale parentheses with \left( \right) automatically This means that you can freely mix in mathematical expressions using the MathJax subset of Tex and LaTeX. Look for "Detexify" in the external links section below. Additional LaTeX code to put into the preamble of the LaTeX files used to translate the math snippets. Perl and Python have their own interfaces to Tk, allowing them also to use Tk when building GUI programs. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. 0 comments. LaTeX. You can define equations in Python using SymPy and symbolic math variables. Hey, I'm looking for the best current package for generating python documentation automatically from the source with latex equation rendering. The graphs can be created through the tikz environment and without the tikz environment as well. ... Shreya Shankar: LaTeX equations 5 months ago. Snip also supports some text mode LaTeX, like the tabular environment. 
In this article, you will learn how to write basic equations and constructs in LaTeX, about aligning equations, stretchable horizontal … Close. Installation pip install latex2mathml Usage Python import latex2mathml.converter latex_input = "" mathml_output = latex2mathml. LaTeX Features . Well, it’s all about LaTeX. I won't go over LaTeX in this article, but if you are curious, there are lots of great tutorials out there. This note describes all the steps to use Python inside Latex. PyLaTeX is a Python library for creating and compiling LaTeX files or snippets. Let’s solve it using Python! LaTeX is a programming language that can be used for writing and typesetting documents. share. The goal of this library is to be an easy, but extensible interface between Python and LaTeX. Getting started with MathJax. Note: this page is part of the documentation for version 3 of Plotly.py, which is not the most recent version. latex2mathml. PyTeX, or Py/TeX is you prefer, is to be a front end to TeX, written in Python. Bonjour ! Integrations in Equations. The goal of this library is being an easy, but extensible interface between Python and LaTeX. Log in or sign up to leave a … The Markdown parser included in the Jupyter Notebook is MathJax-aware. These examples are extracted from open source projects. Use it e.g. Fractions in Equations . 100% Upvoted. ‘La’ is written in TeX's macro language. HOW TO USE LATEX TO WRITE MATHEMATICAL NOTATION There are thr Those who use LaTeX for their documentation related works, usually are from STEM (Science, Technology, Engineering, Mathematics) background. Work well with technical reports, journal articles, slide presentations, and books. Snip is a LaTeX app first, which means it has great compatibility with any LaTeX editor, like Overleaf. Avec le mode mathématiques [modifier | modifier le wikicode]. Jul 19, 2020. 
Using Python¶ As stated before, Python is a very powerful high-level programming language that can be used to compute tedious and complex arithmetic questions. on utilise \mathrm{…} pour avoir une écriture en romain ; comme cela concerne tous les symboles chimiques, cela devient vite fastidieux ; Here is a simple example of how you can use Python to solve a system of linear equations. Greek letters are commonly used in mathematics, and they are very easy to type in math mode. PyLaTeX is a Python library for creating and compiling LaTeX files. Solving Equations Solving Equations. Je vais pour cela utiliser le package minted ! These people use equations more often than not. But the graphs with equations are better by using the tikz environment.. It is especially useful to write mathematical notation such as equations and formulae. ‘La’ is a front end to Don Knuth's typesetting program TeX. report. Try this simple one upmath – Latex editor online. python documentation from code with latex equations? LATEX Mathematical Symbols The more unusual symbols are not defined in base LATEX (NFSS) and require \usepackage{amssymb} 1 Greek and Hebrew letters α \alpha κ \kappa ψ \psi z \digamma ∆ \Delta Θ \Theta β \beta λ \lambda ρ \rho ε \varepsilon Γ \Gamma Υ \Upsilon Math equations can be written in Manim using LaTeX – a typesetting system widely used in academia. In this post I’ll show you, with examples, how to write equations in Jupyter notebook’s markdown. Pour écrire des formules chimiques, on peut bien sûr utiliser les mathématiques (voir LaTeX/Écrire des mathématiques), en retenant les points suivants : . One of LaTeX big advantages is that it allows you to create good looking math equation with an intuitive system. 
A lot of the nice-looking equations you see in books and all around the web are written using LaTeX commands, and every science, math, or engineering student knows the pain of getting LaTeX to lay out equations correctly; sometimes it can seem harder than the problem you were working on in the first place. The traffic between the two languages runs in both directions. With TeX Live 2020, the pythontex package is already there, so with no additional installation you can call Python from inside a LaTeX document, for example to solve differential equations and show their solutions; most differential equations are impossible to solve explicitly, but we can always use numerical methods to approximate solutions. For multi-line equations with fractions in each line, the dcases and spreadlines environments from mathtools help: spreadlines lets you define additional vertical spacing between the rows. Going the other way, SymPy's latex() function converts a symbolic Python expression into LaTeX markup, which is useful when generating documentation with equations directly from code.
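SymPy can render symbolic expressions as LaTeX markup with sympy.latex(). A short sketch; the expressions are my own illustrations:

```python
# Turn symbolic SymPy expressions into LaTeX markup with sympy.latex().
from sympy import Integral, Rational, latex, sqrt, symbols

x = symbols("x")

print(latex(Rational(1, 2)))        # \frac{1}{2}
print(latex(Integral(sqrt(x), x)))  # an \int ... dx expression
```

The resulting strings can be pasted into a document or displayed in a MathJax-aware notebook cell.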
For the formulas themselves, the amsmath package covers most needs: inline formulas, fractions, summations, integrations, and multi-line equations with the align environment. On the symbolic side, equations in SymPy are different from expressions: an expression is a collection of symbols and operators and does not have equality. SymPy's solve() function can be used to solve equations and expressions that contain symbolic math variables; a simple equation with one variable, like x - 4 - 2 = 0, can be solved directly, and when only one value is part of the solution, it comes back as a one-element list. For diagrams, a graph is a pictorial representation of data connected by links, and graphs with equations come out best with the tikz environment. Finally, to display source code elegantly in a LaTeX document, use the minted package; just make sure Python and pip are installed, since minted relies on the Python-based Pygments highlighter. LaTeX is one of the most widely used formats for writing equations, which is why I chose it here.
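The x - 4 - 2 = 0 example can be sketched like this:

```python
# Solve the one-variable equation x - 4 - 2 = 0 with SymPy's solve().
from sympy import Eq, solve, symbols

x = symbols("x")

# An expression has no equality sign; solve() reads it as "== 0".
print(solve(x - 4 - 2, x))     # [6]

# An Eq object is a real equation and gives the same one-element list.
print(solve(Eq(x - 4, 2), x))  # [6]
```

Note that both calls return a list even though there is only a single solution.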
https://openmx.ssri.psu.edu/comment/9177
# umxACE, same/different sex twins
Hello,
We are fitting twin ACE models (and hopefully, later, twin ACE Cholesky models) using Finnish register data with no information on the twins' zygosity, relying on a same-/different-sex identification approach. This seems to be pretty straightforward with OpenMx, but we have not found a way to do it with umxACE. Presumably this is something that could be achieved with umxModify by changing paths, but we were not able to figure out how, and we were not able to find any examples either. Any suggestions on how to proceed?
Best, Jani
See github script repository
Hi
The model you wish to fit is best described as a mixture distribution, following Neale (2003). There are two scripts in the repository that may help: Acemix.R is one, Acemix2.R is the other. In your case there is no need for the DZ parts of the script; you simply mark the probability of being MZ as (1-(2*NdzosPairs/Ntotalpairs)).
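That weight can be checked with a quick calculation; the pair counts below are hypothetical, not from the thread or from any Finnish register:

```python
# Hypothetical counts (assumption): opposite-sex pairs are known to be DZ,
# and DZ pairs are taken to be half opposite-sex, so total DZ ~= 2 * n_dzos.
n_total_pairs = 10_000  # all twin pairs (made-up number)
n_dzos_pairs = 2_000    # opposite-sex, hence DZ, pairs (made-up number)

p_mz = 1 - (2 * n_dzos_pairs / n_total_pairs)
print(p_mz)  # 0.6
```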
Unfortunately, these are univariate scripts, and worse, they are set up in a Cholesky style. I do not recommend the Cholesky approach in general, per Verhulst et al. (2019). However, it would not be so difficult to modify them to use the direct symmetric approach: get rid of the A, C, and E algebras, make the X, Y, and Z matrices symmetric, and rename them as A, C, and E.
mxMatrix("Full", nrow=1, ncol=1, free=TRUE, values=.6, lbound=.01, label="a", name="X"),
mxMatrix("Full", nrow=1, ncol=1, free=TRUE, values=.6, lbound=.01, label="c", name="Y"),
mxMatrix("Full", nrow=1, ncol=1, free=TRUE, values=.6, lbound=.1, ubound=10, label="e", name="Z"),
# Matrices A, C, and E to compute A, C, and E variance components
mxAlgebra(X * t(X), name="A"),
mxAlgebra(Y * t(Y), name="C"),
mxAlgebra(Z * t(Z), name="E"),
Becomes
mxMatrix("Symm", nrow=nvar, ncol=nvar, free=TRUE, values=.6, name="A"),
mxMatrix("Symm", nrow=nvar, ncol=nvar, free=TRUE, values=.6, name="C"),
mxMatrix("Symm", nrow=nvar, ncol=nvar, free=TRUE, values=.6, name="E"),
And you would set the number of variables earlier in the script, before the model definition, with e.g. nvar <- 5
HTH!
Thanks!
Excellent, we will have a try!
Best, Jani