url | text | date | metadata
|---|---|---|---|
https://engineering.purdue.edu/~mark/puthesis/faq/changing-page-number-format/
|
Changing page number format
July 15, 2009
Mark Senn
How can the page number format be changed?
To change the page number format, insert one of the following immediately after the \begin{document} command.
To center the page number do:
\makeatletter
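The snippet above is truncated in this copy. A minimal sketch of how such a block might continue in order to center the page number in the footer (this completion is an assumption, not the original FAQ code):

```latex
\makeatletter
% Assumed completion: clear the header and center the page number in the
% footer for the page style currently in effect.
\renewcommand{\@oddhead}{}
\renewcommand{\@evenhead}{}
\renewcommand{\@oddfoot}{\hfil\thepage\hfil}
\renewcommand{\@evenfoot}{\hfil\thepage\hfil}
\makeatother
```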
|
2014-04-19 10:01:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9807451367378235, "perplexity": 14170.300347036324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/264392/what-does-coherent-superposition-mean
|
# What does coherent superposition mean?
1. There is only one coherent state: $$|\alpha\rangle=e^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}|n\rangle$$
2. Also, a pure state does not mean a coherent state.
But what does one mean when they talk about a coherent superposition of the ground and excited states:
$c\left|g\right> + d \left|e\right>$? Drawn on the Bloch sphere it lies on the surface, but so does any pure superposition. So what does the term mean, and what does it imply?
I also see a post: Can coherent superpositions of a neutron and antineutron exist?
## 2 Answers
The word "coherent" is used in Physics in a rather sloppy way. Your first state is a linear combination of harmonic oscillator eigenvectors that turns into a gaussian in momentum/position representations. In a more general background, a coherent state is just a state where coherences (off-diagonal terms in the density matrix) are non-zero, which means the state can skipp from one stationary state to another.
Now, a coherent superposition is quite like a coherent state: a superposition is said to be coherent if there's an observable that, if applied to one state, can turn it into another also present in the superposition. As an example, consider the $z$-axis spin up and spin down states of the electron in a Stern-Gerlach experiment. Then there is one spin operator, namely $S_x$, that can turn one into the other. This means they form a coherent superposition. As a counter-example consider the ground and the first excited states of the harmonic oscillator: the creation operator can turn the former into the latter, but this operator is not an observable. The superposition is a non-coherent one, meaning that off-diagonal elements in the density matrix are irrelevant to the problem at hand.
• Thank you. But could you explain the last sentence to me? If I am in the ground state of the harmonic oscillator, its density matrix is diagonal in the Fock basis, and so is that of the first excited state. Also, I remember that $\left<\hat{a}\right>$ means the amplitude of the radiation even though $\hat{a}$ is not Hermitian. – diff Jun 24 '16 at 3:38
• Remember: I'm in a superposition of the ground and first excited states. If this example is confusing consider the superposition of two position eigenstates belonging to two particles in different extremes of the Universe. You can't turn one into the other, so the superposition is non-coherent. – QuantumBrick Jun 24 '16 at 3:43
• Oh, I understand now. Can I say the superposition of the ground and first excited states of the SHO is non-coherent because $a$ can turn one into the other but cannot turn it back? – diff Jun 24 '16 at 3:51
• Not because of that. It is non-coherent because there's no observable that can turn one into the other. Annihilation and creation operators are not observables: you can't measure them. What you can do is build products of them that are observables, but these products cannot turn ground states into excited states, since the same number of annihilation and creation operators must be present; otherwise the resulting operator is still non-Hermitian. – QuantumBrick Jun 24 '16 at 3:58
• Should that observable operator be able to turn it back or be unitary? – diff Jun 24 '16 at 4:06
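A quick numerical illustration of the density-matrix language used in the answer above (a minimal sketch; the basis ordering $\{|g\rangle, |e\rangle\}$ and the equal weights are illustrative choices, not from the question):

```python
import numpy as np

g = np.array([1.0, 0.0])   # |g> in the {|g>, |e>} basis
e = np.array([0.0, 1.0])   # |e>

psi = (g + e) / np.sqrt(2)                                  # coherent superposition
rho_coherent = np.outer(psi, psi.conj())
rho_mixture = 0.5 * np.outer(g, g) + 0.5 * np.outer(e, e)   # incoherent 50/50 mixture

print(rho_coherent)   # off-diagonal entries are 0.5: coherences present
print(rho_mixture)    # off-diagonal entries are 0: no coherences
```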
The first state refers to a state of the field. Glauber originally developed this formalism to give a quantum description of laser fields; it was later adopted in other areas.
The second state refers to the state of a two-level system (in your case). You can also get superposition states with incoherent light, but those are not very useful. The word coherent is used to describe the superposition states that you create with coherent fields. Usually you don't have to quantize the field, but can still work in what is known as the "semi-classical" approximation. This means that the field is treated classically and the system is quantized. This is the more common experimental situation that one encounters. The field dealing with this is called "coherent control". Check out P. L. Knight or N. V. Vitanov; they have plenty of papers on it.
|
2019-05-22 02:38:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7700600624084473, "perplexity": 373.91211533905744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00101.warc.gz"}
|
https://www.assignmentexpert.com/homework-answers/physics/mechanics-relativity/question-127893
|
# Answer to Question #127893 in Mechanics | Relativity for Atharv Shukla
Question #127893
A horizontal jet of water coming out of a pipe of cross-sectional area 20 cm² hits a vertical wall with a velocity of 10 m/s and rebounds with the same speed. The force exerted by the water on the wall is
2020-07-30T10:38:30-0400
Explanations & Calculations
• To do this sum, consider some volume of water of mass m (kg) hitting the wall at the given velocity and rebounding after a period t (s).
• Due to the change of momentum within this period, a force (F) is exerted on that volume of water (and an equal and opposite force is exerted on the wall at the point of impact).
• Applying the equation related to momentum change,
"\\qquad\\qquad\n\\begin{aligned}\n\\small F &= \\small \\frac{mv-mu}{t}\\\\\n&= \\small \\frac{mu-m(-u)}{t}\\\\\n&= \\small \\frac{2mu}{t}\\\\\n& = \\small 2u\\big(\\frac{m}{t}\\big)\n\\end{aligned}"
• This "\\large \\frac{m}{t}" is called the rate of mass flow which equals to,
"\\qquad\\qquad\n\\begin{aligned}\n&= \\small \\frac{Ax\\rho}{t} \n&= A\\rho\\big(\\frac{x}{t}\\big)\n&=A\\rho u\n\\end{aligned}" : "\\rho =" density of water
• Therefore,
"\\qquad\\qquad\n\\begin{aligned}\n\\small F &= \\small 2u\\times Au \\rho\\\\\n&= \\small 2Au^2 \\rho\\\\\n&= \\small2\\times20\\times10^{-4}m^2\\times(10ms^{-1})^2\n\\times10^3kgm^{-3}\\\\\n&= \\small \\bold{40N}\n\\end{aligned}"
|
2023-03-28 05:29:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857008218765259, "perplexity": 6431.237557646189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00328.warc.gz"}
|
https://bodheeprep.com/algebra-questions-cat/6
|
# Algebra Practice Questions for CAT with Solutions
Question 1:
If $x = 2 + 2^{2/3} + 2^{1/3}$, then the value of $x^3 - 6x^2 + 6x$ is:
[1] 2
[2] -2
[3] 0
[4] 4
$x = 2 + 2^{2/3} + 2^{1/3}$
$x - 2 = 2^{2/3} + 2^{1/3}$
$(x - 2)^3 = (2^{2/3} + 2^{1/3})^3$
$x^3 - 6x^2 + 12x - 8 = 2^2 + 3\cdot 2^{4/3}\cdot 2^{1/3} + 3\cdot 2^{2/3}\cdot 2^{2/3} + 2$
$x^3 - 6x^2 + 12x - 8 = 4 + 3\cdot 2^{5/3} + 3\cdot 2^{4/3} + 2$
$x^3 - 6x^2 + 12x - 8 = 6 + 6\cdot 2^{2/3} + 6\cdot 2^{1/3} = (12 + 6\cdot 2^{2/3} + 6\cdot 2^{1/3}) - 6$
$x^3 - 6x^2 + 12x - 8 = 6x - 6$
$x^3 - 6x^2 + 6x = 2$. Option A
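A quick numerical check of this result:

```python
x = 2 + 2**(2/3) + 2**(1/3)
print(round(x**3 - 6*x**2 + 6*x, 9))   # 2.0
```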
Question 2:
If the roots of the equation $x^2 - 2ax + a^2 + a - 3 = 0$ are real and less than 3, then
[1] a < 2
[2] 2 < a < 3
[3] 3 < a < 4
[4] a > 4
For the roots to be real,
$4a^2 - 4(a^2 + a - 3) \ge 0$
$-(a - 3) \ge 0$
$a \le 3$
So the answer could be Option [1] or Option [2].
Put a = 0: the equation becomes $x^2 - 3 = 0$, which has real roots, both less than 3. So a = 0 is a valid solution.
a = 0 is not part of the solution 2 < a < 3, but it is part of a < 2. Option A
Question 3:
Find the value of $\sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt {2 + .....} } } }$
[1] -1
[2] 1
[3] 2
[4] $\frac{{\sqrt 2 + 1}}{2}$
$\sqrt { 2 + \sqrt { 2 + \sqrt { 2 + \sqrt { 2 + \ldots } } } } = x$
$\sqrt { 2 + x } = x$
$2 + x = x ^ { 2 }$
$x ^ { 2 } - x - 2 = 0$
$x = 2 , - 1$; since the nested radical is non-negative, $x = 2$. Option C
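The same answer falls out of viewing 2 as the fixed point of $x \mapsto \sqrt{2+x}$; a quick iteration (a sketch, starting from an arbitrary non-negative seed):

```python
import math

x = 0.0
for _ in range(30):
    x = math.sqrt(2 + x)
print(round(x, 9))   # 2.0: the nested radical converges to 2
```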
Question 4:
If a, b and c are the roots of the equation $x^3 - 3x^2 + x + 1 = 0$, find the value of $\frac{1}{a} + \frac{1}{b} + \frac{1}{c}$
[1] 1
[2] -1
[3] 1/3
[4] -1/3
$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{{ab + bc + ca}}{{abc}} = \frac{{\frac{{Coefficient\;of\;x}}{{Coefficient\;of\;{x^3}}}}}{{\frac{{ - \;Const}}{{Coefficient\;of\;{x^3}}}}} = - \frac{{Coefficient\;of\;x}}{{Const}} = - 1$Option B
Question 5:
If p, q and r are the roots of the equation $2z^3 + 4z^2 - 3z - 1 = 0$, find the value of $(1 - p) \times (1 - q) \times (1 - r)$
[1] -2
[2] 0
[3] 2
[4] None of these
If p, q and r are the roots of the equation $2z^3 + 4z^2 - 3z - 1 = 0$, then
$f(z) = 2z^3 + 4z^2 - 3z - 1 = 2(z - p) \times (z - q) \times (z - r)$ (note the leading coefficient 2)
$f(1) = 2 + 4 - 3 - 1 = 2 = 2(1 - p) \times (1 - q) \times (1 - r)$
$(1 - p) \times (1 - q) \times (1 - r) = 1$, which is not among the listed values. Option D (None of these)
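A quick numerical verification of the last two answers (using numpy's root finder; rounding only suppresses floating-point noise):

```python
import numpy as np

# Question 4: roots a, b, c of x^3 - 3x^2 + x + 1 = 0
a, b, c = np.roots([1, -3, 1, 1])
print(round((1/a + 1/b + 1/c).real, 9))                 # -1.0

# Question 5: roots p, q, r of 2z^3 + 4z^2 - 3z - 1 = 0
p, q, r = np.roots([2, 4, -3, -1])
print(round(((1 - p) * (1 - q) * (1 - r)).real, 9))     # 1.0
```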
|
2020-07-09 01:46:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6947340369224548, "perplexity": 966.34448056007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00378.warc.gz"}
|
http://www.ncatlab.org/nlab/show/analytical+index
|
# nLab analytical index
### Context
#### Index theory
index theory, KK-theory
noncommutative stable homotopy theory
partition function
genus, orientation in generalized cohomology
## Definitions
operator K-theory
K-homology
integration
# Contents
## Idea
By pseudo-differential analysis, an elliptic operator acting on sections of two vector bundles on a manifold is a Fredholm operator and hence has closed range and finite-dimensional kernel and cokernel. The difference of these two dimensions is the analytical index of the operator.
More generally, for $(E_p, D_p)$ an elliptic complex, its analytical index is the alternating sum
$ind_{an}(E_p, D_p) = \sum_p (-1)^p \, dim (ker (\Delta_p)) \,.$
## Properties
This index does not depend on the Sobolev space used to get a bounded operator (by elliptic regularity the kernel consists of smooth sections, and the same holds for the cokernel, since it is the kernel of the adjoint). Using topological K-theory one can also associate to the operator a topological index. The Atiyah-Singer index theorem says that these two indices coincide.
Revised on March 30, 2014 08:59:31 by Urs Schreiber (89.204.154.204)
|
2015-01-29 10:19:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7951717376708984, "perplexity": 927.7314495017605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121833101.33/warc/CC-MAIN-20150124175033-00008-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://runescape.wiki/w/Calculator:Farming/Herbs
|
# Calculator:Farming/Herbs
Templates used Calculator:Template/Farming/Herbs
This calculator determines the expected yield, cost and profit (or lack thereof) of planting and picking herbs on a per-patch basis. It assumes:
• All prices are based on the Grand Exchange Market Watch, including seeds, herbs, composts and potions.
• The cost of a Juju farming potion is based on the cheapest per-dose item available. It does not take into account visiting more than one herb patch per dose, which can reduce costs.
• If any of the Master farmer outfits are chosen, cleaned herbs are harvested and taken into account for profit calculations.
• The price of Goutweed, while it has no specific Grand Exchange value, has an inherent value based on the Exchange value of the herbs it provides when exchanging Goutweed with Sanfew. An expected value can be obtained based on the probabilities listed.
• The expected chance of disease leading to death is based on ${\displaystyle (1-P)^{n-2}}$, where ${\displaystyle P}$ is the probability of disease per stage and ${\displaystyle n}$ is the number of growth cycles the crop requires to fully grow.
• The expected yield is based on a negative binomial distribution: ${\displaystyle {\frac {HarvestLives}{(1-ChancetoSave)}}}$ (a small numeric sketch of these two formulas follows the reference below).
• Rounding down is used on all base chances, as is expected with RuneScript and as described by Jagex moderators[1].
1. ^ Jagex. Mod Kieren's Twitter account. 8 September 2017. Mod Kieren: "On each stage! The division is always rounded down."
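A minimal sketch of the two formulas above in code (the parameter values here are made up for illustration; the real per-herb constants come from the game data, not from this page):

```python
def disease_survival_chance(p_disease_per_stage, growth_cycles):
    """Chance the crop survives every disease roll, per (1 - P)^(n - 2)."""
    return (1 - p_disease_per_stage) ** (growth_cycles - 2)

def expected_yield(harvest_lives, chance_to_save):
    """Expected herbs per patch from the negative binomial model."""
    return harvest_lives / (1 - chance_to_save)

# Illustrative numbers only, not actual game values.
print(disease_survival_chance(0.11, 8))   # ~0.497
print(expected_yield(3, 0.60))            # 7.5 herbs on average
```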
## Herbs
template=Calculator:Template/Farming/Herbs
form=HerbForm
result=HerbResult
param = playername|Name||hs|FarmingLevel_Input,20,1
param = FarmingLevel_Input|Farming Level|1|int|1-99|
param = Compost_Input|Compost|None|select|None,Compost,Supercompost,Ultracompost
param = Leprechaun_Input|Leprechaun Auto-compost|no|check|yes,no
param = Secateurs_Input|Magic Secateurs|no|check|yes,no
param = Scroll_Input|Scroll of Life|no|check|yes,no
param = Potion_Input|Juju Farming Potion|no|check|yes,no
param = Disease_Input|Disease-free Patch|no|check|yes,no
param = Aura_Input|Aura|None|select|None,Basic (3%),Greater (5%),Master (7%),Supreme (10%),Legendary (15%)
param = Outfit_Input|Elite Farming Outfit|None|select|None,Crop Farmer Outfit,Master Farmer Outfit
Please input the data and submit the form.
|
2019-08-21 11:45:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39465826749801636, "perplexity": 9786.754732942409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00208.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-8-section-8-4-formulas-and-further-applications-8-4-exercises-page-536/10
|
## Intermediate Algebra (12th Edition)
$d=\pm\dfrac{\sqrt{skw}}{kw}$
$\bf{\text{Solution Outline:}}$ To solve the given equation, $s=kwd^2 ,$ for $d ,$ use the properties of equality and the Square Root Principle to isolate the variable.
$\bf{\text{Solution Details:}}$ Using the properties of equality, the equation above is equivalent to \begin{array}{l}\require{cancel} \dfrac{s}{kw}=d^2 \\\\ d^2=\dfrac{s}{kw} .\end{array} Taking the square root of both sides (Square Root Principle), the equation above is equivalent to \begin{array}{l}\require{cancel} d=\pm\sqrt{\dfrac{s}{kw}} .\end{array} Rationalizing the denominator by multiplying the radicand by an expression equal to $1$ that makes the denominator a perfect power of the index results in \begin{array}{l}\require{cancel} d=\pm\sqrt{\dfrac{s}{kw}\cdot\dfrac{kw}{kw}} \\\\ d=\pm\sqrt{\dfrac{skw}{(kw)^2}} \\\\ d=\pm\sqrt{\dfrac{1}{(kw)^2}\cdot skw} \\\\ d=\pm\sqrt{\left( \dfrac{1}{kw}\right)^2\cdot skw} \\\\ d=\pm\dfrac{1}{kw}\sqrt{skw} \\\\ d=\pm\dfrac{\sqrt{skw}}{kw} .\end{array}
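A quick symbolic check of the same rearrangement (using sympy; the positivity assumptions on $s$, $k$, $w$ are mine, to keep the square roots real):

```python
import sympy as sp

s, k, w = sp.symbols('s k w', positive=True)
d = sp.Symbol('d', real=True)

solutions = sp.solve(sp.Eq(s, k*w*d**2), d)
print(solutions)                                              # the two roots +/- sqrt(s/(k*w))
print(sp.simplify(sp.sqrt(s/(k*w)) - sp.sqrt(s*k*w)/(k*w)))   # 0: same as sqrt(skw)/(kw)
```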
|
2018-04-23 15:59:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978483319282532, "perplexity": 2136.3591554847067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00392.warc.gz"}
|
https://www.izande.com/toddler/burberry/irs/52428744fa055a6a29
|
The Cartesian product of two non-empty sets A and B, written A × B, is the set of all ordered pairs (a, b) with a ∈ A and b ∈ B; for three sets it is written A × B × C, and a Cartesian product can be formed for any number of sets. In set-builder notation, A × B = {(a, b) : a ∈ A, b ∈ B}. The pairs are ordered, so the operation is not commutative: for distinct non-empty sets A × B ≠ B × A, and A × B = B × A only when A = B; if either set is empty, the product is empty. Cardinalities multiply, |A × B| = |A| · |B|, so two 3-element sets give a product with 3 × 3 = 9 elements. An online Cartesian product calculator typically just asks you to enter set A and set B and then compute A × B.
The same idea appears in databases and spreadsheets. The CARTESIAN JOIN (or CROSS JOIN) returns the Cartesian product of the sets of records from two or more joined tables; it is equivalent to an inner join whose join condition is always true (or absent). The result can be huge: if table A has 1,000 rows and table B has 1,000 rows, the cross join has 1,000,000 rows, so use it carefully and only if needed. In Excel (2013 or later) or Power BI you can build all combinations from two lists by converting the lists to tables and taking their product, and in Python the Cartesian product of two lists is easy to generate directly. For fuzzy numbers, the product is defined through Zadeh's extension principle, taking the minimum of the membership grades of the paired elements.
The Cartesian product is also defined for graphs. The Cartesian product $G \square H$ of graphs G and H has vertex set V(G) × V(H), with $(u_1, v_1)(u_2, v_2)$ an edge whenever $u_1 u_2 \in E(G)$ and $v_1 = v_2$, or $u_1 = u_2$ and $v_1 v_2 \in E(H)$. The graphs G and H are called the factors of $G \square H$, and the r-th Cartesian power $G^r = G \square G \square \dots \square G$ (r times). For example, $K_2 \square K_2 = C_4$, and the Cartesian product of $K_2$ with a path graph is a ladder graph. The book Product Graphs: Structure and Recognition by Wilfried Imrich and Sandi Klavžar collects the main structural and algorithmic results on the four principal graph products (Cartesian, direct, strong and lexicographic). Cartesian products of graphs appear in several research directions quoted on this page: distinguishing colourings and the distinguishing threshold of product graphs (including counting non-equivalent distinguishing colourings of grids), achromatic numbers, vertex and edge PI indices and the sigma coindex of product graphs, domination numbers of mixed-grid graphs built from a path and a directed path, and kernel computations over categorical variables, where decomposing a large graph as a Cartesian product of smaller sub-graphs (with a scale parameter learned per subgraph) keeps the computation linear in the number of input variables. A worked example of computing these products follows below.
On the coordinate side, the Cartesian plane is the product of two copies of the real line: each point has a unique pair of coordinates, the abscissa (x) and the ordinate (y), and the point (0, 0) is called the origin, often labelled O. Graphing a function on the Cartesian plane is sometimes referred to as curve sketching; a good starting set of x-values is a few points around zero, such as −2, −1, 0, 1, 2.
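A short sketch of both kinds of product in code (itertools is in the standard library; networkx is assumed to be available for the graph product):

```python
from itertools import product
import networkx as nx

# Cartesian product of two finite sets: all ordered pairs (a, b).
A, B = {1, 2}, {'x', 'y', 'z'}
AxB = set(product(A, B))
print(len(AxB))                                   # |A| * |B| = 6

# Cartesian product of two graphs: K2 box K2 is the 4-cycle C4.
K2 = nx.complete_graph(2)
G = nx.cartesian_product(K2, K2)
print(G.number_of_nodes(), G.number_of_edges())   # 4 4
print(sorted(d for _, d in G.degree()))           # [2, 2, 2, 2] -> a 4-cycle
```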
|
2023-02-09 12:33:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.763144314289093, "perplexity": 449.2348327225252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00654.warc.gz"}
|
http://mathhelpforum.com/calculus/27454-solve-differential-equation-print.html
|
# Solve Differential Equation
• February 4th 2008, 01:46 PM
bobak
Solve Differential Equation
Find a solution to the following equation.
$e^{\frac{d^2y}{dx^2}} + e^{\frac{dy}{dx}} + e^{y} = 8$
My thoughts: This question is a joke, right? Short of guessing, I can't think of a way to solve this.
• February 4th 2008, 10:51 PM
mr fantastic
Quote:
Originally Posted by bobak
Find a solution to the following equation.
$e^{\frac{d^2y}{dx^2}} + e^{\frac{dy}{dx}} + e^{y} = 8$
My thoughts: This question is a joke, right? Short of guessing, I can't think of a way to solve this.
If you only want a solution, an obvious one is y = k for suitable value of k .... And yes, I did guess (but it was a shrewd guess ..... ;)
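For completeness, the arithmetic behind that guess: assuming a constant trial solution,
$y = k \;\Rightarrow\; \frac{dy}{dx} = \frac{d^2y}{dx^2} = 0 \;\Rightarrow\; e^0 + e^0 + e^k = 2 + e^k = 8 \;\Rightarrow\; k = \ln 6$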
• February 4th 2008, 11:47 PM
bobak
So y = ln 6 is the best solution so far. By the way, does this even qualify as a second order differential equation?
• February 5th 2008, 12:27 AM
mr fantastic
Quote:
Originally Posted by bobak
So y=ln6 is the best solution so far. by the was does this even qualify as a second order differential equation?
There's probably a special name. It's not Clairaut's equation, but if you look into it, you'll see it's not so unusual to have equations of this form .....
Where'd the question come from, anyway?
• February 5th 2008, 12:31 AM
bobak
My math tutor put it on a problem sheet for me; he claims he has a solution to it. I'll ask him what it is later, and if it is anything other than ln 6 I'll post it.
• February 5th 2008, 06:16 PM
bnic3029
|
2015-09-04 20:33:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452013731002808, "perplexity": 1285.325907612044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00054-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://clay6.com/qa/38505/if-z-z-z-are-complex-numbers-such-that-z-z-z-large-frac-large-frac-large-fr
|
# If $\;z_{1} , z_{2} ,z_{3}\;$ are complex numbers such that $\;|z_{1}|=|z_{2}|=|z_{3}|=|\large\frac{1}{z_{1}} +\large\frac{1}{z_{2}} + \large\frac{1}{z_{3}}|=1\;$ , then find the value of $\;|z_{1}+ z_{2}+z_{3}|\; .$
$(a)\;1\qquad(b)\;0\qquad(c)\;2\qquad(d)\;4$
Answer : $\;1$
Explanation :
$|z_{1}|=|z_{2}|=|z_{3}| =1$
$|z_{1}|^{2}=|z_{2}|^{2}=|z_{3}|^{2} =1$
$z_{1} \overline{z_{1}} = z_{2} \overline{z_{2}} = z_{3} \overline{z_{3}} =1$ , so
$\overline{ z_{1}} = \large\frac{1}{z_{1}} \; , \overline{ z_{2}} = \large\frac{1}{z_{2}} \; , \overline{ z_{3}} = \large\frac{1}{z_{3}}$
Given that , $\;|\large\frac{1}{z_{1}} +\large\frac{1}{z_{2}} + \large\frac{1}{z_{3}}|=1$
$|\overline{z_{1}}+ \overline{z_{2}}+ \overline{z_{3}} | =1$
i.e , $\;|\overline{z_{1} + z_{2}+z_{3}}| =1$
$\;|z_{1}+ z_{2}+z_{3}|=1\; .$
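A quick numerical sanity check with one concrete triple satisfying the hypothesis (the particular values are an illustrative choice, not part of the original solution):

```python
z1, z2, z3 = 1, 1j, -1j            # all of modulus 1
print(abs(1/z1 + 1/z2 + 1/z3))     # 1.0, so the hypothesis is met
print(abs(z1 + z2 + z3))           # 1.0, matching the answer
```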
|
2016-12-08 14:02:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9920068383216858, "perplexity": 989.8112102411394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541529.0/warc/CC-MAIN-20161202170901-00248-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/simple-newtonian-mechanics-problem.392840/
|
# Homework Help: Simple newtonian mechanics problem
1. Apr 6, 2010
### ENgez
1. The problem statement, all variables and given/known data
a man with mass m = 66 kg is standing on top of a platform with mass M = 120 kg. The man is pulling himself up using a pair of ropes suspended over massless pulleys. He pulls each rope with a force of F = 600 N and is accelerating towards the ceiling at acceleration a. g = 9.8 m/sec^2. Find the value of a in m/sec^2.
* - pulley
2. Relevant equations
$$\sum F = ma$$
3. The attempt at a solution
I tried treating the man and the platform as one body but i got negative acceleration, as if the man was accelerating away from the ceiling. I would like to see how you guys solve this problem.
2. Apr 6, 2010
### tiny-tim
Welcome to PF!
Hi ENgez! Welcome to PF!
(have a sigma: ∑ and try using the X2 tag just above the Reply box )
Show us your full calculations, and then we'll see what went wrong, and we'll know how to help!
3. Apr 6, 2010
### ENgez
I summed up the forces on the y-axis, treating the man and the platform as one body:
600*2 - (m+M)*g = (m+M)*a
a=-3.707 m/sec2
Last edited: Apr 6, 2010
4. Apr 6, 2010
### tiny-tim
Hi ENgez!
Why 600 times two?
5. Apr 6, 2010
### ENgez
The man pulls each rope with a force of 600N, as seen in the attached picture.
6. Apr 6, 2010
### tiny-tim
So why two?
7. Apr 6, 2010
### ENgez
Is this supposed to be a hint? I think it should be two, as there are two ropes.
8. Apr 6, 2010
### tiny-tim
How do you know it isn't one rope?
If the rope continued under the platform, so that there was only one rope, would that make any difference? If not, how can the number of ropes matter?
9. Apr 6, 2010
### ENgez
so you are saying that the equation should be
600-(m+M)*g=(m+M)*a
a= -6.574?
10. Apr 6, 2010
### tiny-tim
ie 600*1 ?
No, I'm saying that the number of ropes is irrelevant.
Hint: what would be the tension in each rope if the platform was stationary?
11. Apr 6, 2010
### ENgez
I think I see where you are going with this... if the platform were stationary, the tension felt by each rope would be given by: 2T-(m+M)*g=0 => T = (m+M)*g/2. So if the man is supplying 600N of force the equation becomes 2T+600+(m+M)*g/2 = (m+M)*a?
Last edited: Apr 6, 2010
12. Apr 6, 2010
### tiny-tim
Sorry, that equation doesn't make sense …
if it's meant to be the equation for force on the man-and-platform, how can you include force from the man?
13. Apr 6, 2010
### ENgez
hmmm. let me try again... the equation for the platform-and-man is:
2T-(m+M)*g= (m+M)*a
and the equation for the man:
T-600-m*g=m*a
is this correct?
14. Apr 6, 2010
### tiny-tim
No.
Try it for a = 0, ie the man-and-platform is stationary …
draw all the external forces on the man-and-platform.
i really have no idea what that's supposed to be
(and i'm off to bed :zzz:)
15. Apr 6, 2010
### ENgez
Thanks a lot for your help :) I'll keep working on it.
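For reference, a minimal sketch of the algebra the thread is circling around. The key unknown is how many vertical rope segments pull the man-and-platform system upward, which depends on the attached figure (not visible in this copy), so the segment count below is an explicit assumption rather than part of the original problem statement:

```python
def acceleration(pull_force, m_man, m_platform, n_segments, g=9.8):
    """Newton's second law for the man-plus-platform system, assuming
    each of n_segments vertical rope segments carries the pull force."""
    total_mass = m_man + m_platform
    return (n_segments * pull_force - total_mass * g) / total_mass

for n in (2, 4):
    print(n, round(acceleration(600, 66, 120, n), 2))
# 2 segments would give a negative value (the system could not rise);
# 4 segments give roughly +3.1 m/s^2.
```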
|
2018-12-13 17:24:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5321069955825806, "perplexity": 3729.965100424683}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825029.40/warc/CC-MAIN-20181213171808-20181213193308-00133.warc.gz"}
|
https://ltwork.net/flowchart-in-programming--1318
|
Flowchart in programming
Question:
Flowchart in programming
For the following reaction, 63.6 grams of barium hydroxide are allowed to react with 34.1 grams of sulfuric acid. barium hydroxide(aq) + sulfuric acid(aq) -> water(l) + barium sulfate(s) What is the maximum amount of barium sulfate that can be formed? What is the FORMULA for the limiting reagen...
It is used as a strategy to convince readers of the truth of the writer's position. It is called a claim....
Please change the verb in the parentheses to the correct preterite form to match the subject. Uds. (nadar) en la piscina el sábado pasado....
Explain how you could write a quadratic function in factored form that would have a vertex with an x coordinate of 3 and two distinct roots...
Presentation software allows users to...
A large crate sits on the floor of a warehouse. Paul and Bob apply constant horizontal forces to the crate. The force applied by Paul has magnitude 48.0 N and direction 61.0∘ south of west. How much work does Paul's force do during a displacement of the crate that is 12.0 m in the direction 22.0...
Find the area of the figure: It's a parallelogram with a height of 12 1/2 and a base of 17 1/5. Note: Sorry, I can't insert a picture...
The fundamental mechanism by which the energy is converted from one form to another in electrical machines is a. electric field b. magnetic field d. inductive reactance...
Fill in the blank: includes the continents, islands and the entire ocean floor...
Most countries in Europe can be characterized as... Less Developed, Developing, More Developed, Under Developed...
Need help with these questions ASAP!! 1. How did the automobile industry change American society? 2. Why did the economy grow so rapidly in the 1920s? 3. How did Americans respond to immigration and social change during the 1920s? 4. In what areas did women gain power during the 1920s? 5. How did 1...
Democratic Concepts Chart. Directions: Use your knowledge from the lesson to fill in the missing content in the charts below. Once you have completed your charts, be sure to answer the focus question at the bottom of the page. Tracing the roots of the American political process to Athens Ancient Gre...
99 points: what is the equation of the line graphed?...
What is acceleration...
What is the meaning of the death of Argus? (what is significant about it?)...
Please help me, so the question is which one of these is a photograph, can u help me ASAP? Please...
A person who has a conversion disorder may complain of which kinds of problems? Select all that apply....
Solve the equation 5y – 7x = 7 for y....
|
2022-08-18 20:04:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39712122082710266, "perplexity": 1776.7691136141113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00558.warc.gz"}
|
https://scicomp.stackexchange.com/questions/11672/computing-element-stiffness-matrices-with-variable-coefficients
|
# Computing element stiffness matrices with variable coefficients
I am trying to implement a simple FEM approach, using p1 triangular elements, for solving the diffusion equation with variable nodal diffusivities and I was wondering how to incorporate the variable nodal diffusivities when computing the element stiffness matrices.
I would really appreciate if someone could point me in the right direction. Many thanks in advance.
I assume you're trying to solve an equation that looks like:
\begin{align} -\nabla \cdot (a(x)\nabla{u}) = f, \end{align}
for $x$ in some domain $\Omega$, although the same approach would be fine (for a residual evaluation, anyway) if $a$ were also a function of $u$.
The stiffness matrix will take the form
\begin{align} A_{ij} = \int_{\Omega}a(x)\nabla\varphi_{j}(x)\cdot\nabla\varphi_{i}(x)\,\mathrm{d}x. \end{align}
For $P_{1}$ triangular elements, $\nabla\varphi_{j}$ will be constant over each element $K$, for each $j$. Letting $\mathcal{K}$ be the mesh, the elements of the stiffness matrix become
\begin{align} A_{ij} = \sum_{K \in \mathcal{K}}\nabla\varphi_{j}^{K}\cdot\nabla\varphi_{i}^{K}\int_{K}a(x)\,\mathrm{d}x, \end{align}
where all I've done is factor out the constant gradients over each element (these gradients will differ from element to element, which is why they are also indexed over $K$). If $K$ has vertices $N_{1}$, $N_{2}$, and $N_{3}$, then
\begin{align} \int_{K}a(x)\,\mathrm{d}x = a\left(\frac{N_{1} + N_{2} + N_{3}}{3}\right)|K| + O(h^{3}), \end{align}
where $h$ is the mesh "size", and $|K|$ is the area of element $K$. This approximation is also called the center of gravity rule.
In practice, you'd take the local stiffness matrix for each element (assuming a diffusivity of one), and multiply it by the variable diffusivity $a$ evaluated at the center of gravity of the element.
• Hi Geoff, many thanks for taking the time to write such an excellent reply. If I may pose a follow-up question: is there a case when it is necessary to expand these variable diffusivities in terms of the nodal basis functions? Many thanks again. – semper May 21 '14 at 11:28
If you are using numerical integration you can simply replace the integral by a quadrature sum, $$\int\limits_{\Omega_{el}} a(x)\, \nabla \varphi_i(x) \cdot \nabla \varphi_j(x)\, dx \approx \sum_{x_k \in \Omega_{el}} w_k\, a(x_k)\, \nabla \varphi_i(x_k) \cdot \nabla \varphi_j(x_k) \enspace ,$$ where the $x_k$ are the quadrature (Gauss) points and the $w_k$ are the corresponding weights.
So you have to know your diffusivity at each Gauss point. This is easy if you have an analytic function. If you only know the data at your nodes, you can use the interpolation functions to compute these values. In the case of linear elements it is simpler to do what Geoff suggested.
|
2021-01-22 20:21:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581015706062317, "perplexity": 374.4763912169816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00796.warc.gz"}
|
http://mathoverflow.net/questions/79645/simplifying-a-polynomial
|
# Simplifying a polynomial
Let $f(x_1,\ldots, x_n)\in\Bbbk [x_1,\ldots,x_n]$ be a given polynomial (assume $\Bbbk$ algebraically closed if you want). Suppose that we are given $n$ polynomials $v_1,\ldots v_n \in\Bbbk[x_1,\ldots, x_n]$. Suppose that we know that there exists a polynomial $P(t_1,\ldots,t_n)\in\Bbbk[t_1,\ldots,t_n]$ such that
$$f(x_1,\ldots,x_n)=P(v_1(x_1,\ldots,x_n),\ldots,v_n(x_1,\ldots,x_n))\in\Bbbk[x_1,\ldots,x_n]$$
How can we find $P$ explicitly? Is there a computer program that can easily solve this problem?
I'm also interested in answers under the hypothesis that $v_1,\ldots, v_n$ are homogeneous of degrees $d_1,\ldots, d_n$ (and possibly some simplifying assumptions on $d_i$), and/or $f$ is itself homogeneous.
To me this is just a practical question which is natural enough to be asked on MO; I apologize if it is totally trivial for some people more knowledgeable in computational matters.
-
Have you looked at SAGBI basis algorithms? – J.C. Ottem Oct 31 '11 at 18:06
Well, in principle, it is just a huge system of linear equations for the coefficients of $P$. So, if the degree and the number of variables are not too large, you should be able to set it up and solve. Of course, the complexity of this naive approach goes up fast as $n$ grows, but I see no reason why it shouldn't work unless your polynomials have some special structure. – fedja Oct 31 '11 at 18:18
Max: I think the $v_i$ are given, not free to be chosen. – Noah Stein Oct 31 '11 at 19:47
@Max: the problem is something like this: given $f$ and $v$ such that $f=P\circ v$ for some polynomial $P$, find $P$ explicitly. @Melania: maybe you could expand a bit and make the comment become an answer? – Qfwfq Oct 31 '11 at 22:09
@Qfwfq: this is very far from a trivial problem. But it turns out to have some rather efficient solutions. This requires deep algorithmic thinking, see the references I point to. – Jacques Carette Nov 1 '11 at 0:29
This is the (multivariate) functional decomposition problem. It has a long history, going back to 1922 work by Ritt and 1941 work by Engstrom. See the introduction to Algorithms for the Functional Decomposition of Laurent Polynomials by Stephen Watt for a nice historical overview. You will also be interested in references 1-8 in that paper.
The most recent work (on the multivariate) case that I am aware of is that of Faugère and Perret (see also the slides for the talk and a journal version). Their algorithm is non-trivial, and trying to explain it here would amount to reproducing their paper, so I won't do that.
EDIT: Note that most of these algorithms are in two pieces. As was pointed out, just one of these pieces is really needed here. And while GB can be used, the good thing about the functional decomposition algorithms is that they are able to use all of the structure present in the problem, which is really too much to ask for from a generic GB.
-
+1 and accepted answer! – Qfwfq Nov 1 '11 at 1:20
no, that is wrong way – Melania Nov 1 '11 at 8:42
As far as I can see, the functional decomposition problem as defined in the linked papers is: given $f$, find $P$ and $v$. In Qfwfq’s question, $v$ is also given. – Emil Jeřábek Nov 1 '11 at 11:37
If you look at (most of) the algorithms, they split into 2 pieces. Qfwfq's problem reduces to using just one of the two. – Jacques Carette Nov 1 '11 at 12:25
I see, thanks for the explanation. – Emil Jeřábek Nov 1 '11 at 14:12
It is a standard problem for Groebner basis theory; see for example Ideals, Varieties, and Algorithms by David Cox, John Little, Donal O'Shea. In the polynomial ring $k[x_1,x_2,\ldots,x_n,y_1,\ldots,y_n,f]$ consider the ideal $I=(f-f(x_1,\ldots,x_n),\ y_1-v_1(x_1,\ldots,x_n),\ldots,y_n-v_n(x_1,\ldots,x_n))$. If we eliminate the variables $x_1,x_2,\ldots,x_n$ by using a Groebner basis and we get an elimination relation of the form $f-F(y_1,\ldots,y_n)$ for some polynomial $F$, then $F$ is the polynomial we are looking for, and we obtain that $f(x_1,\ldots,x_n)=F(v_1,v_2,\ldots,v_n).$
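To make the elimination concrete, here is a small SymPy sketch (my own toy example, not part of the original answer): with $f = x_1^2 + x_2^2$ and the elementary symmetric polynomials $v_1 = x_1 + x_2$, $v_2 = x_1 x_2$, the eliminated relation recovers $P(t_1,t_2) = t_1^2 - 2t_2$.

from sympy import symbols, groebner

x1, x2, y1, y2, F = symbols('x1 x2 y1 y2 F')

f  = x1**2 + x2**2           # the given polynomial
v1 = x1 + x2                 # the given v_i
v2 = x1*x2

# Ideal (F - f, y1 - v1, y2 - v2); eliminate x1, x2 with a lex order
# that ranks the x's highest.
G = groebner([F - f, y1 - v1, y2 - v2], x1, x2, y1, y2, F, order='lex')

# Look for a basis element free of x1, x2 and involving F: it is F - P(y1, y2) up to sign
for g in G.exprs:
    if not g.has(x1) and not g.has(x2) and g.has(F):
        print(g)   # a scalar multiple of F - (y1**2 - 2*y2), i.e. P(t1, t2) = t1**2 - 2*t2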
-
This works, but is in some ways overkill. This is because there is a lot of structure in the presentation of the ideal above which is usually not exploitable by (most) GB algorithms. Plus it is well-known that GB is extremely sensitive to variable ordering -- which ordering would you choose for this problem? Why? – Jacques Carette Nov 1 '11 at 12:29
@Jacques. I agree that for large-degree polynomials it doesn't work well. But I am not sure whether your method is better in this case. The standard $\tt{Eliminate}$ procedure means one doesn't have to care about the ordering. – Melania Nov 1 '11 at 17:33
|
2015-01-30 23:52:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905437171459198, "perplexity": 425.41865642830356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858727.26/warc/CC-MAIN-20150124161058-00171-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1274.14047
|
# zbMATH — the first resource for mathematics
Pfaffian Calabi-Yau threefolds and mirror symmetry. (English) Zbl 1274.14047
This paper constructs four new examples of Calabi-Yau threefolds with $$h^{1,1}=1$$. These Calabi-Yau threefolds are Pfaffian Calabi-Yau threefolds in weighted projective spaces, which are non-complete intersections. The existence of non-complete intersection Pfaffian Calabi-Yau threefolds was conjectured by C. van Enckevort and D. van Straten [AMS/IP Studies in Advanced Mathematics 38, 539–559 (2006; Zbl 1117.14043)]. The main result of the paper is the following:
Theorem. There exist four Pfaffian threefolds $$X_5, X_7, X_{10}$$ and $$X_{25}$$, which are smooth Calabi-Yau threefolds with $$h^{1,1}=1$$. All of these are non-complete intersections. The fundamental topological invariants $$\int_X H^3$$, and $$\int_X c_2(X)\cdot H$$ and $$c_3(X)$$ are also computed for these Calabi-Yau threefolds.
Complete intersections of Pfaffian varieties and hypersurfaces in weighted projective spaces are studied focusing on examples. For instance, $$X_{25}$$ is such an example, and it turns out that the Calabi-Yau equation has two maximally unipotent monodromy points of the same type.
Next mirror families for degree $$5,7,10$$ Pfaffian Calabi-Yau threefolds are constructed, following a detailed discussion on mirror construction of the degree $$13$$ Pfaffian Calabi-Yau threefold, which is not a complete intersection. $$X_{13}$$ was constructed by F. Tonoli [J. Algebr. Geom. 13, No. 2, 209–232 (2004; Zbl 1060.14060)], and a candidate mirror partner was proposed by J. Böhm [“Mirror symmetry and tropical geometry”, arXiv:0708.4402].
This paper computes the fundamental period integrals and Picard-Fuchs differential equations for the degree $$13$$ Calabi-Yau threefold first and then for the degree $$5,7$$ and $$10$$ new Pfaffian Calabi-Yau threefolds. It is verified that the Picard-Fuchs differential equations coincide with the predicted Calabi-Yau equations in the aforementioned article of van Enkevort and van Straten.
##### MSC:
14J32 Calabi-Yau manifolds (algebro-geometric aspects)
14J33 Mirror symmetry (algebro-geometric aspects)
Full Text:
|
2021-09-27 08:31:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6612874269485474, "perplexity": 732.1268988844939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00429.warc.gz"}
|
https://arduino.stackexchange.com/questions/20090/multiple-relays-not-triggering-correctly
|
# multiple relays not triggering correctly
quick description of the sketch: its a clap on/off relay that turns on only when it's dark.
Cannot seem to get the relays to stay off. What happens is they are off by default; then when I do the double clap to turn them on, that works. After this I double clap to turn them off and they go off, then immediately go on again. I can get a single relay to work perfectly, but when I uncomment the rest there seems to be a problem.
I am assuming it has something to do with the delay for the clap sensor, but I am not entirely sure-- I have tried quite a few things to try to resolve it but am really lost as to why this might be happening. ideas?
int soundSensor = A2;
int relay = A0;
int relay2 = A3;
int relay3 = A4;
int relay4 = A5;
int claps = 0;
long detectionSpanInitial = 0;
long detectionSpan = 0;
boolean clapState = false;
int doubleClap = 0;
//Light sensor
int photocellPin = A1;
int LEDbrightness;
//int delayValue = 100;
int photoSensor = 0;
void setup() {
pinMode(soundSensor, INPUT);
pinMode(relay, OUTPUT);
pinMode(relay2, OUTPUT);
pinMode(relay3, OUTPUT);
pinMode(relay4, OUTPUT);
Serial.begin(9600);
}
void loop() {
/////////////
// photo sensor
////////////////////
int photocellReading = analogRead(photocellPin);   // read the light level (0-1023)
LEDbrightness = map(photocellReading, 0, 1023, 0, 255);
if (photocellReading < 500) {    // assumed threshold of 500
photoSensor = 0;
}
if (photocellReading > 500) {
photoSensor = 1;
}
/////////////
// clap sensor
/////////////////////
int sensorState = digitalRead(soundSensor);        // read the clap/sound sensor
//Serial.println(sensorState);
if (sensorState == 1)
{
if (claps == 0)
{
detectionSpanInitial = detectionSpan = millis();
claps++;
}
else if (claps > 0 && millis()-detectionSpan >= 50)
{
detectionSpan = millis();
claps++;
}
}
if (millis()-detectionSpanInitial >= 500)
{
if (claps == 2)
{
if (doubleClap == 0)
{
doubleClap = 1;
}
else if (doubleClap == 1)
{
doubleClap = 0;
}
}
claps = 0;
}
if(doubleClap == 1 && photoSensor == 0){
digitalWrite(relay, LOW);
digitalWrite(relay2, LOW);
digitalWrite(relay3, LOW);
digitalWrite(relay4, LOW);
Serial.println("off");
}
else{
digitalWrite(relay, HIGH);
digitalWrite(relay2, HIGH);
digitalWrite(relay3, HIGH);
digitalWrite(relay4, HIGH);
Serial.println("on");
};
}
• I suggest some debugging displays. After a clap is registered, print the value of doubleClap and claps. I suspect they won't be exactly what you expect. Maybe also print the current value of millis() which will probably help you debug. – Nick Gammon Jan 29 '16 at 5:07
• It may be the fact that the photosensor is floating around 500 value and causing "false" triggers. Try to add a threashold or create a hysteresis cycle. – brtiberio Jan 29 '16 at 9:45
As it happens, you are debugging in the dark. You have half a dozen devices attached to (presumably) some Arduino, driven by some hard-to-read code that might not work properly even if all the devices work ok, and definitely won't work properly if any of them fail or are incorrectly attached.
For example, you can't tell from that code whether your photoSensor works, whether your clap sensor works, or whether your relays work. Before messing around further trying to test all at once, run some definitive tests on each device individually. For example, run the following sketch and see if your system is able to turn on all of the relays at the same time.
// Test if relays work ok: First individually for a second each,
// then all together for 2.5 seconds
enum {relay=A0, relay2=A3, relay3=A4, relay4=A5, nRelays=4};
byte rePins[] = { relay, relay2, relay3, relay4};
void setup() {
for (int r=0; r<nRelays; ++r) {
pinMode(rePins[r], OUTPUT); // Set pin to outputs
digitalWrite(rePins[r], LOW); // Turn off relay
}
Serial.begin(115200);
}
void loop() {
byte r;
// Turn relays on individually for a second each, with half-second gaps
for (r=0; r<nRelays; ++r) {
delay(500); // Wait half a second
Serial.print("Turning on relay #");
Serial.print(r+1);
Serial.print(" at ");
Serial.println(millis());
digitalWrite(rePins[r], HIGH); // Turn on relay
delay(1000); // One on for a second
digitalWrite(rePins[r], LOW); // Turn off relay
}
// Turn all the relays on
Serial.print("Turning all relays on at ");
Serial.println(millis());
for (r=0; r<nRelays; ++r)
digitalWrite(rePins[r], HIGH); // Turn on relay
delay(2500); // Wait 2.5 seconds
// Turn all the relays off
Serial.print("Turning all relays off at ");
Serial.println(millis());
for (r=0; r<nRelays; ++r)
digitalWrite(rePins[r], LOW); // Turn off relay
}
Next, write a sketch to report results from photosensor and clap sensor readings. Once you know if your hardware is working, try to get your double-clap logic to work without involving the photosensor, or vice versa. Also, try to make the double-clap logic work just using a button in place of the clap sensor.
Note, to shorten your code and to get rid of some of the features that make it hard to read, do some of the following changes. With the code slightly shorter, it may be easier to comprehend and debug.
(a) Instead of introducing program constants as integer variables, introduce them as program constants. For example, instead of the verbose
int relay = A0;
int relay2 = A3;
int relay3 = A4;
int relay4 = A5;
say something like
enum {relay=A0, relay2=A3, relay3=A4, relay4=A5};
which tells the compiler that relay is a constant with the value A0, etc.
(b) Since you don't use variable LEDbrightness for anything, take it out. With that gone, change the verbose
photocellReading = analogRead(photocellPin);
LEDbrightness = map(photocellReading, 0, 1023, 0, 255);
if (photocellReading < 500) {    // assumed threshold of 500
photoSensor = 0;
}
if (photocellReading > 500) {
photoSensor = 1;
}
to
photoSensor = analogRead(photocellPin) > 523;
which will set photoSensor to 0 or 1 much as before. (It is slightly different: It handles a photocellReading==500 case that your code ignores.)
(c) Change the verbose and badly formatted
if (doubleClap == 0)
{
doubleClap = 1;
}
else if (doubleClap == 1)
{
doubleClap = 0;
}
to
doubleClap = 1 - doubleClap;
(d) Refactor to cut about half the code involving detectionSpan.
|
2019-10-16 05:09:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2889609634876251, "perplexity": 8230.711538729282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00219.warc.gz"}
|
https://learn.careers360.com/ncert/question-write-the-following-numbers-in-standard-form-i-0000000564/
|
# Write the following numbers in standard form. (i) 0.000000564
1. Write the following numbers in standard form.
(i) 0.000000564
$0.000000564=\frac{564}{1000000000}=\frac{5.64\times 10^{2}}{10^{9}}=5.64\times10^{-7}$
The decimal point moves 7 places to the right to give 5.64, so the power of 10 is $-7$.
|
2020-02-24 09:17:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6324968338012695, "perplexity": 8509.233513619793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145910.53/warc/CC-MAIN-20200224071540-20200224101540-00464.warc.gz"}
|
https://scicomp.stackexchange.com/questions/31313/overrelaxation-with-w-0
|
Overrelaxation with w < 0
Are there any circumstances under which using a value $$w < 0$$ would help us find a solution in over-relaxation faster than we can with the ordinary relaxation method?
Over Relaxation Method:
$$x'= [1 + w]f(x) - wx$$
Example
Calculating $$x = 1-e^{-3x}$$
Take x = 1 as initial value, and w as 0.2
x' = (1+0.2)f(1)-0.2(1) = 0.94025551795
x' = (1+0.2)f(0.94025551795)-0.2(0.94025551795) = 0.94047657354
x' = (1+0.2)f(0.94047657354)-0.2(0.94047657354) = 0.94047974478
x' = (1+0.2)f(0.94047974478)-0.2(0.94047974478) = 0.94047979005
We stop once the value reaches the desired accuracy.
Why would over-relaxation reach the solution faster if we consider $$w < 0$$ for a non-linear function such as $$x = 1 - e^{(1 - x^2)}$$?
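For intuition, here is a small Python sketch (my own illustration, not part of the question) that runs the relaxed iteration $$x'= [1 + w]f(x) - wx$$ for a few values of w, including a negative one, and counts how many steps each takes to converge. A negative w damps the update, which can help when the plain fixed-point iteration overshoots or oscillates.

import math

def relaxed_iteration(f, x0, w, tol=1e-10, max_iter=10_000):
    """Iterate x' = (1 + w) f(x) - w x until successive values agree to tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = (1 + w) * f(x) - w * x
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

f = lambda x: 1 - math.exp(-3 * x)   # the example from the question

for w in (0.0, 0.2, 0.5, -0.2):
    root, steps = relaxed_iteration(f, x0=1.0, w=w)
    print(f"w = {w:+.1f}: x = {root:.10f} after {steps} steps")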
• Cross posted on Physics: physics.stackexchange.com/q/468753/25301 – Kyle Kanos Mar 26 at 14:31
• It seems like you are trying to find the roots of a nonlinear equation using fixed-point iteration. It also seems that you are using the opposite signs for the $w$, compared with the usual convention. I would say that the other sign looks more natural (to me) because it resembles a convex linear combination. – nicoguaro Apr 1 at 15:39
|
2019-10-18 09:42:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812587320804596, "perplexity": 1035.3244162749731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986679439.48/warc/CC-MAIN-20191018081630-20191018105130-00422.warc.gz"}
|
https://nbviewer.jupyter.org/github/VHRanger/MLE-tutorial/blob/master/Implementing%20and%20vectorizing%20a%20Maximum%20Likelihood%20model%20with%20scipy--1.ipynb
|
# Implementing and vectorizing a Maximum Likelihood model with scipy¶
By: Matt Ranger
This notebook walks through the process of coding, testing, estimating, and vectorizing a Maximum Likelihood (MLE) model "from scratch" using scipy's numerical optimization package. Not all MLE models are available in pre-cooked packages, so this skill is necessary for some research topics.
We will only be touching a simple model here (regular probit), so as not to get bogged down in extraneous complexities. This general method of estimation, testing, and vectorization will however work for a wide range of MLE models.
Probit
The basic probit model takes a binary dependent variable, $Y$, and assumes that $Pr(Y=1 | X) = \Phi(X^T \beta)$ where $X$ is the matrix of independent variables, $\beta$ is the vector of parameters to estimate, and $\Phi$ is the CDF of the standard normal distribution. We want to take the likelihood function $L=\Pr(\beta|X, Y)$, and maximize it over $\beta$ to get the most likely $\beta$ parameters given the data $X,Y$. We usually use the log of the likelihood function in practice, because it is simpler in both math and computation, and the maximum point is the same. The probit log likelihood is as follows:
$$ln L(\beta|X,Y) = \sum_{i=1}^n[y_i ln \Phi(x_i'\beta)+(1-y_i)ln(1-\Phi(x_i'\beta)) ]$$
Which we can translate into a naive python function like this:
In [1]:
import numpy as np
from scipy.stats import norm
def LogLikeProbit(betas, y, x):
"""
Probit Log Likelihood function
Very slow naive Python version
Input:
betas is a np.array of parameters
y is a one dimensional np.array of endogenous data
x is a 2 dimensional np.array of exogenous data
First vertical column of x is assumed to be the constant term,
corresponding to betas[0]
returns:
negative of log likelihood value (scalar)
"""
result = 0
#Sum operation
for i in range(0, len(y)):
#Get X_i * Beta value
xb = np.dot(x[i], betas)
#compute both binary probabilities from xb
llf = y[i]*np.log(norm.cdf(xb)) + (1-y[i])*np.log(1 - norm.cdf(xb))
result += llf
return -result
Note that we return the negative value of the result because we want to maximize over this function, and numerical optimizers are traditionally minimizers. Minimizing over the negative values will be the same as maximizing the function.
Generating a testing environment for your model
When creating a model from scratch, we need to know it is correct on data where we know the real values and distributions. Here is artificial data to test our probit model on:
In [5]:
######################
#ARTIFICIAL DATA
######################
#sample size
n = 1000
#random generators
z1 = np.random.randn(n)
z2 = np.random.randn(n)
#create artificial exogenous variables
x1 = 0.8*z1 + 0.2*z2
x2 = 0.2*z1 + 0.8*z2
#create error term
u = 2*np.random.randn(n)
#create endogenous variable from x1, x2 and u
ystar = 0.5 + 0.75*x1 - 0.75*x2 + u
#create latent binary variable from ystar
def create_dummy(data, cutoff):
result = np.zeros(len(data))
for i in range(0, len(data)):
if data[i] >= cutoff:
result[i] = 1
else:
result[i] = 0
return result
#get latent LHS variable
y = create_dummy(ystar, 0.5)
#prepend vector of ones to RHS variables matrix
#for constant term
const = np.ones(n)
x = np.column_stack((const, np.column_stack((x1, x2))))
Testing the model
We can now maximize the probit log likelihood to get the most likely vector of parameters given the artificial data using scipy's powerful numerical optimization library:
In [6]:
from scipy.optimize import minimize
#create beta hat vector to maximize on
#will store the values of maximum likelihood beta parameters
#Arbitrarily initialized to all zeros
bhat = np.zeros(len(x[0]))
#unvectorized MLE estimation
probit_est = minimize(LogLikeProbit, bhat, args=(y,x), method='nelder-mead')
#print vector of maximized betahats
probit_est['x']
Out[6]:
array([ 0.05125986, 0.34315356, -0.4281118 ])
Note here that probit regression results can't be interpreted directly like OLS regression results. Parameter estimates are divided by $\sigma$ (which we set to 2 when generating the error term). You can multiply the last two values in the results displayed ($\hat{\beta_1}$ and $\hat{\beta_2}$) by $\sigma$ and compare them to the coefficients we put on $x_1$ and $x_2$ when generating the data (0.75 and -0.75 respectively).
The $\hat{\beta}_0$ constant depends on the cutoff point that determines the binary variable value and $\sigma$; we're not interested in its value for today.
The estimates seem to be slightly off with our small sample size. $\hat{\beta_1}$ and $\hat{\beta_2}$ should be around 0.375 and -0.375.
Summary Statistics
This is a good time to get some summary statistics on our empirical estimate. In maximum likelihood estimation, the standard errors are usually computed from the Cramér-Rao lower bound: they are the square roots of the diagonal elements of the inverse of the Hessian evaluated at our estimated parameters. We can then use the standard errors to get t- and p-values. Statsmodels' numerical differentiation toolbox makes this easy:
In [8]:
import statsmodels.tools.numdiff as smt
import scipy as sc
#Get inverse hessian for Cramer Rao lower bound
b_estimates = probit_est['x']
Hessian = smt.approx_hess3(b_estimates, LogLikeProbit, args=(y,x))
invHessian = np.linalg.inv(Hessian)
#Standard Errors from C-R LB
#from diagonal elements of invHessian
SE = np.zeros(len(invHessian))
for i in range(0, len(invHessian)):
SE[i] = np.sqrt(invHessian[i,i])
#t and p values
t_statistics = (b_estimates/SE)
pval = (sc.stats.t.sf(np.abs(t_statistics), 999)*2)
print("Beta Hats: ", b_estimates)
print("SE: ", SE)
print("t stat: ", t_statistics)
print("P value: ", pval)
Beta Hats: [ 0.05125986 0.34315356 -0.4281118 ]
SE: [ 0.04049858 0.05533922 0.05726539]
t stat: [ 1.26571979 6.20091066 -7.47592581]
P value: [ 2.05908521e-01 8.20432023e-10 1.67354497e-13]
We can see our $\hat{\beta_1}$ and $\hat{\beta_2}$ estimates are within one standard error of the expected values.
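As an extra sanity check (my addition, not in the original notebook), the same model can be fit with statsmodels' built-in Probit class; its coefficients and standard errors should agree closely with our hand-rolled estimates, since both maximize the same likelihood:

import statsmodels.api as sm

# y and x are the artificial data built above (x already contains the constant column)
sm_probit = sm.Probit(y, x).fit(disp=False)
print(sm_probit.params)   # should be close to probit_est['x']
print(sm_probit.bse)      # should be close to the SE vector above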
Vectorizing
Now that we know this probit model works, it would be a good time to vectorize the naive python into a performant version.
The naive model is very inefficient. More complex models will require nontrivial computational power to estimate, and proper vectorization can easily reduce a computation from taking hours to taking minutes.
The main idea is to replace as much computation from "pure python" to optimized numpy/scipy functions. To do this we need to look closely at what our code is doing. Here is the main loop in the naive probit model:
In [ ]:
for i in range(0, len(y)):
xb = np.dot(x[i], betas)
llf = y[i]*np.log(norm.cdf(xb)) + (1-y[i])*np.log(1 - norm.cdf(xb))
result += llf
The outer loop is actually just a $\sum_{i=0}^n$ operation, so we can replace the outer for loop by numpy's optimized sum function. To do this, we have to make a couple of changes to the code.
• First, we replace the for loop by wrapping np.sum() around the code inside the loop.
• Move the $X_i'\beta$ computation outside the loop to run it only once.
• Replace the conditional $y_i$ and $(1-y_i)$ in the loop by pythonic conditionals that are allowed inside the sum() function. The (y==1) and (y==0) conditional expressions can do this inside the sum function.
In [ ]:
xb = np.dot(x, betas)
result = np.sum(
(y==1)*np.log(norm.cdf(xb)) +
(y==0)*np.log(1 - norm.cdf(xb))
)
For further optimization, we can use the fact that for each observation exactly one of the two terms is non-zero, and move the log calculation outside the two indicator terms, so np.log is evaluated once over the whole array instead of twice:
In [ ]:
xb = np.dot(x, betas)
result = np.sum(np.log(
(y==1)*(norm.cdf(xb)) +
(y==0)*(1 - norm.cdf(xb))
))
So the new vectorized log likelihood function looks like this:
In [9]:
def VectorizedProbitLL(betas, y, x):
xb = np.dot(x, betas)
result = np.sum(np.log(
(y==0)*(1 - norm.cdf(xb)) +
(y==1)*(norm.cdf(xb))
))
return -result
Again, we return the negative value of the sum because optimization libraries generally minimize, and we're trying to maximize. We'll see a drastic difference in runtime right away:
In [10]:
import timeit
%timeit minimize(VectorizedProbitLL, bhat, args=(y,x), method='nelder-mead')
10 loops, best of 3: 98.4 ms per loop
In [11]:
%timeit minimize(LogLikeProbit, bhat, args=(y,x), method='nelder-mead')
1 loop, best of 3: 50.2 s per loop
So the vectorized version is 400-500x faster (!!!) with the Nelder-Mead algorithm. This was done on an Intel i5-4210U, a mid-range laptop processor, for reference.
Note that while 50 seconds is not a problem on our trivial sample size (n=1000 in the artificial data above), unvectorized code can become a serious problem on large datasets.
There are many algorithms in scipy.optimize.minimize's module; this is a good reference to help choose the best one. If in doubt, the Nelder-Mead algorithm is included in scipy's minimizer and is usually a good choice. Nelder-Mead doesn't require estimating derivatives of the function, and as such fails less often, at the cost of being slow to converge on larger datasets.
BFGS is a popular algorithm due to its speed; it uses gradient information at each iteration to build up an approximation of the Hessian (the second derivatives):
In [12]:
%timeit minimize(VectorizedProbitLL, bhat, args=(y,x), method='bfgs')
10 loops, best of 3: 41.4 ms per loop
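One further refinement worth noting (my addition, not part of the original notebook): for large values of $x_i'\beta$, norm.cdf saturates at exactly 0 or 1 and np.log then returns -inf, which can derail the optimizer. scipy's norm.logcdf computes the log of the normal CDF directly and avoids this:

def StableProbitLL(betas, y, x):
    """Numerically safer probit log likelihood using logcdf."""
    xb = np.dot(x, betas)
    # log Phi(xb) where y == 1, and log(1 - Phi(xb)) = log Phi(-xb) where y == 0
    result = np.sum((y == 1) * norm.logcdf(xb) + (y == 0) * norm.logcdf(-xb))
    return -result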
|
2021-09-22 01:40:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5644111037254333, "perplexity": 2287.2959525321385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00124.warc.gz"}
|
https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/network/httpwebrequest-element-network-settings
|
# <httpWebRequest> Element (Network Settings)
Customizes Web request parameters.
<configuration>
<system.net>
<settings>
<httpWebRequest>
## Syntax
<httpWebRequest
maximumResponseHeadersLength="size"
maximumErrorResponseLength="size"
maximumUnauthorizedUploadLength="size"
useUnsafeHeaderParsing="true|false"
/>
## Attributes and Elements
The following sections describe attributes, child elements, and parent elements.
### Attributes
Attribute Description
maximumResponseHeadersLength Specifies the maximum length of a response header, in kilobytes. The default is 64. A value of -1 indicates that no size limit will be imposed on the response headers.
maximumErrorResponseLength Specifies the maximum length of an error response, in kilobytes. The default is 64. A value of -1 indicates that no size limit will be imposed on the error response.
maximumUnauthorizedUploadLength Specifies the maximum length of an upload in response to an unauthorized error code, in bytes. The default is -1. A value of -1 indicates that no size limit will be imposed on the upload.
useUnsafeHeaderParsing Specifies whether unsafe header parsing is enabled. The default value is false.
### Child Elements
None.
### Parent Elements
Element Description
settings Configures basic network options for the System.Net namespace.
## Remarks
By default, the .NET Framework strictly enforces RFC 2616 for URI parsing. Some server responses may include control characters in prohibited fields, which will cause the HttpWebRequest.GetResponse() method to throw a WebException. If useUnsafeHeaderParsing is set to true, HttpWebRequest.GetResponse() will not throw in this case; however, your application will be vulnerable to several forms of URI parsing attacks. The best solution is to change the server so that the response does not include control characters.
## Configuration Files
This element can be used in the application configuration file or the machine configuration file (Machine.config).
## Example
The following example shows how to specify a larger than normal maximum header length.
<configuration>
<system.net>
<settings>
<!-- 128 is an illustrative value, double the 64 KB default -->
<httpWebRequest
maximumResponseHeadersLength="128"
/>
</settings>
</system.net>
</configuration>
|
2020-12-01 20:51:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5115917921066284, "perplexity": 3437.543566096238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681524.75/warc/CC-MAIN-20201201200611-20201201230611-00199.warc.gz"}
|
https://plainmath.net/79567/i-m-trying-to-find-all-intervals
|
# I'm trying to find all intervals [a, b] on which the functions sin(2πt) and cos(2πt) are orthogonal
I'm trying to find all intervals $[a,b]$ on which the functions $\sin(2\pi t)$ and $\cos(2\pi t)$ are orthogonal.
$\int_{a}^{b}\sin(2\pi t)\cdot \cos(2\pi t)\,dt=\frac{\cos(4\pi a)-\cos(4\pi b)}{8\pi }=0$
$\iff \cos(4\pi b+2\pi k)=\cos(4\pi a+2\pi l),\quad k,l\in \mathbb{Z}$
I don't know how to solve this for a and b, can anybody help me with that please?
Tianna Deleon
$\frac{\mathrm{cos}\left(4\pi a\right)-\mathrm{cos}\left(4\pi b\right)}{8\pi }=0⟺$
$\mathrm{cos}\left(4\pi a\right)-\mathrm{cos}\left(4\pi b\right)=0⟺$
$-\mathrm{cos}\left(4\pi b\right)=-\mathrm{cos}\left(4\pi a\right)⟺$
$\mathrm{cos}\left(4\pi b\right)=\mathrm{cos}\left(4\pi a\right)⟺$
$4\pi b=4\pi a+2\pi n_{1}\quad\text{or}\quad 4\pi b=-4\pi a+2\pi n_{2},$ with $n_{1},n_{2}\in \mathbb{Z}$.
So you can set:
$b=a+\frac{n_{1}}{2}\quad\text{or}\quad b=-a+\frac{n_{2}}{2},$
i.e. the two functions are orthogonal exactly on those intervals $[a,b]$ (with $b>a$) whose endpoints satisfy $b=a+n/2$ or $b=-a+n/2$ for some integer $n$.
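A quick SymPy check (my own addition) confirms that the product integrates to zero on such intervals, for example $[a, a+\tfrac{1}{2}]$ for arbitrary $a$:

from sympy import symbols, sin, cos, pi, integrate, simplify, Rational

a, t = symbols('a t', real=True)

# integral of sin(2*pi*t)*cos(2*pi*t) over [a, a + 1/2]
I = integrate(sin(2*pi*t) * cos(2*pi*t), (t, a, a + Rational(1, 2)))
print(simplify(I))   # 0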
|
2022-10-06 10:53:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 54, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6611390113830566, "perplexity": 541.9865982227589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00695.warc.gz"}
|
https://open.kattis.com/problems/buildingboundaries
|
Kattis
# Building Boundaries
Maarja wants to buy a rectangular piece of land and then construct three buildings on that land.
The boundaries of the buildings on the ground must have rectangular sizes $a_1 \times b_1$, $a_2 \times b_2$, and $a_3 \times b_3$. They can touch each other but they may not overlap. They can also be rotated as long as their sides are horizontal and vertical.
What is the minimum area of land Maarja has to buy?
## Input
The input consists of multiple test scenarios. The first line of input contains a single integer $t$ ($1 \le t \le 1000$), the number of scenarios. Then follow the $t$ scenarios. Each scenario consists of a single line, containing six integers $a_1$, $b_1$, $a_2$, $b_2$, $a_3$ and $b_3$ ($1 \le a_1,b_1,a_2,b_2,a_3,b_3 \le 10^9$).
## Output
For each test scenario, output the minimum area of land such that Maarja can construct the three buildings.
Sample Input 1 Sample Output 1
2
2 3 2 2 1 1
2 4 5 1 2 3
12
21
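This is a statement-only page, but as a rough illustration of one standard way to attack it (my own sketch, not an official solution): try every orientation of the three rectangles and every assignment of roles in two candidate layouts (all three side by side, or one rectangle spanning the full height next to the other two stacked) and keep the smallest bounding area. On the samples this yields 12 and 21.

from itertools import permutations, product

def min_area(rects):
    """Smallest bounding rectangle area for three rectangles (brute-force sketch)."""
    best = None
    for flips in product((False, True), repeat=3):
        dims = [(b, a) if f else (a, b) for (a, b), f in zip(rects, flips)]
        for (w1, h1), (w2, h2), (w3, h3) in permutations(dims):
            # Layout 1: all three side by side
            area1 = (w1 + w2 + w3) * max(h1, h2, h3)
            # Layout 2: rectangle 1 on the left, rectangles 2 and 3 stacked on the right
            area2 = (w1 + max(w2, w3)) * max(h1, h2 + h3)
            cand = min(area1, area2)
            best = cand if best is None else min(best, cand)
    return best

print(min_area([(2, 3), (2, 2), (1, 1)]))  # 12
print(min_area([(2, 4), (5, 1), (2, 3)]))  # 21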
|
2019-10-22 21:09:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5844167470932007, "perplexity": 550.7202600029509}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987824701.89/warc/CC-MAIN-20191022205851-20191022233351-00159.warc.gz"}
|
https://code.tutsplus.com/tutorials/euclidean-vectors-in-flash--active-8192
|
# Euclidean Vectors in Flash
Difficulty: Beginner, Length: Long
This post is part of a series called You Do The Math.
Twice a month, we revisit some of our readers’ favorite posts from Activetuts+ history. This week’s retro-Active tutorial, first published in April, is a guide to Euclidean vectors: what they are, why you'd use them, and how to implement them in Flash with AS3.
Euclidean vectors are objects in geometry with certain properties that are very useful for developing games. They can be seen as points, but they also have a magnitude and a direction. They are represented as arrows going from the initial point to the final point, and that's how we will draw them in this article.
Euclidean vectors are commonly used in mathematics and physics for a lot of things: they can represent velocity, acceleration and forces in physics, or help prove a lot of important theorems in mathematics. In this tutorial, you'll learn about Euclidean vectors, and build a class that you can use in your own Flash projects.
Please note that Euclidean vectors are different from ActionScript's Vector class, and also different from vector drawing.
Vectors can be used in the Flash environment to help you achieve complex tasks that would otherwise require a lot of effort if done without them. In this article you will learn how to use them in Flash, as well as learn a lot of cool tricks with vectors.
## Step 1: Cartesian Coordinates and Flash's Coordinates
Before jumping into vectors, let's introduce Flash's coordinate system. You are probably familiar with the Cartesian coordinate system (even if you don't know it by name):
Flash's system is very similar. The only difference is that the y-axis is upside-down:
When we start working with vectors in Flash, we need to remember that. However, the good news is that this different system doesn't make much difference: working with vectors in it will be basically like working with vectors in the Cartesian system.
## Step 2: Defining a Vector
For the purpose of this tutorial, we will define and work with all vectors' initial points as being the registration point of the stage, just as they are commonly used in mathematics. A vector will then be defined just like a common point, but it will have magnitude and angle properties. Take a look at some example vectors defined in the stage:
As you can see, a vector is represented by an arrow, and each vector has a certain length (or magnitude) and points along a certain angle. The tail of each vector is at the registration point (0, 0).
We will create a simple EuclideanVector class for this tutorial, using the Point class to hold the vector's coordinates. Let's create the basic vector class now:
During this tutorial, we will talk about the sense and the direction of a vector. Note that the direction just defines a line that "contains" the vector. The sense is what defines which way the vector points along this line.
## Step 3: Inverse of a Vector
In this tutorial we will use the expression "inverse of a vector". The inverse of a vector is another vector with the same magnitude and direction, but a contrary sense. That translates to a vector with the opposite sign on each of the first vector's coordinates. So a vector with an endpoint of (x, y) would have an inverse vector with an endpoint of (-x, -y).
Let's add a function to our EuclideanVector class to return the inverse vector:
## Step 4: Basic Operations - Addition
Now that we have learned how to define a vector, let's learn how to add two vectors: it's as simple as adding their coordinates separately. Look at this image:
If you notice in the image, the result of the addition of two vectors is another vector, and you can see that its coordinates are the sum of the coordinates of the other two vectors. In code, it would look like this:
So we can say that:
## Step 5: Basic Operations - Subtraction
Subtraction works almost the same as addition, but instead we will be adding the inverse of the second vector to the first vector.
It is already known how to sum two vectors, so here's the code for subtraction:
This code is extremely useful to get a vector that goes from the point of a vector to the point of another. Look again at the image and you will see this is true. It will be used a lot in the later examples.
## Step 6: Basic Operations - Multiplication by a Number
The multiplication between a vector and a number (regular numbers are known as "scalars" in vector math) results in a vector which has had magnitude multiplied by this number, but still pointing in the same direction; it's "stretched" if the scalar is larger than 1, and squashed if the scalar is between 0 and 1. The sense of the new vector will be the same as the original vector if the scalar is positive, or the opposite if negative. Basically, this number "scales" the vector. Look at the picture:
In the code, we only multiply a vector's coordinates by the number, which will then scale the vector:
## Step 7: Getting a Vector's Magnitude
In order to get a vector's magnitude, we will use the Pythagorean theorem. If you forgot what it is, here is a quick refresher: for a vector with endpoint (x, y), the magnitude is the square root of x*x + y*y.
The code is very simple:
You should also remove the line public var magnitude:Number, as this is what we'll use from now on.
The magnitude of a vector will always be positive, since it is the square root of the sum of two positive numbers.
## Step 8: Getting the Angle of a Vector
The angle of a vector is the angle between the x-axis and the vector's direction line. The angle is measured going from the x-axis and rotating anti-clockwise until the direction line in the cartesian system:
However, in Flash's coordinate system, since the y-axis is upside down, this angle will be measured rotating clockwise:
This can be easily calculated using the following code. The angle will be returned in radians, in a range from 0 to 2pi. If you don't know what radians are or how to use them, this tutorial by Michael James Williams will help you a lot.
## Step 9: Dot Product
The dot product between two vectors is a number with apparently no meaning, but it has two useful applications. Let's first take a look at how the dot product can be calculated: it is the product of the two magnitudes and the cosine of the angle between the vectors, dot = magnitude1 * magnitude2 * cos(angle).
But it can also be obtained from each vector's coordinates: dot = x1 * x2 + y1 * y2.
The dot product can tell us a lot about the angle between the vectors: if it's positive, then the angle ranges from 0 to 90 degrees. If it's negative, the angle ranges from 90 to 180 degrees. If it's zero, the angle is 90 degrees. That happens because in the first formula only the cosine is responsible for giving the dot product a sign: the magnitudes are always positive. But we know that a positive cosine means that the angle ranges from 0 to 90 degrees, and so on for negative cosines and zero.
The dot product can also be used to represent the length of a vector in the direction of the other vector. Think of it as a projection. This proves extremely useful in things like the Separating Axis Theorem (SAT) and its implementation in AS3 for collision detection and response in games.
Here is the practical code to get the dot product between two vectors:
## Step 10: Smallest Angle Between Vectors
The angle between vectors, as seen in Step 9, can be given by the dot product. Here is how to calculate it: angle = acos(dotProduct / (magnitude1 * magnitude2)), which always returns the smallest (unsigned) angle between the two vectors.
## Step 11: Ranged Angle Between Vectors
There is also another way to calculate the angle, which gives results between -pi and pi and always calculates the angle that goes from the first vector to the second vector; this is useful when you want to easily integrate with a display object's rotation (which ranges from -180 to 180).
The method works by getting the angle for both vectors, then subtracting the angles and working on the result.
The code:
Note that this angle returns positive if secondAngle is higher than firstAngle, so the order in which you get the ranged angle will affect the result!
## Step 12: Normalizing a Vector
Normalizing a vector means making its magnitude be equal to 1, while still preserving the direction and sense of the vector. In order to do that, we multiply the vector by 1/magnitude. That way, its magnitude will be reduced, or increased, to 1.
## Step 13: Getting the Normal of a Vector
The normal of a vector is another vector that makes a 90 degree angle to the first. It can be calculated by the following formulas: for a vector with endpoint (x, y), the two normals have endpoints (-y, x) and (y, -x).
The formulas rely on the fact that, since the normal is always perpendicular to a vector, we only need to change the order of the x and y coordinates and invert one of them in order to get a normal. The following image shows the process:
In the image, Vec is the original vector, Vec2 is the vector with Vec's swapped coordinates, and Vec3 is a vector with Vec2's negative y coordinate. Ang and Ang2 are variable, but the angle between Vec and Vec3 is always 90 degrees.
And the code is simple
## Step 14: Rotating a Vector
In order to rotate a vector, we assume the (0, 0) position (its initial point) will be the rotation center. The rotated point is given by the formula: newX = x * cos(angle) - y * sin(angle) and newY = x * sin(angle) + y * cos(angle).
This formula is obtained by applying a rotation matrix to that vector. We would be going beyond the scope of this tutorial if we went into the matrix and how it works, so I will just leave the formula here.
The code is pretty much the same:
This is the end of our basic vector operations. What you will see next is ways to use this class to do interesting things. Here is our class so far:
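As a compact sketch of the operations covered so far, here is a rough Python equivalent of such a vector class (the class and method names are mine; the tutorial's EuclideanVector class is written in ActionScript and is not quoted here):

import math

class Vec2:
    """Minimal 2D Euclidean vector covering the main operations from Steps 3-14."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def inverse(self):                      # Step 3
        return Vec2(-self.x, -self.y)

    def add(self, other):                   # Step 4
        return Vec2(self.x + other.x, self.y + other.y)

    def subtract(self, other):              # Step 5
        return self.add(other.inverse())

    def multiply(self, scalar):             # Step 6
        return Vec2(self.x * scalar, self.y * scalar)

    def magnitude(self):                    # Step 7 (Pythagorean theorem)
        return math.sqrt(self.x ** 2 + self.y ** 2)

    def angle(self):                        # Step 8, in radians, 0..2*pi
        return math.atan2(self.y, self.x) % (2 * math.pi)

    def dot(self, other):                   # Step 9
        return self.x * other.x + self.y * other.y

    def angle_between(self, other):         # Step 10, smallest angle
        return math.acos(self.dot(other) / (self.magnitude() * other.magnitude()))

    def normalize(self):                    # Step 12
        return self.multiply(1.0 / self.magnitude())

    def normal(self):                       # Step 13, one of the two perpendiculars
        return Vec2(-self.y, self.x)

    def rotate(self, theta):                # Step 14, rotation about (0, 0)
        c, s = math.cos(theta), math.sin(theta)
        return Vec2(self.x * c - self.y * s, self.x * s + self.y * c)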
## Step 15: Determining Whether a Point is Inside a Polygon
The action begins here. Determining whether a point lies inside a polygon or not is a very interesting topic, and there are many methods of achieving it. In this article I will present the three methods that are generally used:
• The crossing number or even-odd rule algorithm, which determines whether a point is inside a polygon from the number of edges that a "ray" cast from the point to infinity crosses.
• The winding number algorithm, which gives the answer based on the sum of all angles formed between consecutive vertices of a polygon and the point to check.
• The convex polygon algorithm, which, as the name says, only works for convex polygons and is based on whether or not a point is on a certain "side" of every edge of the polygon.
All these algorithms will rely on the fact that you know the coordinates of the vertices (corners) that define the polygon.
## Step 16: The Crossing Number or Even-Odd Rule Algorithm
This algorithm can be used for any shape. That's right: any shape, whether it has holes or not, convex or not. It is based on the fact that any ray cast from the point you want to check out to infinity will cross an even number of edges if the point is outside the shape, or an odd number of edges if the point is inside it. This follows from the Jordan curve theorem, which implies that you have to cross a border between one region and another in order to move from one to the other. In our case, the regions are "inside the shape" and "outside the shape".
The code for this algorithm is the following:
It will return false if the point is not inside the shape, or true if the point is inside the shape.
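As a rough Python sketch of the same even-odd test (my own illustration, not the tutorial's ActionScript): cast a horizontal ray to the right of the point and toggle an inside flag each time it crosses an edge.

def point_in_polygon_even_odd(px, py, vertices):
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does the edge straddle the horizontal line through the point?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing is to the right of the point
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon_even_odd(2, 2, square))   # True
print(point_in_polygon_even_odd(5, 2, square))   # False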
## Step 17: The Winding Number Algorithm
The winding number algorithm uses the sum of all the angles made between the point to check and each pair of consecutive points that define the polygon. If the sum is close to 2pi, then the point being checked is inside the polygon. If it is close to 0, then the point is outside.
The code uses the ranged angle between vectors and allows for imprecision: notice how we check the result of the sum of all angles. We do not check whether the sum is exactly zero or exactly 2pi. Instead, we check whether it is smaller or larger than pi, a value comfortably between the two.
## Step 18: The Convex Polygon Algorithm
The convex polygon algorithm relies on the fact that, for a convex polygon, a point inside it is always to the left of the edges (if we are looping through them in a counter-clockwise sense) or to the right of the edges (if we are looping through them in a clockwise sense).
Imagine standing in a room shaped like the image above, and walking around the edges of it with your left hand trailing along the wall. At the point along the wall where you are closest to the point you are interested in, if it's on your right then it must be inside the room; if it's on your left then it must be outside.
The problem lies in determining whether a point is to the left or right of an edge (which is basically a vector). This is done through the following formula: side = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1), where (x1, y1) and (x2, y2) are the edge's endpoints and (px, py) is the point being tested.
That formula returns a number less than 0 for points to the right of the edge, and greater than 0 for points to the left of it. If the number is equal to 0, the point lies on the edge, and is considered inside the shape. The code is the following:
This code works regardless of whether you have the shape's vertices defined clockwise or counter-clockwise.
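Again as a rough Python sketch of the idea (my own, not the tutorial's ActionScript): compute the side value for every edge and require the point to stay on the same side of all of them (points exactly on an edge count as inside).

def point_in_convex_polygon(px, py, vertices):
    """Works for convex polygons only; vertex order may be CW or CCW."""
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        side = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if side == 0:
            continue                  # on the edge counts as inside
        if sign == 0:
            sign = 1 if side > 0 else -1
        elif (side > 0) != (sign > 0):
            return False              # point switched sides, so it is outside
    return True

triangle = [(0, 0), (4, 0), (0, 4)]
print(point_in_convex_polygon(1, 1, triangle))   # True
print(point_in_convex_polygon(4, 4, triangle))   # False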
## Step 19: Ray Casting
Ray casting is a technique often used for collision detection and rendering. It consists of a ray that is cast from one point to another (or out to infinity). This ray is made of points or vectors, and generally stops when it hits an object or the edge of the screen. Similarly to the point-in-shape algorithms, there are many ways to cast rays, and we will see two of them in this post:
• The Bresenham's line algorithm, which is a very fast way to determine close points that would give an approximation of a line between them.
• The DDA (Digital Differential Analyzer) method, which is also used to create a line.
In the next two steps we will look into both methods. After that, we will see how to make our ray stop when it hits an object. This is very useful when you need to detect collision against fast moving objects.
## Step 20: The Bresenham's Line Algorithm
This algorithm is used very often in computer graphics, and depends on the convention that the line will always be created pointing to the right and downwards. (If a line has to be created to the up and left directions, everything is inverted later.) Let's go into the code:
The code will produce an AS3 Vector of Euclidean vectors that will make the line. With this Vector, we can later check for collisions.
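As a rough Python sketch (my own, not the tutorial's ActionScript), here is the classic integer form of Bresenham's algorithm; this version handles every direction rather than only right-and-down:

def bresenham(x0, y0, x1, y1):
    """Return the list of integer grid points approximating the line (x0,y0)-(x1,y1)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:        # step in x
            err += dy
            x0 += sx
        if e2 <= dx:        # step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 5, 2))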
## Step 21: The DDA Method
An implementation of the Digital Differential Analyzer is used to interpolate variables between two points. Unlike the Bresenham's line algorithm, this method will only create vectors in integer positions for simplicity. Here's the code:
This code will also return an AS3 Vector of Euclidean vectors.
## Step 22: Checking for Collisions Using Rays
Checking collision via rays is very simple. Since a ray consists of many vectors, we will check for collisions between each vector and a shape, until one is detected or the end of the ray is reached. In the following code, shapeToCheck will be a shape just like the ones we have been using in Steps 13-16. Here's the code:
You can use any point-inside-shape function you feel comfortable with, but pay attention to the limitations of the last one!
## Conclusion
You're ready to start using this knowledge everywhere now! It will be useful many times, and will save you a lot of extra calculations when trying to do more complex things in Flash.
|
2021-04-20 00:08:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6789754033088684, "perplexity": 349.27739170369296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00154.warc.gz"}
|
https://study.com/academy/answer/what-is-the-numerical-coefficient-of-the-a-4b-4-term-in-the-expansion-of-13a-2-2b-6.html
|
# What is the numerical coefficient of the a^4b^4 term in the expansion of (13a^2 - 2b)^6?
## Question:
What is the numerical coefficient of the {eq}a^4b^4 {/eq} term in the expansion of {eq}(13a^2 - 2b)^6? {/eq}
## Binomial Expansion:
The binomial theorem is used in many applications in mathematics. The binomial expansion of {eq}(x-y)^n {/eq} is {eq}nC0 x^n y^0 -nC1 x^{n-1} y^1 +nC2 x^{n-2} y^2 -nC3 x^{n-3} y^3 + \cdots +(-1)^r nCr x^{n-r} y^r+\cdots {/eq}.
The general term is of the from {eq}T_{r+1}=(-1)^r nCr x^{n-r} y^r {/eq}.
To solve, we'll compute the general term of the given expansion by using the above formula.
We are given $(13a^2 - 2b)^6$.
The general term of $(x-y)^n$ is $T_{r+1}=(-1)^r \binom{n}{r} x^{n-r} y^r$.
In this problem, $x=13a^2$, $y = 2b$, $n=6$, and $r = 4$ (the factor $(2b)^4$ supplies $b^4$, leaving $(13a^2)^2$, which supplies $a^4$).
$T_{4+1}=(-1)^4 \binom{6}{4} (13a^2)^{6-4} (2b)^4$
$\Rightarrow T_{5}=15 (13a^2)^{2} (2b)^4$
$\Rightarrow T_{5}=15 \cdot 13^2 \cdot 2^4 a^4 b^4$
$\Rightarrow T_{5}=40560 a^4 b^4$
Therefore, the numerical coefficient of the $a^4b^4$ term in the expansion is $40560$.
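As a quick sanity check of the arithmetic above, a short Python snippet using math.comb (my addition, not part of the original solution) reproduces the coefficient:

```python
from math import comb

# Coefficient of a^4 b^4: take r = 4, i.e. the factor (-2b)^4, which leaves (13a^2)^2.
coefficient = comb(6, 4) * 13**2 * (-2)**4
print(coefficient)   # 40560
```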
|
2020-03-30 20:36:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680006504058838, "perplexity": 13010.353104721808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497301.29/warc/CC-MAIN-20200330181842-20200330211842-00378.warc.gz"}
|
http://www.creativeedge.com/book/operating-systems/0672322803/the-windows-xp-registry/ch20lev1sec2
|
### Registry Basics
The Windows XP Registry is structured using a number of components. To begin with, the Registry is stored in several different files on your computer. These files are named hives and are located in the \Windows\system32\config and \Documents and Settings\username folders (we will come back to hives and files a bit later in this chapter). However, when you use the Windows Registry Editor (regedit.exe), the Registry is presented to you as one seamless hierarchy that looks much like a folder tree you would see in Windows Explorer, as shown in Figure 20.1.
##### Figure 20.1. The Windows Registry Editor.
|
2017-03-24 16:42:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8323870897293091, "perplexity": 6254.724409167004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188213.41/warc/CC-MAIN-20170322212948-00458-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://astronomy.stackexchange.com/tags/cosmological-inflation/new
|
# Tag Info
I think I can sort of answer your first/second question. It's a bit hard to guess what your background is, but I hope you have seen or derived somewhere that the $a_{lm}$ coefficients can be written as $$\oint \Theta(\hat{x}) Y_{lm}^*(\hat{x}) d\hat{x}$$ where $\Theta$ is the temperature fluctuation (as seen in the CMB) and $Y_{lm}^*$ is (the complex ...
|
2021-05-05 23:02:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234451055526733, "perplexity": 207.57079313229812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00193.warc.gz"}
|
http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.-psjd-doi-10_2478_BF02475910
|
Journal
## Open Physics
2003 | 1 | 4 | 669-694
Article title
### Centrifugal (centripetal) and Coriolis' velocities and accelerations in $$(\bar L_n ,g)$$ -spaces
Authors
Content
Title variants
Publication languages
EN
Abstracts
EN
The notions of centrifugal (centripetal) and Coriolis' velocities and accelerations are introduced and considered in spaces with affine connections and metrics [ $$(\bar L_n ,g)$$ -spaces] as velocities and accelerations of flows of mass elements (particles) moving in space-time. It is shown that these types of velocities and accelerations are generated by the relative motions between the mass elements. They are closely related to the kinematic characteristics of the relative velocity and relative acceleration. The centrifugal (centripetal) velocity is found to be in connection with the Hubble law. The centrifugal (centripetal) acceleration could be interpreted as gravitational acceleration as has been done in the Einstein theory of gravitation. This fact could be used as a basis for workingout new gravitational theories in spaces with affine connections and metrics.
Keywords
EN
Publisher
Journal
Year
Volume
Issue
Pages
669-694
Physical description
Dates
published
2003-12-01
online
2003-12-01
Contributors
author
• Department of Theoretical Physics, Institute for Nuclear Research and Nuclear Energy, Blvd. Tzarigradsko Chaussee 72, 1784, Sofia, Bulgaria, smanov@inrne.bas.bg
Bibliography
• [1] S. Manoff: “Mechanics of continuous media in $$(\bar L_n ,g)$$ -spaces. 1. Introduction and mathematical tools”, (Preprint gr-qc/0203016), 2002.
• [2] S. Manoff: “Mechanics of continuous media in $$(\bar L_n ,g)$$ -spaces. 2. Relative velocity and deformations”, (Preprint gr-qc /0203017), 2002.
• [3] S. Manoff: “Mechanics of continuous media in $$(\bar L_n ,g)$$ -spaces. 3. Relative accelerations”, (Preprint gr-qc /0204003), 2002.
• [4] S. Manoff: “Mechanics of continuous media in $$(\bar L_n ,g)$$ -spaces. 4. Stress (tension) tensor”, (Preprint gr-qc /0204004), 2002.
• [5] H. Stephani: Allgemeine Relativilaetstheorie, VEB Deutscher Verlag d. Wissenschaften, Berlin, 1977.
• [6] J. Ehlers: “Beitraege zur relativistischen Mechanik kontinuierlicher Medien”, Abhandlungen d. Mainzer Akademie d. Wissenschaften, Math.-Naturwiss. Kl. Vol. 11, (1961), pp. 792–837.
• [7] Cl. Laemmerzahl: “A Characterisation of the Weylian structure of Space-Time by Means of Low Velocity Tests”, (Preprint gr-qc /0103047), 2001.
• [8] R. L. Bishop, S. I. Goldberg: Tensor Analysis on Manifolds, The Macmillan Company, New York, 1968.
• [9] S. Manoff: “Kinematics of vector fields”, In: World Scientific (Ed.): Complex Structures and Vector Fields, Singapore, 1995, pp. 61–113.
• [10] S. Manoff: “Spaces with contravariant and covariant affine connections and metrics”, Phys. Part. Nuclei, Vol. 30, (1999) 5, pp. 527–49. http://dx.doi.org/10.1134/1.953117[Crossref]
• [11] S. Manoff: Geometry and Mechanics in Different Models of Space-Time: Geometry and Kinematics, Nova Science Publishers, New York, 2002.
• [12] S. Manoff: Geometry and Mechanics in Different Models of Space-Time: Dynamics and Applications, Nova Science Publishers, New York, 2002.
• [13] S. Manoff: “Einstein's theory of gravitation as a Lagrangian theory for tensor fields”, Intern. J. Mod. Phys, Vol. A 13, (1998), pp. 1941–67. http://dx.doi.org/10.1142/S0217751X98000846[Crossref]
• [14] S. Manoff: “About the motion of test particles in an external gravitational field”, Exp. Technik der Physik, Vol. 24, (1976), pp. 425–431.
• [15] Ch. W. Misner, K.S. Thorne, J.A. Wheeler: Gravitation, W.H. Freeman and Company, San Francisco, 1973.
• [16] S. Manoff: “Flows and particles with shear-free and expansion-free velocities in (L n, g)- and Weyl's spaces”, Clas. Quantum Grav., Vol. 19, (2002), pp. 4377–98, (Preprint gr-qc/0207060). http://dx.doi.org/10.1088/0264-9381/19/16/311
• [17] S. Manoff: “Centrifugal (centripetal), Coriolis' velocities, accelerations and Hubble's law in spaces with affine connections and metrics”, (Preprint gr-qc/0212038), 2002.
Document type
Bibliography
Identifiers
|
2018-07-17 21:32:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3603644371032715, "perplexity": 11794.890388514745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589902.8/warc/CC-MAIN-20180717203423-20180717223423-00262.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/170166-nilpotent-linear-transformation.html
|
# Math Help - Nilpotent Linear Transformation
1. ## Nilpotent Linear Transformation
A linear transformation T: V -> V is called nilpotent of order p if T^p = 0 and T^(p-1) is not equal to 0. If T is nilpotent, show that 1 - T is an isomorphism, where 1 denotes the identity map on V.
Any help or hints would be greatly appreciated.
2. Consider the operator $I+T+T^2+...+T^{p-1}$.
3. I'm not quite sure I still understand. If I add all of the T's, will I be getting a summation of T from i=1 to i=p-1?
4. $x^n- y^n= (x- y)$ times what?
$I- T^p= (I- T)$ times what?
5. so x^n -y^n = (x-y)(x^n-1 + x^n-2y + x^n-3y^2 +...+ xy^n-2 + y^n-1)
so I - T^p = (I-T)(I^0 + I^-1T + I^-2T^2 +...+ IT^p-2 + T^p-1)
so then I get I - T^p = (I-T)(I + T + T^2 +...+ T^p-1)
(I-T) = (I - T^p)/(I+T+T^2+...+T^p-1)
and T^p=0, so (I-T) = I/(I+T+T^2+...+T^p-1)= I + T^-1+T-2+T^-3+...+T^1-p
which means that T has an inverse, T^-1 for T, T^-2 for T^2, etc, so T is invertible, and it is invertible iff T is one-to-one and onto, so it is isomorphic. Is this correct?
6. Originally Posted by Shapeshift
so x^n -y^n = (x-y)(x^n-1 + x^n-2y + x^n-3y^2 +...+ xy^n-2 + y^n-1)
so I - T^p = (I-T)(I^0 + I^-1T + I^-2T^2 +...+ IT^p-2 + T^p-1)
so then I get I - T^p = (I-T)(I + T + T^2 +...+ T^p-1)
(I-T) = (I - T^p)/(I+T+T^2+...+T^p-1)
and T^p=0, so (I-T) = I/(I+T+T^2+...+T^p-1)= I + T^-1+T-2+T^-3+...+T^1-p
which means that T has an inverse, T^-1 for T, T^-2 for T^2, etc, so T is invertible, and it is invertible iff T is one-to-one and onto, so it is isomorphic. Is this correct?
I think you succeeded in making something that is supposed to be pretty simple rather cumbersome and messy:
$T^p=0\Longrightarrow I=I-T^p=(I-T)(I+T+T^2+\ldots+T^{p-1})\Longrightarrow I-T$ is invertible since
it multiplied by all that stuff within the long parentheses is the unit transformation. Period.
The only thing still left is to explain why the second equality above (from the left) holds, since product
of transformations isn't usually commutative, but in this case...
Tonio
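A quick numerical sanity check of Tonio's identity, using a hypothetical strictly upper-triangular matrix as the nilpotent operator (this assumes NumPy and is only an illustration, not part of the thread):

```python
import numpy as np

# A strictly upper-triangular matrix is nilpotent; here T^3 = 0 (so p = 3).
T = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])
I = np.eye(3)

S = I + T + T @ T                      # I + T + T^2
print(np.allclose((I - T) @ S, I))     # True
print(np.allclose(S @ (I - T), I))     # True: the same operator works on both sides
```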
|
2015-03-26 20:07:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182210564613342, "perplexity": 2282.4758027763046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131292567.7/warc/CC-MAIN-20150323172132-00039-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/4192378/if-a-b-c-are-rational-numbers-and-if-leftab-sqrt32c-sqrt34-right
|
# If $a,b,c$ are rational numbers and if $\left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)^3$ is also rational then prove that $ab+bc+ca=0$
If $$a,b,c$$ are rational numbers and if $$\displaystyle \left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)^3$$ is also rational then prove that $$ab+bc+ca=0$$
My attempt
Binomial expansion is not a good idea because there will be $$27$$ terms so I tried to prove using factorization.
$$\displaystyle \left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)^3-\left(c\sqrt[3]{4}\right)^3 \\=\left(a+ b\sqrt[3]{2}\right)\left[\left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)^2+\left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)\left(c\sqrt[3]{4}\right)+\left(c\sqrt[3]{4}\right)^2\right]$$
This again leads to complicated calculations. Then I tried to equate it to a rational number $$r$$.
\begin{align*} \displaystyle \left(a+b\sqrt[3]{2}+c\sqrt[3]{4}\right)^3&=r\\ \implies a+b\sqrt[3]{2}+c\sqrt[3]{4}&=r^{1/3}\\ \implies b\sqrt[3]{2}+c\sqrt[3]{4}&=r^{1/3}-a\\ \implies(b\sqrt[3]{2}+c\sqrt[3]{4})^3&=(r^{1/3}-a)^3\\ \implies2b^3+6\sqrt[3]{2}~b^2c+6\sqrt[3]{4}~bc^2+4c^3&=r-3r^{2/3}a+3r^{1/3}a^2-a^3 \end{align*} When I got stuck here I wrote the equation $$a+b\sqrt[3]{2}+c\sqrt[3]{4}=r^{1/3}$$ in $$3$$ different ways, each time multiplying with $$\sqrt[3]{2}$$ \begin{align*} \displaystyle a+b\sqrt[3]{2}+c\sqrt[3]{4}&=r^{1/3}\\ a\sqrt[3]{2}+b\sqrt[3]{4}+2c&=\sqrt[3]{2}r^{1/3}\\ a\sqrt[3]{4}+2b+2\sqrt[3]{2}c&=\sqrt[3]{4}r^{1/3}\end{align*} I tried adding the above three equations but it wasn't helpful.
Can someone help me in solving the question. Thanks in advance.
Is it possible to generalize the question as $$\left(a+b\sqrt[3]{n}+c\sqrt[3]{n^2}\right)^3$$ where $$n$$ is a non-square integer?
• i think it would be tedious, but when you multiply this out, many of the radicals cancel and you can collect like terms fairly nicely Jul 7, 2021 at 8:25
• So that the binomial expansion hurts less, I'd call $q = 2^{1/3}$ and simplify any exponent greater than $4$ using that $q^4 = 2q$. You can also ignore all rational terms, so you end up only with an equation with six terms, three with $q$ and three with $q^2$. Jul 7, 2021 at 8:34
• @CSquared I got this question from a book for contest maths, so it's more likely that there will be a solution without expanding the expression. Jul 7, 2021 at 8:35
• you could try to look for terms in the expansion which have one factors of $ac$, $bc$, or $ab$ in them and try to deduce something indirectly. Jul 7, 2021 at 8:40
• Note: In your factorization, the second term should have a minus sign. Jul 7, 2021 at 15:04
By using $$(x+y+z)^3=x^3+y^3+z^3+3xy(x+y)+3xz(x+z)+3yz(y+z)+6xyz$$ with $$x=a$$, $$y=b\sqrt[3]{2}$$, $$z=c\sqrt[3]{4}$$, the coefficients of $$\sqrt[3]{2}$$ and $$\sqrt[3]{4}$$ must vanish, which gives $$a^2b+2b^2c+2ac^2=0$$ and $$ab^2+a^2c+2bc^2=0$$. Now multiply the first equation by $$b$$ and the second by $$a$$ and then subtract them: the result is $$c(2b^3-a^3)=0$$. Since $$a,b$$ are rational and $$\sqrt[3]{2}$$ is irrational, $$2b^3=a^3$$ forces $$a=b=0$$. Otherwise $$c=0$$, and the two equations reduce to $$a^2b=0$$ and $$ab^2=0$$, so $$a=0$$ or $$b=0$$. In every case at least two of $$a,b,c$$ are zero, hence $$ab+bc+ca=0$$.
Edit. I forgot to mention that I used the following: if $$p+q\sqrt[3]{2}+r\sqrt[3]{4}=0$$, with $$p,q,r$$ rational numbers, then $$p=q=r=0$$.
• It seems to me that the goal is to prove $ab+bc+ca=0$ without proving that two among $a,b,c$ are zero.
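For readers who want to double-check the two coefficient equations in the answer, here is a small SymPy sketch (my own addition; it treats $$t$$ as a formal symbol and reduces powers modulo $$t^3-2$$, i.e. uses $$t^3=2$$):

```python
import sympy as sp

a, b, c, t = sp.symbols('a b c t')     # t stands for 2**(1/3)

expr = sp.expand((a + b*t + c*t**2) ** 3)
reduced = sp.expand(sp.rem(expr, t**3 - 2, t))   # apply t^3 = 2 to reduce powers

print(reduced.coeff(t, 1))   # 3*a**2*b + 6*a*c**2 + 6*b**2*c  ->  a^2 b + 2ac^2 + 2b^2 c = 0
print(reduced.coeff(t, 2))   # 3*a**2*c + 3*a*b**2 + 6*b*c**2  ->  a^2 c + ab^2 + 2bc^2 = 0
```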
|
2022-05-23 21:08:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9380255341529846, "perplexity": 218.4770795211246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00758.warc.gz"}
|
https://math.stackexchange.com/questions/1194922/maps-from-dn-to-dn-with-a-single-inverse-set-are-open
|
# Maps from $D^n$ to $D^n$ with a single inverse set are open.
Let $D^n$ denote the closed unit ball in $\Bbb R^n$. In multiple sources proving Brown's generalized Schoenflies theorem (including a version in the original paper), the following consequence of Brouwer's invariance of dimension is stated without proof.
If $f: D^n \rightarrow D^n$ is a map with only one non-singleton inverse set $f^{-1}(y)$ disjoint from the boundary, then $y$ is in the interior of the image of $f$.
I am at a loss as to how to go from invariance of dimension to this.
EDIT: I just went through the proof of generalized Schoenflies in Bing's book, and he doesn't makes use of this fact. I'm still interested how one proves this from invariance of dimension (or using similar homological techniques as such).
• Is there only one non-singleton inverse set disjoint from the boundary (so there may be other non-singleton inverse sets, but they intersect the boundary), or is there only one non-singleton inverse set in total, and this set is disjoint from the boundary? – Stefan Hamcke Mar 23 '15 at 12:13
• If I understand correctly, for $x \neq y$, $f^{-1}(x)$ is a singleton. So $f$ is surjective? – Najib Idrissi Mar 23 '15 at 16:34
• The map need not be surjective, but it fails to be injective at exactly one point in the target and the preimage of this point is disjoint from the boundary. – PVAL-inactive Mar 23 '15 at 18:03
• Is it easy to write down an example of such an $f$? I'm having trouble getting intuition as to why this should be true – Jason DeVito Apr 15 '15 at 20:01
• Well a simple example would just be the map that sends a half open annulus homeomorphically to a punctured ball and maps the ball on the open side to the puncture point, though there are much more complicated examples. – PVAL-inactive Apr 15 '15 at 20:06
The invariance of domain theorem states that if $U$ is an open domain of $\mathbb R^n$ and $f:U\to\mathbb R^n$ is continuous and injective, then $f$ is an open map and in fact $f$ is a homeomorphism onto its image. See here.
By continuity, $f^{-1}(y)$ is closed. Let $U=int(D^n)\setminus f^{-1}(y)$. $U$ is not empty otherwise $f(D^n)=y$ and therefore $f^{-1}(y)$ contains the boundary of $D^n$.
So $f(U)$ is open and homeomorphic to $U$. By definition, there is a sequence $u_n\in U$ so that $u_n\to f^{-1}(y)$. This provides a sequence $z_n=f(u_n)$ in the interior of the image of $f$ so that $z_n\to y$.
If $y$ is not an interior point, there is a second sequence of points $w_n\to y$ not in the image of $f$. Let $\gamma_n$ be an arc between $z_n$ and $w_n$ which does not contain $y$. $f^{-1}(\gamma_n)$ must contain a point of $\partial D^n$. So there is $x_n\in\partial D^n$ so that $f(x_n)\to y$. $x_n$ has an accumulation point $x\in\partial D^n$ and by continuity $f(x)=y$. Thus, if $y$ is not an interior point, then $f^{-1}(y)$ intersects the boundary.
• +1 but I think you should explain better why $f^{-1}(\gamma)$ intersects $\partial D^n$ (because here you use the openness of $f(U)$) – Mizar Mar 27 '15 at 22:29
• @Mizar yes, you're right, I left this point not completely clear. To prove this one has to parametrize $\gamma$ with $[0,1]$ and consider the connected component of $\gamma^{-1}(f(U))$ wich contains $0$. This is an open interval $[0,\tau)$ because $f(U)$ is open, and by continuity $\gamma(\tau)\in f(\partial D)$. – user126154 Mar 29 '15 at 11:40
|
2019-05-27 03:49:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849275708198547, "perplexity": 105.1528008805669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260658.98/warc/CC-MAIN-20190527025527-20190527051527-00410.warc.gz"}
|
https://stats.stackexchange.com/questions/279817/experimental-design-control-size-measurement-type-and-statistical-test
|
# Experimental design: Control size, measurement type, and statistical test
I've been reading a fair bit but have some confusion and would like some help understanding how to determine the required size of a control group in the following situation.
Situation: I am trying to determine whether mailing a person an advertisement affected whether they bought the product advertised.
The people in the mail (test) group and the control group will have similar distributions in their behaviours and attributes prior to the test being run.
Question: If I mail n1 people and would like to measure an effect at significance level α = 0.05 with statistical power 1 − β = 0.8, what should be the size of the control group, n2?
I would like to know the differences in approaches if the metric is:
1. A proportion, e.g. number of customers/size of group where size of group is n1 or n2 and the number of customers is the number of people who bought the advertised product in the respective group.
2. A continuous variable such as the total spend of the group.
3. A long tailed continuous variable such as the spend per person, for which the median is a preferred measure to the mean.
Confusion: Do I need to estimate a baseline value first (e.g. estimate what I expect the proportion to be for one of groups)? Similarly, do I need to decide on the effect size I'm trying to measure (e.g. before knowing what control size I need, stating "I want to be able to measure a difference in the proportions of 5%"?)
It also seems the method of determining the appropriate control size is married to the type of test being run. If the test being run can be incorporated into your answer that would be very helpful.
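Not an answer, but a sketch of why a baseline and a minimum detectable effect have to be fixed before a sample size can be computed, for metric type 1 (a proportion). The numbers and the use of statsmodels here are assumptions for illustration only; for a continuous metric such as spend you would use a t-test power analysis (e.g. TTestIndPower) in the same way, and for a heavily skewed per-person spend a nonparametric or simulation-based approach is more appropriate.

```python
# pip install statsmodels  -- the rates below are illustrative assumptions only
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.02        # assumed baseline purchase rate
p_mail = 0.025          # smallest lift worth detecting
effect_size = proportion_effectsize(p_mail, p_control)   # Cohen's h

analysis = NormalIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                   power=0.8, ratio=1.0,
                                   alternative='two-sided')
print(round(n_per_group))   # required size of each group when n1 = n2
```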
|
2019-10-18 21:19:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.547357976436615, "perplexity": 566.9076462208506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00173.warc.gz"}
|
https://cracku.in/blog/inequality-questions-for-ibps-rrb-clerk-pdf/
|
# Inequality questions for IBPS RRB Clerk PDF
## Inequality questions for IBPS RRB Clerk PDF
Download the Top-15 Banking Exams inequality questions PDF. These inequality questions are based on questions asked in previous exam papers and are very important for the banking exams.
Instructions
In each of the following questions, the relationship between different elements is shown in the statements. The statements are followed by two conclusions numbered I and II. Study the conclusions based on the given statements and select the appropriate answer.
Give answer (a) if only Conclusion I is true
Give answer (b) if neither Conclusion I nor Conclusion II is true
Give answer (c) if only Conclusion II is true
Give answer (d) if both Conclusion I and Conclusion II are true
Give answer (e) if either Conclusion I or Conclusion II is true
Question 1: Statements
M > A ≥ B = Q ≤ P < J ≤ Y = Z ≥ A > X
Conclusions:
I. B < Y
II. X ≥ J
a) if only Conclusion I is true
b) if neither Conclusion I nor Conclusion II is true
c) if only Conclusion II is true
d) if both Conclusion I and Conclusion II are true
e) if either Conclusion I or Conclusion II is true
Question 2: Statements
M > A ≥ B = Q ≤ P < J ≤ Y = Z ≥ A > X
Conclusions
I. Z = Q
II. Z > Q
a) if only Conclusion I is true
b) if neither Conclusion I nor Conclusion II is true
c) if only Conclusion II is true
d) if both Conclusion I and Conclusion II are true
e) if either Conclusion I or Conclusion II is true
Question 3: Statements
G < R = A ≤ S ; T < R
Conclusions
I. G < S
II. S > T
a) if only Conclusion I is true
b) if neither Conclusion I nor Conclusion II is true
c) if only Conclusion II is true
d) if both Conclusion I and Conclusion II are true
e) if either Conclusion I or Conclusion II is true
Question 4: Statements
P = U < M < K ≤ I > N ; D ≥ P ; I ≥ C
Conclusions
I. M < C
II. N > U
a) if only Conclusion I is true
b) if neither Conclusion I nor Conclusion II is true
c) if only Conclusion II is true
d) if both Conclusion I and Conclusion II are true
e) if either Conclusion I or Conclusion II is true
Question 5: Statements
P = U < M < K ≤ I > N ; D ≥ P ; I ≥ C
Conclusions
I. D ≥ K
II. I > P
a) if only Conclusion I is true
b) if neither Conclusion I nor Conclusion II is true
c) if only Conclusion II is true
d) if both Conclusion I and Conclusion II are true
e) if either Conclusion I or Conclusion II is true
Instructions
In these questions, the relationship between different elements is shown in the statements. The statements are followed by two conclusions. Give answer
Question 6: Statements: $F < R\geq O=M\leq T=K$
Conclusions: I. $K\geq O$
II.F < M
a) if only conclusion I is true
b) if only conclusion II is true
c) if either conclusion I or II is true
d) if neither conclusion I nor II is true
e) if both conclusion I and II are true
Question 7: Statements: $G=N\leq O\geq P>Q=R$
Conclusions: I. O >R
II. $P\leq G$
a) if only conclusion I is true
b) if only conclusion II is true
c) if either conclusion I or II is true
d) if neither conclusion I nor II is true
e) if both conclusion I and II are true
Question 8: Statements: $F<O=L\geq W=S$
Conclusions: I. $W\leq F$
II. $O\geq S$
a) if only conclusion I is true
b) if only conclusion II is true
c) if either conclusion I or II is true
d) if neither conclusion I nor II is true
e) if both conclusion I and II are true
Question 9: Statements: $B=R\geq T<O=P\geq S$
Conclusions: I. B < O
II. T < S
a) if only conclusion I is true
b) if only conclusion II is true
c) if either conclusion I or II is true
d) if neither conclusion I nor II is true
e) if both conclusion I and II are true
Question 10: Statements: $P>Q\geq A<R=I$
Conclusions:
I. A < P
II. I > A
a) if only conclusion I is true
b) if only conclusion II is true
c) if either conclusion I or II is true
d) if neither conclusion I nor II is true
e) if both conclusion I and II are true
Instructions
In these questions, relationships between different elements is shown in the statements. The statements is followed by two conclusions. Study the conclusions based on the given statement and select the appropriate answer.
Question 11: Statement: K > I $\geq$ T $\geq$ E; O < R < K
Conclusions: I. R < E 2. O < T
a) Neither conclusion I nor II follows
b) Both conclusions I and II follows
c) Only conclusion II follows
d) Either conclusion I or II follows
e) Only conclusion I follows
Question 12: Statement C < L < O = U = D $\geq$ S > Y
Conclusions I. O > Y II. C<D
a) Neither conclusion I nor II follows
b) Both conclusions I and II follows
c) Only conclusion I follows
d) Either conclusion II follows
e) Only conclusion I or II follows
Question 13: Statement K $\geq$ L > M $\geq$ N
Conclusions I. N $\leq$ K II. N<K
a) Both conclusions I and II follows
b) Neither conclusion I nor II follows
c) Either conclusion II or II follows
d) Only conclusion I follows
e) Only conclusion II follows
Question 14: Statement: Z $\geq$ Y = W $\geq$ X
Conclusions I. W<Z II. W=Z
a) Only conclusion II follows
b) Only conclusion I follows
c) Neither conclusion I nor II follows
d) Either conclusion I or II follows
e) Both conclusions I and II follows
Question 15: Statement: B > A > S < I > C > L >Y
Conclusions I. B>L II. A>Y
a) Only conclusion I follows
b) Only conclusion II follows
c) Either conclusion II or II follows
d) Neither conclusion I nor II follows
e) Both conclusions I and II follows
Statement : M > A ≥ B = Q ≤ P < J ≤ Y = Z ≥ A > X
Conclusions:
I. B < Y : true
II. X ≥ J : false
Thus, only Conclusion I is true.
=> Ans – (A)
Statement : M > A ≥ B = Q ≤ P < J ≤ Y = Z ≥ A > X
Conclusions :
I. Z = Q : It cannot be true as J > P
II. Z > Q : It is true
Thus, only conclusion II is true.
Hence, option C is the correct answer.
Statements : G < R = A ≤ S ; T < R
=> $S\geq R>G$ and $R>T$
Conclusions :
I. G < S = true
II. S > T = true
Thus, both Conclusion I and Conclusion II are true.
=> Ans – (D)
Statements : P = U < M < K ≤ I > N ; D ≥ P ; I ≥ C
Conclusions :
I. M < C = false
II. N > U = false
Thus, neither Conclusion I nor Conclusion II is true.
=> Ans – (B)
Statements : P = U < M < K ≤ I > N ; D ≥ P ; I ≥ C
Conclusions :
I. D ≥ K = false
II. I > P = true
Thus, only Conclusion II is true
=> Ans – (C)
T is greater than or equal to M. But T is equal to K. K is greater than or equal to M. O is equal to M.
Therefore, K is greater than or equal to O.
Hence, conclusion I follows.
We cannot establish a relation between F and M, even though both are known to be no greater than R.
Hence, this conclusion II does not follow.
Option A is correct.
I. O > R. This is a correct conclusion: O is greater than or equal to P, P is greater than Q, and Q is equal to R. Hence, O is greater than R.
II. $P\leq G$. We cannot draw any conclusion about the relationship between P and G.
Only conclusion I follows.
No relation can be established between W and F as data provided is inadequate.
L is greater than or equal to W. O is equal to L. Therefore, O is greater than or equal to W. W is equal to S.
Hence, we can say that O is greater than or equal to S, which is exactly conclusion II.
Option B is correct option.
Conclusions:
I. B < O, we cannot establish any direct relationship between B and O as no such data is provided.
II. T < S, no relationship can be established between T and S as data provided is inadequate.
Hence, conclusions I and II do not follow.
Therefore, option D is correct.
P is greater than Q, which is greater than or equal to A. Hence, we can say that P is greater than A.
Hence, conclusion I follows.
R is greater than A, and I is equal to R. Therefore, I is greater than A.
Hence, conclusion II follows.
Both I and II follow.
Option E is correct.
According to the given inequalities, K is largest among all but nothing specific can be said about O,R,I,T and E. Hence, no conclusion can be drawn from the given information. So answer will be A
As the statement gives O = U = D, with D ≥ S > Y and C < L < O, both conclusions O > Y and C < D can be drawn from the given statements.
In conclusion 1, it is given that N<=K. That’s not possible as N is less than or equal to M which is absolutely less than K. Conclusion 2 is valid as it mentions that N < K. Hence, only conclusion 2 will follow.
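For practice, conclusions of this kind can be checked mechanically. The sketch below is my own illustration, not part of the original material: it brute-forces small integer assignments for Question 3, and a conclusion "definitely follows" only if it holds in every assignment consistent with the statement. This is a heuristic over a small value range, not a proof.

```python
from itertools import product

def definitely_follows(variables, constraints, conclusion, values=range(5)):
    """True only if the conclusion holds in every small-integer assignment
    that satisfies all statement constraints (heuristic, not a proof)."""
    found_model = False
    for combo in product(values, repeat=len(variables)):
        env = dict(zip(variables, combo))
        if all(c(env) for c in constraints):
            found_model = True
            if not conclusion(env):
                return False
    return found_model

# Question 3: G < R = A <= S ; T < R
variables = "GRAST"
constraints = [lambda e: e['G'] < e['R'], lambda e: e['R'] == e['A'],
               lambda e: e['A'] <= e['S'], lambda e: e['T'] < e['R']]
print(definitely_follows(variables, constraints, lambda e: e['G'] < e['S']))   # True
print(definitely_follows(variables, constraints, lambda e: e['S'] > e['T']))   # True -> answer (d)
```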
|
2022-08-15 04:15:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7469223737716675, "perplexity": 1705.3185228977761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00709.warc.gz"}
|
https://pdodds.w3.uvm.edu/teaching/courses/2016-08UVM-122/episodes/04b/
|
Note: This is an archival, mostly functional site. All courses can be found here.
### Episode 04b (37:18): Wizard-level matrix wrangling
#### Summary:
Living in matrix world is all about having superpowers. Starting with a thorough revisiting of the definition of matrix multiplication, we first show how we can perform inner products and outer products the matrix way. We then go for broke with block multiplication and all important ways of seeing how $\mathbf{A}\vec{x}$, $\vec{y}^{\rm T}\mathbf{A}$, and $\mathbf{A}\mathbf{B}$ can all be performed with a bigger picture approach to multiplication. Embrace the goodness.
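A small NumPy illustration of the three viewpoints mentioned in the summary (an assumption of mine, not material from the episode itself): $\mathbf{A}\vec{x}$ as a combination of columns, $\vec{y}^{\rm T}\mathbf{A}$ as a combination of rows, and $\mathbf{A}\mathbf{B}$ as a sum of outer products.

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
x = np.array([1., -1.])
y = np.array([2., 0.])

# A @ x: a combination of the columns of A weighted by the entries of x
print(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])

# y^T A: a combination of the rows of A weighted by the entries of y
print(y @ A, y[0] * A[0, :] + y[1] * A[1, :])

# A B: the sum of outer products of A's columns with B's rows
print(A @ B)
print(sum(np.outer(A[:, k], B[k, :]) for k in range(2)))
```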
|
2021-11-30 12:57:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3648056089878082, "perplexity": 1374.763467914303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00169.warc.gz"}
|
https://languagelog.ldc.upenn.edu/nll/?p=48607
|
## "Inshallah"
You've probably heard about this — Teo Armas, "‘Inshallah’: The Arabic ‘fuggedaboudit’ Biden dropped to blast Trump on tax returns", WaPo 9/30/2020:
Midway through Tuesday night’s chaotic presidential debate, as President Trump vowed to release his still-private tax returns, Joe Biden shot back at his opponent with a particularly sarcastic jab.
“Millions of dollars, and you’ll get to see it,” Trump said of the amount he claims to have paid.
“When?” the Democratic presidential nominee interjected. “Inshallah?”
The WaPo article links to an article by Rebecca Clift and Fadi Helani, "Inshallah: Religious invocations in Arabic topic transition", Language in Society 2010, whose abstract helps explain why it might have come naturally to Biden, whose son Beau served a tour in Iraq:
The phrase inshallah ‘God willing’ is well known, even to non-Arabic speakers, as a mitigator of any statement regarding the future, or hopes for the future. Here we use the methods of conversation analysis (CA) to examine a less salient but nonetheless pervasive and compelling interactional usage: in topic-transition sequences. We use a corpus of Levantine (predominantly Syrian) Arabic talk-in-interaction to pay detailed attention to the sequential contexts of inshallah and its cognates across a number of exemplars. It emerges that these invocations are used to secure possible sequence and topic closure, and that they may engender reciprocal invocations. Topical talk following invocations or their responses is subsequently shown to be suspended by both parties; this provides for a move to a new topic by either party. (Arabic, religious expressions, conversation, conversation analysis, topic)*
An American colonel in Iraq, writing to The Washington Post’s Thomas E. Ricks, recently observed: “The phrase ‘inshallah’ or ‘God willing’, has permeated all ranks of the Army. When you talk to U.S. soldiers about the possible success of ‘the surge’, you’d be surprised how many responded with ‘inshallah’.” The phrase seems to have permeated all ranks of the diplomatic corps, too: Zalmay Khalilzad, when he was the U.S. ambassador to Iraq, once stated at a conference, “Inshallah, Iraq will succeed.” (Murphy 2007)
And the WaPo article quotes Helani about ironic or sarcastic usage of the phrase:
When used in formal Arabic, including in media interviews or news conferences by politicians in the Arab world, he said, inshallah serves as an expression of hope for a desired outcome. Yet in informal conversation, inshallah can also be used sarcastically to mean that the hope or statement is too good to be true.
“If somebody says talks about passing a test, and you say, ‘inshallah,’ that means you’re hoping they pass,” Helani said. “But if somebody says that, and you know they’re a lazy student, ‘inshallah’ means you don’t believe them at all.”
This reminds me of how the phrase "bless your heart" has evolved in the American south.
It also echoes an early LLOG conversation about the original meaning of "under God":
"Dysfunctional shift", 6/16/2004
"Never say never", 6/16/2004
"I might have guessed Parson Weems would figure in their somewhere", 6/20/2004
"'(Next) Under God,' phrasal idiom", 6/20/2004
Geoff Nunberg's conclusion from that series:
In short, the phrase "under God" had nothing to do with God's temporal sovereignity; it was, rather, a way of acknowledging that the efforts of men are always contingent on His providence. And that is how Lincoln intended it, as meaning something like "with God's help, of course":
It is rather for us to be here dedicated to the great task remaining before us–that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion–that we here highly resolve that these dead shall not have died in vain, that this nation under God shall have a new birth of freedom…
Meanwhile, here's the debate fragment, if (like me) you had bailed on that dumpster fire earlier in the evening:
1. ### Philip Taylor said,
October 1, 2020 @ 7:06 am
It is now some years since I adopted "insh'Allah / D.v." or some variant thereof as a fairly standard qualifier to a wished-for outcome in my e-mail correspondence — two examples follow :
Thank you, Akira-san — noted for future reference. I am in the middle of setting up a new web site at the moment, but will test TLSHELL with your suggestion incorporated later today (D.v. & insh'Allah).
Philip Taylor
And if you do, how would you deal with the odd pathological case that requires three lines ? Insh'Allah I will have none of the third type, but I would still prefer to write my code to properly handle such cases if and when they do occur.
2. ### GeorgeW said,
October 1, 2020 @ 8:04 am
It has a different meaning with Muslims and non-Muslims.
We (non-Muslim Westerners) tend to use it as something that is highly contingent and would require divine intervention. Where Muslims take it seriously, understanding that nothing happens without divine sanction.
I was on a flight to the Middle East from Rome years ago and the pilot (an Arab) came on and said the flight would make an unscheduled stop in Greece for fuel, "insha'Allah." The westerners on the flight universally laughed, thinking, Oh sh*t, we may not make it.
3. ### Thomas Hutcheson said,
October 1, 2020 @ 8:23 am
It may refer to the joke that "inshallah" is like "manana,": but with the same sense of urgency. :)
4. ### Rose Eneri said,
October 1, 2020 @ 8:26 am
I've had many Muslim friends from Syria and Iraq and they used "insha'Allah" frequently, always in the sense of "God willing" and always in a reverential way. My Muslim friends, all of whom spoke Arabic as their L1, would never invoke any name of Allah in a sarcastic way. They always pronounced "insha'Allah" with 4 syllables with the main stress on "in" and secondary stress on "Al".
It seems to me that Biden was trying to pander to the American Muslim community. I doubt he succeeded. He might have scored some points had he used the invocation reverentially and pronounced it correctly.
Perhaps I'm being too sensitive, but I think of "insha'Allah" as not just Arabic, but as profoundly Muslim and deserving of respect.
5. ### Luke said,
October 1, 2020 @ 9:02 am
It sounded like Biden was about to say "In ?" but his mind slipped while saying it.
6. ### Luke said,
October 1, 2020 @ 9:04 am
It seems as if this commenting system doesn't allow for angled brackets, what I meant to say was "In [date/period of time]?".
7. ### Philip Taylor said,
October 1, 2020 @ 10:09 am
Luke, constructs commencing < or & are interpreted as HTML fragments wherever possible.
[(myl) If you want the angled brackets to show up (rather than disappearing into the html interpretation attempt), use the html entities &gt; for > and &lt; for < ]
8. ### mg said,
October 1, 2020 @ 12:22 pm
Reminds me of how "From your mouth to Gd's ear" is used.
9. ### Philip Taylor said,
October 1, 2020 @ 2:09 pm
Strictly speaking, I don't think that &gt; is necessary — a bare > should suffice (if one does not appear in this comment after "bare ", I was wrong !). But an ampersand should be entered as &amp;.
10. ### George said,
October 1, 2020 @ 3:29 pm
I've lived in North Africa and while I don't have the impression that 'insha'Allah' was used in a sarcastic "yeah, sure it'll happen…" way, neither was it particularly reverential. It was just systematically appended to statements referring to the future, in an automatic, unthinking way. Here in Ireland, even today, it's not rare to hear 'please God' being used in the same way, including by people who aren't noticeably pious by any means.
11. ### George said,
October 1, 2020 @ 3:33 pm
Apologies for the over-use of 'way' in my comment. I didn't proof read it. I don't usually write that badly, promise!
12. ### Scott P. said,
October 1, 2020 @ 11:35 pm
There is an interesting Spanish equivalent that actually comes from the Arabic — ¡Ojala!
13. ### Philip Taylor said,
October 2, 2020 @ 2:13 am
25. ### Philip Taylor said,
October 7, 2020 @ 4:35 am
Yay ! Code was <span>\$</span> (and please don't ask me how I entered that !).
26. ### Göktuğ Kayaalp said,
October 7, 2020 @ 8:12 am
Interesting for me, as someone born and raised in Turkey, reading that "inşallah" is/should be used reverentially. In Turkey this is not the case, and for many a reverential use of the word would be sarcastic or be perceived "excessively religious", so to speak. The way Biden seems to use it, in Turkey it'd be perceived as sarcastic, denoting doubt in the possibility of what the interlocutor says.
Tangentially, there's the formula "Allah Allah", which is again used casually, to express a variety of meanings, including incredulity, surprise, "wtf" / "wut?", doubt, despair. Other words used casually that include Allah "eyvallah" (="thanks", "ok", "alright", sometimes denoting despair and disappointment), "estağfurullah" (="you're welcome", "no worries", also sometimes to express humility), "tövbe estağfurullah", "fesuphanallah" (interjections expressing despair, resignment, containment of anger), "yallah" (="let's go/do it", "go away", sometimes even equivalent to "get off me" or "f*ck off").
|
2020-12-05 21:22:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3101593255996704, "perplexity": 9743.280881140672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141750841.83/warc/CC-MAIN-20201205211729-20201206001729-00608.warc.gz"}
|
https://web2.0calc.com/questions/help_9596
|
# help
Find all real values of x that satisfy $$\frac{1}{x(x+1)}-\frac{1}{(x+1)(x+2)} < \frac{1}{3}$$ . Give your answer in interval notation.
Dec 26, 2019
#1
Combine the left side over a common denominator: 1/(x(x+1)) - 1/((x+1)(x+2)) = [(x+2) - x]/(x(x+1)(x+2)) = 2/(x(x+1)(x+2)).
So the inequality is 2/(x(x+1)(x+2)) < 1/3. We cannot simply multiply both sides by x(x+1)(x+2), because its sign depends on x. Move everything to one side instead:
2/(x(x+1)(x+2)) - 1/3 < 0, which is equivalent to (x(x+1)(x+2) - 6)/(x(x+1)(x+2)) > 0.
The numerator factors as (x - 1)(x^2 + 4x + 6), and x^2 + 4x + 6 is always positive, so we need (x - 1) and x(x+1)(x+2) to have the same sign.
Checking the signs on the intervals determined by -2, -1, 0 and 1 gives the answer.
Solution: (-inf,-2) U (-1,0) U (1,inf)
Dec 26, 2019
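A quick symbolic check of the corrected interval, assuming SymPy is available (not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', real=True)
ineq = 1/(x*(x + 1)) - 1/((x + 1)*(x + 2)) < sp.Rational(1, 3)
print(sp.solve_univariate_inequality(ineq, x, relational=False))
# expected: (-oo, -2) U (-1, 0) U (1, oo)
```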
|
2020-04-02 19:50:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9636088609695435, "perplexity": 2239.4288847883095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00142.warc.gz"}
|
http://www.cliffsnotes.com/math/statistics/probability/probability-of-joint-occurrences
|
# Probability of Joint Occurrences
Another way to compute the probability of all three flipped coins landing heads is as a series of three different events: First flip the penny, then flip the nickel, and then flip the dime. Will the probability of landing three heads still be 0.125?
#### Multiplication rule
To compute the probability of joint occurrence (two or more independent events all occurring), multiply their probabilities.
For example, the probability of the penny landing heads is 1/2, or 0.5; the probability of the nickel next landing heads is 1/2, or 0.5; and the probability of the dime landing heads is 1/2, or 0.5. Thus, note that
0.5 × 0.5 × 0.5 = 0.125
which is what you determined with the classic theory by assessing the ratio of the number of favorable outcomes to the number of total outcomes. The notation for joint occurrence is
P( AB) = P( A) × P( B)
which is read: The probability of A and B both happening is equal to the probability of A times the probability of B.
Using the multiplication rule, you also can determine the probability of drawing two aces in a row from a deck of cards. The only way to draw two aces in a row from a deck of cards is for both draws to be favorable. For the first draw, the probability of a favorable outcome is 4/52. But because the first draw is favorable, only three aces are left among 51 cards. So, the probability of a favorable outcome on the second draw is 3/51. For both events to happen, you simply multiply those two probabilities together: 4/52 × 3/51 = 12/2652 = 1/221.
Note that these probabilities are not independent. If, however, you had decided to return the initial card drawn back to the deck before the second draw, then the probability of drawing an ace on each draw is 4/52, because these events are now independent. Drawing an ace twice in a row, with the odds being 4/52 both times, gives the following: 4/52 × 4/52 = 16/2704 = 1/169.
In either case, you use the multiplication rule because you are computing probability for favorable outcomes in all events.
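A short Python check of the two-aces calculation, using exact fractions (added here for illustration; not part of the original text):

```python
from fractions import Fraction

p_without_replacement = Fraction(4, 52) * Fraction(3, 51)   # dependent draws
p_with_replacement = Fraction(4, 52) * Fraction(4, 52)      # independent draws

print(p_without_replacement, float(p_without_replacement))  # 1/221 ~ 0.0045
print(p_with_replacement, float(p_with_replacement))        # 1/169 ~ 0.0059
```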
#### Addition rule
Given mutually exclusive events, finding the probability of at least one of them occurring is accomplished by adding their probabilities.
For example, what is the probability of one coin flip resulting in at least one head or at least one tail?
The probability of one coin flip landing heads is 0.5, and the probability of one coin flip landing tails is 0.5. Are these two outcomes mutually exclusive in one coin flip? Yes, they are. You cannot have a coin land both heads and tails in one coin flip; therefore, you can determine the probability of at least one head or one tail resulting from one flip by adding the two probabilities:
0.5 + 0.5 = 1 (or certainty)
##### Example 1
What is the probability of at least one spade or one club being randomly chosen in one draw from a deck of cards?
The probability of drawing a spade in one draw is 13/52; the probability of drawing a club in one draw is 13/52. These two outcomes are mutually exclusive in one draw because you cannot draw both a spade and a club in one draw; therefore, you can use the addition rule to determine the probability of drawing at least one spade or one club in one draw: 13/52 + 13/52 = 26/52 = 1/2.
|
2015-03-26 19:11:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8613659143447876, "perplexity": 317.39138979579894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131292567.7/warc/CC-MAIN-20150323172132-00262-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=2009_AMC_12A_Problems/Problem_8&diff=cur&oldid=121690
|
Difference between revisions of "2009 AMC 12A Problems/Problem 8"
The following problem is from both the 2009 AMC 12A #8 and 2009 AMC 10A #14, so both problems redirect to this page.
Problem
Four congruent rectangles are placed as shown. The area of the outer square is $4$ times that of the inner square. What is the ratio of the length of the longer side of each rectangle to the length of its shorter side?
$[asy] unitsize(6mm); defaultpen(linewidth(.8pt)); path p=(1,1)--(-2,1)--(-2,2)--(1,2); draw(p); draw(rotate(90)*p); draw(rotate(180)*p); draw(rotate(270)*p); [/asy]$
$\textbf{(A)}\ 3 \qquad \textbf{(B)}\ \sqrt {10} \qquad \textbf{(C)}\ 2 + \sqrt2 \qquad \textbf{(D)}\ 2\sqrt3 \qquad \textbf{(E)}\ 4$
Solution 1
$\boxed{(A)}$ The area of the outer square is $4$ times that of the inner square. Therefore the side of the outer square is $\sqrt 4 = 2$ times that of the inner square.
Then the shorter side of the rectangle is $1/4$ of the side of the outer square, and the longer side of the rectangle is $3/4$ of the side of the outer square, hence their ratio is $\boxed{3}$.
Solution 2
Let the side length of the smaller square be $1$, and let the smaller side of the rectangles be $y$. Since the larger square's area is four times larger than the smaller square's, the larger square's side length is $2$. $2$ is equivalent to $2y+1$, giving $y=1/2$. Then, the longer side of the rectangles is $3/2$. $\frac{\frac{3}{2}}{\frac{1}{2}}=\boxed{3}$.
Solution 3
Let the longer side length be $x$, and the shorter side be $a$.
We have that $(x+a)^2=4(x-a)^2\implies x+a=2(x-a)\implies x+a=2x-2a\implies x=3a \implies \frac{x}{a}=3$
Hence, the answer is $\boxed{A}\implies 3$
Solution 4
WLOG, let the shorter side of the rectangle be $1$, and the longer side be $x$. Thus, the area of the larger square is $(1+x)^2$, and the area of the smaller square is $(x-1)^2$. Thus, we have the equation $(1+x)^2 = 4(x-1)^2$. Solving for $x$, we get $x = 3$.
~coolmath2017
|
2021-06-13 03:30:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5912960171699524, "perplexity": 140.41488897880825}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00058.warc.gz"}
|
https://www.zbmath.org/authors/?q=ai%3Axiao.li
|
## Xiao, Li
Author ID: xiao.li Published as: Xiao, Li
Documents Indexed: 77 Publications since 1996 Co-Authors: 99 Co-Authors with 68 Joint Publications 4,586 Co-Co-Authors
### Co-Authors
6 single-authored 11 Liu, Anping 7 Liu, Ting 5 Li, Huaqing 5 Tang, Xianhua 4 Liao, Xiaofeng 4 Tang, Zhongwei 4 Zou, Min 3 Huo, Haiye 3 Liu, Yunhao 2 Cao, Jin 2 Chen, Rongsan 2 Gu, Yonggeng 2 Jiao, Yujuan 2 Liu, Xiaojing 2 Liu, Xiaomei 2 Nahrstedt, Klara 2 Peng, Yahong 2 Sun, Wenchang 2 Wang, Chen 2 Wu, Xiangyao 2 Wu, Yi-Heng 2 Xia, Xianggen 2 Xu, Junming 2 Zhang, Bai-Jun 2 Zhang, Xiaodong 2 Zhang, Zhijun 1 Ba, Nou 1 Benjamin, Simon C. 1 Cai, Qin 1 Chen, Guo 1 Chen, Huaijun 1 Chen, Peng 1 Chen, Xin 1 Cheng, Shuang 1 Cui, Cheng 1 Cui, Chenpei 1 Ding, Yong 1 Dong, Tao 1 Fan, Hongqi 1 Fitzsimons, Joseph F. 1 Gu, Wenjun 1 He, Guanghui 1 He, Jun 1 Huang, Pei 1 Huang, Xiangdong 1 Huang, Yidong 1 Jones, Jonathan A. 1 Kamornikov, Sergey Fedorovich 1 Kreling, Andrew 1 Kubricht, Stefan A. 1 Li, Hong 1 Li, Jingwu 1 Li, Shaoping 1 Li, Yanling 1 Li, Yansong 1 Li, Zhilin 1 Lin, Xiaoyan 1 Liu, Guangyuan 1 Liu, Keying 1 Liu, Yunhuai 1 Long, Guilu 1 Lou, Dingjun 1 Lu, Linzhang 1 Lu, Zaiqi 1 Luo, Ray 1 Ma, Qingxia 1 Mear, Mark E. 1 Ni, Lionel M. 1 Peng, Jiangde 1 Shi, Xi 1 Song, Yongli 1 Sun, Zhigang 1 Tao, J. X. 1 Tu, Changcun 1 Ullah, Saleem 1 Wang, Guoqing 1 Wang, HuiJie 1 Wang, Huiwei 1 Wang, Jian 1 Wang, Jing 1 Wang, Qing-Cai 1 Wang, Wenjie 1 Wang, Xiao 1 Wang, Yan 1 Wang, Yanwu 1 Weng, Shengxuan 1 Wu, Yunshun 1 Xia, Rong 1 Xiao, Jiangwen 1 Xu, Weijie 1 Xuan, Dong 1 Xue, Qiutiao 1 Yan, Hai Yang 1 Yang, Mengkai 1 Yang, Ni 1 Yang, Yaling 1 Yi, Xiaolan 1 Yu, Rongfeng 1 Yue, Dong 1 Zhang, Guoquan ...and 7 more Co-Authors
### Serials
6 Journal of Northwest Normal University. Natural Science 4 Annals of Differential Equations 3 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 3 Journal of Mathematics. Wuhan University 3 Journal of Parallel and Distributed Computing 3 Nonlinear Analysis. Real World Applications 2 International Journal of Theoretical Physics 2 Physics Letters. A 2 IEEE Transactions on Computers 2 Advances in Mathematics 2 IEEE Transactions on Signal Processing 2 Journal of Natural Science of Hunan Normal University 2 Computer Networks 1 Computer Methods in Applied Mechanics and Engineering 1 International Journal of Control 1 Journal of Computational Physics 1 Journal of Mathematical Analysis and Applications 1 Journal of Mathematical Physics 1 Mathematical Methods in the Applied Sciences 1 Rocky Mountain Journal of Mathematics 1 Mathematica Slovaca 1 Operations Research 1 Siberian Mathematical Journal 1 Acta Mathematicae Applicatae Sinica 1 Journal of Mathematical Research & Exposition 1 Journal of Lanzhou University. Natural Sciences 1 Circuits, Systems, and Signal Processing 1 Chinese Annals of Mathematics. Series B 1 Journal of Biomathematics 1 Mathematica Applicata 1 Journal of Scientific Computing 1 Multidimensional Systems and Signal Processing 1 International Journal of Robust and Nonlinear Control 1 Pure and Applied Mathematics 1 Journal of Difference Equations and Applications 1 Mathematical Problems in Engineering 1 Journal of Anhui Normal University. Natural Science 1 Nonlinear Dynamics 1 Abstract and Applied Analysis 1 Journal of Hunan University. Natural Sciences 1 Electronic Journal of Qualitative Theory of Differential Equations 1 Acta Analysis Functionalis Applicata. AAFA 1 Physical Review Letters 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Journal of University of Science and Technology of China 1 ACM Journal of Experimental Algorithmics 1 1 Systems Engineering and Electronics 1 Frontiers of Mathematics in China 1 International Journal of Systems Science. Principles and Applications of Systems and Integration 1 Journal of Central China Normal University. Natural Sciences
### Fields
23 Partial differential equations (35-XX) 14 Computer science (68-XX) 12 Ordinary differential equations (34-XX) 9 Dynamical systems and ergodic theory (37-XX) 7 Information and communication theory, circuits (94-XX) 6 Combinatorics (05-XX) 6 Biology and other natural sciences (92-XX) 6 Systems theory; control (93-XX) 5 Quantum theory (81-XX) 4 Numerical analysis (65-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Fluid mechanics (76-XX) 3 Operations research, mathematical programming (90-XX) 2 Difference and functional equations (39-XX) 2 Mechanics of particles and systems (70-XX) 2 Classical thermodynamics, heat transfer (80-XX) 1 Number theory (11-XX) 1 Group theory and generalizations (20-XX) 1 Topological groups, Lie groups (22-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Integral equations (45-XX) 1 Operator theory (47-XX) 1 Statistics (62-XX) 1 Mechanics of deformable solids (74-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
### Citations contained in zbMATH Open
29 Publications have been cited 316 times in 209 Documents
Homoclinic solutions for a class of second-order Hamiltonian systems. Zbl 1185.34056
Tang, X. H.; Xiao, Li
2009
Homoclinic solutions for nonautonomous second-order Hamiltonian systems with a coercive potential. Zbl 1153.37408
Tang, X. H.; Xiao, Li
2009
Homoclinic solutions for ordinary $$p$$-Laplacian systems with a coercive potential. Zbl 1181.34055
Tang, X. H.; Xiao, Li
2009
Symmetric weak-form integral equation method for three-dimensional fracture analysis. Zbl 0906.73074
Li, S.; Mear, M. E.; Xiao, L.
1998
Second-order consensus seeking in directed networks of multi-agent dynamical systems via generalized linear local interaction protocols. Zbl 1268.90006
Li, Huaqing; Liao, Xiaofeng; Dong, Tao; Xiao, Li
2012
Homoclinic solutions for a class of second order discrete Hamiltonian systems. Zbl 1211.39006
Tang, X. H.; Lin, Xiaoyan; Xiao, Li
2010
Experimental NMR realization of a generalized quantum search algorithm. Zbl 0969.81513
Long, G. L.; Yan, H. Y.; Li, Y. S.; Tu, C. C.; Tao, J. X.; Chen, H. M.; Liu, M. L.; Zhang, X.; Luo, J.; Xiao, L.; Zeng, X. Z.
2001
Existence of periodic solutions to second-order Hamiltonian systems with potential indefinite in sign. Zbl 1170.34029
Xiao, Li; Tang, X. H.
2008
Analytical proof on the existence of chaos in a generalized Duffing-type oscillator with fractional-order deflection. Zbl 1257.34030
Li, Huaqing; Liao, Xiaofeng; Ullah, Saleem; Xiao, Li
2012
A semi-implicit augmented IIM for Navier-Stokes equations with open, traction, or free boundary conditions. Zbl 1349.76498
Li, Zhilin; Xiao, Li; Cai, Qin; Zhao, Hongkai; Luo, Ray
2015
Distributed robust finite-time attitude containment control for multiple rigid bodies with uncertainties. Zbl 1328.93092
Weng, Shengxuan; Yue, Dong; Sun, Zhigang; Xiao, Li
2015
Optimal policies for a dual-sourcing inventory problem with endogenous stochastic lead times. Zbl 1366.90005
Song, Jing-Sheng; Xiao, Li; Zhang, Hanqin; Zipkin, Paul
2017
Uncertainty principles associated with the offset linear canonical transform. Zbl 1454.94026
Huo, Haiye; Sun, Wenchang; Xiao, Li
2019
Event-triggered nonlinear consensus in directed multi-agent systems with combinational state measurements. Zbl 1346.93256
Li, Huaqing; Chen, Guo; Xiao, Li
2016
Transitivity of varietal hypercube networks. Zbl 1307.05162
Xiao, Li; Cao, Jin; Xu, Jun-Ming
2014
Quantum information processing with delocalized qubits under global control. Zbl 1229.81046
Fitzsimons, Joseph; Xiao, Li; Benjamin, Simon C.; Jones, Jonathan A.
2007
Existence of homoclinic orbit for second-order nonlinear difference equation. Zbl 1207.39006
Chen, Peng; Xiao, Li
2010
Cycles and paths embedded in varietal hypercubes. Zbl 1324.05102
Cao, Jin; Xiao, Li; Xu, Junming
2014
Multi-stage robust Chinese remainder theorem. Zbl 1394.94942
Xiao, Li; Xia, Xiang-Gen; Wang, Wenjie
2014
Entropy-TVD scheme for the shallow water equations in one dimension. Zbl 06849493
Chen, Rongsan; Zou, Min; Xiao, Li
2017
Edge-based traffic engineering for OSPF networks. Zbl 1101.68357
Wang, Jun; Yang, Yaling; Xiao, Li; Nahrstedt, Klara
2005
Cluster consensus on discrete-time multi-agent networks. Zbl 1253.93100
Xiao, Li; Liao, Xiaofeng; Wang, Huiwei
2012
Existence of periodic solutions for second order Hamiltonian system. Zbl 1266.34072
Xiao, Li
2012
Oscillation of the solutions of hyperbolic partial functional differential equations of neutral type. Zbl 1038.35144
Liu, Anping; Xiao, Li; Liu, Ting
2002
Improving memory performance of sorting algorithms. Zbl 1071.68522
Xiao, Li; Zhang, Xiaodong; Kubricht, Stefan A.
2000
Derivation of nonlinear Schrödinger equation. Zbl 1201.81050
Wu, Xiang-Yao; Zhang, Bai-Jun; Liu, Xiao-Jing; Xiao, Li; Wu, Yi-Heng; Wang, Yan; Wang, Qing-Cai; Cheng, Shuang
2010
Towards robustness in residue number systems. Zbl 1414.94686
Xiao, Li; Xia, Xiang-Gen; Huo, Haiye
2017
Anti-periodic boundary value problem for second-order impulsive integro-differential equation with delay. Zbl 1438.45016
Zhang, Linli; Liu, Anping; Xiao, Li
2018
Necessary and sufficient conditions for oscillations of parabolic partial differential equations. Zbl 1127.35306
Liu, Anping; Xiao, Li; Liu, Ting; Liu, Keying
2003
### Cited by 281 Authors
25 Tang, Xianhua 18 Lu, Shiping 11 Chen, Peng 9 Liao, Xiaofeng 8 Yuan, Rong 8 Zhang, Ziheng 7 Lin, Xiaoyan 7 Lv, Xiang 7 Zhang, Qiongfen 6 Chen, Guanwei 6 Huang, Tingwen 6 Li, Huaqing 6 Schechter, Martin 5 Chen, Huiwen 5 He, Zhimin 5 Xiao, Li 4 Chen, Guo 4 He, Xiaofei 4 Sun, Juntao 4 Wang, Xiaoping 3 Dong, Tao 3 Du, Bo 3 Kong, Fanchao 3 Li, Zhilin 3 Liang, Zaitao 3 Timoumi, Mohsen 3 Wu, Dong-Lun 3 Zhang, Yongxin 2 Agarwal, Ravi P. 2 Altın, Ayşegül 2 Chen, Wenbin 2 Cheng, Li-Tien 2 Dong, Zhaoyang 2 Han, Qi 2 Jia, Xuewen 2 Jiang, Jifa 2 Lai, Mingyong 2 Li, Bo 2 Li, Chuandong 2 Li, Lin 2 Liu, Anping 2 Luo, Zhiguo 2 Nieto Roig, Juan Jose 2 Peng, Zhouhua 2 Sun, Hui 2 Tang, Chun-Lei 2 Tersian, Stepan Agop 2 Wan, Lili 2 Wang, Dan 2 Wang, Jian 2 Wu, Tsungfang 2 Wu, Xingping 2 Xie, Jingli 2 Xie, Xiangpeng 2 Xu, Fei 2 Xu, Junming 2 Yan, Lizhao 2 Yan, Ping 2 Yue, Dong 2 Zhai, Shidong 2 Zhang, Qiming 2 Zhang, Qinqin 2 Zhou, Jinxin 2 Zhou, Shenggao 1 Akhmet, Marat Ubaydulla 1 Alemansour, Hamed 1 Amini, Amir A. 1 Aouiti, Chaouki 1 Ardestani-Jaafari, Amir 1 Arreola-Risa, Antonio 1 Azarbahram, Ali 1 Barron, Yonit 1 Bayat, Abolfazl 1 Belotti, Pietro 1 Benhassine, Abderrazek 1 Biswas, Biswarup 1 Bonanno, Gabriele 1 Bose, Sougato 1 Burra, Lakshmi 1 Cai, He 1 Cai, Mingjie 1 Campbell, Earl T. 1 Cao, Jin 1 Chan, Timothy Moon-Yew 1 Chaudhary, Renu 1 Chen, Guoping 1 Chen, Haibo 1 Chen, Kai 1 Chen, Kairui 1 Chen, Lijuan 1 Chen, Rongsan 1 Chen, Shuhui 1 Chen, Xiaohong 1 Chen, Xingfan 1 Chen, Yangyang 1 Chen, Yi 1 Chen, Yiran 1 Chen, Zengqiang 1 Cheung, Siu Wun 1 Chu, Jifeng ...and 181 more Authors
### Cited in 83 Serials
19 Advances in Difference Equations 17 Nonlinear Dynamics 12 Boundary Value Problems 10 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 9 Journal of Mathematical Analysis and Applications 9 Abstract and Applied Analysis 8 Nonlinear Analysis. Real World Applications 7 Mathematical Methods in the Applied Sciences 5 Journal of Difference Equations and Applications 4 Information Sciences 4 European Journal of Operational Research 4 Qualitative Theory of Dynamical Systems 4 International Journal of Systems Science. Principles and Applications of Systems and Integration 3 Computers & Mathematics with Applications 3 Journal of Computational Physics 3 Chaos, Solitons and Fractals 3 Applied Mathematics and Computation 3 Automatica 3 Journal of Applied Mathematics and Computing 3 Mediterranean Journal of Mathematics 2 Journal of the Franklin Institute 2 Journal of Mathematical Physics 2 Rocky Mountain Journal of Mathematics 2 Mathematische Nachrichten 2 Results in Mathematics 2 Acta Mathematicae Applicatae Sinica. English Series 2 Journal of Scientific Computing 2 Multidimensional Systems and Signal Processing 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 2 Acta Mathematica Sinica. English Series 2 Journal of Fixed Point Theory and Applications 2 Frontiers of Mathematics in China 1 Computers and Fluids 1 International Journal of Control 1 Mathematical Notes 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 Journal of Computer and System Sciences 1 Journal of Differential Equations 1 Manuscripta Mathematica 1 Networks 1 Proceedings of the American Mathematical Society 1 SIAM Journal on Numerical Analysis 1 Circuits, Systems, and Signal Processing 1 Applied Mathematics and Mechanics. (English Edition) 1 Graphs and Combinatorics 1 Discrete & Computational Geometry 1 Computers & Operations Research 1 Applied Mathematics Letters 1 Mathematical and Computer Modelling 1 Neural Networks 1 Applied Mathematical Modelling 1 Indagationes Mathematicae. New Series 1 International Journal of Robust and Nonlinear Control 1 Topological Methods in Nonlinear Analysis 1 Complexity 1 Mathematical Problems in Engineering 1 European Journal of Control 1 Differential Equations and Dynamical Systems 1 Mathematical Physics, Analysis and Geometry 1 Taiwanese Journal of Mathematics 1 Discrete Dynamics in Nature and Society 1 Communications in Nonlinear Science and Numerical Simulation 1 Optimization and Engineering 1 Mathematical Modelling and Analysis 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 1 Journal of Applied Mathematics 1 Advanced Nonlinear Studies 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 International Journal of Wavelets, Multiresolution and Information Processing 1 International Journal of Quantum Information 1 Mathematical Biosciences and Engineering 1 Journal of Nonlinear Science and Applications 1 Advances in Mathematical Physics 1 International Journal of Combinatorics 1 Science China. Technological Sciences 1 Journal of Pseudo-Differential Operators and Applications 1 Asian Journal of Control 1 Afrika Matematika 1 Journal of Applied Analysis and Computation 1 Quantum Studies: Mathematics and Foundations 1 Journal of Function Spaces 1 Journal of Elliptic and Parabolic Equations 1 Journal of Nonlinear and Variational Analysis
### Cited in 32 Fields
101 Ordinary differential equations (34-XX) 87 Dynamical systems and ergodic theory (37-XX) 52 Global analysis, analysis on manifolds (58-XX) 37 Systems theory; control (93-XX) 33 Mechanics of particles and systems (70-XX) 26 Difference and functional equations (39-XX) 22 Operator theory (47-XX) 17 Computer science (68-XX) 13 Operations research, mathematical programming (90-XX) 12 Partial differential equations (35-XX) 10 Numerical analysis (65-XX) 9 Calculus of variations and optimal control; optimization (49-XX) 7 Fluid mechanics (76-XX) 6 Combinatorics (05-XX) 5 Quantum theory (81-XX) 5 Biology and other natural sciences (92-XX) 4 Information and communication theory, circuits (94-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Integral equations (45-XX) 2 Functional analysis (46-XX) 2 Mechanics of deformable solids (74-XX) 2 Optics, electromagnetic theory (78-XX) 1 Mathematical logic and foundations (03-XX) 1 Number theory (11-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Group theory and generalizations (20-XX) 1 Real functions (26-XX) 1 Sequences, series, summability (40-XX) 1 Abstract harmonic analysis (43-XX) 1 Statistics (62-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
|
2022-05-23 15:31:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5957436561584473, "perplexity": 13982.17025364914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00163.warc.gz"}
|
http://www.sceneadvisor.com/West-Virginia/missing-delimiter-error-latex.html
|
# missing delimiter error latex
To generalize further, the area of any $n$-sided polygon on the sphere is as follows: $\left\{\text{Area of an N-sided Polygon}\right\} =\left\{\text{sum of the angles} -(n-2)\pi\right\}$. (I may have misunderstood your intent.)

Let the portion of $\angle{A}$ and $\angle{C}$ on the same side of $AC$ as $B$ be called $\angle{A_1}$ and $\angle{C_1}$. No backslash should be used if an actual bracket is intended, and within a math environment normal text (the "Area of ...") needs to be indicated as such with \text{...}; within that string, any small math expressions need to be returned to math mode.

This explains the error message you got ("Missing delimiter (. inserted)"): a delimiter is required after \right, e.g. the invisible one \right. – user11232, Feb 27 '14 at 13:21. Yes, this solved it.

To generalize further, the area of any $n$-sided polygon on the sphere is as follows: $\left{Area of an N-sided Polygon}\right=\left(sum of the angles-(n-2)\pi\right$ \end{proof} – this is the code that is producing the error.

Of course, the trouble with a vague message like that is that it doesn't say what to do.

I'd still like to know what exactly is wrong here. Also, if you are using amsthm, you can insert \qedhere before the closing \] to place the "tombstone" on the last line of the display. Anyone have suggestions? Here is the reformulated display: \begin{align*} \left|ABCD\right| &=\left|\Delta{ABC}\right|+\left|\Delta{ACD}\right|\\ &= p^2\left(\angle{A_1}+\angle{B}+\angle{C_1}-\pi\right)+p^2\left(\angle{A_2} +\angle{C_2}+\angle{D}-\pi\right)\\ &= p^2\left(\angle{A_1}+\angle{B}+\angle{C_1}+\angle{A_2} +\angle{C_2}+\angle{D}-2\pi\right). \end{align*} Actually, since most of the symbols between the delimiters aren't taller than the normal text height, the \left and \right prefixes are not strictly needed here.

Thank you for your help.

Use the siunitx package to format physical units. Your code should be $$E_{k} = 10\log_{10}\left\{\frac{\sum_{i=1}^{M_{k}}{{\left[W_{k}^{P}x(i)\right]}^{2}}}{M_{k}}\right\} (\si{\decibel}), \hspace{5em} k = 1,2,\dots,N$$ – ChrisS, Mar 23 '15.

After fixing these points, your align* environment looks like this (you don't need to manually add space before the &=): \begin{align*} \left|ABCD\right| &= \left|\Delta{ABC}\right| + \left|\Delta{ACD}\right|\\ &= p^2\left(\angle{A_1} + \angle{B} + ... But it looks like maybe one needs a period somehow. BTW, here is a single-line example file for plain TeX: \left( a+b \middle) \times \middle( a ...

And g(h) is undefined for h > 0? – Heiko Oberdiek, Feb 27 '14 at 13:50. Thanks guys, each of your comments was useful. – ozi, Feb 27 '14 at 13:56. Another variant uses numcases: \begin{numcases}{|x|=} x, & for $x \geq 0$ \label{eq:4} \\ -x, & for $x < ...
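To make the recurring fix concrete, here is a minimal compilable sketch (the filename and wording are illustrative, not from the thread) showing escaped braces after \left/\right and \text{...} for words inside math mode:

```latex
% minimal-delimiters.tex -- hypothetical filename, for illustration only
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\left\{\text{Area of an $n$-sided polygon}\right\}
  = \left\{\text{sum of the angles} - (n-2)\pi\right\}
\]
\end{document}
```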
|
2019-07-20 03:08:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7942941784858704, "perplexity": 6941.641345639766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00501.warc.gz"}
|
https://ora.ouls.ox.ac.uk/objects/uuid:b44e2382-487d-4e70-aeab-3c2303846dcb
|
Thesis
### Equivariant scanning and stable splittings of configuration spaces
Abstract:
We give a definition of the scanning map for configuration spaces that is equivariant under the action of the diffeomorphism group of the underlying manifold. We use this to extend the Bödigheimer-Madsen result for the stable splittings of the Borel constructions of certain mapping spaces from compact Lie group actions to all smooth actions. Moreover, we construct a stable splitting of configuration spaces which is equivariant under smooth group actions, completing a zig-zag of equivariant...
### Access Document
Files:
• (pdf, 747.1KB)
### Authors
Institution:
University of Oxford
Division:
MPLS
Department:
Mathematical Institute
Research group:
Topology
Oxford college:
St Cross College
Role:
Author
#### Contributors
Role:
Supervisor
Publication date:
2012
Type of award:
MSc
Level of award:
Masters
Awarding institution:
University of Oxford
Language:
English
Keywords:
Subjects:
UUID:
uuid:b44e2382-487d-4e70-aeab-3c2303846dcb
Local pid:
ora:8406
Deposit date:
2014-05-09
|
2022-08-12 05:59:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8656551837921143, "perplexity": 2618.5077477341433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00706.warc.gz"}
|
http://www.lofoya.com/Solved/1562/combining-the-states-p-and-q-together-in-1998-what-is-the-percentage
|
# Easy Tabular Data Solved Question | Data Interpretation Discussion
Common Information
Study the following table and answer the questions.
Number of Candidates Appeared and Qualified in a Competitive Examination from Different States over the Years.
| State | 1997 App. | 1997 Qual. | 1998 App. | 1998 Qual. | 1999 App. | 1999 Qual. | 2000 App. | 2000 Qual. | 2001 App. | 2001 Qual. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| M | 5200 | 720 | 8500 | 980 | 7400 | 850 | 6800 | 775 | 9500 | 1125 |
| N | 7500 | 840 | 9200 | 1050 | 8450 | 920 | 9200 | 980 | 8800 | 1020 |
| P | 6400 | 780 | 8800 | 1020 | 7800 | 890 | 8750 | 1010 | 9750 | 1250 |
| Q | 8100 | 950 | 9500 | 1240 | 8700 | 980 | 9700 | 1200 | 8950 | 995 |
| R | 7800 | 870 | 7600 | 940 | 9800 | 1350 | 7600 | 945 | 7990 | 885 |
Q. Common Information Question 6/6: Combining the states P and Q together in 1998, what is the percentage of the candidates qualified to that of the candidates appeared?
✖ A. 10.87% ✖ B. 11.49% ✔ C. 12.35% ✖ D. 12.54%
Solution:
Option(C) is correct
Required percentage:
$= \dfrac{1020 + 1240}{8800 + 9500} ×100\%$
$=\dfrac{2260}{18300} ×100\%$
$= \textbf{12.35\%}$
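A quick numeric check of the same ratio (a minimal sketch; variable names are illustrative):

```python
# Qualified and appeared counts for states P and Q in 1998, read off the table above
qualified = 1020 + 1240
appeared = 8800 + 9500
print(round(100 * qualified / appeared, 2))   # 12.35
```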
|
2017-03-23 04:15:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48050162196159363, "perplexity": 1069.3588561167767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186774.43/warc/CC-MAIN-20170322212946-00658-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/497609-sdlhigh-resolution-timers/
|
# [SDL] high resolution timers?
## Recommended Posts
on this page http://www.libsdl.org/cgi/docwiki.cgi/SDL_GetTicks there is a note that says: NOTE: On platforms with high resolution timers, you can get the number of microseconds. Will add details after further investigations. Does anyone know anything more about this?
Use the source!
For example, SDL 1.2 on Windows appears not to use a high resolution timer (due to issues with QueryPerformanceCounter on Win2K apparently).
well i did look through the header file SDL_timer.h but I did not find anything that talked about it.
You would have to look through the actual source code, not just the header. Here is a snippet from the win32 timer directory
```c
void SDL_StartTicks(void)
{
	/* Set first ticks value */
#ifdef USE_GETTICKCOUNT
	start = GetTickCount();
#else
#if 0 /* Apparently there are problems with QPC on Win2K */
	if (QueryPerformanceFrequency(&hires_ticks_per_second) == TRUE) {
		hires_timer_available = TRUE;
		QueryPerformanceCounter(&hires_start_ticks);
	} else
#endif
	{
		hires_timer_available = FALSE;
		timeBeginPeriod(1);	/* use 1 ms timer precision */
		start = timeGetTime();
	}
#endif
}

Uint32 SDL_GetTicks(void)
{
	DWORD now, ticks;
#ifndef USE_GETTICKCOUNT
	LARGE_INTEGER hires_now;
#endif
#ifdef USE_GETTICKCOUNT
	now = GetTickCount();
#else
	if (hires_timer_available) {
		QueryPerformanceCounter(&hires_now);
		hires_now.QuadPart -= hires_start_ticks.QuadPart;
		hires_now.QuadPart *= 1000;
		hires_now.QuadPart /= hires_ticks_per_second.QuadPart;
		return (DWORD)hires_now.QuadPart;
	} else {
		now = timeGetTime();
	}
#endif
	if (now < start) {
		ticks = (TIME_WRAP_VALUE - start) + now;
	} else {
		ticks = (now - start);
	}
	return (ticks);
}
```
As rip-off said, and you can see, they are just using timeGetTime( ) and not the high-res timer. Looking through the unix source, it looks like they try to use clock_gettime, but default to gettimeofday.
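For a sense of what a microsecond-capable monotonic clock looks like outside SDL, here is a minimal sketch (Python is used purely for illustration; nothing here comes from the SDL source):

```python
import time

t0 = time.perf_counter_ns()            # monotonic, nanosecond-resolution counter
total = sum(range(100_000))            # arbitrary work being timed
elapsed_us = (time.perf_counter_ns() - t0) / 1_000
print(f"elapsed: {elapsed_us:.1f} us")

# On Unix-like systems this counter is typically backed by clock_gettime(CLOCK_MONOTONIC),
# the same call the SDL unix timer tries before falling back to gettimeofday().
```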
|
2018-06-24 13:26:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2023918628692627, "perplexity": 12189.920764078937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866937.79/warc/CC-MAIN-20180624121927-20180624141927-00311.warc.gz"}
|
https://docs.contrastsecurity.com/en/install-go-without-service.html
|
# Install the Go agent without the Contrast Service
### Important
If you did not opt-in to install the Go agent without the service, then follow the install the Go agent (legacy) instructions.
You can opt-in to configure the agent to report directly to Contrast by installing the Go agent without the Contrast service.
The executable must be built using contrast-go to instrument the application with the agent. A basic installation of the Go agent consists of two parts: producing an executable and running the executable. An installation overview looks like this:
• Go agent
• contrast_security.yaml
Ensure your yaml or environment variables are set to bypass:true.
agent:
  service:
    bypass: true
Alternatively, the same feature can be enabled using environment variables:
CONTRAST__AGENT__SERVICE__BYPASS=true
2. Set up appropriate permissions.
3. Build the Go application, replacing go with contrast-go to get a final application artifact which contains the Go agent.
4. Run the executable.
5. Exercise and test your application.
6. Verify that Contrast sees your application.
### Tip
To see a list of available flags with command line arguments for contrast-go, type contrast-go -h.
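For example, if the application is normally built with `go build -o app .` (the module path and output name here are illustrative, not taken from this guide), the instrumented build in step 3 becomes `contrast-go build -o app .`; the resulting binary already contains the agent and is then run as in steps 4 through 6.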
For specific installation instructions, select one of the following options:
|
2022-07-07 05:00:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5394006967544556, "perplexity": 9680.790346513964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00013.warc.gz"}
|
http://physics.stackexchange.com/questions/26488/what-was-the-most-distant-supernova-spotted-by-a-amateur-astronomer-until-today/26489
|
# What was the most distant supernova spotted by an amateur astronomer to date?
What type of commercial amateur telescope and what method (difference imaging, ...) did they use to identify the supernova?
## 1 Answer
This source here has an example at 290,000,000 - 300,000,000 light years, although it's a bit thin on detail, and can't really be considered an authoritative source. (Also here.)
|
2015-04-18 20:52:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523271679878235, "perplexity": 3459.4911269128647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636213.71/warc/CC-MAIN-20150417045716-00167-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://diabetesjournals.org/care/article/43/5/982/35704/Incidence-and-Associations-of-Chronic-Kidney
|
OBJECTIVE
To determine the incidence of and factors associated with an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m2 in people with diabetes.
RESEARCH DESIGN AND METHODS
We identified people with diabetes in the EXamining ouTcomEs in chroNic Disease in the 45 and Up Study (EXTEND45), a population-based cohort study (2006–2014) that linked the Sax Institute’s 45 and Up Study cohort to community laboratory and administrative data in New South Wales, Australia. The study outcome was the first eGFR measurement <60 mL/min/1.73 m2 recorded during the follow-up period. Participants with eGFR < 60 mL/min/1.73 m2 at baseline were excluded. We used Poisson regression to estimate the incidence of eGFR <60 mL/min/1.73 m2 and multivariable Cox regression to examine factors associated with the study outcome.
RESULTS
Of 9,313 participants with diabetes, 2,106 (22.6%) developed incident eGFR <60 mL/min/1.73 m2 over a median follow-up time of 5.7 years (interquartile range, 3.0–5.9 years). The eGFR <60 mL/min/1.73 m2 incidence rate per 100 person-years was 6.0 (95% CI 5.7–6.3) overall, 1.5 (1.3–1.9) in participants aged 45–54 years, 3.7 (3.4–4.0) for 55–64 year olds, 7.6 (7.1–8.1) for 65–74 year olds, 15.0 (13.0–16.0) for 75–84 year olds, and 26.0 (22.0–32.0) for those aged 85 years and over. In a fully adjusted multivariable model incidence was independently associated with age (hazard ratio 1.23 per 5-year increase; 95% CI 1.19–1.26), geography (outer regional and remote versus major city: 1.36; 1.17–1.58), obesity (obese class III versus normal: 1.44; 1.16–1.80), and the presence of hypertension (1.52; 1.33–1.73), coronary heart disease (1.13; 1.02–1.24), cancer (1.30; 1.14–1.50), and depression/anxiety (1.14; 1.01–1.27).
CONCLUSIONS
In participants with diabetes, the incidence of an eGFR <60 mL/min/1.73 m2 was high. Older age, remoteness of residence, and the presence of various comorbid conditions were associated with higher incidence.
Diabetes is the leading cause of end-stage kidney disease worldwide (1,2). An estimated 425 million adults (20–79 years) are affected, with projections indicating substantial growth to over 629 million by 2045 (3). The comorbid burden of diabetes and chronic kidney disease (CKD) leads to an increased risk of cardiovascular disease and death (4–7), as well as an increased rate of depression (8), poorer quality of life, and decreased productivity (9,10). Identifying people with diabetes who are at increased risk of developing CKD is a key step to developing preventative strategies for improving health outcomes in this high-risk population.
Traditionally, incidence and prevalence estimates of chronic health conditions, such as CKD, in patients with diabetes have been derived from cross-sectional or repeated cross-sectional studies. While these studies yield valuable information, all have important design limitations, such as vulnerability to substantial healthy volunteer bias (11,12). Alternative methods that can monitor the burden of the health complications of diabetes are needed to assess changing epidemiology and the efficacy of health service interventions. Large-scale routinely collected administrative and clinical data offer the opportunity to efficiently monitor the incidence and prevalence of comorbidities over time.
In this study, we use a large linked data set comprising multiple routinely collected data sources to 1) determine the incidence rate of an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m2 in a population-based cohort of people with diabetes and 2) assess the sociodemographic, lifestyle, and clinical factors associated with eGFR <60 mL/min/1.73 m2.
### Study Design and Data Sources
EXamining ouTcomEs in chroNic Disease in the 45 and Up Study (EXTEND45) is a large population-based cohort, built on the Sax Institute’s 45 and Up Study, a prospective cohort of residents aged 45 years or older in the state of New South Wales (NSW), Australia. The 45 and Up Study participants and their baseline questionnaire responses have been described in detail elsewhere (13). In brief, between 2006 and 2009, potential participants were randomly selected from the Department of Human Services (DHS) enrollment database, invited to join the study and complete a detailed baseline questionnaire containing information on their health and socioeconomic characteristics, and provided informed consent to long-term follow-up, including having their data linked to other information sources.
In the EXTEND45 study, participants of the 45 and Up Study together with their baseline questionnaire responses were linked to routinely collected health and administrative databases (between 2006 and 2014), including the following: 1) community laboratory testing services; 2) the Pharmaceutical Benefits Scheme (PBS) (14) and 3) the Medicare Benefits Schedule (MBS) (15), both provided by the DHS; 4) the NSW Admitted Patient Data Collection; and 5) the Registry of Births, Deaths and Marriages.
Community laboratory testing services included test results for serum creatinine, serum glucose, and HbA1c results that were ordered as part of routine care. The PBS and MBS are part of Medicare, Australia’s universal public health insurance scheme, established to provide free or subsidized access to a range of medical and allied health services to all Australians within both government (public) and private organizations. The PBS data include claims for all subsidized pharmaceutical products nationally, while the MBS data include all claims for subsidized medical and diagnostic services provided by medical and other health service providers. The NSW Admitted Patient Data Collection captures inpatient separations from all public, private, and repatriation hospitals as well as day procedure centers and aged care facilities, with diagnostic information coded according to the ICD-10 Australian Modification (16).
MBS and PBS data were linked deterministically by the Sax Institute using a unique identifier that was provided to the DHS. All other data linkage was performed probabilistically by the NSW Centre for Health Record Linkage (https://www.cherel.org.au) covering the period 2005–2014. Probabilistic linkage takes into account a wider range of potential identifiers and seeks to link them based on computed weights in order to determine the probability of a match (17). Participants were excluded if they had inconsistent records suggesting incorrect linkages (e.g., death before date of study entry).
Ethical approval for the EXTEND45 Study was obtained from the NSW Population and Health Services Research Ethics Committee (study reference number HREC/13/CIPHS/69). The 45 and Up Study received ethics approval from the University of New South Wales Human Research Ethics Committee.
### Identification of Study Cohort and Inclusion Criteria
Participants were included in the current study if they had one or more linked serum creatinine measurements in the 3-year period prior to their enrollment into the 45 and Up Study (hereafter referred to as the prebaseline kidney function ascertainment period 2003–2006). This was to ensure we could adequately determine prevalent eGFR <60 mL/min/1.73 m2. Prevalent eGFR <60 mL/min/1.73 m2 was defined as an eGFR or an imputed eGFR of <60 mL/min/1.73 m2 on or before the enrollment date. The use of routine clinical data meant that an eGFR was not available at the time of enrollment for most participants. We therefore imputed from the most recent available eGFR prior to enrollment.
Among individuals with a linked serum creatinine measurement, we identified participants with diabetes, defined as the presence of at least one of the following prespecified criteria: 1) a community laboratory record of fasting serum glucose >7.0 mmol/L, 2) a random serum glucose >11.1 mmol/L, 3) an HbA1c result ≥6.5% (all in line with society guidelines [18]), 4) a dispensation record of an oral glucose-lowering agent or insulin analog as documented in the PBS, or 5) self-reported diabetes on the 45 and Up Study baseline questionnaire (“Has a doctor EVER told you that you have diabetes?” = Yes) (Supplementary Fig. 1). Both prevalent and incident cases of diabetes were included, with prevalent cases defined as individuals who had diabetes at their time of enrollment into the 45 and Up Study and incident cases as those who developed diabetes at any point throughout the study period (2006–2014). Outcome ascertainment was only determined once a participant met criteria for incident or prevalent diabetes (Supplementary Fig. 1).
### Study Outcome
The outcome of interest was the development of incident eGFR <60 mL/min/1.73 m2, recorded from the time a participant met the criteria for diabetes until death or end of linked data (June 2014), whichever came first. GFR was estimated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation (19). Prevalent eGFR <60 mL/min/1.73 m2 status was ascertained during the prebaseline kidney function ascertainment period whereby we assumed a rate of decline of 2 mL/min/1.73 m2/year. Individuals identified as having prevalent CKD at enrollment were excluded from the analysis.
To minimize selection and survivor bias, only one eGFR measurement prior to enrollment was required for inclusion into the cohort and only one eGFR measurement after enrollment was required for incident eGFR <60 mL/min/1.73 m2. Participants who did not have an eGFR <60 mL/min/1.73 m2 after enrollment were conservatively deemed to not have incident disease.
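For reference, a minimal sketch of the 2009 CKD-EPI creatinine equation as it is commonly published (serum creatinine in mg/dL; the study's laboratories likely reported µmol/L and may have applied calibration, so this is illustrative only, not the authors' exact implementation):

```python
def ckd_epi_egfr(scr_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation, returning eGFR in mL/min/1.73 m^2 (illustrative)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 70-year-old woman with serum creatinine 1.1 mg/dL falls below the 60 threshold
print(round(ckd_epi_egfr(1.1, 70, female=True), 1))   # roughly 50.8
```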
### Covariates
Baseline sociodemographic and lifestyle variables were derived from self-reported responses of the 45 and Up Study baseline questionnaire and included the following: age, sex, country of birth, region of residence/remoteness from major city, highest qualification, annual household pretax income, partner status, area-level quintile of disadvantage (defined by an index of disadvantage using the Australian Bureau of Statistics index, where 1 is most disadvantaged and 5 is least disadvantaged) (20), smoking status, alcohol consumption, and BMI (<18.5 kg/m2 underweight, 18.5 to <25 kg/m2 normal, 25 to <30 kg/m2 overweight, 30 to <35 kg/m2 class I obesity, 35 to <40 kg/m2 class II obesity, and ≥40 kg/m2 class III obesity).
Baseline clinical variables included baseline comorbidities (hypertension, hyperlipidemia, coronary heart disease [CHD], previous stroke, cancer, and depression or anxiety), diabetes-specific variables, and baseline eGFR. Baseline comorbidities were defined using both self-reported answers to the 45 and Up Study baseline questionnaire and available linked data up to 3 years prior to the 45 and Up Study enrollment date. Hypertension, hyperlipidemia, CHD, previous stroke, cancer, and depression or anxiety were defined by self-reported answers to the 45 and Up Study baseline questionnaire. Hypertension, hyperlipidemia, and depression were also defined by dispensing of antihypertensive, lipid-lowering therapy, or antidepressant medication, respectively, in the PBS database. Additionally, ICD-10 Australian Modification codes were used to identify hospitalizations involving diagnoses for hypertension, hyperlipidemia, as well as acute myocardial infarction and coronary artery bypass grafting (indicating CHD) and stroke (Supplementary Table 1).
Diabetes-specific variables included diabetes duration calculated from the date of diagnosis until the end of the study period for incident participants and self-report for participants with prevalent diabetes. The baseline eGFR up to 1 month prior to meeting criteria for the diagnosis of diabetes was also examined.
### Statistical Analysis
The incidence rate of eGFR <60 mL/min/1.73 m2 was determined using Poisson regression and presented as incidence rate per 100 person-years (95% CIs). This was calculated for the entire cohort and stratified by age using the following age brackets: 45–54, 55–64, 65–74, 75–84, and ≥85 years. Rates were adjusted for sociodemographic, lifestyle, and clinical variables as well as baseline eGFR.
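As a rough illustration of the headline figure (not the covariate-adjusted Poisson regression used in the paper), a crude rate per 100 person-years with an exact Poisson (Garwood) confidence interval can be computed like this; the person-time value below is hypothetical:

```python
from scipy.stats import chi2

events = 2106            # incident eGFR < 60 mL/min/1.73 m2 cases
person_years = 35_000.0  # hypothetical total follow-up, chosen only for illustration

rate = 100 * events / person_years
lower = 100 * chi2.ppf(0.025, 2 * events) / (2 * person_years)
upper = 100 * chi2.ppf(0.975, 2 * (events + 1)) / (2 * person_years)
print(f"{rate:.1f} per 100 person-years (95% CI {lower:.1f}-{upper:.1f})")
```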
The covariates eGFR at study enrollment and age were treated as continuous variables. All other covariates were categorical variables, which were analyzed in the smallest available unit to allow for the assessment of trends. All comorbidity variables were dichotomous variables (presence or absence of disease).
Missing data were imputed using chained equations with linear regression for log-transformed continuous variables (e.g., age, eGFR at baseline, and time to incidence/censoring) and discriminant analysis for categorical variables (including censoring status) (21). This procedure assumes that data are missing at random, i.e., with missingness predicted by conditioning on all the variables included in the imputation model. Forty imputed versions of the data set were created, and the resulting parameter estimates were combined using Rubin's rules (22). No outcome data were missing in our cohort, and only missing baseline characteristics were imputed.
Cox regression was used to determine associations between baseline demographic, socioeconomic, and lifestyle factors; comorbidities; and the outcome of incident eGFR <60 mL/min/1.73 m2 over the follow-up period. Results were combined using Rubin’s rules (22). The hazard ratio (HR) (with 95% CI) was estimated for each variable with adjustment for age and sex in a minimally adjusted model. All variables were included in a fully adjusted model.
Median follow-up time was estimated with the Kaplan-Meier method (23). The proportional hazards assumption and the linearity of continuous covariates were checked visually on the first imputed data set by examining cumulative sums of the martingale-based residuals, and no evidence of violations of proportional hazards or nonlinearity was found (24).
All data management and analyses were completed in SAS Enterprise Guide 7.1 with SAS/STAT 14.1 (SAS Institute, Cary, NC).
### Cohort Characteristics
Among 9,313 participants with diabetes at risk for developing eGFR <60 mL/min/1.73 m2 (Supplementary Fig. 1), at baseline, 5,105 (54.8%) had prevalent diabetes, and 4,208 (45.2%) developed incident diabetes during follow-up. Mean age was 65.4 years (SD 9.7), and 55% of participants were male. The cardiovascular burden of disease was high with hypertension present in 73.4% and hypercholesterolemia in 65.8% of individuals. Participants living in a major city represented 65.4% of the cohort, while 25.4% resided in inner regional and 8.7% in outer regional and remote areas (Table 1).
Table 1
Baseline characteristics of EXTEND45 participants who were at risk of developing incident eGFR <60 mL/min/1.73 m2
Baseline characteristics | Diabetes with no incident eGFR <60 mL/min/1.73 m2 (n = 7,207) | Diabetes with incident eGFR <60 mL/min/1.73 m2 (n = 2,106) | All (n = 9,313)
Age (years), mean (SD) 63.9 (9.3) 70.5 (9.4) 65.4 (9.7)
Male 3,976/7,207 (55.2) 1,165/2,106 (55.3) 5,141/9,313 (55.2)
Diabetes status at 45 and Up enrollment
Prevalent 3,584/7,207 (49.7) 1,521/2,106 (72.2) 5,105/9,313 (54.8)
Incident (during follow-up) 3,623/7,207 (50.3) 585/2,106 (27.8) 4,208/9,313 (45.2)
Diabetes duration (years), mean (SD) 1.53 (2.310) 2.63 (2.647) 1.78 (2.434)
Diabetes duration >5 years 931/7,207 (12.9) 508/2,106 (24.1) 1,439/9,313 (15.5)
Baseline eGFR (mL/min/1.73 m2), mean (SD) 83.55 (16.014) 72.12 (9.758) 80.96 (15.583)
Baseline eGFR >90 mL/min/1.73 m2 2,232/7,207 (31.0) 136/2,106 (6.5) 2,368/9,313 (25.4)
Baseline eGFR 60–89 mL/min/1.73 m2 4,975/7,207 (69.0) 1,970/2,106 (93.5) 6,945/9,313 (74.6)
Hypertension, present 5,019/7,207 (69.6) 1,817/2,106 (86.3) 6,836/9,313 (73.4)
Hypercholesterolemia, present 4,587/7,207 (63.6) 1,539/2,106 (73.1) 6,126/9,313 (65.8)
CHD, present 1,361/7,207 (18.9) 652/2,106 (31.0) 2,013/9,313 (21.6)
Previous stroke 301/7,207 (4.2) 149/2,106 (7.1) 450/9,313 (4.8)
Prior or current cancer 503/7,207 (7.0) 242/2,106 (11.5) 745/9,313 (8.0)
Prior or current depression 1,390/7,207 (19.3) 404/2,106 (19.2) 1,794/9,313 (19.3)
Country of birth
Europe 1,229/7,207 (17.1) 353/2,106 (16.8) 1,582/9,313 (17.0)
Australia 4,957/7,207 (68.8) 1,567/2,106 (74.4) 6,524/9,313 (70.1)
New Zealand and Polynesia 164/7,207 (2.3) 27/2,106 (1.3) 191/9,313 (2.1)
Africa and the Middle East 199/7,207 (2.8) 41/2,106 (1.9) 240/9,313 (2.6)
Asia 482/7,207 (6.7) 76/2,106 (3.6) 558/9,313 (6.0)
Americas 103/7,207 (1.4) 19/2,106 (0.9) 122/9,313 (1.3)
Inadequately described or missing 73/7,207 (1.0) 23/2,106 (1.1) 96/9,313 (1.0)
Area of residence
Major city 4,806/7,119 (67.5) 1,247/2,074 (60.1) 6,053/9,193 (65.8)
Inner regional 1,744/7,119 (24.5) 594/2,074 (28.6) 2,338/9,193 (25.4)
Outer regional and remote 569/7,119 (8.0) 233/2,074 (11.2) 802/9,193 (8.7)
Highest qualification
No school certificate 1,051/7,088 (14.8) 382/2,054 (18.6) 1,433/9,142 (15.7)
School or intermediate certificate 1,585/7,088 (22.4) 549/2,054 (26.7) 2,134/9,142 (23.3)
Higher School Certificate 727/7,088 (10.3) 210/2,054 (10.2) 937/9,142 (10.2)
Trade or apprenticeship 840/7,088 (11.9) 266/2,054 (13.0) 1,106/9,142 (12.1)
Certificate or diploma 1,374/7,088 (19.4) 344/2,054 (16.7) 1,718/9,142 (18.8)
University degree or higher 1,511/7,088 (21.3) 303/2,054 (14.8) 1,814/9,142 (19.8)
Household income
Less than $20,000 1,581/6,833 (22.2) 651/1,957 (33.2) 2,232/8,790 (25.4)
$20,000 to <$50,000 1,697/6,833 (24.8) 552/1,957 (28.2) 2,249/8,790 (25.6)
$50,000 to <$70,000 748/6,833 (10.9) 151/1,957 (7.7) 899/8,790 (10.2)
$70,000 and over 1,552/6,833 (22.7) 222/1,957 (11.3) 1,774/8,790 (20.2)
Partner status
Partner 5,458/7,154 (76.3) 1,450/2,089 (69.4) 6,908/9,243 (74.7)
No partner 1,696/7,154 (23.7) 639/2,089 (30.6) 2,335/9,243 (25.3)
Q1 (most disadvantaged) 1,526/7,064 (21.6) 524/2,056 (25.5) 2,050/9,120 (22.5)
Q2 1,300/7,064 (18.4) 421/2,056 (20.5) 1,721/9,120 (18.9)
Q3 1,297/7,064 (18.4) 371/2,056 (18.0) 1,668/9,120 (18.3)
Q4 1,196/7,064 (16.9) 320/2,056 (15.6) 1,516/9,120 (16.6)
Q5 (least disadvantaged) 1,745/7,064 (24.7) 420/2,056 (20.4) 2,165/9,120 (23.7)
Smoking status 636/7,181 (8.9) 101/2,097 (4.8) 737/9,278 (7.9)
Current 2,884/7,181 (40.2) 920/2,097 (43.9) 3,804/9,278 (41.0)
Previous 3,661/7,181 (51.0) 1,076/2,097 (51.3) 4,737/9,278 (51.1)
Never 636/7,181 (8.9) 101/2,097 (4.8) 737/9,278 (7.9)
Alcohol consumption (number of standard drinks/week)
0 2,885/7,021 (41.1) 965/2,032 (47.5) 3,850/9,053 (42.5)
1–6 1,925/7,021 (27.4) 507/2,032 (25.0) 2,432/9,053 (26.9)
7–13 1,030/7,021 (14.7) 254/2,032 (12.5) 1,284/9,053 (14.2)
14–20 635/7,021 (9.0) 160/2,032 (7.9) 795/9,053 (8.8)
Over 21 546/7,021 (7.8) 146/2,032 (7.2) 692/9,053 (7.6)
BMI
<18.5 kg/m2 40/6,698 (0.6) 11/1,927 (0.6) 51/8,625 (0.6)
18.5 to <25 kg/m2 1,267/6,698 (18.9) 329/1,927 (17.1) 1,596/8,625 (18.5)
25 to <30 kg/m2 2,497/6,698 (37.3) 741/1,927 (38.5) 3,238/8,625 (37.5)
30 to <35 kg/m2 1,746/6,698 (26.1) 513/1,927 (26.6) 2,259/8,625 (26.2)
35 to <40 kg/m2 707/6,698 (10.6) 216/1,927 (11.2) 923/8,625 (10.7)
≥40 kg/m2 441/6,698 (6.6) 117/1,927 (6.1) 558/8,625 (6.5)
Data are n/n (%) unless otherwise noted.
### Overall and Age-Stratified eGFR <60 mL/min/1.73 m2 Incidence Rate in Diabetes
Of 9,313 participants at risk, 2,106 (22.6%) developed incident eGFR <60 mL/min/1.73 m2 over a median follow-up time of 5.7 years (interquartile range [IQR], 3.0–5.9), corresponding to an incidence rate of 6.0 (95% CI 5.7–6.3) per 100 person-years.
The incidence rate was higher in older age groups, increasing from 1.5 (95% CI 1.3–1.9) per 100 person-years in participants aged 45–54 years to 26.0 (22.0–32.0) per 100 person-years in those aged 85 years and over (Table 2). The incidence rate remained constant over time except in those aged 85 years and over (Fig. 1).
Table 2
Incidence rates of eGFR <60 mL/min/1.73 m2 in those at risk stratified by age group
Age (years) | Number at risk | Number with incident eGFR <60 mL/min/1.73 m2 | Rate per 100 person-years (95% CI)
45–54 1,820 121 1.5 (1.3–1.9)
55–64 3,367 517 3.7 (3.4–4.0)
65–74 2,756 830 7.6 (7.1–8.1)
75–84 1,230 558 15 (13.0–16.0)
85 and over 140 80 26 (22.0–32.0)
* Incidence rates calculated using Poisson regression incorporate follow-up time and will differ from manual calculation using numbers at risk and developing incident eGFR <60 mL/min/1.73 m2.
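As the footnote notes, the published rates come from Poisson regression that incorporates each participant's follow-up time. For readers who want to see the shape of such a calculation, the sketch below computes a crude rate with an exact Poisson confidence interval; the person-years total is a made-up placeholder (only the event count is taken from the text), so it will not reproduce the published estimates.

```python
# Illustrative sketch only: a crude incidence rate per 100 person-years with an
# exact Poisson 95% CI. The person-years figure is a hypothetical placeholder;
# the published rates were obtained from Poisson regression models instead.
from scipy.stats import chi2

events = 2106            # incident eGFR <60 mL/min/1.73 m2 cases (from the text)
person_years = 35_000.0  # hypothetical total follow-up time

rate = 100.0 * events / person_years

alpha = 0.05
lower = chi2.ppf(alpha / 2, 2 * events) / 2           # exact Poisson bounds on the count
upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2

print(f"rate = {rate:.1f} per 100 person-years "
      f"(95% CI {100 * lower / person_years:.1f}-{100 * upper / person_years:.1f})")
```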
Figure 1
Kaplan-Meier curves of the time to incident eGFR <60 mL/min/1.73 m2 in the overall cohort as well as stratified by age. Kaplan-Meier curves do not start at 0 because some participants meet the criteria for diabetes and CKD at the same time or at very close time intervals.
### Associations of Incident eGFR <60 mL/min/1.73 m2 in Diabetes
In both sex-adjusted (Supplementary Table 2) and fully adjusted multivariable models, age was independently associated with incident eGFR <60 mL/min/1.73 m2 (HR 1.23 per 5-year increase; 95% CI 1.19–1.26). In fully adjusted analyses, geographical remoteness of residence was also predictive of eGFR <60 mL/min/1.73 m2; compared with living in a major city, living in an inner regional city was associated with a higher incidence (1.14; 1.03–1.26) as was living in an outer regional city (1.36; 1.17–1.58).
Compared with a normal-range BMI (18.5–24.9 kg/m2), BMI values in the overweight class (HR 1.20; 95% CI 1.05–1.37), obese class I (1.32; 1.14–1.53), obese class II (1.52; 1.26–1.83), and obese class III (1.44; 1.16–1.80) ranges were all predictive in a graded fashion (Fig. 2).
Figure 2
Multivariable Cox regression analyses for baseline variables as associations for the development of incident eGFR <60 mL/min/1.73 m2 among those with diabetes. *Surrounding Islands: New Caledonia, Papua New Guinea, Solomon Islands, Vanuatu, Cook Islands, Fiji, Niue, Samoa, Tokelau, Tonga, Tuvalu. †As defined by the Australian Bureau of Statistics. ‡Normal weight = BMI 18.5 to <25 kg/m2, underweight = BMI <18.5 kg/m2, overweight = BMI 25 to <30 kg/m2, obese class I = BMI 30 to <35 kg/m2, obese class II = BMI 35 to <40 kg/m2, obese class III = BMI ≥40 kg/m2. §Baseline eGFR (continuous), eGFR (mL/min/1.73 m2).
In regard to comorbidities at baseline, the presence of hypertension (HR 1.52; 95% CI 1.33–1.73), coronary heart disease (1.13; 1.02–1.24), cancer (1.30; 1.14–1.50), and depression or anxiety (1.14; 1.01–1.27) also predicted incident eGFR <60 mL/min/1.73 m2. The presence of diabetes for more than 5 years was also predictive (1.37; 1.24–1.52) (Fig. 2).
Factors that were associated with a lower incidence of eGFR <60 mL/min/1.73 m2 included a higher baseline eGFR (HR 0.93; 95% CI 0.92–0.93), having a partner (0.90; 0.81–0.99), and alcohol consumption of <20 standard drinks per week (Fig. 2).
Several demographic and socioeconomic variables were not predictive, including sex, annual household income, highest qualification, area-level quintile of disadvantage, and smoking status.
In our Australian population-based cohort of participants with diabetes aged 45 years and over, the age-adjusted incidence rate of eGFR <60 mL/min/1.73 m2 was 6.0 new cases per 100 person-years between 2006 and 2014. This represents an updated real-world estimate of the burden of CKD in NSW, Australia. In a fully adjusted multivariable model, eGFR <60 mL/min/1.73 m2 was independently associated with sociodemographic variables of increasing age and geographical remoteness, and the clinical variables of elevated BMI, hypertension, coronary heart disease, depression, cancer, and a diabetes duration over 5 years.
The incidence rate of 6.0 (95% CI 5.7–6.3) new cases in 100 person-years in our diabetes cohort is higher than that reported in other population-based studies of adults in regions such as North America (11,12,25), Australia (26), Scandinavia (27), and Europe (28,29). In these studies, estimates of incident CKD ranged between 2.0 and 3.0 new cases per 100 person-years, despite considerable variations in terms of data sources and study design (e.g., national surveys, registries, and randomized controlled trials). In contrast to our study that used linked clinical data and ensured complete follow-up of our entire cohort of participants with diabetes, the National Health and Nutrition Survey (NHANES), which used repeated cross-sectional cohorts of adults over a 20-year period (11,12,25), and the Australian Diabetes, Obesity and Lifestyle Study (AusDiab), which followed a longitudinal cohort of adults over 25 years (26), were limited by steadily declining response rates, which limit their validity and representativeness. Similarly, the Swedish National Diabetes Registry of 3,667 people over the age of 30 years with diabetes recruited from around 95% of hospital-based outpatient clinics and 60% of primary health care centers (27) required participants to be alive at 5 years of follow-up, essentially selecting out a healthier diabetes cohort.
There are several possible contributors to the higher incidence rate observed in our study compared with the published estimates. First, our study uses real-world clinical data from multiple data sources to ascertain eGFR <60 mL/min/1.73 m2 status. This approach may more comprehensively capture incident CKD than methodologies that rely on self-report or repeated biochemical testing limited to those attending follow-up study visits. Moreover, the use of routinely collected real-world data reduces the amount of healthy volunteer bias, which could otherwise lead to an underestimate of disease burden. Second, in contrast to other studies, our cohort was 5–10 years older than other cohorts (25–27), which is a factor that has been continually shown to be associated with a higher incidence of CKD (28–32). Third, in contrast to studies (28,30,33) that investigate CKD incidence among incident cases of diabetes, our study cohort consisted of participants with both incident and prevalent diabetes, thus including people with more advanced disease. Finally, previous published estimates of incidence and prevalence of CKD in diabetes have used a number of definitions and various equations, each with their limitations. We defined CKD as an eGFR of <60 mL/min/1.73 m2 using the CKD-EPI equation. The CKD-EPI equation has been shown to provide a better estimate of early CKD and may, thus, result in a higher estimate of CKD incidence (34).
Our study confirmed some previously identified associations of CKD in diabetes suggesting irreversible factors or factors where effective mitigation strategies are yet to be identified and implemented. Advancing age continues to be an established association of CKD in diabetes (28–32). The incidence rate of CKD increased almost fivefold in NSW adults with diabetes aged 85 years and over compared with those in younger age categories. However, it is unclear whether this is a reflection of the high competing risk of mortality in this group, leading to an inflation of the incidence rate estimate, or whether other factors are at play. Other factors that continue to predict CKD in people with diabetes, despite being potentially amenable to mitigation, include obesity (31,35) and high blood pressure (31).
Some factors do not universally predict CKD in all diabetes cohorts. Socioeconomic markers, such as income, educational level (31), and social disadvantage (36), have predicted CKD in diabetes in other settings but not in our setting of contemporary NSW, Australia. Living remotely, compared with living in a major city, was independently associated with incident eGFR <60 mL/min/1.73 m2 in our study. Australia is serviced by a government-funded universal healthcare system, which may mitigate the impact of many socioeconomic factors. Nevertheless, it is important to note that around 90% of the Australian population is urbanized (Australian Bureau of Statistics, 2016) and situated on the coastal fringes of the continent, while most of the remaining land mass is relatively arid and sparsely populated. As a result, the higher risk of eGFR <60 mL/min/1.73 m2 in those living remotely might be due to poorer access to health care and preventative programs.
Our study found no sex differences in the risk of CKD in those with diabetes, whereas previous studies have found both an increased risk (27,31,37,38) and a reduced risk (30) in both sexes. Possible reasons for these disparate results might lie in the inherent health behaviors that are influenced by the societal roles ascribed to different sexes across cultures and countries. These factors are harder to adjust for in multivariable modeling and raise the possibility of unmeasured confounding.
Our study found that a history of cancer at baseline predicted incident eGFR <60 mL/min/1.73 m2. The epidemiology of cancer and CKD is not well understood. Cancer patients may be exposed to various pathways through which they might accumulate kidney damage. There are direct nephrotoxic effects of anticancer treatments and radiological contrast material for imaging studies for cancer diagnosis, staging, and monitoring as well as direct effects of tumors involving the kidneys either extrinsically through compression or intrinsically through primary or metastatic disease. CKD has significant implications for cancer patients. Most notably, it may confer an increased cancer-specific mortality independent of age and cancer type (39). Taken together, these results suggest that those with diabetes who are receiving treatment for cancer warrant closer monitoring for the development of CKD.
We found that the presence of baseline depression and anxiety predicted incident disease. A recent prospective cohort study in U.S. veterans with diabetes found a similar association (8), despite our study cohort being younger with a lower comorbid burden. It is unclear whether this association is entirely due to residual confounding owing to disease severity and comorbid burden or whether a causal relationship exists, such as adverse changes in health behaviors due to depression.
Our study has many strengths. It is a large, prospective, longitudinal population-based cohort using real-world clinical data to assess CKD incidence and risk factors. Clinical data contained within the linked data sources are comprehensive and continuously collected. We defined eGFR <60 mL/min/1.73 m2 using routinely collected laboratory data, which is more sensitive than relying on administrative codes (40). Similarly, comorbidities were defined by multiple data sources, including medication dispensation and ICD-10 codes, a more robust method than self-report alone. Our methodology ensured complete follow-up without face-to-face visits, reducing participant burden and susceptibility to healthy volunteer bias seen in more traditional longitudinal cohort studies.
Our study has important limitations. The reliance on routinely collected serum creatinine results for the identification and classification of CKD could have introduced an indication bias. We explored this further by examining the baseline characteristics of the diabetes cohort who had a serum creatinine measure and compared this with the baseline characteristics of those who did not. We found that the groups were well matched for all baseline characteristics, including age, sex, geography, markers of socioeconomic status, and comorbidities (Supplementary Table 3). The only difference between those with a serum creatinine and those without was that there was a greater proportion with incident diabetes and a diabetes duration >5 years, which is likely reflective of appropriate testing rather than a selection bias toward a healthier or more advantaged population. Nonetheless, the possibility of some residual indication bias remains.
Our cohort is limited to those who present for serum creatinine testing, who may represent as few as 50% of individuals with diabetes each year, despite annual CKD screening being stipulated by national and international guidelines (41). Hence, the incidence of CKD found in our study may be an overestimation of the incidence in the general population. Our inclusion criteria only required one creatinine measurement in the 3 years prior to recruitment to the 45 and Up Study because we did not wish to further select our cohort. As a result, we were unable to account for fluctuations in eGFR measurements over the follow-up period, which may have led to misclassification of CKD. Furthermore, only limited linked albuminuria or proteinuria assessments were available, preventing meaningful incorporation of these values in our model. This may have resulted in an underestimation of our incidence rate because we may have missed participants who met the criteria for CKD due to the presence of albuminuria. Our cohort was limited to adults over the age of 45 years, making our results less generalizable to younger age groups. However, given the association of age with hospitalizations, our results are very relevant to health service planning. Our definition of diabetes using routinely collected data precluded the ability to distinguish between type 1 and type 2 diabetes, which may have confounded our estimate. Finally, our study was conducted in the context of a universal health care system that may not be generalizable to other settings.
### Conclusion and Future Research
In a population-based study of participants aged over 45 years in Australia, the incidence rate of eGFR <60 mL/min/1.73 m2 in diabetes is high and increases with advancing age. In a universal health care setting, the comorbidities of hypertension, CHD, cancer, and depression or anxiety were all found to be associated with a higher risk of developing CKD, while socioeconomic markers of disadvantage were not. This study demonstrates the role that routine clinical data can have for monitoring disease incidence over time.
Future research should investigate the implications of this high incidence rate on individual health outcomes, such as disease progression, as well as on future health service planning.
Acknowledgments. This research was completed using data collected through the 45 and Up Study (www.saxinstitute.org.au). The 45 and Up Study is managed by the Sax Institute in collaboration with major partner Cancer Council NSW, and partners, the National Heart Foundation of Australia (NSW Division), NSW Ministry of Health, NSW Government Family & Community Services-Ageing, Carers and the Disability Council NSW, and the Australian Red Cross Blood Service. The authors thank the many thousands of people participating in the 45 and Up Study.
The study sponsors were not involved in the content development of EXTEND45. None of the funding bodies were involved in the design of the study and analysis and interpretation of the study. The opinions expressed in this article are those of the authors and do not necessarily represent those of Eli Lilly (Australia) Pty Ltd., Merck Sharp & Dohme (Australia) Pty Ltd., or Amgen (Australia) Pty Ltd.
Author Contributions. L.S., A.K., M.Ju., and M.Ja. were responsible for the study concept and design. L.S. drafted the manuscript. All authors gave critical revision of the manuscript for important intellectual content and interpreted the data. C.H. and K.R. provided statistical analysis. C.H. and M.Ja. supervised the manuscript. L.S. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Prior Presentation. The study was accepted as an abstract to the Annual Scientific Meeting of the Australian and New Zealand Society of Nephrology, Sydney, NSW, Australia, 2018, and the annual scientific meeting of the American Society of Nephrology, San Diego, CA, 2018.
1. Zimmet P, Alberti KG, Shaw J. Global and societal implications of the diabetes epidemic. Nature 2001;414:782–787
2. Atkins RC, Zimmet P. World Kidney Day 2010: diabetic kidney disease--act now or pay later. Am J Kidney Dis 2010;55:205–208
3. International Diabetes Federation. IDF Diabetes Atlas. 8th ed. Cho NH, Ed. Brussels, Belgium, International Diabetes Federation, 2017
4. Svensson MK, Cederholm J, Eliasson B, Zethelius B, Gudbjörnsdottir S; Swedish National Diabetes Register. Albuminuria and renal function as predictors of cardiovascular events and mortality in a general population of patients with type 2 diabetes: a nationwide observational study from the Swedish National Diabetes Register. Diab Vasc Dis Res 2013;10:520–529
5. Ninomiya T, Perkovic V, de Galan BE, et al. Albuminuria and kidney function independently predict cardiovascular and renal outcomes in diabetes. J Am Soc Nephrol 2009;20:1813–1821
6. Go AS, Chertow GM, Fan D, McCulloch CE, Hsu CY. Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization [published correction appears in N Engl J Med 2008;18:4]. N Engl J Med 2004;351:1296–1305
7. Toyama T, Furuichi K, Ninomiya T, et al. The impacts of albuminuria and low eGFR on the risk of cardiovascular death, all-cause mortality, and renal events in diabetic patients: meta-analysis. PLoS One 2013;8:e71810
8. Novak M, Mucsi I, Rhee CM, et al. Increased risk of incident chronic kidney disease, cardiovascular disease, and mortality in patients with diabetes with comorbid depression. Diabetes Care 2016;39:1940–1947
9. Zimbudzi E, Lo C, Ranasinha S, et al. Predictors of health-related quality of life in patients with co-morbid diabetes and chronic kidney disease. PLoS One 2016;11:e0168491
10. Mujais SK, Story K, Brouillette J, et al. Health-related quality of life in CKD patients: correlates and evolution over time. Clin J Am Soc Nephrol 2009;4:1293–1301
11. Bailey RA, Wang Y, Zhu V, Rupnow MFT. Chronic kidney disease in US adults with type 2 diabetes: an updated national estimate of prevalence based on Kidney Disease: Improving Global Outcomes (KDIGO) staging. BMC Res Notes 2014;7:415
12. Wu B, Bell K, Stanford A, et al. Understanding CKD among patients with T2DM: prevalence, temporal trends, and treatment patterns—NHANES 2007–2012. BMJ Open Diab Res Care 2016;4:e000154
13. Banks E, Redman S, Jorm L, et al.; 45 and Up Study Collaborators. Cohort profile: the 45 and Up Study. Int J Epidemiol 2008;37:941–947
14. Australian Government, Department of Health. Pharmaceutical benefits scheme [Internet], 2016. Available from https://www.health.gov.au/pbs. Accessed 14 January 2016
15. Australian Government, Department of Health. Medicare benefits schedule [Internet], 2017. Available from https://www.mbsonline.gov.au/internet/mbsonline/publishing.nsf/Content/Home. Accessed 12 January 2016
16. Australian Consortium for Classification Development. The International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification. Darlinghurst, NSW, Australia, Independant Hospital Pricing Authority, 2017
17. Sayers A, Ben-Shlomo Y, Blom AW, Steele F. Int J Epidemiol 2016;45:954–964
18. American Diabetes Association. 2. Classification and diagnosis of diabetes. Diabetes Care 2015;38:S8–S16
19. Levey AS, Stevens LA, Schmid CH, et al.; CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration). A new equation to estimate glomerular filtration rate. Ann Intern Med 2009;150:604–612
20. Australian Bureau of Statistics. Census of Population and Housing: Socio-Economic Indexes for Areas (SEIFA), Australia, 2016. cat no. 2033.0.55.001 [Internet], 2016. Available from https://www.abs.gov.au/ausstats/abs@.nsf/mf/2033.0.55.001. Accessed 1 October 2018
21. van Buuren S. Multiple imputation of discrete and continuous data by fully conditional specification. Stat Methods Med Res 2007;16:219–242
22. Rubin DB. Multiple Imputation for Non-response in Surveys. New York, John Wiley, 1987
23. Schemper M, Smith TL. A note on quantifying follow-up in studies of failure time. Control Clin Trials 1996;17:343–346
24. Lin DY, Wei LJ, Ying Z. Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika 1993;80:557–572
25. de Boer IH, Rue TC, Hall YN, Heagerty PJ, Weiss NS, Himmelfarb J. Temporal trends in the prevalence of diabetic kidney disease in the United States. JAMA 2011;305:2532–2539
26. Tanamas SK, Magliano D, Lynch B, et al. The Australian Diabetes, Obesity and Lifestyle Study. Melbourne, Australia, Baker IDI Heart and Diabetes Institute, 2012
27. Afghahi H, Cederholm J, Eliasson B, et al. Risk factors for the development of albuminuria and renal impairment in type 2 diabetes--the Swedish National Diabetes Register (NDR). Nephrol Dial Transplant 2011;26:1236–1243
28. AI, Stevens RJ, Manley SE, Bilous RW, Cull CA, Holman RR; UKPDS GROUP. Development and progression of nephropathy in type 2 diabetes: the United Kingdom Prospective Diabetes Study (UKPDS 64). Kidney Int 2003;63:225–232
29. Salinero-Fort MA, San Andrés-Rebollo FJ, de Burgos-Lunar C, et al. Five-year incidence of chronic kidney disease (stage 3-5) and associated risk factors in a Spanish cohort: the MADIABETES Study. PLoS One 2015;10:e0122030
30. Retnakaran R, Cull CA, Thorne KI, AI, Holman RR; UKPDS Study Group. Risk factors for renal dysfunction in type 2 diabetes: U.K. Prospective Diabetes Study 74. Diabetes 2006;55:1832–1839
31. Jardine MJ, Hata J, Woodward M, et al. Prediction of kidney-related outcomes in patients with type 2 diabetes. Am J Kidney Dis 2012;60:770–778
32. Dunkler D, Gao P, Lee SF, et al.; ONTARGET and ORIGIN Investigators. Risk prediction for early CKD in type 2 diabetes. Clin J Am Soc Nephrol 2015;10:1371–1379
33. Gatwood J, Chisholm-Burns M, Davis R, et al. Evidence of chronic kidney disease in veterans with incident diabetes mellitus. PLoS One 2018;13:e0192712
34. Matsushita K, Mahmoodi BK, Woodward M, et al.; Chronic Kidney Disease Prognosis Consortium. Comparison of risk prediction using the CKD-EPI equation and the MDRD study equation for estimated glomerular filtration rate. JAMA 2012;307:1941–1951
35. Hsu CY, McCulloch CE, Iribarren C, Darbinian J, Go AS. Body mass index and risk for end-stage renal disease. Ann Intern Med 2006;144:21–28
36. Nicholas SB, K, Norris KC. Socioeconomic disparities in chronic kidney disease. 2015;22:6–15
37. Yu MK, Katon W, Young BA. Associations between sex and incident chronic kidney disease in a prospective diabetic cohort. Nephrology (Carlton) 2015;20:451–458
38. Luk AO, So WY, Ma RC, et al.; Hong Kong Diabetes Registry. Metabolic syndrome predicts new onset of chronic kidney disease in 5,829 patients with type 2 diabetes: a 5-year prospective analysis of the Hong Kong Diabetes Registry. Diabetes Care 2008;31:2357–2361
39. Launay-Vacher V, Janus N, Spano J, et al. Impact of renal insufficiency on cancer survival: results of the IRMA-2 study. J Clin Oncol 2009;27(Suppl. 15):9585
40. Vlasschaert ME, Bejaimal SA, Hackam DG, et al. Validity of administrative database coding for kidney disease: a systematic review. Am J Kidney Dis 2011;57:29–43
41. Manns L, Scott-Douglas N, Tonelli M, et al. A population-based analysis of quality indicators in CKD. Clin J Am Soc Nephrol 2017;12:727–733
|
2022-06-27 15:58:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.327226847410202, "perplexity": 11342.214972768124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00012.warc.gz"}
|
https://h2bedi.wordpress.com/2015/07/17/splitting-field/
|
# Splitting Field
Splitting fields are associated with a polynomial: the splitting field is the smallest field in which the polynomial splits into linear factors. Since every polynomial splits over the complex numbers, we just adjoin its roots to the base field to get the splitting field. If two fields are isomorphic via $\varphi:F_1\simeq F_2$, then we also have an isomorphism $\frac{F_1[x]}{p(x)}\simeq\frac{F_2[x]}{\varphi p(x)}$ for an irreducible polynomial $p(x)$.
We also show existence of splitting fields via induction.
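For a quick worked example (standard facts, stated here only for illustration): over $\mathbb{Q}$ the splitting field of $x^2-2$ is $\mathbb{Q}(\sqrt{2})$, a degree-2 extension. For $x^3-2$, adjoining the real root $\sqrt[3]{2}$ alone is not enough, since the other roots $\omega\sqrt[3]{2}$ and $\omega^2\sqrt[3]{2}$ involve a primitive cube root of unity $\omega$; the splitting field is $\mathbb{Q}(\sqrt[3]{2},\omega)$, of degree $6$ over $\mathbb{Q}$.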
|
2018-10-20 17:51:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651200532913208, "perplexity": 279.2239270662247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00505.warc.gz"}
|
https://www.jmneedhamart.co.uk/photo_16857419.html
|
Well, this drawing turned out to be prophetic.
These 'stop' signs usually refer to flu season. And that was what they referred to when I drew this picture. But now, they all remind us of another more pressing problem.
(I blurred the lower right corner to remove a note with somebody's name as I don't want to post identifying info online.)
|
2021-08-02 08:22:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8490515947341919, "perplexity": 1666.8627117732128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154310.16/warc/CC-MAIN-20210802075003-20210802105003-00687.warc.gz"}
|
https://www.fiberoptics4sale.com/blogs/wave-optics/junction-photodiodes
|
# Junction Photodiodes
This is a continuation from the previous tutorial - photoconductive detectors.
Every junction diode has a photoresponse that can be utilized for optical detection. Junction photodiodes are the most commonly used photodetectors in the photonics industry.
They can take many forms, including semiconductor homojunctions, semiconductor heterojunctions, and metal-semiconductor junctions.
Similarly to that of a photoconductor, the photoresponse of a photodiode results from the photo generation of electron-hole pairs.
In contrast to photoconductors, which can be of either intrinsic or extrinsic type, a photodiode is normally of intrinsic type, in which electron-hole pairs are generated through band-to-band optical absorption.
Therefore, the threshold photon energy of a semiconductor photodiode is the bandgap energy of its active region:
$\tag{14-83}E_\text{th}=E_\text{g}$
Junction photodiodes cover a wide spectral range from ultraviolet to infrared. All of the semiconductor materials used for intrinsic photoconductors discussed in the photoconductive detectors tutorial can be used for photodiodes with similar spectral characteristics.
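As a quick numerical check of (14-83), the threshold condition translates into a long-wavelength cutoff $$\lambda_\text{th}=hc/E_\text{g}$$. The short Python sketch below, using approximate room-temperature bandgap values chosen only for illustration, reproduces the familiar cutoffs near 1.1 μm for Si and 0.87 μm for GaAs.

```python
# Illustrative sketch: long-wavelength cutoff implied by E_th = E_g (Eq. 14-83).
# Bandgap values are approximate room-temperature figures, quoted only for illustration.
HC_OVER_E = 1.2398  # um * eV, so lambda_cutoff (um) = 1.2398 / E_g (eV)

bandgaps_eV = {"Si": 1.12, "GaAs": 1.42, "Ge": 0.66, "In0.53Ga0.47As": 0.75}

for material, Eg in bandgaps_eV.items():
    cutoff_um = HC_OVER_E / Eg
    print(f"{material:>15s}: Eg = {Eg:.2f} eV  ->  cutoff wavelength ~ {cutoff_um:.2f} um")
```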
Figure 14-15 shows the spectral responsivity of representative photodiodes as a function of optical wavelength at 300 K.
All junction photodiodes share some basic principles and characteristics. Therefore, we first consider a simple p-n homojunction photodiode for a general discussion of the common principles and characteristics. Specific characteristics of photodiodes with different structures are discussed later in this tutorial.
The general characteristics of a semiconductor p-n homojunction in the absence of optical illumination are thoroughly discussed in the semiconductor junctions tutorial.
In a semiconductor photodiode, generation of electron-hole pairs by optical absorption can take place in any of the different regions: the depletion layer, the diffusion regions, and the homogeneous regions.
In the depletion layer of a diode, the immobile space charges create an internal electric field with a polarity from the n side to the p side, resulting in an electron energy-band gradient shown in Figure 14-16.
When an electron-hole pair is generated in the depletion layer by photoexcitation, the internal field sweeps the electron to the n side and the hole to the p side, as illustrated in Figure 14-16. This process results in a drift current that flows in the reverse direction from the cathode on the n side to the anode on the p side.
If a photoexcited electron-hole pair is generated within one of the diffusion regions at the edges of the depletion layer, the minority carrier, which is the electron in the p-side diffusion region or the hole in the n-side diffusion region, can reach the depletion layer by diffusion and then be swept to the other side by the internal field, as also illustrated in Figure 14-16. This process results in a diffusion current that also flows in the reverse direction.
For an electron-hole pair generated by absorption of a photon in the p or n homogeneous region, no current is generated because there is no internal field to separate the charges and a minority carrier generated in a homogeneous region cannot diffuse to the depletion layer before recombining with a majority carrier.
Because photons absorbed in the homogeneous regions do not generate any photocurrent, the active region of a photodiode consists of only the depletion layer and the diffusion regions. For a high-performance photodiode, the diffusion current is undesirable and is minimized. Therefore, the active region mainly consists of the depletion layer where a drift photocurrent is generated.
The external quantum efficiency, $$\eta_\text{e}$$, of a photodiode is the fraction of total incident photons absorbed in the active region that actually contribute to the photocurrent.
For a vertically illuminated photodetector, in which the optical signal reaches the active region in a direction perpendicular to the junction plane, the external quantum efficiency can be expressed as
$\tag{14-84}\eta_\text{e}=\eta_\text{coll}\eta_\text{t}\eta_\text{i}=\eta_\text{coll}(1-R)T_\text{h}(1-\text{e}^{-\alpha{W}})$
where $$\eta_\text{coll}$$ is the collection efficiency of the photogenerated carriers, $$\eta_\text{t}=(1-R)T_\text{h}$$, and $$\eta_\text{i}=1-\text{e}^{-\alpha{W}}$$.
Here, $$R$$ is the reflectivity of the incident surface of the photodiode, $$T_\text{h}$$ is the transmittance of the homogeneous region between the incident surface and the active region, $$\alpha$$ is the absorption coefficient of the active region, and $$W$$ is the width of the depletion layer that defines the active region.
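The sketch below simply evaluates (14-84) for a hypothetical vertically illuminated diode; every number in it (reflectivity, transmittance, absorption coefficient, depletion width, and the assumed unity collection efficiency) is a placeholder chosen for illustration rather than a value taken from the text.

```python
import math

# Illustrative evaluation of eta_e = eta_coll * (1 - R) * T_h * (1 - exp(-alpha*W)).
# All parameter values below are hypothetical placeholders.
eta_coll = 1.0      # assume every photogenerated carrier is collected
R        = 0.30     # surface reflectivity without antireflection coating
T_h      = 0.95     # transmittance of the thin homogeneous entry layer
alpha    = 1.0e6    # absorption coefficient of the active region, 1/m
W        = 2.0e-6   # depletion-layer (active-region) width, m

eta_i = 1.0 - math.exp(-alpha * W)          # internal quantum efficiency
eta_e = eta_coll * (1.0 - R) * T_h * eta_i  # external quantum efficiency
print(f"eta_i = {eta_i:.3f}, eta_e = {eta_e:.3f}")
```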
To improve the quantum efficiency, the surface reflectivity can be reduced by antireflection coating. Besides, the homogeneous region through which the optical signal enters must be made thin to reduce absorption of the optical signal in this region.
For a p-n photodiode that has the incident surface on the p side, the p region has to be very thin and heavily doped so that the depletion layer extends mostly into the thick and lightly doped n region.
Ultimately, the quantum efficiency of a photodiode is determined by the absorption coefficient $$\alpha$$ and the depletion layer thickness $$W$$.
Clearly, there are two contributions to the photocurrent in a junction photodiode: a drift current from photogeneration in the depletion layer and a diffusion current from photogeneration in the diffusion regions.
The homogeneous regions on the two ends of the diode act like blocking layers for the photogenerated carriers because carriers neither drift nor diffuse through these regions.
Consequently, a junction photodiode acts like a photoconductor with two blocking contacts, which is discussed in the photoconductive detectors tutorial. It has a unity gain, $$G=1$$, with the external signal current simply being equal to the photocurrent:
$\tag{14-85}i_\text{s}=i_\text{ph}=\eta_\text{e}\frac{eP_\text{s}}{h\nu}$
This photocurrent is a reverse current that depends only on the power of the optical signal.
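Equation (14-85) also fixes the responsivity of the photodiode, $$i_\text{s}/P_\text{s}=\eta_\text{e}e/h\nu$$, which is conveniently written as $$\eta_\text{e}\lambda[\mu\text{m}]/1.2398$$ in A/W. A minimal sketch with hypothetical values:

```python
# Illustrative sketch: responsivity implied by Eq. (14-85).
# The quantum efficiency and wavelength below are hypothetical placeholders.
eta_e     = 0.575    # external quantum efficiency
lambda_um = 0.85     # signal wavelength, micrometers

responsivity = eta_e * lambda_um / 1.2398   # A/W
print(f"responsivity ~ {responsivity:.3f} A/W at {lambda_um} um")
```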
When a bias voltage is applied to the photodiode, the total current of the photodiode is the combination of the diode current given in (12-117) [refer to the semiconductor junctions tutorial] and the photocurrent:
$\tag{14-86}i(V,P_\text{s})=I_0(\text{e}^{eV/ak_\text{B}T}-1)-i_\text{s}=I_0(\text{e}^{eV/ak_\text{B}T}-1)-\eta_\text{e}\frac{eP_\text{s}}{h\nu}$
which is a function of both the bias voltage $$V$$ and the optical signal power $$P_\text{s}$$.
Figure 14-17 shows the current-voltage characteristics of a junction photodiode at various power levels of optical illumination.
The dark characteristics for $$P_\text{s}=0$$ are simply those of an unilluminated diode described by (12-117) [refer to the semiconductor junctions tutorial].
According to (14-86), the current-voltage characteristics of an illuminated photodiode shift downward from the dark characteristics by the amount of the photocurrent, which is linearly proportional to the optical power but is independent of the bias voltage.
As shown in Figure 14-17, there are two modes of operation for a junction photodiode.
The device functions in photoconductive mode in the third quadrant of its current-voltage characteristics, including the short-circuit condition on the vertical axis for $$V=0$$.
It functions in photovoltaic mode in the fourth quadrant, including the open-circuit condition on the horizontal axis for $$i=0$$.
The mode of operation is determined by the external circuitry and the bias condition.
The circuitry for the photoconductive mode, shown in Figure 14-17(a), normally consists of a reverse bias voltage of $$V=-V_\text{r}$$ and a load resistance $$R_\text{L}$$.
In this mode of operation, it is necessary to keep the output voltage, $$v_\text{out}$$, smaller than the bias voltage, $$V_\text{r}$$, so that a reverse voltage is maintained across the photodiode.
This requirement can be fulfilled if the bias voltage is sufficiently large while the load resistance is smaller than the internal resistance of the photodiode in reverse bias, as illustrated with the load line in the third quadrant of Figure 14-17.
In the photoconductive mode under the conditions that $$R_\text{L}\lt{R}_\text{i}$$ and $$v_\text{out}\lt{V}_\text{r}$$, a photodiode has the following linear response before it saturates:
$\tag{14-87}v_\text{out}=(I_0+i_\text{s})R_\text{L}=\left(I_0+\eta_\text{e}\frac{eP_\text{s}}{h\nu}\right)R_\text{L}$
The circuitry for the photovoltaic mode, shown in Figure 14-17(b), does not require a bias voltage but requires a large load resistance.
In this mode of operation, the photovoltage appears as a forward bias voltage across the photodiode.
As illustrated with the load line in the fourth quadrant of Figure 14-17, the load resistance is required to be much larger than the internal resistance of the photodiode in forward bias, $$R_\text{L}\gg{R}_\text{i}$$, so that the current $$i$$ flowing through the diode and the load resistance is negligibly small.
In the photovoltaic mode under this condition, the response of the photodiode is not linear but is logarithmic to the optical signal:
$\tag{14-88}v_\text{out}\approx\frac{ak_\text{B}T}{e}\ln\left(1+\frac{i_\text{s}}{I_0}\right)=\frac{ak_\text{B}T}{e}\ln\left(1+\eta_\text{e}\frac{eP_\text{s}}{h\nu{I_0}}\right)$
where $$a$$ is a factor of a value between 1 and 2 in the diode equation of (12-117) [refer to the semiconductor junctions tutorial].
In photoconductive mode, electric energy supplied by the bias voltage source is delivered to the photodiode. In photovoltaic mode, electric energy generated by the optical signal can be extracted from the photodiode to the external circuit. Solar cells are basically semiconductor junction diodes operating in photovoltaic mode for converting solar energy into electricity.
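To see how different the two modes look numerically, the sketch below evaluates (14-87) for a small load in photoconductive mode and (14-88) at open circuit in photovoltaic mode. All device parameters are hypothetical placeholders.

```python
import math

# Illustrative comparison of Eqs. (14-87) and (14-88).
# All device parameters below are hypothetical placeholders.
e, kB, T = 1.602e-19, 1.381e-23, 300.0
h_nu   = 2.34e-19    # photon energy at ~850 nm (about 1.46 eV), J
eta_e  = 0.575       # external quantum efficiency
I0     = 1.0e-9      # reverse saturation (dark) current, A
a      = 1.5         # diode factor between 1 and 2 in Eq. (14-88)
R_L    = 50.0        # load resistance in photoconductive mode, ohms
P_s    = 10.0e-6     # optical signal power, W

i_s  = eta_e * e * P_s / h_nu                     # photocurrent, Eq. (14-85)
v_pc = (I0 + i_s) * R_L                           # photoconductive mode, Eq. (14-87)
v_pv = (a * kB * T / e) * math.log(1 + i_s / I0)  # photovoltaic open circuit, Eq. (14-88)

print(f"i_s = {i_s * 1e6:.2f} uA")
print(f"v_out, photoconductive (50-ohm load): {v_pc * 1e3:.3f} mV")
print(f"v_out, photovoltaic (open circuit):   {v_pv * 1e3:.0f} mV")
```

With these numbers the photovoltaic output is much larger in voltage, but it varies only logarithmically with optical power, whereas the photoconductive output stays linear.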
Figure 14-18(a) shows the small-signal equivalent circuit of a junction photodiode.
A photodiode has an internal resistance $$R_\text{i}$$ and an internal capacitance $$C_\text{i}$$ across its junction. Both $$R_\text{i}$$ and $$C_\text{i}$$ depend on the size and the structure of the photodiode and vary with the voltage across the junction.
In photoconductive mode under a reverse voltage, the diode has a large $$R_\text{i}$$ normally on the order of 1-100 MΩ for a typical photodiode, and a small $$C_\text{i}$$ dominated by the junction capacitance $$C_\text{j}$$, as discussed in the semiconductor junctions tutorial. As the reverse voltage increases in magnitude, $$R_\text{i}$$ increases but $$C_\text{i}$$ decreases because the depletion-layer width increases with reverse voltage.
In photovoltaic mode with a forward voltage across the junction, the diode has a large $$C_\text{i}$$ dominated by the diffusion capacitance $$C_\text{d}$$, as also discussed in the semiconductor junctions tutorial. It still has a large $$R_\text{i}$$, though smaller than that in the photodiode mode, because it operates near the open-circuit condition with a very small internal current in the fourth quadrant of the current-voltage characteristics.
The series resistance $$R_\text{s}$$ takes into account both resistance in the homogeneous regions of the diode and parasitic resistance from the contacts. The external parallel capacitance $$C_\text{p}$$ is the parasitic capacitance from the contacts and the package. The series inductance $$L_\text{s}$$ is the parasitic inductance from the wire or transmission-line connections.
The values of $$R_\text{s}$$, $$C_\text{p}$$, and $$L_\text{s}$$ can be minimized with careful design, processing, and packaging of the device.
The noise of a photodiode consists of both shot noise and thermal noise. Because a junction photodiode has a unity gain, its shot noise can be expressed as
$\tag{14-89}\overline{i_\text{n,sh}^2}=2eB(\overline{i_\text{s}}+\overline{i_\text{b}}+\overline{i_\text{d}})$
where $$i_\text{s}=i_\text{ph}$$ is the photocurrent.
The thermal noise seen at the output can be expressed as
$\tag{14-90}\overline{i_\text{n,th}^2}=\frac{4k_\text{B}TB}{R_\text{eq}}$
where $$R_\text{eq}$$ is the equivalent resistance seen at the output port.
From the circuit shown in Figure 14-18(b), we find that
$\tag{14-91}R_\text{eq}=R_\text{L}\parallel(R_\text{i}+R_\text{s})=\frac{R_\text{L}(R_\text{i}+R_\text{s})}{R_\text{L}+R_\text{i}+R_\text{s}}$
In photoconductive mode, the photodiode has a dark current of $$i_\text{d}=I_0$$ and a relatively small load resistance. In photovoltaic mode, the dark current can be eliminated, and the load resistance is required to be very large. Therefore, a photodiode is significantly noisier in photoconductive mode under a reverse bias than in photovoltaic mode without a bias.
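A rough numerical comparison of (14-89) and (14-90), with every value below a hypothetical placeholder, shows how strongly the equivalent resistance sets the noise floor.

```python
import math

# Illustrative sketch: shot noise (14-89) and thermal noise (14-90) for a
# reverse-biased photodiode. All values below are hypothetical placeholders.
e, kB, T = 1.602e-19, 1.381e-23, 300.0
B     = 1.0e9      # detection bandwidth, Hz
i_s   = 3.9e-6     # signal photocurrent, A
i_b   = 0.0        # background photocurrent, A
i_d   = 1.0e-9     # dark current, A
R_eq  = 50.0       # equivalent resistance at the output, ohms

i_n_sh = math.sqrt(2 * e * B * (i_s + i_b + i_d))   # rms shot-noise current
i_n_th = math.sqrt(4 * kB * T * B / R_eq)           # rms thermal-noise current

print(f"shot noise    ~ {i_n_sh * 1e9:.1f} nA rms")
print(f"thermal noise ~ {i_n_th * 1e9:.1f} nA rms")
```

For this particular weak signal into a broadband 50-ohm load, the thermal noise dominates the shot noise by more than an order of magnitude.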
High-speed photodiodes are by far the most widely used photodetectors in applications requiring high-speed or broadband photodetection.
The speed of a photodiode is determined by two factors: (1) the response time of the photocurrent and (2) the time constant of its equivalent circuit shown in Figure 14-18(a).
Because a photodiode operating in photovoltaic mode has a large RC time constant due to the large internal diffusion capacitance in this mode of operation, only photodiodes operating in photoconductive mode are suitable for high-speed or broadband applications.
For this reason, we only consider the speed and the frequency response for a photodiode operating in photoconductive mode.
For a photodiode operating in photoconductive mode under a reverse bias, the response time of the photocurrent to an optical signal is determined by two factors: (1) drift of the electrons and holes that are photogenerated in the depletion layer and (2) diffusion of the electrons and holes that are photogenerated in the diffusion regions.
Drift of the carriers across the depletion layer is a fast process characterized by the transit times of the photogenerated electrons and holes across the depletion layer.
In contrast, diffusion of the carriers is a slow process that is caused by optical absorption in the diffusion regions outside of the high-field depletion region. It results in a diffusion current that can last as long as the lifetime of the carriers.
The consequence is a long tail in the impulse response of the photodiode, which translates into a low-frequency falloff in the frequency response of the device. [refer to the photodetector performance parameters tutorial].
For a high-speed photodiode, this diffusion mechanism has to be eliminated by reducing the photogeneration of carriers outside the depletion layer through design of the device structure.
When the diffusion mechanism is eliminated, the frequency response of the photocurrent is only limited by the transit times of electrons and holes.
In general, the frequency response function that is dictated by the carrier transit time depends on the details of the electric field distribution and the photogenerated carrier distribution in the depletion layer.
In a semiconductor, electrons normally have a higher mobility, thus a smaller transit time, than holes. This difference has to be considered in the detailed analysis of the response speed of a photodiode.
For a good estimate of the detector frequency response, however, the average of electron and hole transit times can be used:
$\tag{14-92}\tau_\text{tr}=\frac{1}{2}(\tau_\text{tr}^\text{e}+\tau_\text{tr}^\text{h})$
In the simple case when the process of carrier drift is dominated by a constant transit time of $$\tau_\text{tr}$$, the temporal response of the photocurrent is ideally a rectangular function of duration $$\tau_\text{tr}$$. Therefore, the power spectrum of the photocurrent frequency response can be approximately expressed by
$\tag{14-93}\mathcal{R}_\text{ph}^2(f)=\left|\frac{i_\text{ph}(f)}{P_\text{s}(f)}\right|^2\approx\mathcal{R}_\text{ph}^2(0)\left(\frac{\sin\pi{f}\tau_\text{tr}}{\pi{f}\tau_\text{tr}}\right)^2$
which has a transit-time-limited 3-dB cutoff frequency
$\tag{14-94}f_\text{3dB}^\text{ph}\approx\frac{0.443}{\tau_\text{tr}}$
The frequency response of the equivalent circuit shown in Figure 14-18(a) is determined by (1) the internal resistance $$R_\text{i}$$ and capacitance $$C_\text{i}$$ of the photodiode; (2) the parasitic effects characterized by $$R_\text{s}$$, $$C_\text{p}$$, and $$L_\text{s}$$; and (3) the load resistance $$R_\text{L}$$.
Clearly, the parasitic effects must be eliminated as much as possible because they can degrade the performance of a high-speed photodiode.
A high-speed photodiode normally operates under the condition that $$R_\text{i}\gg{R}_\text{L},R_\text{s}$$. Therefore, when parasitic inductance is eliminated, the ultimate speed of the circuit is dictated by the RC time constant $$\tau_\text{RC}=(R_\text{L}+R_\text{s})(C_\text{i}+C_\text{p})$$. Its frequency response has the following power spectrum:
$\tag{14-95}\mathcal{R}_\text{ckt}^2(f)\approx\frac{\mathcal{R}_\text{ckt}^2(0)}{1+4\pi^2f^2\tau_\text{RC}^2}$
which has an RC-time-limited 3-dB cutoff frequency
$\tag{14-96}f_\text{3dB}^\text{ckt}\approx\frac{1}{2\pi\tau_\text{RC}}=\frac{1}{2\pi(R_\text{L}+R_\text{s})(C_\text{i}+C_\text{p})}$
Combining the photocurrent response and the circuit response, the total output power spectrum of an optimized photodiode operating in photoconductive mode is
$\tag{14-97}\mathcal{R}^2(f)=\mathcal{R}_\text{ph}^2(f)\mathcal{R}_\text{ckt}^2(f)=\frac{\mathcal{R}^2(0)}{1+4\pi^2{f^2}\tau_\text{RC}^2}\left(\frac{\sin\pi{f}\tau_\text{tr}}{\pi{f}\tau_\text{tr}}\right)^2$
This total frequency response has a 3-dB cutoff frequency, $$f_\text{3dB}$$, that can be found approximately by using the following rule of the sum of squares:
$\tag{14-98}\frac{1}{f_\text{3dB}^2}=\frac{1}{(f_\text{3dB}^\text{ph})^2}+\frac{1}{(f_\text{3dB}^\text{ckt})^2}$
By using (14-94) for $$f_\text{3dB}^\text{ph}$$ and (14-96) for $$f_\text{3dB}^\text{ckt}$$, the 3-dB cutoff frequency of a photodiode including transit-time and circuit limitations can be expressed approximately as
$\tag{14-99}f_\text{3dB}\approx\frac{0.443}{[\tau_\text{tr}^2+(2.78\tau_\text{RC})^2]^{1/2}}=\frac{1}{2\pi[\tau_\text{RC}^2+(0.36\tau_\text{tr})^2]^{1/2}}$
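A short sketch putting (14-94), (14-96), and (14-99) together, with hypothetical transit time, resistances, and capacitances:

```python
import math

# Illustrative sketch combining Eqs. (14-94), (14-96), and (14-99).
# Device parameters below are hypothetical placeholders.
tau_tr = 20e-12                  # average carrier transit time, s
R_L, R_s = 50.0, 5.0             # load and series resistance, ohms
C_i, C_p = 0.15e-12, 0.05e-12    # internal and parasitic capacitance, F

tau_RC = (R_L + R_s) * (C_i + C_p)

f_ph  = 0.443 / tau_tr                                       # transit-time limit
f_ckt = 1.0 / (2 * math.pi * tau_RC)                         # RC limit
f_3dB = 0.443 / math.sqrt(tau_tr**2 + (2.78 * tau_RC)**2)    # combined, Eq. (14-99)

print(f"tau_RC = {tau_RC * 1e12:.1f} ps")
print(f"f_3dB(ph) = {f_ph / 1e9:.1f} GHz, f_3dB(ckt) = {f_ckt / 1e9:.1f} GHz, "
      f"combined ~ {f_3dB / 1e9:.1f} GHz")
```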
Figure 14-19 shows the total frequency response given by (14-97) for a fixed value of $$\tau_\text{tr}$$ but for a few different values of $$\tau_\text{RC}$$. It is seen that the total frequency response is transit-time-limited when $$\tau_\text{tr}\gt2.78\tau_\text{RC}$$, but is RC-time-limited when $$\tau_\text{tr}\lt2.78\tau_\text{RC}$$.
The characteristics given by (14-97) and shown in Figure 14-19 represent the ultimate frequency response of a photodiode.
In practice, the frequency response of a photodiode can be substantially degraded by the presence of a significant diffusion current and by parasitic effects.
The optimum design of a high-speed photodiode requires (1) elimination of the diffusion current, (2) elimination of parasitic effects, and (3) equalization of the transit-time-limited bandwidth and the RC-time-limited bandwidth by making $$\tau_\text{tr}=2.78\tau_\text{RC}$$.
An important consideration for a high-speed photodiode is the bandwidth-efficiency product, $$\eta_\text{e}f_\text{3dB}$$, rather than the bandwidth alone because increasing the bandwidth can often result in a reduced efficiency in many device structures.
Many different approaches can be taken to optimize both the bandwidth and the efficiency for a maximum bandwidth-efficiency product. This issue is further addressed in the following discussions of various device structures.
## p-i-n Photodiodes
A p-i-n photodiode consists of an intrinsic region sandwiched between heavily doped $$\text{p}^+$$ and $$\text{n}^+$$ regions. Figure 14-20 shows the comparison between a p-n junction photodiode and a p-i-n photodiode.
In a p-n photodiode, the depletion-layer width and the junction capacitance both vary with reverse voltage across the junction. The electric field in the depletion layer is not uniform.
In a p-i-n photodiode, a reverse bias voltage applied to the device drops almost entirely across the intrinsic region because of high resistivity in the intrinsic region and low resistivities in the surrounding $$\text{p}^+$$ and $$\text{n}^+$$ regions.
As a result, a p-i-n photodiode has the following two important characteristics: (1) the depletion layer is almost completely defined by the intrinsic region; (2) the electric field in the depletion layer is uniform across the intrinsic region.
In practice, the intrinsic region does not have to be truly intrinsic but only has to be highly resistive. It can be either a highly resistive p region, called a $$\pi$$ region, or a highly resistive n region, called a $$\nu$$ region.
The depletion-layer width $$W$$ in a p-i-n diode does not vary significantly with bias voltage but is pretty much fixed by the thickness, $$d_\text{i}$$, of the intrinsic region so that $$W\approx{d}_\text{i}$$.
The internal capacitance of a p-i-n diode can be predetermined in the design of the device through the choice of the thickness of the intrinsic region and the device area $$\mathcal{A}$$:
$\tag{14-100}C_\text{i}=C_\text{j}=\frac{\epsilon\mathcal{A}}{W}\approx\frac{\epsilon\mathcal{A}}{d_\text{i}}$
This capacitance is fairly independent of the bias voltage; thus it remains constant in operation.
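As a numerical illustration of (14-100), with a hypothetical geometry:

```python
import math

# Illustrative sketch of Eq. (14-100): junction capacitance of a p-i-n diode
# fixed by its geometry. All values below are hypothetical placeholders.
eps_r = 12.0                       # relative permittivity, typical of Si and III-V materials
eps0  = 8.854e-12                  # vacuum permittivity, F/m
d_i   = 2.0e-6                     # intrinsic-layer thickness, m
diameter = 30e-6                   # active-area diameter, m
A = math.pi * (diameter / 2) ** 2  # junction area, m^2

C_j = eps_r * eps0 * A / d_i
print(f"A = {A * 1e12:.0f} um^2, C_j ~ {C_j * 1e15:.1f} fF")
```

Shrinking the active area or thickening the intrinsic layer lowers $$C_\text{j}$$, which is exactly the design lever discussed later in this section.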
When a reverse voltage is applied to a p-i-n diode, a uniform electric field that is linearly proportional to the reverse bias voltage exists throughout the intrinsic region:
$\tag{14-101}E\approx\frac{V_0+V_\text{r}}{W}\approx\frac{V_\text{r}}{d_\text{i}}$
for $$V_\text{r}\gg{V}_0$$.
Due to this uniform field, both electrons and holes have constant drift velocities across the depletion layer in a p-i-n photodiode.
At low and moderate fields, the drift velocities of electrons and holes both vary linearly with the electric field strength. For a p-i-n photodiode operating in this regime with a relatively low reverse bias voltage, the average carrier transit time is given by
$\tag{14-102}\tau_\text{tr}=\frac{1}{2}\left(\frac{W}{\mu_\text{e}E}+\frac{W}{\mu_\text{h}E}\right)\approx\frac{d_\text{i}^2}{2\mu{V_\text{r}}}$
where $$\mu=\mu_\text{e}\mu_\text{h}/(\mu_\text{e}+\mu_\text{h})$$.
Because the depletion-layer width in a p-i-n diode is dictated by the thickness of the intrinsic region, the transit time is inversely proportional to the bias voltage. Therefore, the response speed of a photodiode can be improved by increasing the reverse bias voltage.
At high fields, however, both electron and hole drift velocities reach their respective saturation velocities: $$v_\text{e}\approx{v}_\text{e}^\text{sat}$$ and $$v_\text{h}\approx{v}_\text{h}^\text{sat}$$, which vary little with bias voltage. For most semiconductors, this occurs at a field strength above $$100\text{ MV m}^{-1}$$ for a saturation velocity on the order of $$10^5\text{ m s}^{-1}$$.
For a p-i-n photodiode operating in this regime with a sufficiently large reverse bias voltage, electrons and holes have a constant average transit time across the depletion layer:
$\tag{14-103}\tau_\text{tr}=\frac{W}{v_\text{sat}}\approx\frac{d_\text{i}}{v_\text{sat}}$
where
$\frac{1}{v_\text{sat}}=\frac{1}{2}\left(\frac{1}{v_\text{e}^\text{sat}}+\frac{1}{v_\text{h}^\text{sat}}\right)$
So long as the reverse bias voltage is large enough to keep electrons and holes drifting at their respective saturation velocities, $$\tau_\text{tr}$$ is independent of the bias voltage and can thus be predetermined by the thickness of the intrinsic region through the design of the device.
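The two transit-time expressions can be compared numerically. The short sketch below, with assumed values for the mobilities, saturation velocities, intrinsic-layer thickness, and bias, evaluates (14-102) and (14-103) side by side.

```python
# Minimal sketch comparing the two transit-time regimes, (14-102) and (14-103).
# The mobilities, saturation velocities, thickness, and bias are assumed values.
d_i = 2e-6                      # intrinsic-layer thickness, m (assumed)
V_r = 2.0                       # reverse bias in the low-field regime, V (assumed)
mu_e, mu_h = 0.85, 0.04         # electron/hole mobilities, m^2 V^-1 s^-1 (assumed)
ve_sat, vh_sat = 1.0e5, 7.0e4   # saturation velocities, m/s (assumed)

# Low/moderate field, (14-102), with mu = mu_e*mu_h/(mu_e + mu_h)
mu = mu_e * mu_h / (mu_e + mu_h)
tau_low = d_i**2 / (2 * mu * V_r)

# High field, (14-103), with the average saturation velocity defined in the text
v_sat = 1 / (0.5 * (1 / ve_sat + 1 / vh_sat))
tau_sat = d_i / v_sat

print(f"low-field tau_tr  = {tau_low * 1e12:.1f} ps")
print(f"saturation tau_tr = {tau_sat * 1e12:.1f} ps")
```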
Compared to a p-n photodiode, in which the depletion-layer width varies with bias voltage, a p-i-n photodiode has a number of advantages because its depletion-layer width is determined by the thickness of the intrinsic region and is independent of the bias voltage.
Both the quantum efficiency and the frequency response of a p-i-n photodiode can be optimized by the geometric design of the device, whereas those of a p-n photodiode depend strongly on the bias voltage.
From the above discussions, it is clear that the transit time, the RC time constant, and the internal quantum efficiency of a vertically illuminated p-i-n photodiode, shown in Figure 14-21(a), all depend on the thickness $$d_\text{i}$$ of the intrinsic region: $$\tau_\text{tr}\propto{d}_\text{i}$$, $$C_\text{i}\propto{d}_\text{i}^{-1}$$, and $$\eta_\text{i}=1-\text{e}^{-\alpha{d_\text{i}}}$$.
For a high quantum efficiency, the thickness $$d_\text{i}$$ of the intrinsic region can be chosen to be larger than the absorption length: $$d_\text{i}\gt1/\alpha$$.
To optimize the speed of a p-i-n photodiode, both the thickness of the intrinsic region and the area of the device have to be properly chosen.
To reduce the diffusion current, $$d_\text{i}$$ can be chosen to be larger than the electron diffusion length in the $$\text{p}^+$$ region and the hole diffusion length in the $$\text{n}^+$$ region: $$d_\text{i}\gg{L}_\text{e},L_\text{h}$$.
A large $$d_\text{i}$$ reduces the RC time constant of the device by reducing $$C_\text{i}$$, but it increases the transit time $$\tau_\text{tr}$$. Because the electric field is relatively constant throughout the active region of a p-i-n photodiode, the transit time can be optimized with a chosen $$d_\text{i}$$.
Because $$C_\text{i}$$ can be reduced by reducing the device area, a p-i-n photodiode normally has an intrinsic region that has a thickness chosen to optimize the quantum efficiency and the transit time. For a high-speed p-i-n photodiode, the device area is made small enough that the RC time constant is not a limiting factor of its frequency response.
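One way to make this trade-off concrete is to apply the design condition quoted at the beginning of this discussion, $$\tau_\text{tr}=2.78\tau_\text{RC}$$, in the velocity-saturation regime. Setting $$d_\text{i}/v_\text{sat}=2.78R\epsilon\pi{r}^2/d_\text{i}$$ gives $$d_\text{i}=\sqrt{2.78R\epsilon\pi{r}^2v_\text{sat}}$$. The sketch below evaluates this balance point; all of the numbers in it are illustrative assumptions.

```python
# Hedged sketch: the d_i that balances tau_tr = 2.78 * tau_RC in the
# velocity-saturation regime, where tau_tr = d_i / v_sat and
# tau_RC = R * eps * pi * r^2 / d_i.  All numbers are illustrative assumptions.
import math

R = 50.0                      # load + series resistance, ohm (assumed)
eps = 13.0 * 8.854e-12        # permittivity, F/m (assumed)
r = 10e-6                     # device radius, m (assumed)
v_sat = 5.5e4                 # average saturation velocity, m/s (assumed)

d_opt = math.sqrt(2.78 * R * eps * math.pi * r**2 * v_sat)
tau_tr = d_opt / v_sat
tau_RC = R * eps * math.pi * r**2 / d_opt
print(f"d_opt = {d_opt * 1e6:.2f} um")
print(f"tau_tr = {tau_tr * 1e12:.1f} ps, 2.78*tau_RC = {2.78 * tau_RC * 1e12:.1f} ps")
```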
One major limitation of p-i-n photodiodes that are made of indirect-gap semiconductors, such as Si and Ge, is the small absorption coefficients of these semiconductors in the spectral regions where only indirect absorption takes place in such semiconductors.
For example, at $$\lambda=850\text{ nm}$$, the absorption coefficient at 300 K is only about $$7\times10^4\text{ m}^{-1}$$ for Si but is about $$1\times10^6\text{ m}^{-1}$$ for GaAs, even though 850 nm is farther away from the bandgap wavelength of 1.11 μm for Si than from that of 871 nm for GaAs.
This results in a low quantum efficiency, thus a small responsivity, for a Si or Ge p-i-n photodiode of even just a moderate speed because of the conflicting requirements on the thickness $$d_\text{i}$$ for reducing $$\tau_\text{tr}$$ and increasing $$\eta_\text{i}$$ in a vertical p-i-n photodiode shown in Figure 14-21(a).
One solution to this problem is provided by the lateral p-i-n geometry shown in Figure 14-21(b).
In a lateral p-i-n, both $$\tau_\text{tr}$$ and $$C_\text{i}$$ still depend on $$d_\text{i}$$ in the same manner as in a vertical p-i-n, but the internal quantum efficiency is not a function of $$d_\text{i}$$ but is a function of the trench depth $$d$$ as $$\eta_\text{i}=1-\text{e}^{-\alpha{d}}$$. Thus, $$f_\text{3dB}$$ and $$\eta_\text{i}$$ can be independently optimized by properly choosing a value of $$d_\text{i}$$ to optimize $$\tau_\text{tr}$$ and $$C_\text{i}$$ for a large $$f_\text{3dB}$$ while making a deep enough trench for a high value of $$\eta_\text{i}$$.
One additional advantage of a lateral p-i-n photodiode is that the incident optical signal does not have to pass through the homogeneous $$\text{p}^+$$ or $$\text{n}^+$$ region before it reaches the active intrinsic region, thus improving the external quantum efficiency.
This feature is significant for a homojunction p-i-n used for optical detection at short optical wavelengths, such as a Si p-i-n for blue or ultraviolet wavelengths, where the absorption coefficient is very high and the optical penetration depth is very small.
Example 14-12
A vertically illuminated InGaAs/InP p-i-n photodiode for $$\lambda=1.3$$ μm consists of a lightly doped $$\text{n}^-$$-InGaAs layer of a thickness $$d_\text{i}$$ between a thin $$\text{p}^+$$-InGaAs top layer and an $$\text{n}^+$$-InP substrate.
The device is reverse-biased at a sufficiently high bias voltage for both electrons and holes to reach their respective saturation velocities of $$v_\text{e}^\text{sat}=6.5\times10^4\text{ m s}^{-1}$$ and $$v_\text{h}^\text{sat}=4.8\times10^4\text{ m s}^{-1}$$. The permittivity of InGaAs at DC and low frequencies is $$\epsilon=14.1\epsilon_0$$. Take $$R=R_\text{L}+R_\text{s}=50$$ Ω, $$C_\text{p}=0$$, and $$L_\text{s}=0$$ for this device.
This device can be designed to be either front or back illuminated and can be antireflection coated to have a high $$\eta_\text{t}$$; meanwhile, its structure can be optimized to have a high $$\eta_\text{coll}$$.
In any event, its bandwidth-efficiency product is limited to $$\eta_\text{i}f_\text{3dB}$$ because $$\eta_\text{i}\ge\eta_\text{e}$$. This device is made to have a circular active area of a diameter $$2r$$.
Plot its 3-dB cutoff frequency, $$f_\text{3dB}$$, and the upper limit of its bandwidth-efficiency product, $$\eta_\text{i}f_\text{3dB}$$, as a function of the intrinsic layer thickness $$d_\text{i}$$ in the range of $$0\lt{d}_\text{i}\lt3$$ μm for the four different diameters of $$2r=10,20,40,$$ and $$80$$ μm.
The average transit time can be calculated using (14-103) with the following average saturation velocity for electrons and holes:
\begin{align}v_\text{sat}&=\left[\frac{1}{2}\left(\frac{1}{v_\text{e}^\text{sat}}+\frac{1}{v_\text{h}^\text{sat}}\right)\right]^{-1}=\left[\frac{1}{2}\left(\frac{1}{6.5\times10^4}+\frac{1}{4.8\times10^4}\right)\right]^{-1}\text{ m s}^{-1}\\&=5.52\times10^4\text{ m s}^{-1}\end{align}
The active area is $$\mathcal{A}=\pi{r}^2$$. The internal capacitance of the photodiode is $$C_\text{i}=\epsilon\mathcal{A}/d_\text{i}=\epsilon\pi{r}^2/d_\text{i}$$. Thus, the RC time constant $\tau_\text{RC}=RC_\text{i}=R\frac{\epsilon\pi{r}^2}{d_\text{i}}$ with $$R=50$$ Ω and $$\epsilon=14.1\epsilon_0$$.
From (14-99), we then have
$f_\text{3dB}\approx\frac{0.443}{[\tau_\text{tr}^2+(2.78\tau_\text{RC})^2]^{1/2}}=\frac{0.443}{\left\{(d_\text{i}/v_\text{sat})^2+[2.78R(\epsilon\pi{r^2}/d_\text{i})]^2\right\}^{1/2}}$
The values of $$f_\text{3dB}$$ in the range of $$0\lt{d}_\text{i}\lt3$$ μm are calculated using this relation for $$2r=10,20,40,$$ and $$80$$ μm. Then the bandwidth-efficiency product is calculated using
$\eta_\text{i}f_\text{3dB}=(1-\text{e}^{-\alpha{d_\text{i}}})f_\text{3dB}$
The values of both $$f_\text{3dB}$$ and $$\eta_\text{i}f_\text{3dB}$$ are plotted as a function of $$d_\text{i}$$ in Figure 14-22.
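A short script along the following lines can reproduce the curves of Figure 14-22. The resistance, permittivity, and saturation velocity are those quoted in the example; the absorption coefficient of InGaAs at 1.3 μm is not given in this excerpt, so the value used below is an assumed, representative number only.

```python
# Sketch reproducing the curves of Figure 14-22.  R, epsilon, and v_sat are the
# values quoted in the example; the InGaAs absorption coefficient at 1.3 um is
# not given in this excerpt, so alpha below is an assumed, representative value.
import numpy as np
import matplotlib.pyplot as plt

R = 50.0                       # ohm
eps = 14.1 * 8.854e-12         # F/m
v_sat = 5.52e4                 # m/s
alpha = 1.2e6                  # m^-1 (assumed)
d_i = np.linspace(0.05e-6, 3e-6, 400)   # intrinsic-layer thickness, m

plt.figure()
for diam_um in (10, 20, 40, 80):
    r = diam_um * 1e-6 / 2
    tau_tr = d_i / v_sat
    tau_RC = R * eps * np.pi * r**2 / d_i
    f3dB = 0.443 / np.sqrt(tau_tr**2 + (2.78 * tau_RC)**2)
    eta_i = 1 - np.exp(-alpha * d_i)
    plt.plot(d_i * 1e6, f3dB / 1e9, label=f"f_3dB, 2r = {diam_um} um")
    plt.plot(d_i * 1e6, eta_i * f3dB / 1e9, "--", label=f"eta_i*f_3dB, 2r = {diam_um} um")
plt.xlabel("d_i (um)")
plt.ylabel("GHz")
plt.legend(fontsize=7)
plt.show()
```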
From the data shown in Figure 14-22, we see that for a given device diameter there is an optimum intrinsic layer thickness of $$d_\text{opt}$$ for a maximum value of $$f_\text{3dB}$$ and a different optimum intrinsic layer thickness of $$d_\text{opt}'$$ for a maximum value of $$\eta_\text{i}f_\text{3dB}$$. We also find that $$d_\text{opt}'\gt{d}_\text{opt}$$.
The cutoff frequency is primarily limited by $$\tau_\text{RC}$$ if $$d_\text{i}\lt{d}_\text{opt}$$, whereas it is primarily limited by $$\tau_\text{tr}$$ if $$d_\text{i}\gt{d}_\text{opt}$$.
For a given device diameter, there is one possible choice of $$d_\text{i}$$ on either side of $$d_\text{opt}$$ for a sufficiently large value of $$f_\text{3dB}$$. For a desired $$f_\text{3dB}$$, the choice of $$d_\text{i}\gt{d}_\text{opt}$$ has a larger bandwidth-efficiency product than that of $$d_\text{i}\lt{d}_\text{opt}$$.
Heterojunction Photodiodes
Heterojunction structures offer additional flexibility in optimizing the performance of a photodiode.
In a heterojunction photodiode, the active region normally has a bandgap that is smaller than one or both of the homogeneous regions. A large-gap homogeneous region, which can be either the top $$\text{p}^+$$ region or the substrate $$\text{n}$$ region, serves as a window for the optical signal to enter.
The small bandgap of the active region determines the threshold wavelength, $$\lambda_\text{th}$$, of the detector on the long-wavelength side, while the large bandgap of the homogeneous window region sets a cutoff wavelength, $$\lambda_\text{c}$$, on the short-wavelength side.
For an optical signal that has a wavelength $$\lambda_\text{s}$$ in the range of $$\lambda_\text{th}\gt\lambda_\text{s}\gt\lambda_\text{c}$$, the quantum efficiency and the responsivity can be optimized.
A limiting factor for the speed of a heterojunction photodiode is the trapping of electrons at the conduction-band discontinuity and that of holes at the valence-band discontinuity. For high-speed applications, this limitation has to be removed by reducing the barrier height through compositional grading at the interface of the heterojunction.
Many III-V p-i-n photodiodes have heterojunction structures, which can be either symmetric with a small-bandgap active intrinsic region sandwiched between large-bandgap $$\text{p}^+$$ and $$\text{n}^+$$ regions, such as $$\text{p}^+$$-AlGaAs/GaAs/$$\text{n}^+$$-AlGaAs and $$\text{p}^+$$-InP/InGaAs/$$\text{n}^+$$-InP, or asymmetric with a large-bandgap $$\text{p}^+$$ or $$\text{n}^+$$ region on only one side, such as $$\text{p}^+$$-AlGaAs/GaAs/$$\text{n}^+$$-GaAs or $$\text{p}^+$$-InGaAs/InGaAs/$$\text{n}^+$$-InP.
Figure 14-23 shows some structures of heterojunction photodiodes.
Sophisticated heterojunction structures such as quantum wells and strained quantum wells, as well as quantum wires and quantum dots, are also used for the active region of photodiodes.
Such quantum structures have the advantage of high peak absorption coefficients, which lead to an improved quantum efficiency for a given thickness of the active region. They are often used for improving the bandwidth-efficiency products of high-speed photodetectors.
Schottky Photodiodes
The property of the interface between a metal and a semiconductor depends on the work functions of the metal and the semiconductor, $$e\phi_\text{m}$$ and $$e\phi_\text{s}$$, respectively, and the type of semiconductor.
The metal-semiconductor junction is an ohmic contact without a potential barrier if $$\phi_\text{s}\gt\phi_\text{m}$$ in the case of an n-type semiconductor or $$\phi_\text{s}\lt\phi_\text{m}$$ in the case of a p-type semiconductor.
A Schottky barrier of a height $$E_\text{b}=e(\phi_\text{m}-\chi)$$ for electrons to flow from the metal to the semiconductor exists at the metal-semiconductor junction if $$\phi_\text{s}\lt\phi_\text{m}$$ in the case of an n-type semiconductor, as shown in Figure 14-24(a).
A Schottky barrier of a height $$E_\text{b}=E_\text{g}-e(\phi_\text{m}-\chi)$$ for holes to flow from the metal to the semiconductor exists at the metal-semiconductor junction if $$\phi_\text{s}\gt\phi_\text{m}$$ in the case of a p-type semiconductor, as shown in Figure 14-24(b).
The general characteristics of a Schottky junction are similar to those of a p-n junction.
The characteristics of a Schottky junction formed between a metal and an n-type semiconductor can be approximated by those of a $$\text{p}^+-\text{n}$$ junction with a built-in potential of $$V_0=\phi_\text{m}-\phi_\text{s}$$, as shown in Figure 14-24(a).
Similarly, a Schottky junction between a metal and a p-type semiconductor can be considered as an $$\text{n}^+-\text{p}$$ junction with a built-in potential of $$V_0=\phi_\text{s}-\phi_\text{m}$$, as shown in Figure 14-24(b).
Therefore, the depletion-layer width $$W$$ of a Schottky junction and its dependence on bias voltage can be found by using (12-100) [refer to the semiconductor junctions tutorial] and by taking $$N_\text{a}\gg{N}_\text{d}$$ in the case of an n-type semiconductor or $$N_\text{d}\gg{N}_\text{a}$$ in the case of a p-type semiconductor.
The junction capacitance simply has the same form as that of a p-n junction given in (12-119) [refer to the semiconductor junctions tutorial].
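The barrier-height expressions above follow the ideal Schottky picture. The following sketch evaluates them for an assumed metal work function and semiconductor electron affinity; real barriers often deviate from these values because of Fermi-level pinning at the interface.

```python
# Hedged illustration of the ideal (Schottky-Mott) barrier heights quoted above.
# The work function, electron affinity, and bandgap are textbook-like values
# chosen for illustration; real barriers often deviate because of Fermi-level
# pinning at the metal-semiconductor interface.
phi_m = 5.1    # metal work function, eV (gold-like, assumed)
chi = 4.07     # semiconductor electron affinity, eV (GaAs-like, assumed)
E_g = 1.42     # semiconductor bandgap, eV (GaAs-like, assumed)

E_b_n = phi_m - chi            # barrier for electrons, metal on an n-type semiconductor
E_b_p = E_g - (phi_m - chi)    # barrier for holes, metal on a p-type semiconductor
print(f"E_b (n-type) = {E_b_n:.2f} eV, E_b (p-type) = {E_b_p:.2f} eV")
```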
It is also possible for a Schottky diode to function like a p-i-n diode by inserting a lightly doped $$\text{n}^-$$ semiconductor layer between a metal and a heavily doped $$\text{n}^+$$ semiconductor region. In such a structure, the metal functions as a $$\text{p}^+$$ homogeneous region, and the $$\text{n}^-$$ layer functions as the intrinsic region in a p-i-n diode.
The depletion layer, which exists almost entirely in the $$\text{n}^-$$ region, broadens as the reverse bias voltage increases until it reaches the metal at a voltage known as the punchthrough voltage. When the reverse bias voltage is larger than the punchthrough voltage, the depletion-layer width of such a Schottky diode becomes independent of the voltage and is simply defined by the thickness of the $$\text{n}^-$$ layer.
The characteristics and the equivalent circuit of a Schottky photodiode are similar to those of a semiconductor junction photodiode discussed above.
A Schottky photodiode can also operate in either photoconductive mode or photovoltaic mode, but it normally operates in photoconductive mode in most of its applications for the same reasons as discussed above for other junction photodiodes.
A Schottky photodiode operating in photoconductive mode can have a very high speed, particularly when an n-type semiconductor is used. Because the optical signal is absorbed in a thin layer at the junction interface, only the majority carriers, which are electrons in the case of an n-type semiconductor, have to drift across the active region. A well-designed Schottky photodiode can reach an intrinsic frequency bandwidth as high as 100 GHz.
The spectral response of a Schottky photodiode depends on whether an optical signal is absorbed by the semiconductor or by the metal.
If the optical signal is absorbed by the semiconductor, the spectral characteristics of a Schottky photodiode is the same as that of a semiconductor junction photodiode with a threshold photon energy defined by the bandgap of the absorbing semiconductor: $$h\nu\gt{E}_\text{th}=E_\text{g}$$.
This process takes place when the Schottky photodiode has a thin, semi-transparent metallic layer to allow the optical signal to enter with little attenuation before it reaches the depletion layer. This is the normal mode of operation for a high-efficiency, high-speed Schottky photodiode.
Absorption of a photon by the metal at the junction interface can also produce a photoresponse if the photon has sufficient energy to excite an electron over the Schottky barrier.
For a Schottky photodiode to operate in this mode, the metallic layer has to be thick and absorbing, but the absorption has to take place at the junction interface. The spectral response range in this mode of operation is then $$E_\text{b}\lt{h\nu}\lt{E}_\text{g}$$ for the optical signal to enter from the semiconductor side without being absorbed by the semiconductor. A Schottky photodiode in this mode is useful as an infrared detector, but its efficiency is low because a metal does not absorb light efficiently.
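The internal-photoemission response window $$E_\text{b}\lt{h\nu}\lt{E}_\text{g}$$ translates directly into a wavelength window through $$\lambda\text{ (μm)}=1.2398/E\text{ (eV)}$$. The sketch below does this conversion for an assumed barrier height and bandgap.

```python
# Sketch converting the internal-photoemission window E_b < h*nu < E_g into a
# wavelength window, using lambda (um) = 1.2398 / E (eV).  The barrier height and
# bandgap below are illustrative assumptions.
E_b = 0.5     # Schottky barrier height, eV (assumed)
E_g = 1.12    # semiconductor bandgap, eV (Si-like, assumed)

lam_long = 1.2398 / E_b     # long-wavelength threshold set by the barrier, um
lam_short = 1.2398 / E_g    # short-wavelength cutoff set by the bandgap, um
print(f"response window: {lam_short:.2f} um < lambda < {lam_long:.2f} um")
```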
Example 14-13
An InGaAs/InP Schottky photodiode has a structure similar to that of the InGaAs/InP p-i-n photodiode considered in Example 14-12 above, but it has a metallic layer in place of the $$\text{p}^+$$ layer of the p-i-n photodiode.
The thickness of the $$\text{n}^-$$ layer is $$d_\text{i}=1$$ μm. The diameter of the device is $$2r=12$$ μm. It is back illuminated through the InP substrate. The device is biased above the punchthrough voltage, and the electrons have reached their saturation velocity.
(a) What is the spectral response range of this photodiode at 300 K?
(b) Find the 3-dB cutoff frequency of this photodiode if $$R=R_\text{L}+R_\text{s}=50$$ Ω and $$C_\text{p}=0$$.
(a)
The spectral response range of this back-illuminated photodiode is limited at the short-wavelength end by a cutoff wavelength $$\lambda_\text{c}$$ determined by the bandgap of the InP window layer because an optical signal has to pass through the InP substrate to reach the InGaAs active layer.
It is limited at the long-wavelength end by the threshold wavelength $$\lambda_\text{th}$$ determined by the bandgap of the InGaAs that is lattice matched to InP.
From the discussions following (12-9) [refer to the introduction to semiconductors tutorial], we find that the absorption edge of InP is at 919 nm and that of InGaAs is at 1.65 μm. Therefore, the spectral response range of this Schottky photodiode at 300 K is from $$\lambda_\text{c}=919$$ nm to $$\lambda_\text{th}=1.65$$ μm.
(b)
In a Schottky photodiode, only the majority carriers, which in this case are electrons, have to drift across the active region. Thus, the transit time is simply that of the electrons. From Example 14-12, we have $$v_\text{e}^\text{sat}=6.5\times10^4\text{ m s}^{-1}$$. With $$d_\text{i}=1$$ μm, we find that
$\tau_\text{tr}=\frac{d_\text{i}}{v_\text{e}^\text{sat}}=\frac{1\times10^{-6}}{6.5\times10^4}=15.4\text{ ps}$
With $$\epsilon=14.1\epsilon_0$$ from Example 14-12, we find that the internal capacitance of the device for $$d_\text{i}=1$$ μm and $$2r=12$$ μm is
$C_\text{i}=\frac{\epsilon\pi{r^2}}{d_\text{i}}=\frac{14.1\times8.85\times10^{-12}\times\pi\times(12\times10^{-6}/2)^2}{1\times10^{-6}}\text{ F}=14.1\text{ fF}$
With $$R=R_\text{L}+R_\text{s}=50$$ Ω, the RC time constant
$\tau_\text{RC}=RC_\text{i}=50\times14.1\times10^{-15}\text{ s}=705\text{ fs}$
Therefore, the 3-dB cutoff frequency of this photodiode is
$f_\text{3dB}=\frac{0.443}{[(15.4\times10^{-12})^2+(2.78\times705\times10^{-15})^2]^{1/2}}\text{ Hz}=28.5\text{ GHz}$
Because $$\tau_\text{tr}\gg\tau_\text{RC}$$ for this device, $$f_\text{3dB}$$ is almost entirely determined by the electron transit time.
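The numbers in this example can be verified with a few lines of code; the sketch below simply re-evaluates the transit time, capacitance, RC time constant, and cutoff frequency from the values quoted above.

```python
# Quick numerical check of the Example 14-13 results.
import math

d_i = 1e-6                   # n^- layer thickness, m
r = 6e-6                     # device radius (2r = 12 um), m
eps = 14.1 * 8.85e-12        # permittivity, F/m
R = 50.0                     # ohm
v_e = 6.5e4                  # electron saturation velocity, m/s

tau_tr = d_i / v_e                      # expected ~15.4 ps
C_i = eps * math.pi * r**2 / d_i        # expected ~14.1 fF
tau_RC = R * C_i                        # expected ~705 fs
f3dB = 0.443 / math.hypot(tau_tr, 2.78 * tau_RC)
print(f"tau_tr = {tau_tr * 1e12:.1f} ps, C_i = {C_i * 1e15:.1f} fF, "
      f"tau_RC = {tau_RC * 1e15:.0f} fs, f_3dB = {f3dB / 1e9:.1f} GHz")
```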
Photodiodes with Multipass Structures
A high-speed photodiode requires a thin depletion layer for a short transit time, but, according to (14-84), the quantum efficiency of the photodiode decreases as the depletion-layer width $$W$$ is reduced. Therefore, there is a trade-off between its frequency bandwidth and quantum efficiency.
To optimize both the bandwidth and the efficiency of a high-speed photodiode, a large bandwidth-efficiency product $$\eta_\text{e}f_\text{3dB}$$ is desired.
From (14-84), it can be seen that the external quantum efficiency of a photodiode can be increased without changing the depletion-layer width by (1) antireflection coating the incident surface to make $$R=0$$ and (2) using a heterostructure with a nonabsorbing large-bandgap homogeneous region for $$T_\text{h}=1$$.
Many different device structures have been developed to increase the bandwidth-efficiency product further beyond that obtained with these two simple steps. They can be divided into three basic categories:
1. Vertically illuminated photodetectors with multiple optical passes through the active region, which are discussed here;
2. Laterally illuminated photodetectors such as the lateral p-i-n photodetectors, which are discussed earlier;
3. Guided-wave photodetectors, which are discussed in a later tutorial.
Figure 14-25 shows three approaches to increasing the bandwidth-efficiency product of a photodiode by increasing its quantum efficiency without increasing the thickness of its active region.
The simple double-pass structure, shown in Figure 14-25(a), directs the optical signal to pass through the active region twice with a back reflector of reflectivity $$R_\text{b}$$, which can be simply the substrate electrode if the substrate is transparent. With this structure, the quantum efficiency can be improved by a factor close to $$1+R_\text{b}$$ if the absorbing active region has a thickness of $$W\lt\alpha^{-1}$$.
To increase the quantum efficiency further, the effective optical path length in the active region can be increased without increasing the physical thickness of the active region by using the refracting-facet structure shown in Figure 14-25(b).
In this structure, the top electrode reflects the optical signal for a second pass through the active region to keep the advantage of a double-pass structure, but the optical signal passes through the active region at an angle $$\theta$$ for a total effective path length of $$2W/\sin\theta$$. Therefore, the quantum efficiency is further increased over that of the simple double-pass structure shown in Figure 14-25(a). A bandwidth-efficiency product around 40 GHz has been obtained for refracting-facet photodiodes.
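To see how much these multipass geometries help, the sketch below compares the internal quantum efficiency of a single pass, a simple double pass with back reflectivity $$R_\text{b}$$, and a refracting-facet path of total length $$2W/\sin\theta$$; the absorption coefficient, layer thickness, reflectivity, and angle used are illustrative assumptions.

```python
# Sketch comparing the internal quantum efficiency of a single pass, a simple
# double pass with back reflectivity R_b, and a refracting-facet path of total
# length 2W/sin(theta).  alpha, W, R_b, and theta are illustrative assumptions.
import math

alpha = 7e5                  # absorption coefficient, m^-1 (assumed)
W = 0.5e-6                   # active-layer thickness, m (assumed)
R_b = 0.95                   # back-reflector reflectivity (assumed)
theta = math.radians(30)     # propagation angle in the facet structure (assumed)

eta_single = 1 - math.exp(-alpha * W)
# the second pass absorbs part of what the back reflector returns
eta_double = eta_single + R_b * math.exp(-alpha * W) * eta_single
# refracting facet: two slanted passes with total path 2W/sin(theta)
eta_facet = 1 - math.exp(-alpha * 2 * W / math.sin(theta))

print(f"single pass     : {eta_single:.2f}")
print(f"double pass     : {eta_double:.2f}  (~(1 + R_b) x single when alpha*W << 1)")
print(f"refracting facet: {eta_facet:.2f}")
```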
To push the quantum efficiency close to unity in a high-speed photodiode with a very thin active region, a resonant-cavity-enhanced structure shown in Figure 14-25(c) can be used.
This structure consists of both a front and a back reflector to form a resonant cavity. It functions in a manner similar to that of a VCSEL by forming a standing wave with its high-intensity crest located at the thin absorbing active region.
By using DBR reflectors of a reflectivity greater than 99% for a high-$$Q$$ cavity, a quantum efficiency greater than 90% can be achieved with this scheme. A bandwidth-efficiency product around 20 GHz has been obtained for resonant-cavity-enhanced p-i-n and Schottky photodiodes.
The resonant-cavity-enhanced structure is highly wavelength selective because of its resonance nature. This wavelength selectivity is a disadvantage for general applications because of its narrow optical bandwidth, but it is a useful feature for applications in wavelength-selective detection systems such as wavelength-division multiplexing systems.
The next tutorial covers the topic of avalanche photodiodes.
http://www.gsd.uab.cat/index.php?view=detail&id=1382&option=com_simplecalendar&Itemid=43&lang=en
# Complexity and Simplicity in the dynamics of Totally Transitive graph maps II
Date:
02.05.16
Times:
15:30
Place:
UAB - Dept. Matemàtiques (C1/-128)
Speaker:
Lluis Alsedà
University:
Universitat Autònoma de Barcelona
#### Abstract:
Transitivity, the existence of periodic points, and positive topological entropy can be used to characterize complexity in dynamical systems. It is known that for graphs that are not trees, for every $\varepsilon > 0$ there exist (complicated) totally transitive maps (hence with cofinite set of periods) such that the topological entropy is smaller than $\varepsilon$ (simplicity). In the first part of the talk, delivered by Liane Bordignon, it was shown by means of examples, and for the circle, that the above scenario (for graphs that are not trees there exist relatively simple maps, with small entropy, which are totally transitive and hence robustly complicated) can be extended to the set of periods. To measure numerically the complexity of the set of periods we introduce the notion of boundary of cofiniteness, defined as the smallest positive integer $n$ such that the set of periods contains $\{n, n+1, n+2, \dots\}$. A larger boundary of cofiniteness means a simpler set of periods. With the help of the notion of boundary of cofiniteness we can state precisely what we mean by extending the entropy simplicity result to the set of periods: \emph{there exist relatively simple maps, with arbitrarily large boundary of cofiniteness (simplicity), which are totally transitive (and hence robustly complicated)}. In the first part of the talk several examples on arbitrary graphs were discussed, and it was shown that for circle maps the above statement is a theorem. In this talk we will extend this statement to the space $\sigma$ and discuss its proof. This is a good example of how the lack of knowledge about the structure of the set of periods can be overcome with appropriate simple arguments.
This is a joint ongoing work with L. Bordignon and J. Groisman.
https://internetcomputer.org/docs/current/references/cli-reference/dfx-deploy/
# dfx deploy
Use the dfx deploy command to register, build, and deploy a dapp on the local canister execution environment, on the IC or on a specified testnet. By default, all canisters defined in the project dfx.json configuration file are deployed.
This command simplifies the developer workflow by enabling you to run one command instead of running the following commands as separate steps:
dfx canister create --all
dfx build
dfx canister install --all
Note that you can only run this command from within the project directory structure. For example, if your project name is hello_world, your current working directory must be the hello_world top-level project directory or one of its subdirectories.
## Basic usage
dfx deploy [flag] [option] [canister_name]
## Flags
You can use the following optional flags with the dfx deploy command.
| Flag | Description |
| --- | --- |
| `-h`, `--help` | Displays usage information. |
| `-V`, `--version` | Displays version information. |
## Options
You can use the following options with the dfx deploy command.
| Option | Description |
| --- | --- |
| `--network <network>` | Overrides the environment to connect to. By default, the local canister execution environment is used. |
| `--argument <argument>` | Specifies an argument using Candid syntax to pass to the canister during deployment. Note that this option requires you to define an actor class in the Motoko program. |
| `--with-cycles <number-of-cycles>` | Enables you to specify the initial number of cycles for a canister in a project. |
### Arguments
You can specify the following arguments for the dfx deploy command.
| Argument | Description |
| --- | --- |
| `canister_name` | Specifies the name of the canister you want to register, build, and deploy. Note that the canister name you specify must match at least one name in the canisters section of the dfx.json configuration file for the project. If you don’t specify a canister name, dfx deploy will deploy all canisters defined in the dfx.json file. |
## Examples
You can use the dfx deploy command to deploy all or specific canisters on the local canister execution environment, on the IC or on a specified testnet.
For example, to deploy the hello project on the hypothetical ic-pubs testnet configured in the dfx.json configuration file, you can run the following command:
dfx deploy hello_backend --network ic-pubs
To deploy a project on the local canister execution environment and pass a single argument to the installation step, you can run a command similar to the following:
dfx deploy hello_actor_class --argument '("from DFINITY")'
Note that currently you must use an actor class in your Motoko dapp. In this example, the dfx deploy command specifies an argument to pass to the hello_actor_class canister. The main program for the hello_actor_class canister looks like this:
actor class Greet(name: Text) {
  public query func greet() : async Text {
    return "Hello, " # name # "!";
  };
};
You can use the dfx deploy command with the --with-cycles option to specify the initial balance of a canister created by your wallet. If you don’t specify a canister, the number of cycles you specify will be added to all canisters by default. To avoid this, specify a specific canister by name. For example, to add an initial balance of 8000000000000 cycles to a canister called "hello-assets", run the following command:
dfx deploy --with-cycles 8000000000000 hello-assets
https://proxieslive.com/tag/convert/
## How to convert a bytea column to text?
how to convert a bytea column to text in PostgreSQL so that I can read the column properly in PGADMIN?
I have the following SQL query in the PGADMIN’s query editor:
SELECT event_type, created_at, encode(metadata::bytea, 'escape') FROM public.events ORDER BY created_at DESC LIMIT 100
However, it produces an encoded column with each records more or less ressembling the following output:
3t000some_textd0some_other_textd00
How can I get rid of this encoding, so that I only see the original value of the column, in text format:
some_text some_data
What I have also tried:
SELECT event_id, event_type, created_at, decode((encode(metadata, 'escape')::text), 'escape') FROM public.events ORDER BY created_at DESC LIMIT 100
But in the above case, the query returns a decode column of type bytea and I only see the field [binary data] for each record of the column.
I have also tried the first two ansers mentioned here without success and can’t properly translate the last answer to my query.
## Is there a function to convert a trigonmetric expression to an algebraic expression suitable for ComplexPlot?
According to H. A. Priestley:
"both $$\cos\theta$$ and $$\sin\theta$$ can be written as simple algebraic functions of $$z$$":
$$\cos\theta =\frac{1}{2} \left(z + \frac{1}{z}\right) \tag1$$ $$\sin\theta =\frac{1}{2i} \left(z − \frac{1}{z}\right)\tag2$$
This is very handy, but I’m not sure what function will convert, say, $$\frac{\sinh(ax)}{\sinh(bx)}\,$$ or $$\frac{x\cos ax}{1+x^2}\coth \frac{\pi x}{4}$$ to similar algebraic functions. The only function I can find is TrigToExp, but this does not include the needed substitution $$z = \exp i\theta$$ to work with the function ComplexPlot.
Q1. Is there a function to convert a trigonmetric expression to an algebraic expression suitable for ComplexPlot?
Q2. If not what is the best way to get around this?
## How do you convert bits into a different alphabet?
I have forgotten how to do this. How do I figure out what the requirements are for a 128-bit string using a certain alphabet?
That is to say, I want to generate a UUID (128-bit) value, using only the 10 numbers for the alphabet. How many numbers do I need, and what is the general equation so I can figure this out for any alphabet of any size?
What is the equation for any n-bit value with any x-letter alphabet?
The way I do it is to guess and slowly iterate until I arrive at a close number. For powers of 10 it’s easy:
Math.pow(2, 128)  // 3.402823669209385e+38
Math.pow(10, 39)  // 1e+39
For other numbers, it takes a little more guessing. Would love to know the equation for this.
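One common way to express this (offered here as a sketch, not as the asker's own solution): an alphabet of x symbols carries log2(x) bits per symbol, so an n-bit value needs ceil(n / log2(x)) symbols.

```python
# Sketch of the general relation (not the asker's own solution): an alphabet of
# x symbols carries log2(x) bits per symbol, so an n-bit value needs
# ceil(n / log2(x)) symbols.
import math

def symbols_needed(n_bits: int, alphabet_size: int) -> int:
    return math.ceil(n_bits / math.log2(alphabet_size))

print(symbols_needed(128, 10))   # 39 decimal digits for a 128-bit UUID
print(symbols_needed(128, 16))   # 32 hex digits
print(symbols_needed(128, 62))   # 22 base-62 characters
```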
## How do you convert an NP problem which runs in O(f(x)) time in a SAT instance with O(f(x)*log(f(x))) variables in O(f(x)*log(f(x)))
I looked at the Cook’s theorem at Wikipedia which presents a way to convert any NP problem to SAT but it seems to require O(f(x)^3) variables. Is it possible to remove some of the checks in the conversion so to make it O(f(x)*log(f(x))) in variables and time?
## How should I convert this character’s level and stats into a CR?
This QA’s answers along with the DMG say that you calculate the CR for a monster based on its stats but the examples given seem to expect that all the stats line up with one another. For example, a creature at CR 10 should have a +4 prof bonus, 17 AC, 206-220 HP, +7 to hit with attacks, a save DC of 16 and do between 63-68 DPR. But if you use the table to calculate offensive and defensive CR and the results are all over the table (such as this example character) how would you turn those stats into a CR?
Some relevant stats and CR values from the table
Defensive CR:
+5 prof = CR 13-16
119 HP = CR 4
18 AC = CR 13-16
Offensive CR:
Attack bonus of +11 for main weapon = CR 21-23
Attack bonus of +10 for off hand weapon = 17-20
69 average DPR = CR 11
The DMG from what I could see also doesn’t include how attack riders such as knocking the target prone might affect CR which to me seems as though it should be considered.
## Convert Your Newsletters into Immediate Cash
One-Page Newsletters can accomplish more than Afghanistan Mobile Number List assist you with keeping in steady touch with customers and prospects and teach them to want your items and administrations. Each issue can likewise create prompt deals!
Essentially join the instructive substance of your bulletin with a month to month advancement depicted in the covering email or on the page of your site where perusers can download each issue. When you have drawn to your peruser’s advantage, offer them a connecting with following stage that quickens their advantage and urges them to purchase at the present time.
Here are some attempted and-demonstrated ways:
o Teleconferences. One of the simplest, most economical, and most impressive ways for you to adapt – or convert your pamphlet into benefits – is to welcome perusers to a free video chat where you give extra data and welcome them to examine the point and pose inquiries. Expenses are irrelevant. Use email autoresponders to convey the telephone number and an exceptional PIN (individual ID number) in addition to a minute ago updates.
o Special report. Offer a top to bottom treatment of the point shrouded in your bulletin. You can either sell it or offer it for nothing, upon email demand. You can likewise convey extra data on unlinked site pages or by means of email.
o Promotions. At the point when fitting, offer unique evaluating or terms on items or administrations related with the subject of your bulletin.
o Event showcasing. Offer free exhibitions or courses in your store, an office, or an inn meeting room. Or on the other hand, offer a free 30-minute early on meeting or phone consulta
## Does anyone know what this encoding format for passwords is? I think it is a decimal array but I can’t seem to convert it
During a penetration test, I ran across a server that was storing passwords in its database in what seems to be a binary array of sorts:
password_table 1,10,11,21,21,11,21,13,00,00,00,000 11,61,19,11,46,108,09,100 110,118,100,107,108,117,123,62,108,108,62,62
(slightly edited for confidentiality)
The server in question is a Tomcat server and the application is running a Java program. I considered that this might be an array of sorts but I can’t seem to convert these arrays into anything readable or usable. Does anyone have any ideas?
## Is there a way to store an arbitrarily big BigInt in a bit sequence, only later to convert it into a standard BigInt structure?
I am trying to imagine a way of encoding a BigInt into a bit stream, so that it is literally just a sequence of bits. Then upon decoding this bit stream, you would generate the standard BigInt sort of data structure (array of small integers with a sign). How could you encode the BigInt as a sequence of bits, and how would you decode it? I don’t see how to properly perform the bitwise manipulations or how to encode an arbitrary number in bits larger than 32 or 64. If a language is required then I would be doing this in JavaScript.
For instance, this takes bytes and converts it into a single bit stream:
function arrayOfBytesTo32Int(map) { return map[0] << 24 | map[1] << 16 | map[2] << 8 | map[3] }
How would you do that same sort of thing for arbitrarily long bit sequences?
## How to convert armour class from Pathfinder 1 to D&D 5?
My sandbox game uses some material from Pathfinder (and D&D 3 and 3.5). I run games there using D&D 5 (among other rules systems). How do I convert the armour class of a creature (especially one with strong natural armour), given that…
1. I am not interested in maintaining challenge rating or game balance. (It is a sandbox setting anyways; there is no reason for me to care about how difficult something is.) A formula that makes no use of challenge ratings, or other non-diegetic information, is preferable.
2. I do want to maintain the nature of the creature; if one is well-armoured according to one rules system, it should remain so in the other. Likewise for weak armour or excellent armour.
3. It is fine if a converted monster has different statistics than the monster as originally designed for D&D 5. However, if the monsters are conceptually similar, than their armour class should be similar, as per point two.
4. Pathfinder allows significant stacking of different types of armour. D&D 5 does not. This already solves the problem for some creatures that combine natural armour with a manufactured one; using the better is often a workable solution.
5. A solution should cover the common cases of creatures endowed with natural armour. There are always edge cases like angels with charisma as a deflection bonus, but they can be handled on a case-by-case basis after getting a handle of the general principles.
Here is what is easy to do:
• If a creature is mostly protected by armour, use the typical D&D 5 rules for that armour.
• If a creature does not have armour but relies on speed, use its dexterity to determine armour class.
• If a creature’s level of protection corresponds with that of an existing creature in D&D 5, one can simple use the same armour class. This requires the correspondence and a good working knowledge of the published monsters in D&D 5, and furthermore assumes consistency from those monsters. A method that also works for more exotic creature and does not require encyclopediac knowledge of D&D 5 bestiaries would be preferable.
In Pathfinder many monsters have quite significant natural armour bonuses. Using these as is is not reasonable; natural armour bonuses of +10 are common in Pathfinder and so are even higher bonuses, whereas armour class of 20 is quite good in D&D 5 and higher numbers are quite rare.
The main issue seems to be some kind of conversion formula that relates high natural armour in Pathfinder to armour class in D&D 5.
## How to convert FDA to context free grammar?
I have an assignment, which is solved by making an FDA out of the information given, then figuring out the CFG out of the FDA, but I’m having trouble doing that step. Any help is appreciated! Here is the picture of the exercise:
A token is dropped from A, B or C. The two keys (-.- these things) make the token go right or left depending on the position they are in (the token falls parallel to the direction of the key). When a token passes through a key, it makes it switch positions so that the next token that passes through goes in the opposite direction from the last.
What I have to do is write a program in Python that, given a string (e.g. ABCAAAC, meaning a token was dropped from A, then another from B, then another from C, etc.), determines whether or not it belongs to the language composed of the strings in which the last character (the last token dropped) falls from exit one. To do this, first I figured out the automaton that models this behaviour, and the next step would be figuring out the grammar for that language by looking at/doing something with the FDA (I have to do it this way, not with regex).
Here is the picture of the FDA, I’m not sure it’s correct:
http://mathoverflow.net/feeds/question/98787
# Lorentz quotient and orientation

MathOverflow question by Will Jagy, 2012-06-04 (http://mathoverflow.net/questions/98787/lorentz-quotient-and-orientation)

$$U \; = \; \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right),$$

Given a real oriented vector space $V$ with inner product, form the Lorentzian space $L = V \oplus U.$ Elements are of the form $(v; x,y).$ The norm on $L$ is given by
$$(v; x,y)^2 = v^2 + 2 xy.$$
We infer the inner product
$$(v_1; x_1,y_1) \cdot (v_2; x_2,y_2) = v_1 \cdot v_2 + x_1 y_2 + x_2 y_1.$$

Given any $l \in L,$ a null vector, we have $l \cdot l = 0,$ and so $l \in l^\perp.$ Furthermore, if $k \in l^\perp$ as well, then $(k + l)^2 = k^2.$ As a result, we may form the one-dimensional space $\langle l \rangle$ spanned by $l$ itself, then form another space with norm,
$$E(l) = l^\perp / \langle l \rangle.$$
We have that $E\left((0;0,1) \right) = V,$ and $E(l)$ is positive definite.

**Question:** is there a reasonably consistent way to assign an orientation to all the $E(l)?$

Motivation: the Leech lattice is chiral. That is, there is no automorph of the Leech lattice with negative determinant. All the even lattices that Pete Clark and I found are achiral; they all possess improper automorphs. So, not only are they in genera of class number one, they are in genera of proper class number one. I am trying, with a good deal of frustration, to decide whether Conway's argument, as I report in http://mathoverflow.net/questions/69444/a-priori-proof-that-covering-radius-strictly-less-than-sqrt-2-implies-class-nu really implies proper class number one, or is it just luck? After some email with Daniel Allcock, this question is a beginning. Daniel emphasizes that orientation of a lattice is an orientation of the surrounding real vector space. Conway's proof is that (in my case) every (primitive) null vector is equivalent by a sequence of Lorentz reflections in roots, which ought certainly to be said to reverse orientation on $L.$ But the $E(l)$ construction does not seem to care about that; it does not know how we got $l.$ At the end, and I may need several questions to get through this, does Conway's argument actually show proper class number one?

Very confused.
https://byjus.com/questions/what-is-the-value-of-the-resultant-magnetic-field-at-a-neutral-point/
# What is the value of the resultant magnetic field at a neutral point?
At a neutral point, the resultant magnetic field is zero. The point where the resultant magnetic field vector or intensity, that is, the vector sum of all the contributing magnetic fields, is zero is called a neutral point or null point.
### Neutral point of a magnetic field
The neutral point of a magnet is a point where the resultant magnetic field is zero. It is usually obtained when the horizontal component of the earth’s field is balanced by the field produced by the magnet, for example when the magnet lies along the magnetic meridian with its N pole pointing towards the south.

Analogously, a point where the resultant electric field is zero is also called a neutral point. For a system of two like point charges, the neutral point is located at an interior point on the line joining the two charges.
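As a worked illustration (with assumed charge values, not taken from the original page): for two like point charges q1 and q2 separated by a distance d, the field magnitudes cancel where q1/x² = q2/(d − x)², so the neutral point lies a distance x = d/(1 + √(q2/q1)) from q1.

```python
# Hedged sketch (values assumed for illustration): for two like point charges q1
# and q2 a distance d apart, the field magnitudes cancel where
# q1/x^2 = q2/(d - x)^2, giving x = d / (1 + sqrt(q2/q1)) measured from q1.
import math

def neutral_point(q1: float, q2: float, d: float) -> float:
    """Distance of the neutral point from charge q1 (like charges)."""
    return d / (1 + math.sqrt(q2 / q1))

print(neutral_point(1e-6, 4e-6, 0.3))   # 0.1 m from the 1-uC charge
```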
https://math.meta.stackexchange.com/questions/27800/community-promotion-ads-2018/27808
# Community Promotion Ads - 2018 [duplicate]
It's almost February 2018, which isn't normally when these are cycled, but for this year it will happen once again, so we'll be refreshing the Community Promotion Ads for this year now!
### What are Community Promotion Ads?
Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.
### Why do we have Community Promotion Ads?
This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:
• useful tools or resources for the mathematically inclined
• interesting articles or findings for the curious
• cool events or conferences
• anything else your community would genuinely be interested in
The goal is for future visitors to find out about the stuff your community deems important. This also serves as a way to promote information and resources that are relevant to your own community's interests, both for those already in the community and those yet to join.
### Why do we reset the ads every year?
Some services will maintain usefulness over the years, while other things will wane to allow for new faces to show up. Resetting the ads every year helps accommodate this, and allows old ads that have served their purpose to be cycled out for fresher ads for newer things. This helps keep the material in the ads relevant to not just the subject matter of the community, but to the current status of the community. We reset the ads once a year, every December.
The community promotion ads have no restrictions against reposting an ad from a previous cycle. If a particular service or ad is very valuable to the community and will continue to be so, it is a good idea to repost it. It may be helpful to give it a new face in the process, so as to prevent the imagery of the ad from getting stale after a year of exposure.
### How does it work?
The answers you post to this question must conform to the following rules, or they will be ignored.
1. All answers should be in the exact form of:
[![Tagline to show on mouseover][1]][2]
[1]: http://image-url
[2]: http://clickthrough-url
Please do not add anything else to the body of the post. If you want to discuss something, do it in the comments.
2. The question must always be tagged with the magic tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.
### Image requirements
• The image that you create must be 300 x 250 pixels, or double that if high DPI.
• Must be hosted through our standard image uploader (imgur)
• Must be GIF or PNG
• No animated GIFs
• Absolute limit on file size of 150 KB
• If the background of the image is white or partially white, there must be a 1px border (2px if high DPI) surrounding it.
### Score Threshold
There is a minimum score threshold an answer must meet (currently 6) before it will be shown on the main site.
You can check out the ads that have met the threshold with basic click stats here.
• Shouldn't there be a recommendation to provide a text version of the ad in the comments, for searchability? Aug 1, 2018 at 11:31
• Someone should change all the links in here to https:// Sep 3, 2018 at 12:07
• Hi Grace, could you take a look at this suggestion regarding the Mathematics sandbox? In particular, could you advise on whether the SE team's involvement would be required, and if there is a fix that you can simply implement at your end? (I'm pinging you here as you had helped set up the current Sandbox, and its comments section are locked.) Jul 12 at 1:45
• repost from last year Feb 1, 2018 at 21:29
• To quote Brian M. Scott: «This one’s so useful that I even donate a little to them each year» (me too). Feb 1, 2018 at 21:30
• I've always been a little puzzled by this one's popularity... how often does one need to look up some sequence of integers? Is it actually of practical use or is it mainly for sequence enthusiasts? Feb 8, 2018 at 15:36
• @rschwieb In my case, not terribly often (but then again I don't work very much), but it can be incredibly useful. Just as an example, here is a problem that got a "solution" thanks to OEIS. More generally, I think OEIS makes a nice bridge: lots of discrete things lead to integer sequences. You might work on something, get some numbers, head to OEIS, and find your work is equivalent to some other neat things (which may or may not have been more fully studied). Feb 8, 2018 at 17:08
• «Pursuing these identities (…) I found that there exists a formula of the same degree of elegance (…) whenever $d$ belongs to the following sequence of integers: $$d=3, 8, 10,14,15, 21, 24, 26, 28, 35, 36,...$$ I stared for a little while at this queer list of numbers (…) they made no sense to me. (…) So I missed the opportunity of discovering a deeper connection between modular forms and Lie algebras.» // Freeman J. Dyson, Missed opportunities, Bull. AMS 78 (1972) Feb 8, 2018 at 21:53
• @rschwieb Integer sequences are everywhere — from simple combinatorial and number theory problems to coefficients of various power series to dimensions of various objects (irreducible representations of algebras, homotopy groups of spaces, various homology groups…). OEIS helps one to guess answers (and it's usually much easier to solve a problem once you know the answer) and (sometimes) to discover some unexpected connections. Yes, in some situations it's very useful. Feb 8, 2018 at 22:09
• @GrigoryM Before commenting, I thought it was much as you described, it’s just that I couldn’t believe it would happen frequently enough to count as “incredibly useful”. Maybe once a year at most? In the whole time I’ve thought about research, only one sequence presented itself to me. I guess all it takes is one beautiful connection, though, for the tool to prove itself. Feb 8, 2018 at 23:04
• @rschwieb Well, yes, it obviously depends on what kind of mathematics you're doing (and in what style). I like enumerative & algebraic combinatorics — where OEIS is quite helpful (and from time to time I see questions on Math.SE where OEIS could have been of help to OP…) Feb 10, 2018 at 14:39
• Reposting last year's version. Jan 30, 2018 at 1:41
• Does anyone know if there’s a CoCalc ad anywhere? Searches haven’t turned anything up. I ask because the success of CoCalc (formerly SageMathCloud, if I understand correctly) is very important to William Stein Jan 30, 2018 at 22:38
• There is one now! Aug 1, 2018 at 11:27
• Repost from last year. Jan 30, 2018 at 17:17
• Reposted from yesteryear. Feb 8, 2018 at 2:21
• Oh my gosh yes, I don't think this tool gets enough credit. It's simple yet surprisingly powerful. Mar 23, 2018 at 19:22
• I think the link is broken Dec 28, 2018 at 20:57
• @MCMastery: Unfortunately, the ISC is down for maintenance indefinitely. :-( Jan 1, 2019 at 0:08
• Is the source code that this service would use if it were running at the given URL available somewhere? The calculator seems to be available here: wayback.cecm.sfu.ca/projects/ISC/ISCmain.html , but it appears to run on the server side. Jan 9, 2019 at 1:24
• Jan 31, 2018 at 21:39
• It is difficult to read "you need" over the dark red on the right side, especially when the image is smaller – Karl May 10, 2018 at 18:30
• @Karl: Then don't read it. The sentence makes sense either way. :-) May 11, 2018 at 0:41
• 503 Service Temporarily Unavailable? Apr 27, 2018 at 14:31
• It was unavailable for 13 minutes :-( Apr 27, 2018 at 14:50
• How do you know this duration? You have a timer? Apr 27, 2018 at 14:53
• I am one of the maintainers... Apr 27, 2018 at 15:04
• To add to @GrigoryM's comment concerning Freeman J. Dyson, try findstat.org/StatisticFinder/FiniteCartanTypes with ["A",1] => 3, ["A",2] => 8, ["A",3] => 15 and ["A",4] => 24. Of course, historically, this would not have been helpful to Dyson. Apr 30, 2018 at 3:24
• This is a demonstration post to indicate how this should look when an ad is posted. It also doubles as your twitter ad, but it's up to you if you wish to promote it by voting – Grace Note StaffMod Jan 29, 2018 at 5:13
• Shouldn't there be some description in the picture? I had to hover my mouse over it and decipher the url to see that what it was. I have given a course on Maxima, but I didn't recognize the logo... Feb 11, 2018 at 14:33
• This is a work with my friend. Apparently we are not big companies like the above. Feb 4, 2018 at 13:01
• There are size requirements for the images, and I do not think this meets them, unfortunately. Feb 2, 2018 at 23:17
• @pjs36 I don't know a way to fix it. Should I delete my answer? Feb 3, 2018 at 8:31
• @pjs36: Size problem fixed ! Feb 8, 2018 at 1:40
• @polfosol: It's fixed now, so, if size requirements were its only problem, this answer should garner a positive score by the first of March. Feb 8, 2018 at 2:10
|
2022-08-14 03:08:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27706024050712585, "perplexity": 1477.9859262555174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571993.68/warc/CC-MAIN-20220814022847-20220814052847-00766.warc.gz"}
|
https://encyclopediaofmath.org/wiki/Second_fundamental_form
|
# Second fundamental form
of a surface
The quadratic form in the differentials of the coordinates on the surface which characterizes the local structure of the surface in a neighbourhood of an ordinary point. Let the surface be given by the equation
$$\mathbf r = \mathbf r ( u, v),$$
where $u$ and $v$ are internal coordinates on the surface; let
$$d \mathbf r = \mathbf r _ {u} du + \mathbf r _ {v} dv$$
be the differential of the position vector $\mathbf r$ along a chosen direction $du/dv$ of displacement from a point $M$ to a point $M^\prime$ (see Fig.). Let
$$\mathbf n = \ \frac{\epsilon [ \mathbf r _ {u} , \mathbf r _ {v} ] }{| [ \mathbf r _ {u} , \mathbf r _ {v} ] | }$$
be the unit normal vector to the surface at the point $M$ (here $\epsilon = + 1$ if the vector triplet $\{ \mathbf r _ {u} , \mathbf r _ {v} , \mathbf n \}$ has right orientation, and $\epsilon = - 1$ in the opposite case). The double principal linear part $2 \delta$ of the deviation $PM^\prime$ of the point $M^\prime$ on the surface from the tangent plane at the point $M$ is
$$\textrm{ II } = 2 \delta = (- d \mathbf r , d \mathbf n ) =$$
$$= \ ( \mathbf r _ {uu} , \mathbf n ) du ^ {2} + 2 ( \mathbf r _ {uv} ,\ \mathbf n ) du dv + ( \mathbf r _ {vv} , \mathbf n ) dv ^ {2} ;$$
it is known as the second fundamental form of the surface.
Figure: s083700a
The coefficients of the second fundamental form are usually denoted by
$$L = ( \mathbf r _ {uu} , \mathbf n ),\ \ M = ( \mathbf r _ {uv} , \mathbf n ),\ \ N = ( \mathbf r _ {vv} , \mathbf n )$$
or, in tensor notation,
$$(- d \mathbf r , d \mathbf n ) = \ b _ {11} du ^ {2} + 2b _ {12} du dv + b _ {22} dv ^ {2} .$$
The tensor $b _ {ij}$ is called the second fundamental tensor of the surface.
See Fundamental forms of a surface for the connection between the second fundamental form and other surface forms.
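For a concrete illustration (not part of the encyclopedia entry), the coefficients $L = (\mathbf r_{uu}, \mathbf n)$, $M = (\mathbf r_{uv}, \mathbf n)$, $N = (\mathbf r_{vv}, \mathbf n)$ can be computed symbolically; the paraboloid below is an arbitrary example surface of my own choosing.

import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Matrix([u, v, u**2 + v**2])          # example surface r(u, v) = (u, v, u^2 + v^2)

ru, rv = r.diff(u), r.diff(v)
n = ru.cross(rv)
n = n / n.norm()                            # unit normal, orientation from r_u x r_v

L = sp.simplify(r.diff(u, 2).dot(n))        # (r_uu, n)
M = sp.simplify(r.diff(u).diff(v).dot(n))   # (r_uv, n)
N = sp.simplify(r.diff(v, 2).dot(n))        # (r_vv, n)
print(L, M, N)   # 2/sqrt(4*u**2 + 4*v**2 + 1), 0, 2/sqrt(4*u**2 + 4*v**2 + 1)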
|
2022-10-02 15:46:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9360153675079346, "perplexity": 167.09506739941574}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00756.warc.gz"}
|
https://complexanalytic.wordpress.com/2013/04/14/rationals-irrationals-and-decimals/
|
Posted by: groupact | April 14, 2013
## Rationals, Irrationals, and Decimals
In my last post I promised a follow up explaining why I feel that the concept of repeating decimals should not be introduced to students until at least Calculus. I’ve been delayed, but finally I’m ready to explain my thinking. My short explanation is that I find that my students often have a great difficulty with the concept of irrational numbers which hinders their understanding of trigonometric, inverse trigonometric, exponential, and logarithmic functions, and even just graphing polynomial functions. This difficulty is one that I fully expect–the concept of irrationals is rather abstract and filled with many subtleties that cannot be addressed–but I believe my job in developing student understanding of the concept would be easier if the students had never learned about repeating decimals. “Learning” repeating decimals pushes students to think about rationals as true numbers and irrationals as just symbols.
To see why, let us first think about what we want students to understand about irrational numbers and why. A few weeks ago I overheard a conversation between a student and another math teacher in my office. The student was explaining that $\pi$ is not really a number. It’s just a concept like $\infty$ because you can “never get there.” In the spirit of Ben Blum-Smith’s post, I want to applaud this student. He senses a connection between those two concepts that is fairly deep as you move toward higher mathematics. That being said, the student is also missing something critical. Unlike $\infty$, we want to be able to treat $\pi$ as a number in the sense that we should be able to perform arithmetic with it as a number, and it should follow all of the same rules we expect of rational numbers. Actually proving that this is true is well beyond the scope of K-12 mathematics, but intuitively this is an acceptable proposition if we think of real numbers as “points on a number line.” If we can get students to think about numbers as points on a number line I believe we have a nice balance of concreteness and abstraction.
One of the most critical ideas with fractions is that if we have two fractions with different denominators we can always find a common refinement of the number line that places both points on a number line marked by the same denominator. This makes it possible to describe the sum of two fractions (or difference of a smaller fraction from a larger fraction) using another fraction and indicates a procedure for finding the sum. Using the intuitive concept of area we can also see that the product of two fractions can be described using another fraction, and (using both ideas together) that we can describe the quotient of two positive fractions with another positive fraction. Because of all this we want students to understand that if a point on the number line can be expressed as a rational number, we want to do so. A second important realization is that any number on the number line can be approximated by a rational to within any positive length no matter how small. And if math were just a tool for science and applications, rationals would be all we need. There would be no need for irrational numbers. After all, in practical terms our measurements can only be so precise and if our computations are only approximations, that’s not a problem since we can control how good the approximations are. In fact, if we are willing to use approximations there is not even a need to consider all fractions. We can restrict our focus to decimal fractions, that is fractions where the denominator is a power of 10. As with fractions in general, since this denominator can be as large as we want, the approximations can be as good as we want. The quotient of one decimal fraction by another (non-zero) decimal fraction is no longer necessarily exactly a third decimal fraction, but it can be approximated by one. So if we only care about practical computations, we only need to worry about approximations and can just use decimal fractions (aka terminating decimals). For this reason, a focus on decimals is perfectly appropriate in a science class.
I firmly believe, though, that mathematics is not just important for its applications to the sciences. Mathematics is also an important and beautiful subject in its own right as an endeavor in our search for truth. Furthermore, a better understanding of mathematics and its reliance on abstraction and logical thinking helps make students better problem solvers and critical thinkers. An approximation is useful, but it is not a substitute for precise numbers. One divided by three is approximately .33. That’s not exact, though, because one divided by three should have the property that when we multiply it by 3 we get 1, and .33 times 3 is only .99 which is too small. I think students need to understand that rational numbers are great and that if we know how to express a point as a rational number, we generally want to do so. At some point (around middle school) students should discover the somewhat disturbing fact that there are points on the number line that we want to consider that are not exactly equal to a rational number. For example, if we draw a square with coordinates (0,0), (1,1), (0,2), and (-1,1) our intuitive sense of area should lead us to say that this square is made of 4 triangles each of which is half a unit, so the square should have area 2. Therefore the length of the side of this square should be a number which multiplies by itself to make 2. We can show, however, that this number cannot be written as a fraction. (See Kate Nowak’s lesson and comments). So here is a number which we will call $\sqrt{2}$ that is not rational, but it is still a number in that it makes sense, for example, to add it to other numbers. In particular, if we take this length and add on to it a length of 1 we have a certain point on our number line. Does this point have a name? Not yet. It can’t be a fraction, or else we could write $\sqrt{2}$ as a difference of two fractions and thus as a fraction itself. So we have to give a new name to this point. Unless somebody has a better name I propose we call it $1+\sqrt{2}$. It’s not that we CAN’T take the sum. Geometrically it makes perfect sense. It is a point on our number line, but unlike fractions where adding them always leads to another fraction with a nice ready made name, such is not always the case when doing arithmetic with irrational numbers. So that’s what I think students need to understand about irrational numbers and why. There are points on the number line that can’t be written as fractions, and we need to be aware of that because when we encounter such numbers arithmetic may look a little different (for example, the sum of two such numbers may have to be written simply as a sum), but we can still perform arithmetic and our usual rules of arithmetic will still apply.
That’s great, but what does this have to do with terminating vs. non-terminating decimals? Well, first of all, the most significant thing I want students to understand about rationals is that if you add, subtract, multiply, or divide (except by 0) two rational numbers you get another rational number. It may not be until we see irrational numbers that we realize HOW nice this is, but it is critical. This property is one that stems from thinking about rational numbers as fractions $\frac{m}{n}$. When I first ask my students what a rational number is, though, over 90% of the time I get the response that it is a number which can be written as a terminating or repeating decimal (or some incorrect variation on this decimal notion). It is not at all clear, though, that if I add $.\overline{23}$ and $.1\overline{98}$ that what I get should be another terminating or repeating decimal. It’s even less clear what happens when I multiply them. It’s also not clear what should be so special about a repeating decimal as opposed to one that doesn’t repeat. My students seem to think it’s because we can write it down (as opposed to $\pi$ which “goes on forever”) but I don’t buy that. Suppose I said that instead of putting a line over a string of numbers in a decimal, I can instead choose to put a tilde above them. This will mean that I repeat the string over and over, but each time I insert some extra zeroes. The first time I repeat I insert one 0, the next time two 0’s, and so on. So, for example $.\widetilde{23}$ would mean the number $.23023002300023....$ I can write down that number to the same extent that I can write down $.\overline{23}$, but the latter has the key property that it’s rational. If we want to consider the ability to write a number down as a decimal as an important property, then a number which can only be written as a repeating decimal really belongs on the same side of the line as irrational numbers as opposed to being lumped in with decimal fractions.
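(As a small illustration, mine rather than the author's, the first few digits of that "tilde" decimal can be generated mechanically:)

blocks = ["23"] + ["0" * k + "23" for k in range(1, 5)]   # insert 1, 2, 3, ... zeros between repeats
print("0." + "".join(blocks))                             # 0.23023002300023000023...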
Not only does this thinking cause students to miss the significance of rational numbers, it causes undue hindrance to their acceptance of irrational numbers as numbers (which in turn makes it more difficult for them to work with them in Algebra II and Trigonometry). Students are bothered by $1+\sqrt{2}$ as being an acceptable answer to a problem, or say that they can’t solve $2^{x-1}=3$ because they can’t figure out 2 raised to what number is 3. But if students can be reminded that they accepted fractions as numbers, this process of abstraction would not be a new one. When we were just dealing with integers we could add two integers and get another integer, or multiply two integers and get another integer. When it came to division, though, it was a different story. If we took 12 divided by 3, we were fine, we could write the quotient as 4. If we took 2 divided by 3, we were stuck. Once we introduced the concept of fractions, though, we were fine. The quotient is now $\frac{2}{3}$, a number that uses a symbol which must make reference to two of our previous numbers together with an extra symbol (the fraction bar). Nevertheless, we are able to accept $\frac{2}{3}$ as a number that has certain properties. In particular $\frac{2}{3}$ times 3 must be 2. If we are willing to accept $\frac{2}{3}$ as a number, it is not as far of a leap to accept $1+\sqrt{2}$ or $\log_{2} 3$ as numbers. When I try to draw on this past experience, though, I hit a bit of a wall. To my students $\frac{2}{3}$ is acceptable as a number only because we can write it as $.\overline{6}$, but what does that even mean? Suppose students were instead convinced of the idea that 2 divided by 3 could be approximated by decimals to as close as we would like, but that if we want an exact expression for this quotient we must content ourselves with something like $\frac{2}{3}$. Then perhaps they could take the same idea to arithmetic with irrational numbers. Yes, they can be approximated with decimals (and in many applied problems that would be particularly useful), but if we want exact expressions we often must be content with the use of several symbols in the same expression, and that makes it no less of a number.
|
2017-08-20 17:05:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7997662425041199, "perplexity": 196.46335629369335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106865.74/warc/CC-MAIN-20170820170023-20170820190023-00162.warc.gz"}
|
https://math.stackexchange.com/questions/2911896/jacobi-polynomials-and-primality-testing
|
# Jacobi polynomials and primality testing
Can you provide a proof or a counterexample for the claim given below ?
Inspired by Agrawal's conjecture in this paper I have formulated the following claim :
Let $$n$$ be an odd natural number greater than one . Let $$r$$ be the smallest odd prime number such that $$r \nmid n$$ and $$n^2 \not\equiv 1 \pmod r$$ . Let $$P_n^{(\alpha,\beta)}(x)$$ be Jacobi polynomial such that $$\alpha$$ , $$\beta$$ are natural numbers and $$\alpha +\beta < n$$ , then $$n$$ is a prime number if and only if $$P_n^{(\alpha,\beta)}(x) \equiv x^n \pmod {x^r-1,n}$$ .
You can run this test here.
I have tested this claim for many random values of $$n$$ , $$\alpha$$ and $$\beta$$ and there were no counterexamples .
Mathematica implementation of test :
(* n>a+b *)
n=139;
a=14;b=22;
r=3;
While[Mod[n,r]==0 || PowerMod[n,2,r]==1,r=NextPrime[r]];
If[PolynomialMod[PolynomialRemainder[JacobiP[n,a,b,x],x^r-1,x],n]-PolynomialRemainder[x^n,x^r-1,x]===0,Print["prime"],Print["composite"]];
• Instead of the $n^2\not\equiv1\mod{r}$ condition, I would write, $r\;\not\vert\;n-1,n,n+1$ – Don Thousand Sep 10 '18 at 13:03
This is a partial answer.
This answer proves that if $n$ is a prime number, then $P_n^{(\alpha,\beta)}(x) \equiv x^n \pmod {x^r-1,n}$.
Proof :
Assuming that $x$ is real and using the following expression $$P_n^{(\alpha,\beta)}(x)=\sum_{s=0}^{n}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}\left(\frac{x-1}{2}\right)^s\left(\frac{x+1}{2}\right)^{n-s}$$ we have \begin{align}&2^n\left(P_n^{(\alpha,\beta)}(x)-x^n\right) \\\\&=-2^nx^n+\sum_{s=0}^{n}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}(x-1)^s(x+1)^{n-s} \\\\&=-2^nx^n+\binom{n+\alpha}{n}(x+1)^{n}+\binom{n+\beta}{n}(x-1)^n \\&\qquad \qquad+\sum_{s=1}^{n-1}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}(x-1)^s(x+1)^{n-s} \\\\&=-2^nx^n+\binom{n+\alpha}{n}\sum_{k=0}^{n}\binom{n}{k}x^{n-k}+\binom{n+\beta}{n}\sum_{k=0}^{n}\binom nk(-1)^{k}\cdot x^{n-k} \\&\qquad \qquad+\sum_{s=1}^{n-1}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}(x-1)^s(x+1)^{n-s} \\\\&=\left(-2^n+\binom{n+\alpha}{n}+\binom{n+\beta}{n}\right)x^n+\left(\binom{n+\alpha}{n}-\binom{n+\beta}{n}\right) \\&\qquad\qquad +\binom{n+\alpha}{n}\sum_{k=1}^{n-1}\binom{n}{k}x^{n-k}+\binom{n+\beta}{n}\sum_{k=1}^{n-1}\binom nk(-1)^{k}\cdot x^{n-k} \\&\qquad \qquad+\sum_{s=1}^{n-1}\binom{n+\alpha}{n-s}\binom{n+\beta}{s}(x-1)^s(x+1)^{n-s}\end{align}
Here, we use the following facts :
• By Fermat's little theorem, $2^n\equiv 2\pmod n$.
Also, since $(n+\alpha)(n+\alpha-1)\cdots (n+1)\equiv \alpha !\pmod n$, we get $\binom{n+\alpha}{n}=\binom{n+\alpha}{\alpha}=\frac{(n+\alpha)(n+\alpha-1)\cdots (n+1)}{\alpha !}\equiv 1\pmod n$. Similarly, we get $\binom{n+\beta}{n}\equiv 1\pmod n$.
Therefore, we have $-2^n+\binom{n+\alpha}{n}+\binom{n+\beta}{n}\equiv -2+1+1\equiv 0\pmod n$.
• Since $\binom{n+\alpha}{n}\equiv 1\pmod n$ and $\binom{n+\beta}{n}\equiv 1\pmod n$, we get $\binom{n+\alpha}{n}-\binom{n+\beta}{n}\equiv 1-1\equiv 0\pmod n$.
• $\binom{n}{k}\equiv 0\pmod n$ for each $k$ such that $1\le k\le n-1$.
• Suppose that $\alpha\ge n-s$ and $\beta\ge s$. Then, we get $\alpha+\beta\ge n$ which contradicts $\alpha+\beta\lt n$. So, we have $\alpha\lt n-s$ or $\beta\lt s$.
If $\alpha\lt n-s$ with $1\le s\le n-1$, then the numerator of $\left(\binom{n+\alpha}{n-s}=\right)\frac{(n+\alpha)(n+\alpha-1)\cdots (\alpha+s+1)}{(n-s)!}$ is divisible by $n$, but the denominator isn't. So, $\binom{n+\alpha}{n-s}$ is divisible by $n$.
Similarly, if $\beta\lt s$, then $\binom{n+\beta}{s}$ is divisible by $n$.
As a result, we see that, for each $s$ such that $1\le s\le n-1$, $\binom{n+\alpha}{n-s}\binom{n+\beta}{s}$ is divisible by $n$.
Therefore, we see that there is a polynomial $f$ with integer coefficients such that $$2^n\left(P_n^{(\alpha,\beta)}(x)-x^n\right)=nf$$ from which $$P_n^{(\alpha,\beta)}(x)=x^n+(x^r-1)\times 0+n\times \frac{1}{2^n}f,$$ follows.
It follows from this and $\gcd(n,2^n)=1$ that $$P_n^{(\alpha,\beta)}(x) \equiv x^n \pmod {x^r-1,n}$$
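As a quick numerical cross-check (mine, not part of the answer above), the congruence can be verified for a small prime directly in SymPy, using the same trick as in the proof: multiplying by $2^n$ makes all coefficients integers, and since $2$ is invertible mod the odd number $n$, this does not change the congruence. The values of n, a, b below are arbitrary small choices with a + b < n.

import sympy as sp

x = sp.symbols('x')
n, a, b = 23, 4, 7                 # small example; slow for large n
r = 3
while n % r == 0 or pow(n, 2, r) == 1:
    r = int(sp.nextprime(r))       # smallest odd prime with r not dividing n and n^2 != 1 mod r

P = sp.jacobi(n, a, b, x)          # Jacobi polynomial P_n^{(a,b)}(x)
# P == x^n (mod x^r - 1, n)  iff  2^n (P - x^n) == 0 (mod x^r - 1, n)
D = sp.Poly(sp.expand(2**n * (P - x**n)), x)
rem = D.rem(sp.Poly(x**r - 1, x))
print("prime" if all(c % n == 0 for c in rem.all_coeffs()) else "composite")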
|
2020-02-24 19:16:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.986329197883606, "perplexity": 84.0060225197561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145966.48/warc/CC-MAIN-20200224163216-20200224193216-00035.warc.gz"}
|
https://infoproc.blogspot.com/2021/05/
|
## Wednesday, May 12, 2021
### Neural Tangent Kernels and Theoretical Foundations of Deep Learning
A colleague recommended this paper to me recently. See also earlier post Gradient Descent Models Are Kernel Machines.
Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function fθ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function fθ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
The results are remarkably well summarized in the wikipedia entry on Neural Tangent Kernels:
An Artificial Neural Network (ANN) with scalar output consists in a family of functions $f(\cdot,\theta) : \mathbb{R}^{n_{\mathrm{in}}} \to \mathbb{R}$ parametrized by a vector of parameters $\theta \in \mathbb{R}^{P}$.
The Neural Tangent Kernel (NTK) is a kernel $\Theta : \mathbb{R}^{n_{\mathrm{in}}} \times \mathbb{R}^{n_{\mathrm{in}}} \to \mathbb{R}$ defined by
$$\Theta(x,y;\theta) = \sum_{p=1}^{P} \partial_{\theta_p} f(x;\theta)\, \partial_{\theta_p} f(y;\theta).$$
In the language of kernel methods, the NTK $\Theta$ is the kernel associated with the feature map $x \mapsto \left(\partial_{\theta_p} f(x;\theta)\right)_{p=1,\ldots,P}$
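As a concrete (and entirely illustrative) example of the definition, here is a small numerical sketch of the empirical NTK of a one-hidden-layer tanh network at initialization, computed directly from the per-parameter gradients; the architecture, width, and 1/sqrt(width) scaling are my own choices, not taken from the paper. In the infinite-width limit the paper shows this Gram matrix converges to a deterministic limiting kernel that stays constant during training.

import numpy as np

rng = np.random.default_rng(0)
width, dim = 2048, 3
W = rng.normal(size=(width, dim))   # hidden-layer weights
b = rng.normal(size=width)          # hidden-layer biases
v = rng.normal(size=width)          # output weights

def grads(x):
    # Gradient of f(x; theta) = v . tanh(W x + b) / sqrt(width) w.r.t. all parameters.
    pre = W @ x + b
    h = np.tanh(pre)
    dv = h / np.sqrt(width)                     # d f / d v_j
    dpre = v * (1.0 - h**2) / np.sqrt(width)    # d f / d (W x + b)_j
    dW = np.outer(dpre, x)                      # d f / d W_jk
    return np.concatenate([dW.ravel(), dpre, dv])   # dpre also equals d f / d b_j

def ntk(x1, x2):
    # Theta(x1, x2; theta) = sum_p  d_p f(x1; theta) * d_p f(x2; theta)
    return grads(x1) @ grads(x2)

xs = [rng.normal(size=dim) for _ in range(4)]
K = np.array([[ntk(x1, x2) for x2 in xs] for x1 in xs])
print(np.round(K, 3))   # 4x4 empirical NTK Gram matrix at initialization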
This is a very brief (3 minute) summary by the first author:
This 15 minute IAS talk gives a nice overview of the results, and their relation to fundamental questions (both emprirical and theoretical) in deep learning.
I hope to find time to explore this in more depth. Large width seems to provide a limiting case (analogous to the large-N limit in gauge theory) in which rigorous results about deep learning can be proved.
Some naive questions:
What is the expansion parameter of the finite width expansion?
What role does concentration of measure play in the results?
Is there a similar result for the infinite connection or infinite neurons limit? (Relaxing the infinite width requirement to something like infinite volume, perhaps with a W/L threshold?)
Simplification seems to be a consequence of overparametrization. But the proof method seems to apply to a regularized (but still convex, e.g., using L1 penalization) loss function that imposes sparsity. It would be interesting to examine this specific case in more detail.
## Saturday, May 08, 2021
### Three Thousand Years and 115 Generations of 徐 (Hsu / Xu)
Over the years I have discussed economic historian Greg Clark's groundbreaking work on the persistence of social class. Clark found that intergenerational social mobility was much less than previously thought, and that intergenerational correlations on traits such as education and occupation were consistent with predictions from an additive genetic model with a high degree of assortative mating.
See Genetic correlation of social outcomes between relatives (Fisher 1918) tested using lineage of 400k English individuals, and further links therein. Also recommended: this recent podcast interview Clark did with Razib Khan.
The other day a reader familiar with Clark's work asked me about my family background. Obviously my own family history is not a scientific validation of Clark's work, being only a single (if potentially illustrative) example. Nevertheless it provides an interesting microcosm of the tumult of 20th century China and a window into the deep past...
I described my father's background in the post Hsu Scholarship at Caltech:
Cheng Ting Hsu was born December 1, 1923 in Wenling, Zhejiang province, China. His grandfather, Zan Yao Hsu was a poet and doctor of Chinese medicine. His father, Guang Qiu Hsu graduated from college in the 1920's and was an educator, lawyer and poet.
Cheng Ting was admitted at age 16 to the elite National Southwest Unified University (Lianda), which was created during WWII by merging Tsinghua, Beijing, and Nankai Universities. This university produced numerous famous scientists and scholars such as the physicists C.N. Yang and T.D. Lee.
Cheng Ting studied aerospace engineering (originally part of Tsinghua), graduating in 1944. He became a research assistant at China's Aerospace Research Institute and a lecturer at Sichuan University. He also taught aerodynamics for several years to advanced students at the air force engineering academy.
In 1946 he was awarded one of only two Ministry of Education fellowships in his field to pursue graduate work in the United States. In 1946-1947 he published a three-volume book, co-authored with Professor Li Shoutong, on the structures of thin-walled airplanes.
In January 1948, he left China by ocean liner, crossing the Pacific and arriving in San Francisco. ...
My mother's father was a KMT general, and her family related to Chiang Kai Shek by marriage. Both my grandfather and Chiang attended the military academy Shinbu Gakko in Tokyo. When the KMT lost to the communists, her family fled China and arrived in Taiwan in 1949. My mother's family had been converted to Christianity in the 19th century and became Methodists, like Sun Yat Sen. (I attended Methodist Sunday school while growing up in Ames IA.) My grandfather was a partner of T.V. Soong in the distribution of bibles in China in the early 20th century.
My father's family remained mostly in Zhejiang and suffered through the communist takeover, Great Leap Forward, and Cultural Revolution. My father never returned to China and never saw his parents again.
When I met my uncle (a retired Tsinghua professor) and some of my cousins in Hangzhou in 2010, they gave me a four volume family history that had originally been printed in the 1930s. The Hsu (Xu) lineage began in the 10th century BC and continued to my father, in the 113th generation. His entry is the bottom photo below.
Wikipedia: The State of Xu (Chinese: 徐) (also called Xu Rong (徐戎) or Xu Yi (徐夷)[a] by its enemies)[4][5] was an independent Huaiyi state of the Chinese Bronze Age[6] that was ruled by the Ying family (嬴) and controlled much of the Huai River valley for at least two centuries.[3][7] It was centered in northern Jiangsu and Anhui. ...
Generations 114 and 115:
Four volume history of the Hsu (Xu) family, beginning in the 10th century BC. The first 67 generations are covered rather briefly, only indicating prominent individuals in each generation of the family tree. The books are mostly devoted to generations 68-113 living in Zhejiang. (Earlier I wrote that it was two volumes, but it's actually four. The printing that I have is two thick books.)
## Sunday, May 02, 2021
### 40 Years of Quantum Computation and Quantum Information
This is a great article on the 1981 conference which one could say gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders.
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role.
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War.
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time.
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ...
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles.
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves.
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics.
[ Note to reader: the discussion in the last sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ]
Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...
Feynman's 1981 lecture Simulating Physics With Computers.
Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert Space (the state space of quantum mechanics) itself may be discrete.
Related:
My paper on the Margolus-Levitin Theorem in light of gravity:
We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle.
Participants in the 1981 meeting:
Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremerman, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stand Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichnaic, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)
|
2021-05-12 17:20:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4792565405368805, "perplexity": 2564.411793816754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00118.warc.gz"}
|
https://www.speedsolving.com/members/caedus.5588/recent-content
|
# Recent content by Caedus
1. ### Selling Large Cube Collection
Alright. I'll send you a PM.
2. ### Selling Large Cube Collection
As regarding "reservations" and such, since I do need to sell these quickly, whoever can come to an agreement and pay in full first will get the cubes. I do need the money, and as such would prefer not to wait several days for people who are interested, yet are not prepared to pay. EDIT: As for...
6. ### Selling Large Cube Collection
@Hovair The tiled QJ would be $5 + ~$10 shipping.
13. ### Selling Large Cube Collection
The pyraminx is clicky, but turns fairly loosely. The A-3 is the A-III, not the A3F.
14. ### Selling Large Cube Collection
The Alexander's star turns somewhat stiffly, although that's how all of them are as far as I know. The 2x2x3 turns nicely, although it can't cut corners. I'm not sure what brand it is, but I got it from Tribox, and the colors are grey vs dark grey, blue vs green and yellow vs red. I think it's...
15. ### Selling Large Cube Collection
I'm not sure what brand the pyraminx is. I think it's a QJ. @mr. giggums, I will post pictures as soon as they finish downloading off my camera EDIT: @mr. giggums The pictures are now uploaded. For the Alexander's Star I think I'd like around $25, as it's rather rare and in quite good...
|
2020-01-28 10:45:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2737063467502594, "perplexity": 5192.970084603454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00297.warc.gz"}
|
https://cs.stackexchange.com/questions/141211/finishing-at-rest-at-a-target-in-2d-space
|
# Finishing at rest at a target in 2d space
I asked a similar question here, except I forgot to specify that the final velocity must be 0.
I have 2 points in 2D space, start = $$s$$ and target = $$t$$, and a starting velocity $$v_0$$.
At each time step $$k$$, the current point $$p$$ (which starts as $$s$$ and is incrementally updated) has the velocity $$v$$ added to it, so $$p_k = p_{k-1}+v_{k-1}$$. Additionally, the velocity has some acceleration $$a$$ applied to it which we can control (although this acceleration has a maximum magnitude of $$m$$) after the position has been updated, so $$v_k = v_{k-1} + a_k$$.
The idea is to (by controlling only the acceleration) finish at the target point $$t$$ with a final velocity of $$v=(0,0)$$ (so that you actually remain stationary at the point and don't overshoot)
To clarify this problem, I have provided an example below: In this case: $$p_0 = s = (1,3)$$ $$t = (2,1)$$ $$v_0 = (1,-1)$$
(In this case, the acceleration had a maximum allowed magnitude of 1, so $$m=1$$)
1. At time step $$k=1$$, first the velocity $$v=(1,-1)$$ is applied to $$p=(1,3)$$, to get the resultant point $$p=(2,2)$$, then, the acceleration $$a=(-1,0)$$ is applied to the velocity $$v$$ to get $$v=(0,-1)$$
2. At time step $$k=2$$, first the velocity $$v=(0,-1)$$ is applied to $$p=(2,2)$$, to get the resultant point $$p=(2,1)$$, then, the acceleration $$a=(0,1)$$ is applied to the velocity $$v$$ to get $$v=(0,0)$$
3. At time step $$k=3$$, the velocity is 0 and we have finished at the target, so that was a valid path
*Note while I only used whole numbers here, floats are perfectly fine for use too
I am trying to find an algorithmic approach to this such that I can solve this in the minimum number of time steps.
To put it simply: I want to find the list of accelerations required to reach the target in the minimum number of time steps with a final velocity of (0, 0)
• Let $a_0,a_1,a_2,\dots$ denote the acceleration applied in each step. Suppose we are hoping to find a trajectory that works with $k$ steps. Then there is a trajectory that meets your conditions iff there exists $a_0,\dots,a_{k-1}$ that satisfy all of the following conditions: $$(k-1) a_0 + (k-2) a_1 + \dots + a_{k-1} = t - s - k v_0$$ $$a_0 + a_1 + \dots + a_{k-1} = -v_0$$ $$\|a_0\| \le m, \dots, \|a_{k-1}\| \le m$$ Perhaps folks on Math.SE can suggest how to check whether there is a solution for this system of vector equations? – D.W. Jun 12 at 1:28
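Following up on that system, here is a rough numerical sketch (mine, not from the thread) that simulates the exact update rule from the question and searches for the smallest feasible number of steps. SLSQP is a quick-and-dirty choice for the inner feasibility check; a dedicated convex solver would be more robust. All names and tolerances are illustrative.

import numpy as np
from scipy.optimize import minimize

def simulate(accels, s, v0):
    # Apply the question's update rule: position is updated first, then the acceleration.
    p, v = np.array(s, dtype=float), np.array(v0, dtype=float)
    for a in accels:
        p = p + v
        v = v + a
    return p, v

def feasible(k, s, t, v0, m):
    # True if some k accelerations of magnitude <= m end exactly at t with zero velocity.
    def residual(x):
        p, v = simulate(x.reshape(k, 2), s, v0)
        return np.concatenate([p - t, v])          # must be all zeros
    def worst_accel(x):
        return np.max(np.linalg.norm(x.reshape(k, 2), axis=1))
    res = minimize(worst_accel, np.full(2 * k, 0.1), method="SLSQP",
                   constraints=[{"type": "eq", "fun": residual}])
    return res.success and worst_accel(res.x) <= m + 1e-6

s, t, v0, m = np.array([1, 3]), np.array([2, 1]), np.array([1, -1]), 1.0
k = 1
while not feasible(k, s, t, v0, m):
    k += 1
print("minimum number of steps:", k)   # the worked example above needs 2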
|
2021-11-27 12:06:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8600923418998718, "perplexity": 179.00046472532705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00470.warc.gz"}
|
https://www.physicsforums.com/threads/are-tracks-in-collision-experiments-proof-of-particles.857255/
|
# Are tracks in collision experiments proof of particles?
Tags:
1. Feb 14, 2016
### A. Neumaier
I'd like to discuss the question in the title, following up on my remark quoted below.
Note that I don't want to repeat the discussion in
so maybe reread that one first!
The traditional analysis is given in the paper
N.F. Mott, The Wave Mechanics of $\alpha$-Ray Tracks, Proc. Royal Soc. London A 126 (1929), 79-84.
Last edited: Feb 14, 2016
2. Feb 14, 2016
### A. Neumaier
The reason I ask the question is that, as discussed e.g. in the context of this post of mine, there is a semiclassical treatment of the photodetection process in which a photodetector responds to a classical electromagnetic field (where the notion of a photon doesn't make sense) in the typical way that is considered as heralding photons appearing in the detector. But this is obviously not the case.
Thus the question arises where similar discrete detection events that are usually considered as showing the detection of particles can also be interpreted as responses of a quantum detector to a classical external field.
The most interesting class of such detection events (apart from photon counters) are tracks in a bubble chamber (or their modern analogues, wire detectors).
Last edited: Feb 16, 2016
3. Feb 14, 2016
### vanhees71
In the case of massive particles like an electron, I'd say you can measure the charge over mass ratio by applying an external magnetic field or you measure, e.g., the energy loss in the detector material, which is characteristic for this ratio. Perhaps you find something more concrete when googling for particle ID.
In the case of photons it's not very easy to make sure to detect only one photon. The old-fashioned treatment a la Einstein's paper of 1905 is very misleading, because indeed it's fully explainable with semiclassical modern quantum mechanics, semiclassical meaning here that the electromagnetic field is treated as a classical field and the bound electrons in the material quantized and then using first-order time-dependent perturbation theory, as detailed in my Insights article
https://www.physicsforums.com/insights/?s=sins+in+physics+didactics
To be sure to have only precisely one photon one way is to use parametric downconversion to create a polarization-entangled photon pair and detect one of the photons as a "trigger". Then you know that you have one and only one other photon.
4. Feb 14, 2016
### A. Neumaier
But this is a property of the electron field, not of a single electron. Thus one possibly has the same kind of ambiguity as in the case of photons.
5. Feb 14, 2016
### Staff: Mentor
Do they interpret, e.g. the traces in a bubble chamber after calculating expectations from the SM, or the other way round? I'm asking because I wonder how spins are 'detected'.
6. Feb 14, 2016
### jimgraber
I think all those thousands of people at CERN think that they work at a particle collider, and talk about particles all the time. Nevertheless, it is still possible that the wave picture or QFT could be more accurate than the particle picture (QM, or whatever you call it.) I think other posters have made the point that the particle picture is at least pretty good FAPP. Therefore I think it should be acknowledged that the claim that particles do not exist can only be true (if it is) in a highly technical manner of speaking and not in the ordinary meaning of the terms. This should not be interpreted to depreciate a very technical wave based explanation, particularly if it is in some way more accurate or more precise than the particle based explanation. However, there should be some proof that it is a true rival theory, not just a rival terminology.
Just my less than $.02 worth.
7. Feb 14, 2016
### strangerep
Now you've got me wondering whether the analysis in Mandel & Wolf for the flat 2D detector case could be extended to a 2nd order analysis for a 3D detector.
After all, ionization chambers can detect both gamma rays and alpha/beta rays, so why should the latter be fundamentally different in terms of particle-vs-wave-vs-field?
8. Feb 14, 2016
### Jano L.
It can be regarded as property of both.
There are many reasons electrons are considered particles rather than a field. Going back to Millikan's measurements, the oil drop was found to have only electric charge that is a multiple of the elementary charge $e$. If the electron were a field, one would expect the electric charge of the oil drop to be distributed continuously, not in multiples of $e$.
9. Feb 14, 2016
### Greg Bernhardt
10. Feb 15, 2016
### A. Neumaier
There are two rival theories: Interacting quantum field theory, where electrons are fields and particles exist only asymptotically (since Fock space is essentially an asymptotic concept), and quantum mechanics, where electrons are particles with ghostlike properties. They are considered to be compatible, but the relation between the two (via the S-matrix) is only very thinly discussed in the literature.
In quantum field theory it is impossible to speak of a sequence of single electrons moving from a source to a detector, while in quantum mechanics this is the standard picture. Thus there is something to be reconciled.
My question is whether there is actual proof that electrons (and other particles) in quantum mechanics really exist, or whether - similar to nonexistent photons detected by a photodetector coupled to an external classical electromagnetic field - they are just ghosts manifesting themselves only through the discrete responses of macroscopic quantum detectors to an electron fields.
11. Feb 15, 2016
### A. Neumaier
People also talk about photons all the time, although this is a very fleeting (and - as the semiclassical treatment of the photoeffect shows - much more questionable) concept.
Having good terminology that captures what ''really'' happens is important, I think, though not as important as having it right in the formal treatment that decides upon what can be predicted and how well.
12. Feb 15, 2016
### vanhees71
Well, that's also an interpretation as is the particle picture. Of course, by definition within relativistic QFT a particle is an asymptotic-free Fock state of definite occupation number 1, and as you write yourself in the first postings of this thread the appearance of tracks in a medium is well-understood since the early days of modern quantum theory (see the there cited paper by Mott).
If you are very precise you can argue that in a detector like a cloud or wire chamber you don't observe electrons but in-medium quasi-particles ;-)).
13. Feb 15, 2016
### A. Neumaier
Yes, but I had asked for a sequence of electrons (many, well-separated in time). There is no asymptotic picture for these, only for a single electron!
So the sequence of electrons only makes sense if you take the S-matrix from QFT and interpret the sequence of electrons in QM! Which is of course the conventional procedure but nevertheless very strange, if one thinks that QFT should be able to describe the source, the particles and the detector by a single (complicated) state of the quantum fields involved.
14. Feb 15, 2016
### vanhees71
Well, perhaps there's some way to understand the tracks of an electron in a cloud chamber using quantum electrodynamics (in the medium). What we really see are of course droplets condensing due to ionization. So one would have to calculate the condensation probability density given a single electron in the chamber.
15. Feb 15, 2016
### A. Neumaier
For a single electron, this can probably be made to work similar to Mott's analysis.
But again the problem is how to model a train of electrons in a single beam on the QFT level, which (given a single state) describes the dynamics of fields everywhere in space-time - rather than on the QM level, which (given a single state) describes what happens under temporal repetition (''identical preparation'') of the same situation.
16. Feb 15, 2016
### vanhees71
That's also an interesting question. As far as I know from talks of accelerator physics, they treat the particles in the accelerators as classical particles. This works obviously very well. I guess, in a first approximation you can just use magnetohydrodynamics or the Vlasov equation to describe the beams in an accelerator on a continuum level. Then the argument would be that you can approximate the Kadanoff-Baym equation with a Boltzmann-Vlasov equation very well.
17. Feb 15, 2016
### zonde
Are tracks in collision experiments proof of particles?
"Proof" is math term. Answer obviously is no. No observation can prove some model.
18. Feb 15, 2016
### Staff: Mentor
You can model such a sequence with suitable wave packets. If the sequence is finite (but as long as you want), the usual approach of non-interacting initial and final states with interaction in between works nicely.
I don't get the point of the discussion. In principle, it is possible to work with quantum field theory everywhere. It is also possible to use general relativity for an inclined slope problem. It is just needlessly complicated.
In particle accelerators, particles are treated as classical objects. You need some input from quantum mechanics, e.g. the power and spectrum of synchrotron radiation, but once you have those inputs you can use classical trajectories of the accelerated particles. Classical thermodynamics with time- and space-dependent external fields.
In the collision process itself, QFT is unavoidable.
After the collision, the description is (nearly) classical again: you have particles flying in different directions. Decoherence happens so quickly with every interaction that quantum effects are not relevant here. If particles decay, the actual decay process needs QFT again, but only to determine the lifetime, branching fractions, angular distributions and so on, not for the propagation of the initial or final particle. Mixing is a bit special, because you need some quantum mechanics in flight, but again you can cover that as effect based on the classical flight time.
19. Feb 15, 2016
Staff Emeritus
I was going to leave this thread alone, but to me it sounds like angels and pinheads. Of course particles have tracks and of course they exist, at least in the sense that they can be counted. On the theoretical side, anything I can care about can be calculated and compared with theory. So if this isn't completely mathematically rigorous, I don't much care. It's not the first time in my life I have done a calculation that wasn't perfectly rigorous, and I don't expect it to be the last.
20. Feb 15, 2016
### A. Neumaier
Often one can indeed do the latter. But both the Kadanoff-Baym equations and the Boltzmann-Vlasov equations are field theories in phase space, not particle theories.
Instead of particles one has only phase space densities.
Thus talking about particles seems to be simply a left-over from the 19th century when Boltzmann derived his equation from a classical particle picture.
https://www.khanacademy.org/economics-finance-domain/core-finance/derivative-securities/put-call-options/v/call-option-as-leverage
# Call option as leverage
## Video transcript
If we were to buy the stock for $50, so this is the situation where we're buying the stock, we're clearly putting $50 up front. If the stock moved up to $80, and we were able to perfectly call the top and sell it for that $80, we would make a $30 profit off of a $50 initial investment. That's a 60% gain on our upfront capital. On the other side, if the stock were to go down to $20, we would lose $30 of our $50 upfront investment. It would be a 60% loss. So in buying the stock, based on the scenario that I painted, we could gain 60% or we could lose 60%. In terms of the potential upside, you can gain an unlimited amount; the stock can really go to any possible value. In terms of loss, when you buy a stock the most you can lose is 100%. Let's think about the scenario with the call option. To buy the call option it only cost us $5. We only have to put $5 up front. In the scenario where the stock went up to $80, we figured out that we were able to profit $15 net of the price of the option. This was our pure profit. On a base of a $5 investment we were able to get $15 of profit. We were able to get a 300% gain. On the other side, though, if the stock went down we had no reason to actually exercise our option. We essentially just lost all of the money we paid for the option. We lost 100%. What I want to show here is that what the option did is give us leverage. The term comes from physics, because a lever gives you a kind of mechanical leverage. It can allow you to exert more force than you otherwise could by using that simple tool. A call option is giving you financial leverage. You're essentially making the same bet here, but you're multiplying your potential gain or your potential loss. With the stock, based on the scenario we painted, in the good scenario you made 60%. But with the call option, in the good scenario you made 300%. We multiplied our gain. On the downside, with the stock we lost 60%. With the call option we lost 100%. Once again, we multiplied our loss. Right here it looks like the numbers are still favorable, because our loss multiplication wasn't as much as our gain multiplication, but this is really based on some of the numbers I chose. The important thing to realize is that if you're using an option to essentially make the same bet, the bet that the company's stock will go up, you're just putting leverage on your bet. You're multiplying your potential gains or losses.
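A quick way to check these percentages, using only the numbers from the scenario above (a minimal sketch):

```python
# Stock scenario: buy at $50; the two outcomes discussed are $80 (up) and $20 (down)
stock_cost = 50
for final_price in (80, 20):
    gain = final_price - stock_cost
    print(f"stock ends at ${final_price}: return {gain / stock_cost:+.0%}")

# Call option scenario: $5 premium; $15 net profit if the stock hits $80,
# and the whole premium is lost if the stock falls and the option goes unexercised
option_cost = 5
for net_profit in (15, -option_cost):
    print(f"option outcome {net_profit:+d}: return {net_profit / option_cost:+.0%}")
```

Running this prints the +60%/-60% stock returns and the +300%/-100% option returns quoted in the transcript.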
https://mitpress.mit.edu/programming-models-parallel-computing
Paperback | $59.00 Short | £49.95 | 488 pp. | 8 x 9 in | 90 b&w illus. | November 2015 | ISBN: 9780262528818
eBook | $42.00 Short | November 2015 | ISBN: 9780262332231
## Programming Models for Parallel Computing
Edited by Pavan Balaji
## Overview
With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today.
The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations.
Contributors
Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Ariram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng
https://www.projecteuclid.org/euclid.aoms/1177697202
The Annals of Mathematical Statistics
Some Regular and Non-regular Functions of Finite Markov Chains
Abstract
Let $J$ be a finite non-empty set and let $S(J)$ denote the set of all finite sequences of elements of $J$. If $s = (\delta_1, \cdots, \delta_m)\in S(J)$ and $t = (\mu_1, \cdots, \mu_n)\in S(J)$, then $st$ will denote the combined sequence $(\delta_1, \cdots, \delta_m, \mu_1, \cdots, \mu_n)$. The singleton sequence $(\delta)$ will be denoted by $\delta$. The symbol $s^2$ will mean the sequence $ss$ and the symbols $s^3,s^4$, etc., are defined similarly. Suppose $\{Y_n\}$ is a stationary process with state-space $J$. If $s \in S(J)$ and has length $n, p(s)$ denotes $P\lbrack (Y_1, \cdots, Y_n) = s \rbrack$. The rank $n(\delta)$ of a $\delta \in J$ is defined to be the largest integer $n$ such that we can find $2n$ sequences $s_1, \cdots, s_n, t_1, \cdots, t_n$ in $S(J)$ such that the $n \times n$ matrix $\|p(s_i\delta t_j)\|$ is non-singular. Suppose now that $\{Y_n\}$ is a function of a finite Markov chain (hereafter abbreviated ffMc). That is, let there exist a stationary Markov chain $\{X_n\}$ with a finite state-space $I$ and a function $f$ on $I$ onto $J$ such that $\{Y_n\}$ and $\{f(X_n)\}$ have the same distribution. Then Gilbert [5] has shown that $n(\delta) \leqq N(\delta)$ for all $\delta \in J$, where $N(\delta)$ is the number of elements in $f^{-1}\lbrack\{\delta\}\rbrack$. If we can find $\{X_n\}$ and $f$ in such a way that $n(\delta) = N(\delta)$ for all $\delta \in J$, then $\{Y_n\}$ is said to be a regular ffMc. The motivation for investigating the regularity property of a ffMc has been made clear by Gilbert in the first and the last paragraphs of Section 2 of [5]. Fox and Rubin [3] have given an example of a process $\{Y_n\}$ which has $n(\delta) < \infty$ for all $\delta \in J$ but which is not a ffMc. In the first part of this paper we expand their example into a class of examples and show that some of these examples yield nonregular ffMc. These examples are of a different nature than those given in [1], Section 4. Further our method of investigation is different from that employed by Fox and Rubin. The second part of this paper is devoted to proving that an exchangeable process which is a ffMc is a regular ffMc.
Article information
Source: Ann. Math. Statist., Volume 41, Number 1 (1970), 207-213.
Dates: First available in Project Euclid: 27 April 2007
Permanent link: https://projecteuclid.org/euclid.aoms/1177697202
Digital Object Identifier: doi:10.1214/aoms/1177697202
Mathematical Reviews number (MathSciNet): MR263161
Zentralblatt MATH identifier: 0193.46101
https://math.stackexchange.com/questions/4019897/is-it-valid-to-use-the-geometric-series-test-for-power-series
# Is it valid to use the Geometric Series Test for Power Series?
My textbook (Early Transcendentals 8th e., James Stewart) advises that in general to find the interval of convergence of a power series we should use the Ratio or Root Tests. However, I found that the interval of convergence could also be found by applying the Geometric Series Test: take the absolute value of the "common ratio", set it to less than $$1$$, and solve for $$x$$.
For example,
$$\sum_{n=1}^{\infty} \frac{x^n}{n^44^n} = \sum_{n=1}^{\infty} \frac{1}{n^4} (\frac{x}{4})^n$$
The "common ratio" is $$r= \frac{x}{4}$$ since it's the factor being raised to the power $$n$$.
A geometric series converges when $$|r| < 1$$
$$|\frac{x}{4}| < 1$$
$$-1 < \frac{x}{4} <1$$
$$-4 < x < 4$$
Which produces the same interval of convergence as when using the Ratio Test.
I found that this worked for the other power series presented in this section as well.
I'm aware that the power series in the example is NOT a geometric series because the coefficient of the series, $$c_n = \frac{1}{n^4}$$ is not constant as $$n\rightarrow\infty$$ and thus it does not actually have a common ratio since $$r$$ changes depending on which terms in the series are used to calculate it. In fact, none of the power series in this section were geometric series because none had a constant coefficient nor a true common ratio, hence why I'm unsure why the Geometric Series Test seemed to work for the power series presented.
Is using the Geometric Series Test to find the interval of convergence for power series valid? If so, why since not all power series are geometric series?
Even though you call it the "Geometric Series Test," the actual argument your proof describes is clearly the Ratio Test:
For example,
$$\sum_{n=1}^\infty \frac{x^n}{n^4 4^n} = \sum_{n=1}^\infty \frac{1}{n^4} \left( \frac{x}{4} \right)^n$$
The "common ratio" is $$r = \frac{x}{4}$$ since it's the factor being raised to the power $$n$$.
Here, $$a_n = \frac{x^n}{n^4 4^n}$$, so applying Ratio Test gives $$r = \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \left( \frac{x}{4} \right) \lim_{n \to \infty} \frac{n^4}{(n+1)^4} = \frac{x}{4}.$$ The Ratio Test and the Root Test are both based on (and proven via) the condition for convergence of a geometric series. So it's not surprising that "pretending the series is geometric" works when the complicating factor is a rational function of $$n$$ like $$\frac{1}{n^4}$$, as that factor will multiply the limit by 1 in either the Ratio Test or the Root Test.
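As a quick symbolic check of that limit (a minimal SymPy sketch, just a verification aid rather than part of the argument):

```python
import sympy as sp

n, x = sp.symbols('n x', positive=True)
a = x**n / (n**4 * 4**n)                  # general term of the series

# Ratio test: limit of a_{n+1} / a_n as n -> infinity
r = sp.limit(a.subs(n, n + 1) / a, n, sp.oo)
print(r)                                  # x/4, so the series converges for |x| < 4
```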
• Given your explanation, is it valid to use the pseudo Geometric Series Test as a shortcut for evaluating power series instead of using the Ratio or Root Tests? Feb 10, 2021 at 5:32
• It's a good heuristic if the coefficients in the series $\sum c_n x^n$ are of the form $c_n = \frac{p(n)}{q(n)} a^n$, where $p(n)$, $q(n)$ are polynomials. It might fail to determine convergence at the endpoints, though; and, I would write out the full Ratio/Root test correctly if I was answering a question on a HW or a test :) Feb 10, 2021 at 19:27
• Thank you for the clarification! :) Feb 11, 2021 at 4:37
https://optimization-online.org/tag/empirical-risk-minimization/
## Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization
The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of the iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization — one of the fundamental optimization problems in statistical … Read more
## Accelerated Stochastic Peaceman-Rachford Method for Empirical Risk Minimization
This work is devoted to studying an Accelerated Stochastic Peaceman-Rachford Splitting Method (AS-PRSM) for solving a family of structural empirical risk minimization problems. The objective function to be optimized is the sum of a possibly nonsmooth convex function and a finite-sum of smooth convex component functions. The smooth subproblem in AS-PRSM is solved by a stochastic gradient method using variance reduction … Read more
## A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization
We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving ERM problems with a nonsmooth regularization term. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to … Read more
## DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
Machine learning with big data often involves large optimization models. For distributed optimization over a cluster of machines, frequent communication and synchronization of all model parameters (optimization variables) can be very costly. A promising solution is to use parameter servers to store different subsets of the model parameters, and update them asynchronously at different machines … Read more
## On the convergence of stochastic bi-level gradient methods
We analyze the convergence of stochastic gradient methods for bi-level optimization problems. We address two specific cases: first when the outer objective function can be expressed as a finite sum of independent terms, and next when both the outer and inner objective functions can be expressed as finite sums of independent terms. We assume Lipschitz … Read more
## Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss
We consider distributed convex optimization problems originated from sample average approximation of stochastic optimization, or empirical risk minimization in machine learning. We assume that each machine in the distributed computing system has access to a local empirical loss function, constructed with i.i.d. data sampled from a common distribution. We propose a communication-efficient distributed algorithm to … Read more
## Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An … Read more
## An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
We consider the problem of minimizing the sum of two convex functions: one is smooth and given by a gradient oracle, and the other is separable over blocks of coordinates and has a simple known structure over each block. We develop an accelerated randomized proximal coordinate gradient (APCG) method for minimizing such convex composite functions. … Read more
https://www.bartleby.com/questions-and-answers/problem-142-concord-co.-is-building-a-new-hockey-arena-at-a-cost-of-dollar2620000.-it-received-a-dow/e9f07370-ce6b-4f60-a99e-e30bd950189f
# Problem 14-2

Concord Co. is building a new hockey arena at a cost of $2,620,000. It received a down payment of $530,000 from local businesses to support the project, and now needs to borrow $2,090,000 to complete the project. It therefore decides to issue $2,090,000 of 11%, 10-year bonds. These bonds were issued on January 1, 2016, and pay interest annually on each January 1. The bonds yield 10%.

1. Prepare the journal entry to record the issuance of the bonds on January 1, 2016. (Round present value factor calculations to 5 decimal places, e.g. 1.25124, and the final answer to 0 decimal places, e.g. 58,971. If no entry is required, select "No Entry" for the account titles and enter 0 for the amounts. Credit account titles are automatically indented when an amount is entered. Do not indent manually.)

2. Prepare a bond amortization schedule up to and including January 1, 2020, using the effective interest method, with columns for Date, Cash Paid, Interest Expense, Premium Amortization, and Carrying Amount of Bonds for 1/1/16 through 1/1/20. (Round answers to 0 decimal places, e.g. 38,548.)

3. Assume that on July 1, 2019, Concord Co. redeems half of the bonds at a cost of $1,126,600 plus accrued interest. Prepare the journal entries dated July 1, 2019 to record this redemption: one entry to record the interest and one to record the reacquisition. (Round answers to 0 decimal places, e.g. 38,548.)
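As a rough sanity check on the issue price and the first few amortization rows (a minimal sketch assuming annual compounding at the stated 10% market yield; not the graded solution):

```python
face, coupon_rate, market_rate, years = 2_090_000, 0.11, 0.10, 10

coupon = face * coupon_rate                        # annual cash interest paid
pv_face = face / (1 + market_rate) ** years        # present value of the principal
pv_coupons = coupon * (1 - (1 + market_rate) ** -years) / market_rate
issue_price = pv_face + pv_coupons                 # above face value, so a premium
print(f"issue price: {issue_price:,.0f}")

# Effective-interest amortization: interest expense = carrying amount * market rate
carrying = issue_price
for year in range(2016, 2020):
    expense = carrying * market_rate
    amortization = coupon - expense                # premium amortized this period
    carrying -= amortization
    print(year + 1, f"{coupon:,.0f}", f"{expense:,.0f}",
          f"{amortization:,.0f}", f"{carrying:,.0f}")
```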
https://jfpettit.github.io/blog/2019/11/03/fundamentals-of-reinforcement-learning
# Looking at the Fundamentals of Reinforcement Learning
### 03 Nov 2019 by Jacob Pettit
#### Estimated read time: 26 mins
In this post, we’ll get into the weeds with some of the fundamentals of reinforcement learning. Hopefully, this will serve as a thorough overview of the basics for someone who is curious and doesn’t want to invest a significant amount of time into learning all of the math and theory behind the basics of reinforcement learning.
# Markov Decision Processes
In reinforcement learning (RL), we want to solve a Markov Decision Process (MDP) by figuring out how to take actions that maximize the reward received. The actor in an MDP is called an agent. In an MDP, actions taken influence both current and future rewards received and actions influence future states the agent finds itself in. Because of this, solving an MDP requires that an agent is able to handle delayed reward and must be able to balance a trade-off between obtaining reward immediately and delaying reward collection. Below we have a diagram of the classic MDP formulation of agent-environment interaction.
At initialization, the environment outputs some state $s_t$. The agent observes this state and in response takes an action, $a_t$. This action is then applied to the environment, the environment is stepped forward in response to the action taken, and it yields a new state, $s_{t+1}$ and reward signal $r_{t}$ to the agent. This loop continues until the episode (period of agent-environment interaction) terminates.
Some examples of a state might be a frame of an Atari game or the current layout of pieces in a board game such as chess. The reward is a scalar signal calculated following a reward function, and the action can be something like which piece to move where on a chess board, or which direction to go in an Atari game.
## Model of the environment
The model of the environment consists of a state transition function and a reward function. Here, the transition function will be discussed and later we’ll talk about reward functions. In finite MDPs (the kind of MDP we are concerned with), the number of states, actions, and rewards are all finite values. Because of this property, they also all have well-defined probability distributions. The distribution over the next state and next reward depends only on the previous state and action. This is called the Markov property. For the random variables $s' \in S$ and $r \in \mathbb{R}$, there is a probability at time $t$ that they’ll have particular values. This probability is conditional on the previous state and action, and can be written out like this:
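$$p(s', r \mid s, a) \doteq P\big[\,s_t = s',\ r_t = r \mid s_{t-1} = s,\ a_{t-1} = a\,\big]$$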
This 4-argument function, $p$, fully captures the dynamics of the MDP and tells us, formally, that our new state ($s'$) and reward are random variables whose distribution is conditioned on the previous state and action. A complete model of an MDP can be used to calculate anything we want about the environment. State-transition probabilities can be found, expected rewards for state-action pairs, and even expected rewards for state-action-next state triplets.
An episodic MDP (we can also refer to an MDP as a task) is one with a clear stopping, or terminating, state. An example of this might be something like a game of chess, where the natural stopping state is when one player wins and the other loses.
A continuing task is one that doesn’t have a clear stopping state, and can be allowed to continue indefinitely. Consider trying to train a robot to walk; there is not necessarily a clear stopping point. In practice, when we are training a simulated robot to walk, we terminate the episode when it falls over. Over time, however, the agent learns to walk successfully. Once it is successfully walking, there isn’t an easily defined stopping point. Since we don’t want to let a simulation run forever, we typically enforce some maximum number of interactions allowed per episode. Once this number is reached, the episode terminates.
## Reward & Return
We use reward to define the goal in the problem we’d like the RL agent to solve. At every step $t$, the agent receives a single, scalar reward signal $r_t$ from the environment. The agent’s aim is to maximize the reward it receives over all future actions. This is called the return.
Return is the cumulative sum of all reward earned in an episode. Let’s denote return with $G_t$. Then, the return of an episode is defined with:
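$$G_t \doteq r_{t+1} + r_{t+2} + r_{t+3} + \dots + r_T$$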
$r_T$ is the reward earned at the final step of an episode. $\stackrel{.}{=}$ means “defined by”. We can rewrite the above expression more concisely:
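$$G_t = \sum_{k=t+1}^{T} r_k$$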
The reward formulation above doesn’t work for continuing tasks, because as the number of time steps goes to infinity, so does the reward. We need a formulation of reward that will converge to a finite value as the number of time steps goes to infinity. To do this, we can assign a discount factor to future rewards. We define a discount rate parameter, $\gamma$ (gamma) and set $0 \leq \gamma \leq 1$. We incorporate $\gamma$ into our reward formulation like so:
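$$G_t \doteq r_{t+1} + \gamma\, r_{t+2} + \gamma^2\, r_{t+3} + \dots$$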
Rewrite for brevity:
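$$G_t = \sum_{k=0}^{\infty} \gamma^k\, r_{t+k+1}$$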
As long as $\gamma < 1$, the discount factors form a convergent geometric sequence: the further a reward lies in the future, the closer its discounted contribution is to zero. This is what we want, because under this formulation the expected future reward converges to a finite value instead of going to infinity.
In practice, we normally discount future rewards with $\gamma$ roughly between 0.95 and 0.99, even in episodic tasks. Intuitively, this is because reward now is normally better than reward later. There are cases where a $\gamma$ value outside of that range will yield the best performance, but most papers and libraries seem to use a $\gamma$ in the above range.
## Agent-Environment Interaction
The MDP formulation is a clear way to frame the problem of learning from interaction to achieve a certain goal. Interaction between an agent and its environment occur in discrete time steps, i.e. $t = 0, 1, 2, 3, \dots$. As we know, at every time step our agent takes an action $a_t$ and receives a new observation $s_{t+1}$ and reward $r_{t+1}$. Writing this out directly, the interaction between agent and MDP produces a trajectory that progresses like this:
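$$s_0,\ a_0,\ r_1,\ s_1,\ a_1,\ r_2,\ s_2,\ a_2,\ r_3,\ \dots$$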
We can denote interaction trajectories by $\tau$, the Greek letter Tau.
Interesting note: Theoretically, the current state and reward in an MDP should only depend on the previous state and action. In practice, however, this condition (called the Markov property) is violated very regularly. When an RL agent is learning how to control a robot to move forward, the current position of the robot is not only dependent on the previous state and action, but all of the states and actions before it. Somewhat surprisingly, deep RL algorithms are able to achieve excellent performance on these domains despite some problems technically being non-Markovian.
# Policies and Value functions
A policy is a function that maps from states to actions (or to a probability distribution over actions) and is the decision-making part of the agent. A value function estimates how good a particular state, or state-action pair, is. “Good” is defined in terms of expected future reward following that state or state-action pair.
## Policies
The policy is a mapping from states to actions (or to a probability distribution over actions), and can be written like so:
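$$\pi(s_t) = a_t$$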
Where $\pi$ represents the policy, $s_t$ represents the current state, and $a_t$ is the chosen action to take while in state $s_t$.
Policies can be stochastic or deterministic. In the case of a stochastic policy, the output would be a probability distribution over actions. In a deterministic policy, the output is directly what action to take. These can be written mathematically:
• Stochastic policy: $\pi(a \vert s) = P_{\pi} [A = a \vert S = s]$
• Deterministic policy: $\pi(s) = a$
Let’s break this down. In the stochastic policy, $\pi(a \vert s)$ is telling us that the output of the policy $\pi$ is conditioned on the state $s$. $P_{\pi} [A = a \vert S = s]$ says that the probability of the action $a$ being equal to $A$ depends on $s$ equaling $S$. The deterministic policy simply tells us that the policy $\pi$ takes in state $s$ and maps to an action $a$.
## Value functions
The value of a state is determined by how much reward is expected to follow it. If there’s lots of reward expected after state $s_t$, then that must be a good state. But, if there is very little reward expected to follow state $s_{t+5}$, then that’s a bad state to be in. The state-value function is a function that, given a state, outputs an estimate of the expected future return following that state. An action-value function will take in a state and an action and will output an estimate of the expected future return following that state-action pair. Value functions are defined with respect to policies. The value of a state under policy $\pi$ is written with $v_\pi (s)$; this is the state-value function. Mathematically:
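$$v_\pi(s) \doteq \mathbb{E}_\pi\big[\,G_t \mid s_t = s\,\big] = \mathbb{E}_\pi\Big[\sum_{k=0}^{\infty} \gamma^k\, r_{t+k+1} \,\Big|\, s_t = s\Big]$$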
Recall that $G_t$ is the return and $\sum_{k=0}^\infty \gamma^k r_{t+k+1}$ is the formula for discounted return. The action-value function is written a bit differently and makes a slightly different assumption than the state-value function. Whereas the state-value function assumes that the policy starts in state $s$ and afterward takes all actions according to $\pi$, the action-value function assumes that the policy starts in state $s$, takes an action $a$ (which may or may not be on policy), and thereafter acts following $\pi$.
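$$q_\pi(s, a) \doteq \mathbb{E}_\pi\big[\,G_t \mid s_t = s,\ a_t = a\,\big]$$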
The main distinction to note is that $v_\pi(s)$ is dependent only on the current state, while $q_\pi (s,a)$ is dependent on both the current state and action.
The advantage function is found by subtracting the state value from the action value: $A_\pi(s, a) = q_\pi(s, a) - v_\pi(s)$. It measures how much better a particular action is than the policy's average behavior in that state, and is often used in practice when training deep RL algorithms.
## Optimal Policies and Value functions
Now that we know about policies and value functions, it’s time to see what optimal policies and value functions are.
### Optimal policies
An optimal policy always takes the action in a state $s$ that will yield the maximum expected future reward. This can be written like so:
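$$\pi^* = \underset{\pi}{argmax}\ \mathbb{E}_\pi\big[G_t\big]$$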
where $\pi^*$ is the optimal policy. The $\underset{\pi}{argmax}\ \mathbb{E}_\pi[G_t]$ means "pick the policy that yields the highest expected future return". Finding the optimal policy is the central problem in RL, because once the optimal policy is found, the agent can then always take the best action in any state.
### Optimal value functions
The optimal state-value function (written with $v^* (s)$) is the function that always gives you the expected return when starting in state $s$ and afterwards acting according to the optimal policy.
The optimal action-value function ($q^* (s,a)$) always gives the expected future return if you start in state $s$, take action $a$ (that may or may not be on-policy) and then afterwards act following the optimal policy.
When the optimal policy is in state $s$, it chooses the action which maximizes the expected return when starting from the current state. Therefore, when we have the optimal $q$ function, the optimal policy is easily found by:
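$$a^* = \underset{a}{argmax}\ q^*(s, a)$$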
Where $a^*$ is the optimal action, and $\underset{a}{argmax}\space q^*(s, a)$ means "take the action that maximizes $q^*$".
## Bonus: $\varepsilon$-greedy algorithms
In RL, we need to trade off between exploiting what an agent has already learned and exploring the environment and actions to find potentially better actions. When we deal with a greedy policy (one that always chooses the action with the highest expected return), we must sometimes force such a policy to explore non-greedy actions during training. This is done by picking a random action instead of the on-policy action with some probability $\varepsilon$.
# Bellman equations
Bellman equations demonstrate a relationship between the value of a current state and the values of following states. It looks from a current state into the future, averages all future states and the possible actions in those states, and weights each state-action pair by the probability that it will occur. The Bellman equations are a set of equations that break the value function down into the reward in the current state plus the discounted future values.
## State-value Bellman equation
The set of equations for state-value is this:
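$$v_\pi(s) = \underset{a}{\sum}\, \pi(a \mid s)\, \underset{s',\, r}{\sum}\, p(s', r \mid s, a)\,\big[\,r + \gamma\, v_\pi(s')\,\big]$$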
## Action-value Bellman equation
And for action-value:
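$$q_\pi(s, a) = \underset{s',\, r}{\sum}\, p(s', r \mid s, a)\,\big[\,r + \gamma\, \mathbb{E}_{a' \sim \pi}\, q_\pi(s', a')\,\big]$$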
The $a \sim \pi$ means “action is sampled from policy $\pi$”.
## Optimal Bellman Equations
If we don’t care about computing the expected future reward when following a policy, then we can use the optimal Bellman equations:
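$$v^*(s) = \underset{a}{\max}\ \underset{s',\, r}{\sum}\, p(s', r \mid s, a)\,\big[\,r + \gamma\, v^*(s')\,\big]$$

$$q^*(s, a) = \underset{s',\, r}{\sum}\, p(s', r \mid s, a)\,\big[\,r + \gamma\, \underset{a'}{\max}\, q^*(s', a')\,\big]$$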
The difference between the on-policy and optimal Bellman equations is whether or not the $max$ operation is present. When the $max$ is there, it means that for the agent to act optimally, it has to take the action that has the maximum expected return (aka highest value).
# Observations and actions
## Observation spaces
An observation is the state, or portion of the state, as it is observed by the agent. In fully-observable MDPs, the state and the observation can be identical, but sometimes are not (for example, in the Atari DQN paper, the observations to the agent are the most recent few frames of the game, concatenated together), and in partially-observable MDPs, the agent cannot see the entirety of the environment’s state. Therefore, the distinction between the state of an environment and the observations an agent receives is an important one.
Observation spaces can be discretely or continuously valued. An example of a discrete observation space is the layout of a tic-tac-toe board, where empty space is represented by a zero, X pieces are represented by a 1, and O pieces are represented by a 2. A continuous observation space example might be a vector of joint torques and velocities in a robot, like the one in the gif below.
In this gif, the goal is to move the ant robot forward. The observations are a 28 dimensional vector of joint torques and velocities. The agent must learn to use these observations to take actions to achieve the goal.
## Action spaces
Similarly to observation spaces, actions can be continuously or discretely valued. A simple example of a discrete action is one in Atari, where you move a joystick in one of four distinct directions to control where your character moves. Continuous actions can be more complex. For example, in the gif above, the actions are a real valued vector of continuous numbers representing amounts of force to apply to the robot’s joints. Continuous actions are always a vector of values.
## Reward functions
Reward functions yield a scalar signal at every step of the environment telling the agent how good the previous action was. The agent uses these signals to learn to take good actions in every state. To the agent, goodness is defined by how much reward was earned.
Reward functions can either be simple or complex. For a simple example, in the classic gridworld environment (see diagram below), the agent starts in one corner of a grid and must navigate an end state in the other corner of the grid. The reward at every step, no matter the action, is $-1$. This incentivizes the agent to navigate from start to end as quickly as possible.
In the above image, the greyed out squares are the starting and ending points. Either one can be the start or end point. A more complicated example of a reward function would be the one for the ant robot above. The reward function for that agent is as follows:
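$$r_t(s, a) = \frac{x' - x}{\Delta t} \;-\; c_1\,\lVert \vec{a} \rVert^2 \;-\; c_2\,\lVert \mathrm{clip}(\vec{c_E}, -1, 1) \rVert^2$$

(Here $c_1$ and $c_2$ are small fixed weights on the control cost and the contact cost; the exact constants depend on the version of the Ant environment.)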
Let’s unpack this. $r_t(s, a)$ tells us that the reward is a function of the last state and action. $x$ and $x'$ are the $x$ position of the robot before and after the action, respectively. $\Delta t$ is the change in time before the last action and current action. $\vec{a}$ is the action vector, in this case the actions are a vector because we’re applying forces to 8 different joints on the robot, so each entry in the vector tells us how much force to apply to the corresponding joint. $C_E$ is a vector of external forces on the body of the robot. $clip$ tells us to clip the vector $\vec{c_E}$ to have all values fall within the range $[-1, 1]$.
# Some classical methods
Here, we’ll briefly touch on some classical methods in reinforcement learning. These methods aren’t in use on cutting-edge problems today, but do lay an important theoretical foundation for modern algorithms.
## Dynamic Programming
Dynamic programming (DP) algorithms require a perfect model of the environment. Practically, this is often unrealistic to expect and therefore DP algorithms often are not actually used in RL. However, they are still theoretically important. We can use DP to find optimal value functions, and from there, optimal policies.
### Policy Evaluation
We can compute the state-value function for a policy $\pi$ by using policy evaluation. Mathematically:
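$$v_{k+1}(s) = \underset{a}{\sum}\, \pi(a \mid s)\, \underset{s',\, r}{\sum}\, p(s', r \mid s, a)\,\big[\,r + \gamma\, v_k(s')\,\big]$$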
where $\underset{a}{\sum}$ means sum over actions, and other symbols should be known from earlier sections.
### Policy Improvement
Policy improvement finds a new and improved policy $\pi' \geq \pi$ by greedily selecting the action with highest value in each state.
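$$\pi'(s) = \underset{a}{argmax}\ q_\pi(s, a)$$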
### Policy Iteration
Once we've used $v_\pi(s)$ to improve $\pi$ and get $\pi'$, we can again perform a policy evaluation step (compute the state value function for $\pi'$) and a policy improvement step (compute the improved policy $\pi'' \geq \pi'$). By continuing this cycle we can get consistently improving policies and value functions. The trajectory of policy iteration looks like this ($e$ is for policy evaluation and $i$ is for policy improvement):
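$$\pi_0 \xrightarrow{\ e\ } v_{\pi_0} \xrightarrow{\ i\ } \pi_1 \xrightarrow{\ e\ } v_{\pi_1} \xrightarrow{\ i\ } \pi_2 \xrightarrow{\ e\ } \dots \xrightarrow{\ i\ } \pi^* \xrightarrow{\ e\ } v^*$$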
## Monte Carlo Methods
In the previous section, we discussed policy iteration for deterministic policies. The math and theory described there extends to stochastic policies too. Monte Carlo (MC) methods do not require a model of the environment and instead can learn entirely from experience. The core idea of MC methods is to use average observed return as an approximation for the value of a state. To get an empirical return, MC methods need complete episodes and the episodes have to end. We can estimate $v_\pi(s)$ under first visit or every visit MC. First visit MC estimates the value of $s$ as an average of the return after the first visit to $s$, while every visit MC estimates the value by averaging the return of $s$ every time the state is visited. Both first and every visit MC converge to the true value of the state as the number of visits goes to infinity.
Image from Lilian Weng’s blog. Showing that learning an optimal policy via MC is done by following a similar idea to policy iteration.
Similarly to policy iteration, we improve the policy greedily with respect to the current value function.
Then, we use the updated policy to generate a new episode to train on. Finally, we estimate the $q$ function using information gathered from the episode.
## Temporal Difference Learning
Temporal Difference (TD) learning combines ideas from Monte Carlo and Dynamic Programming methods. Like MC methods, TD doesn’t require a model of the environment and instead learns only from experience. TD methods update value estimates based partially on other estimates that have already been learned and they can learn from incomplete episodes.
### Bootstrapping
TD methods use existing estimates to update values instead of only relying on empirical and complete returns like MC methods do. This is known as bootstrapping.
### TD value estimation
Similarly to MC methods, TD methods use experience to estimate values. Both follow a policy $\pi$ and collect experience over episodes. Both TD and MC methods update their value estimates for every nonterminal state in the set of gathered experience. A difference is that MC methods wait until the end of an episode, when empirical return is known, to update their value estimates, whereas TD methods can update their estimates with respect to other estimates (they don’t rely on empirical return) during an episode. We can write a simple version of an MC method that works well in nonstationary environments:
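$$v(s_t) \leftarrow v(s_t) + \alpha\,\big[\,G_t - v(s_t)\,\big]$$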
It is simple to turn this MC update into a TD update by switching out the empirical return $G_t$ for the current value estimate of the next state $v(s_{t+1})$:
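$$v(s_t) \leftarrow v(s_t) + \alpha\,\big[\,r_t + \gamma\, v(s_{t+1}) - v(s_t)\,\big]$$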
$\alpha$ is a step-size parameter where $\alpha \in [0, 1]$.
This update can also be written for the action-value function:
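$$q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha\,\big[\,r_t + \gamma\, q(s_{t+1}, a_{t+1}) - q(s_t, a_t)\,\big]$$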
Learning an optimal policy using TD learning is called TD control. We’ll next look at two algorithms for TD control.
### SARSA: On-policy TD control
We again follow the pattern of policy iteration, except now we use TD methods for the evaluation (value estimation) steps. In SARSA, we need to learn a $q$ function and then define a greedy (or $\varepsilon$-greedy) policy with respect to that $q$ function. This can be done using the $q$ update rule from above:
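$$q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha\,\big[\,r_t + \gamma\, q(s_{t+1}, a_{t+1}) - q(s_t, a_t)\,\big]$$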
This update is performed after every nonterminal state. If $s_t$ is terminal, then $q(s_{t+1}, a_{t+1})$ is 0. This rule uses each element in the tuple: $(s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1})$. This tuple also gives the SARSA algorithm its name. SARSA algorithm has these steps:
1. From $s_t$, pick an action according to the current $q$ function, often $\varepsilon$-greedily: $a_t = \underset{a}{argmax} q(s_t, a)$
2. Our selected action, $a_t$ is applied to the environment, the agent gets reward $r_t$, and the environment steps to a new state $s_{t+1}$
3. Pick next action same way as in step one
4. Do the action-value function update: $q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha [r_t +\gamma q(s_{t+1}, a_{t+1}) - q(s_t, a_t)]$
5. Time steps forward and the algorithm repeats from the first step
### Q-learning: Off-policy TD control
Q-learning was an early breakthrough in reinforcement learning, introduced by Watkins in 1989 (with a convergence proof later published by Watkins and Dayan). The algorithm's update rule is:
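$$q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha\,\big[\,r_t + \gamma\, \underset{a}{\max}\, q(s_{t+1}, a) - q(s_t, a_t)\,\big]$$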
Under this rule, $q$ directly approximates the optimal action-value function $q^*$, independent of the current policy.
The Q-learning algorithm has these steps:
1. Start in $s_t$ and pick an action according to the policy defined by the $q$ function. Could be a $\varepsilon$-greedy policy.
2. Take action $a$, gather $r_t$ and step the environment to the next state $s_{t+1}$.
3. Apply the update rule: $q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha[r_t + \gamma \space \underset{a}{max} \space q(s_{t+1}, a) - q(s_t, a_t)]$
4. Time steps forward to the new state and algorithm repeats from the first step.
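To make these steps concrete, here is a minimal tabular Q-learning sketch. It assumes an environment following the classic OpenAI Gym API (`reset()` returning a state and `step(action)` returning `(state, reward, done, info)`); the environment name and hyperparameters are only illustrative, and newer Gym/Gymnasium releases change these signatures slightly:

```python
import numpy as np
import gym

env = gym.make("FrozenLake-v1")               # any small discrete environment works
n_states, n_actions = env.observation_space.n, env.action_space.n

q = np.zeros((n_states, n_actions))           # tabular action-value estimates
alpha, gamma, epsilon = 0.1, 0.99, 0.1        # step size, discount, exploration rate

for episode in range(5000):
    s = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection from the current q function (step 1)
        if np.random.rand() < epsilon:
            a = env.action_space.sample()
        else:
            a = int(np.argmax(q[s]))
        # apply the action, observe the reward and the next state (step 2)
        s_next, r, done, info = env.step(a)
        # Q-learning update: bootstrap from the greedy value of the next state (step 3)
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next                            # time steps forward (step 4)
```

Replacing `np.max(q[s_next])` with the value of the action actually selected in the next state would turn this into the on-policy SARSA update described earlier.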
Thank you for sticking with me through this long blog post. I really hope it was worth your while and should you find any errors, please email me. Please feel free to have a discussion or raise questions in the comments. In my next post, I’m planning to write about policy gradient methods and dive into the theory behind them. See you then!
http://mailroom.efficertain.com/ftemnd/thread-forming-screw-torque-chart.html
Thread forming screw torque chart. 6 (PA6 In order not to excessively...
Thread forming screw torque chart. 6 (PA6 In order not to excessively exceed the tightening torques specified in the Screw Thread Design Screw Thread Fundamentals A screw thread is defined as a ridge of uniform section in the form of a helix on either the external or internal surface of a cylinder If you require thread forming screws for plastic , there are many benefits to The classes are set by the International Standards Organisation or ISO The tensile stress area of the threaded portion of a fastener is determined by the size (diameter) of the bolt and by the thread pitch (spacing), as follows A s = 0 A lubricated fastener must have the torque values reduced 25 – 30% from the above ratings 9743/n)] 2 where: A s = Bolt Thread Tensile Stress Area (in 2) d = Nominal Bolt Diameter (in Thread forming screws, by their very principle, produce a very tight fit that will take a lot of effort to loosen Internal threads refer to those on nuts and tapped holes, while external threads are those on bolts, studs, or screws Torque Setting Screwdrivers; Screwdriver Sets; Wera Kraftform Kompakt Bit Holding Screwdriver Sets; VDE Screwdrivers; ESD Screwdrivers; Sizes below 1/4" use numbers to describe their diameter, while screws 1/4" and higher use fractions 6 Socket Head M3 Socket Head M4 Thread-Forming Socket Head M4 Low-Profile 9 mm is referred to as a 6mm thread in millimeters x thread pitch Torque tolerance + 0%, -15% of torquing values All calculations are for Coarse Thread Series (UNC) -lb: 10-32: 0 1373 times the pitch Medium carbon steel; Tensile strength = 120,000 psi min E 6-GF50) at 20 °C after storage in a normal climate (relative atmospheric humidity in acc Welding is a fabrication process that joins materials, usually metals or thermoplastics, by using high heat to melt the parts together and allowing them to cool, causing fusion EJOT ALtracs® Plus screws are thread-forming fasteners developed for maximum strength values in light alloy assemblies and other non-ferrous metals such as zinc, copper or brass They are unusual, in that they were probably the most "scientific" design of screw, starting with We have a wide range of thread forming screws in a variety of sizes for use both in metals and thermo plastics This chart from the Aviation Maintenance Technician General Handbook details the most common markings on aircraft bolt heads 75 and 1 These can be broadly categorized into … TORQUE-TENSION REFERENCE GUIDE Printed in U (ISO Metric Screw Threads; Coarse and Fine Screw Threads from 1 to 250 mm) $\mu_G$ is the coefficient of friction in the thread (in this case we have considered $\mu_G$ and $\mu_K$ equal Torque Chart 18-8 9 Standard Thread Size Charts Bumax Hard Taptite is developed for use in steel and 1 1/2" - 12 1330 980 2970 2190 4820 3560 4mm x pitch = nominal thread dia 15 K = 0 The recommended torque values are based on dry threads Use the formula below to calculate an estimate of the screw head and gauge Internal Stresses: Low: High: Tightening Torque: Low: High: Plastic Material Recommendations: Recommended using with Stiffer hard materials such as glass-filled plastic parts An induction hardened thread forming zone for ease of thread forming into multiple types of materials Best way to ensure this is purchase from a reliable source that clearly specifies the rating Steel socket head cap screws made with the A574 Standard are stronger alloy then typical grade 8 bolts "The Accurate Measurement Of Thread Pitch Diameter, Which May Be Perfect As To Form And Lead, Presents Certain 
Self-tapping screws are used to secure wood, plastic, metal and brick, and fall into two families: thread-forming and thread-cutting. A thread-forming screw is driven into a plain pilot hole and cold-forms (swages) its own mating thread by displacing material rather than removing it; the displaced material work-hardens and flows back in around the screw, so a swaged thread is generally stronger than a cut thread because the grain flow of the metal is maintained. A thread-cutting screw (Type F, Type 17, Type 23, Type 25, etc.) instead cuts its thread with multiple cutting edges and chip cavities; it generates less installation stress, and once the hole has been tapped the screw can be replaced by a machine screw of the same diameter and threads per inch.
Trilobular thread-rolling screws (TAPTITE, PLASTITE and similar) use a three-lobed body and radius-profile threads to reduce drive torque: as each lobe passes through the pilot hole it forms and work-hardens the nut thread, and the material recovers and fills in behind the lobes to give maximum thread contact and a secure assembly. For plastics, screws with a reduced flank angle (30°, 45° or 48° instead of the standard 60°) and wider thread spacing are preferred: the radial force generated by a 30° thread is roughly half that of a 60° thread, which lowers the thread-forming torque, reduces radial stress in the plastic boss, disperses stress over a larger area and reduces the risk of cracking thin or brittle plastic.
Tightening torque is mostly consumed by friction rather than clamping: under-head friction can absorb 50% or more of the applied torque and thread friction as much as 40%, so only a small fraction of the input torque actually produces preload. The most widely used formula for threaded-fastener tightening torque is
T = D x K x P
where T is the torque (inch-pounds or newton-metres), D is the nominal thread diameter, K is the nut factor (about 0.20 for plain, dry parts and about 0.22 for zinc-electroplated parts) and P is the desired clamping force. Published torque charts assume clean, dry parts free of lubricants and thread lockers; a lubricated bolt needs noticeably less torque to reach the same clamp load, and as the number of engaged threads increases, so does the torque required to overcome the additional friction. Bolt proof load is the maximum force the material can support without permanent deformation, and chart values are typically built around a clamp load of roughly 75% of proof strength (for example, a 63,750 psi clamp load for an 85,000 psi proof load). A typical grade 8 coarse-thread example:
Size        Dry                  With anti-seize
1/4"-20     129 in-lb (15 Nm)    86 in-lb (10 Nm)
5/16"-18    23 ft-lb (31 Nm)     15 ft-lb (20 Nm)
For thread-forming screws, the driving torque is the sum of the thread-forming torque and the thread-friction torque, and the torque rate can vary considerably even for the same screw diameter. The practical way to set an assembly torque is to drive 10 to 20 screws into the actual material while recording torque versus angle: note the torque needed to form the thread and the torque at which the joint strips or the screw breaks (the destruction torque), then set the assembly torque between those two values; a common rule of thumb for thread-rolling screws such as TAPTITE or Swageform is about one third of the way between them. Recommended values apply to the strength of the screw; the joint itself may be weaker.
Two further conventions are worth knowing. The imperial gauge of a screw can be estimated from its head: Gauge = (head diameter in sixteenths of an inch x 2) - 2, so a 5/16 in head corresponds roughly to a #8 screw. Tap thread limits are written as the letter "H" (inch) or "D" (metric) followed by a number, indicating the tolerance above the basic thread size and allowing selection of the tap best suited to the class of thread desired.
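The T = D x K x P relationship above is easy to script. The following is a minimal sketch (Python; the example screw size, preload, and the forming/destruction torques are invented for illustration, not taken from any manufacturer's chart):

    def tightening_torque(clamp_load_lbf, nominal_dia_in, nut_factor=0.20):
        """Approximate tightening torque T = D * K * P.

        clamp_load_lbf  -- desired preload P in pounds-force
        nominal_dia_in  -- nominal thread diameter D in inches
        nut_factor      -- K, ~0.20 dry, ~0.22 for zinc-plated parts
        Returns torque in inch-pounds.
        """
        return nominal_dia_in * nut_factor * clamp_load_lbf


    def suggested_assembly_torque(forming_torque, destruction_torque):
        """Rule-of-thumb assembly torque: about 1/3 of the way between the
        measured thread-forming torque and the destruction (strip) torque."""
        return forming_torque + (destruction_torque - forming_torque) / 3.0


    if __name__ == "__main__":
        # Example: 1/4-20 screw, 2000 lbf preload, dry threads.
        t = tightening_torque(2000, 0.250, nut_factor=0.20)
        print(f"Torque: {t:.0f} in-lb ({t / 12:.1f} ft-lb)")

        # Example: measured 0.8 Nm to form the thread, 3.2 Nm to strip it.
        print(f"Assembly torque: {suggested_assembly_torque(0.8, 3.2):.2f} Nm")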
|
2023-01-27 18:09:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2986706495285034, "perplexity": 6728.629710803942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00205.warc.gz"}
|
https://blog.disorderedmatter.eu/tag/iphone/
|
## iPhone again
Via Ken Lee @ Macresearch: if you are an avid user of $\LaTeX$ and a happy owner of an iPhone, there is a nice little iPhone program called LaTeX Help for looking up often-used mathematical symbols, the commands for including figures, etc. I know, it might be easier to look these up on the internet, but I like the idea ;-)
Two other useful programs: a periodic table (The Chemical Touch: Lite Edition) and PhD Comics (very useful – and also available directly on the web ;-). Enjoy.
## Papers for iPhone… soon
The Mac-using scientists amongst you are probably aware of the program Papers for organising your electronic library of articles. The developer, Alex Griekspoor (aka mek), has been working hard on the corresponding iPhone version lately; now it has been submitted to the App Store and is expected soon. Update 19.2.2009: available now. Not cheap at ~~10 Euros~~ 8 Euros (Update 27.2.2009: sorry, my mistake, it was 10 Dollars. And actually, it is rather cheap, considering what other things I buy for 8 Euros ;-), and the iPhone also seems a bit small for reading papers, but it might nevertheless be a useful tool. Also, it includes a free online backup via Amazon S3 (!) and syncing to the upcoming Papers (for Mac) version 1.9.
Personally, I like the Mac version of Papers a lot: it is really an innovative program, although for me it has never been very stable (this, however, does not seem to be a common problem according to the forums; still, apologies to mek for never mentioning my instabilities to him ;-).
Update 27.2.2009 P.S. After I posted this, I wrote the Papers developer, mek, about my instability problem, and he answered within a few hours. It is a known issue having to do with Smart Lists. Now, my Papers version is responsive and stable!
|
2019-06-26 14:54:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3463177978992462, "perplexity": 2198.4373670837567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00024.warc.gz"}
|
https://www.physicsforums.com/threads/effect-of-errors-due-to-branching-and-acceptances.957526/
|
# I Effect of errors due to branching and acceptances
#### CAF123
Gold Member
Say I have, say, 10 measurements of an experimental observable, e.g. the differential cross section with respect to rapidity, one measurement in each bin, with the corresponding statistical and systematic uncertainties given in % per bin.
Now, suppose I want the value of this $d\sigma/dy$ after correcting for, e.g., the acceptance and a certain branching fraction.
How are the statistical and systematic errors changed? The acceptance and branching fraction typically come with an error of their own, so in what way will they affect the statistical and systematic errors?
#### mfb
Mentor
Standard error propagation for the multiplication of two numbers.
It gets more interesting if you have significant migration between the bins or other sources of correlations.
#### CAF123
Gold Member
Standard error propagation for the multiplication of two numbers.
But do the acceptance and branching fraction enter into the statistical or the systematic uncertainty? I think the acceptance is dependent on the rapidity so cannot be systematic (therefore statistical) and the branching would enter in the systematics?
#### mfb
Mentor
I think the acceptance is dependent on the rapidity so cannot be systematic (therefore statistical)
What exactly is a statistical uncertainty about that? If you repeat the measurement, do you expect a different result?
In general both will come with a statistical and a systematic uncertainty. Treat them separately. Combine all sources of statistical uncertainty, combine all sources of systematic uncertainty.
#### CAF123
Gold Member
In general both will come with a statistical and a systematic uncertainty. Treat them separately. Combine all sources of statistical uncertainty, combine all sources of systematic uncertainty.
Thanks. In the article I am studying, the acceptance and branching fraction are each given in the form $A \pm \delta A$ only, so how should I interpret $\delta A$? As the statistical and systematic errors added in quadrature? If that is the case, then it seems that without further information on the decomposition of this error, it is impossible for me to split it into statistical and systematic contributions?
#### mfb
Mentor
I don't know, I would have to see the article. If it is both combined then you can't split it of course.
If you have external sources of uncertainty it is common to treat them as a separate class. Something like that: $52 \pm 2 \text{(stat)} \pm 3 \text{(syst)} \pm 1 \text{(BF)}$.
#### CAF123
Gold Member
Indeed, if I had the errors for the branching fraction and acceptance spelled out as $A \pm \delta A_{\text{stat}} \pm \delta A_{\text{sys}}$ for the acceptance and $B\pm \delta B_{\text{stat}} \pm \delta B_{\text{sys}}$ for the branching fraction then, assuming independent sources of statistical and systematic errors, my net statistical error on $d\sigma/dy$ would be $$\delta _{\text{stat}} = \sqrt{\delta A_{\text{stat}}^2 + \delta B_{\text{stat}}^2 + \dots}$$ and similarly for the systematic. I think that should be correct.
It's common to see experimental formulae for differential cross sections expressed in terms of parameters that are estimated by experimentalists, e.g. the acceptance appears in the denominator. Could you also get the effect of the error on the differential cross section due to the acceptance through partial derivatives? And would this coincide with simply adding in quadrature?
#### mfb
Mentor
I guess so - it works if I understand your description correctly.
#### CAF123
Gold Member
Can we do this explicitly? Given $d\sigma/dy_i = f(A_i,\dots)$, where $A_i$ is the acceptance in bin $i$ and the dots denote other experimental quantities such as track efficiencies, purities etc., we have, in the notation used in previous posts, $$\delta_{\text{stat}}^2 \left( \frac{d \sigma}{d y_i} \right) = \left( \frac{\partial f}{\partial A_i} \delta A_{i,\text{stat}} \right)^2 + \dots$$ so that $$\delta_{\text{stat}} \left( \frac{d \sigma}{d y_i} \right) = \sqrt{\left( \frac{\partial f}{\partial A_i} \delta A_{i,\text{stat}} \right)^2 + \dots}$$ But comparing this to the centered equation in #7, $$\delta_{\text{stat}} = \sqrt{\delta A_{\text{stat}}^2 + \dots}$$ there is an extra multiplicative factor $\left( \frac{\partial f}{\partial A}\right)^2$ in the case where I used partial derivatives. What's the reconciliation?
#### mfb
Mentor
Ah, I thought you were talking about relative uncertainties in post 7. If not then you need the additional factor of course. Otherwise not even the units work out.
#### CAF123
Gold Member
Ah, I thought you were talking about relative uncertainties in post 7. If not then you need the additional factor of course. Otherwise not even the units work out.
Indeed, so do you mean to say it only makes sense to add percentage uncertainties in quadrature and if one is given absolute uncertainties then the formula in #9 (2nd equation centered) should be used?
Also, I was wondering: it is common to add errors due to independent systematic sources in quadrature. What is the theoretical justification for that? Statistical errors have a Gaussian description for a large number of measurements, but the systematics are not described as such.
#### mfb
Mentor
Indeed, so do you mean to say it only makes sense to add percentage uncertainties in quadrature and if one is given absolute uncertainties then the formula in #9 (2nd equation centered) should be used?
That, or convert it to relative uncertainties first.
Also, I was wondering: it is common to add errors due to independent systematic sources in quadrature. What is the theoretical justification for that? Statistical errors have a Gaussian description for a large number of measurements, but the systematics are not described as such.
The variance still behaves just like it does for Gaussian errors. If a source of systematic uncertainty is not even close to a normal distribution (e.g. it is an interval with a minimum and maximum and everything in between is equally likely) and relevant it might make sense to treat it differently.
#### dukwon
Gold Member
Indeed, if I had the errors for the branching fraction and acceptance spelled out as $A \pm \delta A_{\text{stat}} \pm \delta A_{\text{sys}}$ for the acceptance and $B\pm \delta B_{\text{stat}} \pm \delta B_{\text{sys}}$ for the branching fraction then, assuming independent sources of statistical and systematic errors, my net statistical error on $d\sigma/dy$ would be $$\delta _{\text{stat}} = \sqrt{\delta A_{\text{stat}}^2 + \delta B_{\text{stat}}^2 + \dots}$$ and similarly for the systematic. I think that should be correct.
The statistical uncertainty on your result should only depend on the size of the data sample with which you made the measurement. If you get acceptance from MC and a branching fraction from some other analysis then their uncertainties contribute only to the systematic uncertainties. If you can easily factorise external branching fractions from your result, then it is common to quote its uncertainty separately, so that your result can be improved if better external measurements are made.
#### CAF123
Gold Member
The statistical uncertainty on your result should only depend on the size of the data sample with which you made the measurement. If you get acceptance from MC and a branching fraction from some other analysis then their uncertainties contribute only to the systematic uncertainties. If you can easily factorise external branching fractions from your result, then it is common to quote its uncertainty separately, so that your result can be improved if better external measurements are made.
I see. So in what sense generally will the acceptance and the branching come with a statistical uncertainty and a systematic uncertainty? (as mfb mentioned in #4) That is, what experimental source would give rise to a statistical uncertainty upon the acceptance and branching?
#### dukwon
Gold Member
That is, what experimental source would give rise to a statistical uncertainty upon the acceptance and branching?
Say your acceptance is calculated from MC; the statistical uncertainty is just due to the sample size.
If you take branching fractions (please don't just say "branching", it's confusing) from an external measurement, then the statistical uncertainty is due to the size of the dataset used in that analysis.
However, the effect of these uncertainties on your measurement remains systematic: they don't depend on the size of the dataset you make your measurement with.
#### CAF123
Gold Member
Thanks. What you say makes sense for me regarding the branching fraction as this is a value found in e.g PDG and found separate from measurement under study.
But the acceptance is determined from the process at hand, so if I'm studying a decay $A \rightarrow BC$, say, the number of events selected as signal enters into the acceptance, and this is based on the actual measurement under study, not some prior measurement, so it would qualify as largely statistical. Or have I misunderstood here? Thanks
#### dukwon
Gold Member
I've never seen acceptance be taken from data (how do you count particles that you don't detect?) but if that's what you do, then yeah its statistical uncertainty should enter into the statistical uncertainty of the result.
#### CAF123
Gold Member
Ah, sorry, then I'm probably just misunderstanding how experimentalists determine the acceptance. Is it usually done through simulations in Monte Carlo event generators, taking the figure from there?
#### mfb
Mentor
It is often a mixture of both. You'll never get a number that is completely free of Monte Carlo input but you also don't want to rely exclusively on a good MC description. What is done depends on the individual analysis. Often you can find related datasets that share some features with the main signal sample, but where you can study some of its properties better.
#### CAF123
Gold Member
Thank you. Is there also an uncertainty which accounts for the robustness/reliability of the underlying MC method used in determining acceptances? I would call it something like a 'model-dependent uncertainty', but I'm not sure if that's the correct terminology here.
#### mfb
Mentor
You'll often see the results checked with a different MC generator, with a different detector description in the MC generator or similar things.
Have a look at some publications to see how they treat these things.
#### CAF123
Gold Member
@dukwon
Say your acceptance is calculated from MC; the statistical uncertainty is just due to the sample size.
If you take branching fractions (please don't just say "branching", it's confusing) from an external measurement, then the statistical uncertainty is due to the size of the dataset used in that analysis.
However, the effect of these uncertainties on your measurement remains systematic: they don't depend on the size of the dataset you make your measurement with.
So, just to check that I understood what you said: if I consider a decay $A \rightarrow B + C$, then in its MC implementation all measurements I make associated with the generated sample size are statistical, but they affect the actual measurement at the experiment systematically? (i.e. we use an external MC to ascertain a quantity (e.g. the acceptance) needed in the experimental cross section)
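To pull the bookkeeping discussed in this thread together, here is a minimal illustrative sketch (Python; the cross-section formula, yields, acceptance and uncertainty values are invented for illustration and are not from any particular analysis). Following the discussion above, the acceptance and luminosity errors are treated as systematic and the external branching-fraction error is quoted separately, in the ± (stat) ± (syst) ± (BF) style mentioned earlier.

    import math

    def quad_sum(*rel_errs):
        """Combine independent relative uncertainties in quadrature."""
        return math.sqrt(sum(e * e for e in rel_errs))

    # Hypothetical measurement: sigma = N / (A * B * L), a pure product/quotient,
    # so relative uncertainties add in quadrature.
    N, rel_N = 1.2e4, 0.009    # signal yield (statistical only)
    A, rel_A = 0.42, 0.013     # acceptance, e.g. from MC (enters the systematics)
    B, rel_B = 0.033, 0.031    # external branching fraction (quoted separately)
    L, rel_L = 2000.0, 0.026   # integrated luminosity in pb^-1 (systematic)

    sigma = N / (A * B * L)    # in pb
    stat = quad_sum(rel_N)
    syst = quad_sum(rel_A, rel_L)

    print(f"sigma = {sigma:.0f} +- {sigma*stat:.0f} (stat) "
          f"+- {sigma*syst:.0f} (syst) +- {sigma*rel_B:.0f} (BF) pb")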
"Effect of errors due to branching and acceptances"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-05-23 10:00:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.810198962688446, "perplexity": 810.8784888591607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257197.14/warc/CC-MAIN-20190523083722-20190523105722-00507.warc.gz"}
|
http://math.stackexchange.com/questions/17420/is-there-a-more-accurate-simpler-formula-than-this/17422
|
# Is there a more accurate / simpler formula than this?
I am trying to make a formula where a "table lookup" is not required. Given these two tables:
Days  Discount        Refs        Rent_Cost
----  --------        ---------   ---------
 15    0.00           0-250       $0.200
 30    0.05           251-500     $0.210
 60    0.10           501-750     $0.220
 90    0.18           751-1000    $0.230
150    0.25           1001-1250   $0.240
240    0.30           1251-1500   $0.250
                      1501-1750   $0.260
Total_Cost(Refs, Days) = Refs * Rent_Cost * (1 - Discount) * (Days / 30)
where Rent_Cost and Discount would have to be looked up in the two tables above.
So, in order to find a formula for Rent_Cost, I used FindGraph (http://www.uniphiz.com/findgraph.htm) and plotted all Refs from 0 through 1750 against their Rent_Cost. I got inaccurate results at first, but it looked as if, with enough playing around, I could get to a formula that would work if I rounded to the nearest 0.01. I came up with:
Rent_Cost(Refs) = 0.195 + (4E-05 * Refs)
where Rent_Cost is rounded to the nearest hundredth. This works 100% with no errors. Then I did the same with Days and Discount and came up with
Discount(Days) = 0.34 - 0.1 ^ ([Days - 137]/-100)
This works for all Days except 60, which comes out as 0.12 instead of 0.10. You also have to round to the nearest hundredth here as well. Plugging Discount and Rent_Cost into the original formula:
Total_Cost(Refs, Days) = Refs * (0.195 + (4E-05 * Refs)) * (1 - (0.34 - 0.1 ^ ([Days - 137]/-100))) * (Days / 30)
So I am asking how some of you would go about improving the formula and, hopefully, getting rid of the inaccuracy of my Discount function.
Note: I do not know how to use MathJax. Please edit my post if it's hard to read.
-
It sounds like you might be intending to implement these formulas in a program; if so, what programming language are you targeting? – Isaac Jan 13 '11 at 19:54
It will be used in Excel for the most part. I am trying to get rid of a bunch of VLOOKUPs. However, I am also trying to get it down to a mathematical formula, just to see whether I can or not. I plan on using floor(x + 1/20) for the rounding. – ParoX Jan 13 '11 at 19:59
You are using VBA? – PEV Jan 13 '11 at 20:05
@Trevor No, I am not. I already do all of this myself in VBA. I am actually going to "release" this formula to the public for other people to use, so that they have something to go on. – ParoX Jan 13 '11 at 20:08
## 1 Answer
First, I'd rewrite the rent cost function as
Rent_Cost(Refs) = max(0.20, 0.19 + 0.01 * ceiling(Refs / 250))
or $$\text{Rent Cost}(\text{Refs})=\max\left(0.20,\ 0.19+0.01\cdot\left\lceil\frac{\text{Refs}}{250}\right\rceil\right)$$ (no rounding needed). (In fact, I suspect that your current formula might give the wrong result at the boundary values of 250, 500, 750, ...)
Now, to the discount function. First, note that $\left\lceil\frac{\text{signum}(x-k)+1}{2}\right\rceil$ (where $\text{signum}(x)$ is $0$ if $x=0$, $1$ if $x>0$, and $-1$ if $x<0$) is $0$ if $x<k$ and $1$ if $x\ge k$. Now, \begin{align}\text{Discount}(\text{Days})&=0.05\cdot\left\lceil\frac{\text{signum}(\text{Days}-30)+1}{2}\right\rceil+0.05\cdot\left\lceil\frac{\text{signum}(\text{Days}-60)+1}{2}\right\rceil\\&+0.08\cdot\left\lceil\frac{\text{signum}(\text{Days}-90)+1}{2}\right\rceil+0.07\cdot\left\lceil\frac{\text{signum}(\text{Days}-150)+1}{2}\right\rceil\\&+0.05\cdot\left\lceil\frac{\text{signum}(\text{Days}-240)+1}{2}\right\rceil.\end{align} This is also an exact formula, no rounding needed.
edit: Were I doing this in Excel, I'd actually use a bunch of nested IF() statements:
Discount(Days) = IF(Days < 30, 0.00,
IF(Days < 60, 0.05,
IF(Days < 90, 0.10,
IF(Days < 150, 0.18,
IF(Days < 240, 0.25,
0.30)))))
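For anyone who wants the same lookup-free functions outside Excel, here is a minimal sketch (Python; the function names are mine, and the spot checks at the end use values read off the tables in the question):

    import math

    def rent_cost(refs: int) -> float:
        """Rent cost per ref: $0.20 for 0-250 refs, +$0.01 per additional 250."""
        return max(0.20, 0.19 + 0.01 * math.ceil(refs / 250))

    def discount(days: int) -> float:
        """Exact step discount: 0.05/0.10/0.18/0.25/0.30 at 30/60/90/150/240 days."""
        steps = [(30, 0.05), (60, 0.05), (90, 0.08), (150, 0.07), (240, 0.05)]
        return sum(inc for threshold, inc in steps if days >= threshold)

    def total_cost(refs: int, days: int) -> float:
        return refs * rent_cost(refs) * (1 - discount(days)) * (days / 30)

    # Quick spot checks against the tables in the question.
    assert abs(rent_cost(250) - 0.20) < 1e-9
    assert abs(rent_cost(251) - 0.21) < 1e-9
    assert abs(discount(60) - 0.10) < 1e-9
    print(total_cost(600, 90))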
-
The if statement is how I do it currently. I just wanted something that was a bit less writing and that they could just copy and paste easily. Refs will never be 0, so given that, I think I can get rid of the whole max thing and be left with 0.19 + 0.01 * Ceiling(Refs/250), which is very simple. – ParoX Jan 13 '11 at 20:55
|
2015-12-01 20:38:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6264869570732117, "perplexity": 2358.5451884111194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471436.90/warc/CC-MAIN-20151124205431-00356-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1350477/properties-of-tensor-product-of-modules
|
# Properties of tensor product of modules
Let $M'$ be a submodule of a $\mathbb{Z}$-module $M$, and let $i:M'\rightarrow M$ be the natural monomorphism.
How can one prove the following theorem?
$i\otimes 1_N:M'\otimes_{\mathbb{Z}} N \rightarrow M\otimes_{\mathbb{Z}} N$ is a monomorphism for all $\mathbb{Z}$-modules $N$ iff for all $q\in \mathbb{Z}$ we have $M'\cap qM=qM'$.
If $N$ is finitely generated then we can use the fact that $N$ is a direct sum of cyclic modules, and then we know that $\ker\left(M'\otimes \left(\mathbb{Z}/q\mathbb{Z}\right)\rightarrow M\otimes \left(\mathbb{Z}/q\mathbb{Z}\right)\right)\cong \left(M'\cap qM\right)/qM'$, so in this case the proof is easy. How does one do it in the general case? Is it still true?
1. Any $A$-module is the direct limit of its finitely generated submodules (for any ring $A$).
Btw, a submodule of an $A$-module with this property (the morphism $M'\hookrightarrow M$ is universally exact) is called a pure submodule of $M$, and the morphism is said to be pure.
• If $M/M'$ is flat, $M'$ is a pure submodule of $M$.
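(To pass from finitely generated $N$ to arbitrary $N$ using point 1: the tensor product commutes with direct limits, and a direct limit of monomorphisms over a directed system is again a monomorphism, so injectivity for all finitely generated $N$ gives it for all $N$.)
A small illustrative example of the criterion failing, not part of the original question: take $M=\mathbb{Z}$, $M'=2\mathbb{Z}$ and $q=2$. Then $M'\cap qM=2\mathbb{Z}\cap 2\mathbb{Z}=2\mathbb{Z}$, whereas $qM'=4\mathbb{Z}$, so the condition $M'\cap qM=qM'$ fails. Correspondingly, tensoring the inclusion $2\mathbb{Z}\hookrightarrow\mathbb{Z}$ with $N=\mathbb{Z}/2\mathbb{Z}$ sends the generator $2\otimes 1$ to $2\otimes 1=1\otimes 2\cdot 1=0$, so the induced map $\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$ is zero and not a monomorphism; $2\mathbb{Z}$ is therefore not a pure submodule of $\mathbb{Z}$.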
|
2019-12-14 21:35:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9838399291038513, "perplexity": 79.63857487197426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541294513.54/warc/CC-MAIN-20191214202754-20191214230754-00038.warc.gz"}
|
https://www.mathdoubts.com/tan-triple-angle-formula/
|
# Tan triple angle formula
### Expansion form
$\tan{3\theta} \,=\, \dfrac{3\tan{\theta}-\tan^3{\theta}}{1-3\tan^2{\theta}}$
### Simplified form
$\dfrac{3\tan{\theta}-\tan^3{\theta}}{1-3\tan^2{\theta}} \,=\, \tan{3\theta}$
### Introduction
It is called the tan triple angle identity and is used as a formula in two cases.
1. The tan of a triple angle is expanded as the quotient of three times the tan of the angle minus the tan cubed of the angle, divided by one minus three times the tan squared of the angle.
2. The quotient of three times the tan of the angle minus the tan cubed of the angle, divided by one minus three times the tan squared of the angle, is simplified to the tan of the triple angle.
#### How to use
The tangent of triple angle identity is used either to expand or to simplify triple angle tan functions such as $\tan{3A}$, $\tan{3x}$, $\tan{3\alpha}$, and so on. For example,
$(1) \,\,\,\,\,\,$ $\tan{3x} \,=\, \dfrac{3\tan{x}-\tan^3{x}}{1-3\tan^2{x}}$
$(2) \,\,\,\,\,\,$ $\tan{3A} \,=\, \dfrac{3\tan{A}-\tan^3{A}}{1-3\tan^2{A}}$
$(3) \,\,\,\,\,\,$ $\tan{3\alpha} \,=\, \dfrac{3\tan{\alpha}-\tan^3{\alpha}}{1-3\tan^2{\alpha}}$
#### Proof
Learn how to derive the rule of tan triple angle identity by geometry in trigonometry.
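As a quick algebraic cross-check using the compound angle formula (an alternative to the geometric derivation referenced above): writing $\tan{3\theta} \,=\, \tan{(2\theta+\theta)}$ and using $\tan{(x+y)} \,=\, \dfrac{\tan{x}+\tan{y}}{1-\tan{x}\tan{y}}$ together with $\tan{2\theta} \,=\, \dfrac{2\tan{\theta}}{1-\tan^2{\theta}}$ gives
$\tan{3\theta} \,=\, \dfrac{\dfrac{2\tan{\theta}}{1-\tan^2{\theta}}+\tan{\theta}}{1-\dfrac{2\tan^2{\theta}}{1-\tan^2{\theta}}} \,=\, \dfrac{2\tan{\theta}+\tan{\theta}\,(1-\tan^2{\theta})}{(1-\tan^2{\theta})-2\tan^2{\theta}} \,=\, \dfrac{3\tan{\theta}-\tan^3{\theta}}{1-3\tan^2{\theta}}$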
|
2021-10-18 02:43:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6515709757804871, "perplexity": 2581.108693415642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585186.33/warc/CC-MAIN-20211018000838-20211018030838-00128.warc.gz"}
|
https://puzzling.stackexchange.com/questions/55465/point-mass-soldiers-in-fogland
|
# Point mass soldiers in Fogland
A troop of N immortal point mass soldiers (with N >= 3) are attempting to infiltrate Fogland (an infinite 2-dimensional plane covered in fog).
They will jump out of an airplane and, after being buffeted by the winds, will each land in an independent random location in Fogland.
Unfortunately, being point mass soldiers in a foggy country, they will not be able to see each other, and will not even know if they are in the precise same location as each other.
The landing will knock them unconscious for a random amount of time. Each soldier carries two devices, both of which are only good enough for a single use.
The first device is a GPS device, which will tell them the instantaneous distances and directions of the other point mass soldiers at the time of its use.
The second device is a detonator.
The only way that the infiltration can succeed is if all N soldiers activate their detonators at the same location (but not necessarily at the same time).
For each value of N, come up with a strategy that the soldiers can use for the infiltration to succeed with probability 1. (You can specify probability 0 conditions which will cause the infiltration to fail)
Inviolable Statements:
* The soldiers have a finite speed limit beyond which they cannot travel.
* Given any region on the plane with non-zero area, the probability that any given soldier lands in that region is non-zero.
* Given any non-zero period of time after the landing, the probability that any given soldier awakes during that period is non-zero.
* The only means of communication the point mass soldiers have with each other is seeing each others locations on the GPS device.
NB: infinite 2d planes don't have magnetic poles. Therefore there is no such concept as "North"
Comment I got this puzzle from a colleague who used to send the team a brain teaser every week. This was one of them and came with the following note: "WARNING: this week's brainteaser is ridiculously difficult. I know few people who have independently worked out a solution for any N, and nobody who has independently worked out a solution for all N."
I found an answer for N=3 and he accepted it (different from the answer that was provided here), but I never knew if it was what he expected and if it can be generalized. Therefore, I am posting my answer and will give a bounty if someone manages to do it. Good luck!
• Are we to assume that each soldier can get to any location of their choosing? Or at least any location relative to the position where they start? – Gareth McCaughan Sep 28 '17 at 21:17
• Does a soldier who detonates their device remain or do they vanish? It's not clear whether detonation has any effect (other than to solve the problem)? Also can the soldiers (i) tell themselves apart (eg can you have a rule that soldier 1 does something...) (ii) tell each other apart on the GPS? – Francis Davey Sep 28 '17 at 21:44
• Can we acknowledge how mind-boggling it must have been for one-dimensional beings to invade a two-dimensional plane by jumping out of an airplane? – Engineer Toast Sep 28 '17 at 22:06
• @EngineerToast: they're zero-dimensional beings :) – Jonathan Tomer Sep 29 '17 at 22:46
The GPS (Global Positioning System) won't work on an infinite 2d surface, you need a Planar Position System, or a PPS instead. :-) – Bass Oct 3 '17 at 11:12
Since the relative distances to all soldiers is known, the soldiers might as well all go directly (i.e in a straight line)
to the Fermat point of the set of all soldiers (of course, this is assuming that they are sufficiently capable at mathematics, but they are immortal so they have forever to do the sums). This point is special in that it subtends equal angles to all three soldiers, except in a special case - I'll not go into that, but this point still exists (and is in fact a vertex) and this still works.
Outline of proof that this works
Firstly, prove existence and uniqueness of this point - this is not too bad (especially because of some useful tools you can use, such as angle chasing or Ptolemy's inequality; a useful construction is to erect external equilateral triangles on the sides of the triangle). Next, show that in the non-overobtuse (overobtuse = an angle is at least 120 degrees) (yes, I made that up) case, the angles subtended by the sides at this point are all 120 degrees. Lastly, notice that this still holds even as the people walk towards the point. If the triangle is overobtuse, then this point (the Fermat one) lies at the vertex with the largest angle. Notice that as other people walk towards this point, it remains the same since the sum of distances from this point to all soldiers decreases at least as rapidly as the sum of the distances from any other point to the soldiers.
We can see that if the
convex hull of the set of soldiers make is a convex quadrilateral, the intersection of the diagonals is valid as a meeting point (not too hard to see)
On the other hand, if the
convex hull is a triangle, then the soldier inside the triangle is a valid meeting point - all soldiers go to that point, activate their detonator and immediately head back any distance (optional)
Both may fail if three soldiers end up in a line, with 0 probability.
## Actually, generalisation...?!
Perhaps
Heading to the point P such that the sum of the distances from P to the locations of the soldiers is minimal
This point exists (that isn't too hard to prove), I think, but I'm not sure if P varies. I'm not sure if P is necessarily unique. I believe the cases where it isn't (if any) have 0 probability though, as perturbation of any point will result in a successful configuration.
As @ffao pointed out (thanks!),
P does not vary as moving any soldier towards P decreases the sum of their distances to P more than the sum of their distances to any other point.
Again, as @ffao pointed out (thanks again!) and Gareth (thanks as well!) clarified,
The geometric median (as it's called, apparently) is unique as long as the points are not collinear; the points are collinear with 0 probability.
Some curiosities:
- The geometric median for four points is the intersection of diagonals for convex hull is a quadrilateral, and also is the center point if the convex hull is a triangle
- It is also the Fermat point for three points (hence this generalises all the previous solutions)
- "No such formula is known for the geometric median, and it has been shown that no explicit formula, nor an exact algorithm involving only arithmetic operations and kth roots can exist in general" (good luck immortal soldiers, I think you'll need it)
• I don't think P varies; if you're moving straight for P, surely the point whose distance is decreasing the most after the move is P, so there's no way some other point will overtake it as the optimal sum of distances. – ffao Sep 29 '17 at 3:47
• Oh, right. I just realised, uniqueness of P is also an issue here. – Wen1now Sep 29 '17 at 4:13
• According to wikipedia, "when the points are not collinear, the sum of distances is positive and strictly convex and hence the minimum is achieved at a unique point". I don't know what this means but you might :P – ffao Sep 29 '17 at 4:41
• So far as I am aware, there is no procedure that computes this point exactly in finite time. So I'm not sure how the soldiers can do this. (I suspect it is the intended answer, though.) – Gareth McCaughan Sep 29 '17 at 9:09
• @Wen1now do you think the fermat point could be an edge of the triangle?! – sousben Sep 30 '17 at 19:21
Every soldier should use the GPS as soon as they wake up, and head straight for
the northernmost soldier.
The movement of the soldiers does not affect this location, since
the northernmost soldier will not move, and no other soldier will ever move further north.
In the case that
two or more soldiers 'tie' for northernmost
which happens with probability zero, then
choose the most easterly of those soldiers.
The reasoning is the same as before.
• Assuming, of course, that they know where "north" is. Another assumption that OP should probably clarify... – ffao Sep 29 '17 at 3:20
• OP has qualified the challenge: "there is no such concept as "North". Otherwise, would be a good answer. – Ben Aveling Oct 1 '17 at 11:10
See my answer. "GPS" implicitly creates a north. So it should be removed from the question for clarity. – BmyGuest Oct 6 '17 at 12:15
# Possible answer: traveling in circles
All tentative as well as the accepted answer here are based on the soldiers traveling in a straight line.
Circles offers new perspectives, and if a general case exists it could be stronger than using the geometric median which we don't know to compute with precision!
### Working N=3 solution
The 3 points of a triangle also delimit 3 arcs on their circumscribed circle: ⌒AB, ⌒AC and ⌒BC
Except for some probability-0 events (the 3 points forming an isosceles triangle), one of the 3 arcs will be longer than the other 2. The 3 soldiers can therefore agree in advance on a meeting point (the initial position of one of them) that will work with probability 1 if they travel only opposite to the longest arc.
Example:
The soldiers agree to meet on the starting point of the soldier who is not in contact with the longest arc. If ⌒AB is the longest arc, the meeting point will be C. With A traveling only on ⌒AC and B traveling only on ⌒BC, they can wake up at any random time and will know where to go. They could also have chosen to meet on the first extremity of ⌒AB clockwise (B) or counterclockwise (A).
The proof is, I think, straightforward: any time A or B gets closer to C along the circle, the long arc AB only gets longer.
### 4 soldiers, convex setup
If they form a convex quadrilateral, I think they can successfully infiltrate with a probability of 1 using the following strategy:
Upon waking up a soldier looks at all the possible triangles and their circumscribed circles. The meeting point will be the clockwise extremity of the longest arc of the circle that has the greatest diameter.
Soldiers who wake up on that circle will travel along it counterclockwise. The soldier who doesn't can travel clockwise (I think) on either of the 2 circles that go through him and the meeting point. Traveling in a straight line wouldn't work for him because it could create a situation close to alignment, and possibly change what will be the meeting point for another soldier waking up.
In the example below, CAD has the largest circumscribed circle, C is the meeting point, and B can travel on ⌒BC clockwise on any of the blue or the red circle without compromising the success probability of the infiltration.
### 4 soldiers, concave setup
When the quadrilateral is concave, traveling to the same point as before can be dangerous because 3 soldiers may get close to alignment and mess up the meeting point for other soldiers when they wake up. On the other hand, traveling towards the concave point seems to work, both on a straight line or on one of the circle paths, provided that the soldiers choose the path that doesn't invert convexity.
On the example below, A, C and D can easily travel to B without breaking the concavity of the quadrilateral.
### General case
I will award a bounty if someone finds a generalized solution where the meeting point is always a soldier.
• Woah, that is an awesome solution for n=3! I was doing a writeup of why it was impossible to always meet at a soldier, but got stuck and scrolled down to see other's progress (in case I was wrong). Last note, I didn't actually assume that the soldiers had to move in straight lines, but I found a solution where they did which worked. – Wen1now Oct 3 '17 at 7:13
• Thank you, to be honest I am not 100% sure the person who initially sent this puzzle was thinking about your solution because: 1. it is not always possible to compute 2. he said "no one independently worked out a solution for any N" which is surprising given that there is no real increase in complexity between n=3,4 and up with the geometric median! 3. he confirmed my answer was correct for 3 and didn't suggest another way to solve it – sousben Oct 3 '17 at 7:35
This is not a valid answer, since it is stated that no one can see anyone else even when they are at exactly the same location.
Here is my methodology; it allows the soldiers to wake up at any time and does not require a compass:
First of all, I will try to explain my methodology with an example and then make it general:
This is our initial condition, where you can see everyone on the map. Let's say E wakes up while the rest are still sleeping (it does not matter if anyone has already started to move according to the method, but for simplicity only E will move for now).
E will calculate the center of their positions by putting a simple coordinate system on the locations and using the formula below:
$Avg_{X}=\frac{C_{1,x}+C_{2,x}+\cdots}{N_{soldiers}}$
which is the simple average of the X (and likewise Y) coordinates. As an example, I put a random coordinate system on their locations, but whatever X and Y axes you use, the result will be the same:
I took $A$ as the reference point for $E$, found the mid point using their coordinates, and drew a circle around it that covers everything, as below:
So the point furthest from the center is $A$ (it could have been any of them), and that is where they are going to meet. But how?
This is the tricky part, because if, say, $E$ wakes up and starts to move towards $A$, the new point furthest from the center may change to $B$ or $C$, as you might suspect. So $E$ cannot simply move towards $A$. So where can he move?
Here is the tricky part!
$E$ will move straight towards the point furthest from $A$, which is point $C$ in this configuration. As a result, $A$ will always remain the furthest point from the center that the soldiers compute after checking their GPS. And everybody (including $C$) knows that $C$ is always the furthest point from $A$! So if $C$ or $D$ or anyone else wakes up while $E$ is moving, the facts that $A$ is the furthest point from the center and $C$ is the furthest point from $A$ will not change.
So:
• $A$ will not move at all.
• C will not move until the rest arrive at his location
• The rest ($B$, $E$, $D$) will move directly to $C$, never getting further away from $A$ than $C$ is.
• After they have all made it to point $C$ (they will wait for everyone to arrive at $C$), they will move together to $A$.
I believe this methodology will always work, whenever they wake up. More generally:
• Wake up and find the center point on the GPS, find the soldier furthest from the center ($F_1$), then find the soldier furthest from $F_1$ ($F_2$).
• $F_1$ will not move at all.
• The rest (except $F_1$ and $F_2$) will move directly to $F_2$, never getting further away from $F_1$ than $F_2$ is.
• After they have all made it to point $F_2$, they will move together to $F_1$.
There are, however, configurations with more than one $F_2$ or more than one $F_1$, but there is a tweak for these conditions:
What if there is more than one $F_2$ but a unique $F_1$?
The rest (except $F_1$ and the $F_2$s) will move to the nearest $F_2$, then move to the other $F_2$ in a straight line, so there is no way they will miss the other soldiers after a while, and they know how many soldiers are not $F_1$. Moreover, a single movement towards another $F_2$ will favor the destination $F_2$, since the distance will be shorter because they move in a straight line.
What if there is more than one $F_1$?
There is also no problem using the same methodology, where "$F_1$s don't move until someone reaches them". Even a single movement by anyone will change the center, and $F_1$ will become unique. Here is an example:
Let's say this is the initial condition and a soldier wakes up and sees it. There are 3 $F_1$s and one $F_2$ for each $F_1$. This is probably one of the worst conditions possible, but if any soldier moves even an inch, there will be a unique $F_1$ and a unique $F_2$, because the center will automatically change. For example, suppose a point on the circle wakes up and sees this condition; he is furthest from one of the $F_1$s, but if he moves even an inch towards any other furthest point, the middle point will no longer favor his $F_1$ and there will be a unique $F_1$. The rest follows the same methodology as above.
Please ask me any questions regarding this methodology and I will edit this answer accordingly. But I believe this will work under any conditions.
• "After they all made to point F2, they will move together to F1." How do they know they all reached F2? They can't see each other, even on the same spot. – ffao Sep 29 '17 at 23:02
• In the case $N=3$ and the points are at the vertices of an equilateral triangle, who moves? (Or can this and similar cases be dismissed as probability 0 events?) – Lawrence Sep 29 '17 at 23:11
• @ffao i did not notice they can't see each other even when they are at exactly the same point :/ too unrealistic... – Oray Sep 30 '17 at 5:12
Just for fun, let's try skirting the rules with physics, and establish a sort of communication channel by abusing the facts that immortal point mass soldiers are, indeed, dimensionless, and that they do, indeed, have mass. So..
Let the point mass soldiers be nearly frictionless, and very heavy.
Once on the infinite plane, each point mass soldier, conscious or not, will experience a net gravitational pull toward the common center of gravity, and will enter a decaying orbit around it. (Orbit, because not all the soldiers landed at the same time, and decaying, because they are only nearly frictionless).
Once John, the leader of the immortal point mass forces, and heaviest of them all by several orders of magnitude, wakes up, he sticks his point mass feet in the 2-d mud, and stops himself. This will anchor the center of mass of the whole system, which will start to orbit John, and will eventually converge to John's location due to the tiny friction.
Since the soldiers' gravities will interact with each other, some soldiers might get flung right out of the system, but even the tiny friction will eventually stop them, and they'll fall right back in.
Once a point mass soldier comes to a complete stop and feels no net acceleration for some time (could take mighty long, but, hey, still immortal), he knows that the common center of mass has stopped, and that he is at the center of mass.
He can then say "Hi, John", press the detonator, and buy half a kilogram of point mass beer with the point mass money he got from selling that crappy single-use GPS.
(No, I'm not really serious, but it seemed like a fun idea :-)
• this is a really fun way of reading the instructions, i was just wondering about the "nearly frictionless" part, isn't there a distance where that tiny little friction could just prevent the soldier from moving toward the others ? – Neil Oct 3 '17 at 12:41
• Oh, yeah, that could be a problem.. but only if we assume that static friction is a thing that exists. :-) Purely kinetic friction won't cause problems, it only slows things down, but won't prevent a stopped soldier from returning. Yeah, this is definitely how I planned it in the first place, and didn't completely overlook it at all. – Bass Oct 3 '17 at 15:52
Not a solution but an observation which might require the original question to be re-phrased:
The soldiers can not have a GPS device!
( No soldier can have a device which tells him where he himself is. )
They can only have a device which tells them the relative distance to the other soldiers and in which direction (as the device rotates) each soldier is. Like a "scanner".
Why?
Any "mapping" device which locates itself requires some sort of origin or reference point, otherwise it cannot put itself "on the map". Similarly, if it had any sense of "direction" - i.e. responded to being rotated - it would require some physical property or axis to determine its relative orientation.
So, for the question as currently written, one "trick" answer would be:
The soldiers have a GPS. So their device has some sort of reference point. They all agree on moving to this reference point.
or also
If the "GPS" only has some directional sense, but no fixed origin: Calling that direction "North" for simplicity: The soldiers agree that the soldier who is furthest "north" stays put. All others move to him - at first moving only perpendicular to the North-South axis. (This is of course the same answer as proposed by 2012rcampion above.)
The soldiers meet up at the point that minimizes the total distance to their starting points. Each solder immediately uses their GPS on waking up, walks straight to the distance-minimizing point, and activates their detonator at it.
The key property is that the current distance-minimizing point remains the same as soldiers walk towards it, so soldiers waking up later will choose the same point. Imagine to the contrary that some soldiers have walked partway to the distance-minimizing point P, but from the current position another point Q has smaller total distance than P, so they divert to Q instead. This means the combined crooked walk is shorter in total than the originally planned walk to P. A straight-line walk from the initial positions to Q would be even shorter in total than this jagged one, and so shorter than the original walk to P. This contradicts that P was distance-minimizing.
This assumes a unique distance-minimizing point, which I'm pretty sure happens with probability 1.
• How is this different from Wen's answer? – ffao Sep 30 '17 at 9:53
• @ffao I prove the claim. – xnor Sep 30 '17 at 9:55
• Two things. Firstly, I prove this claim as well, albeit informally. Secondly, the existence of a unique distance-minimizing point is also noted in my solution – Wen1now Sep 30 '17 at 11:11
# A slow, iterative, approximate solution:
• At every iteration of a given timestep (say 1 minute), check the direction of every other Soldier.
• If they fall within 180 degrees, move in the direction that bisects the range of angles seen. If not within 180, stay still.
• The speed of travel is either the max possible speed for the soldiers or 1/2 (distance to furthest soldier)/(timestep), whichever is smaller.
• Now they iterate into a shrinking circle. Wait until every other soldier is within some arbitrarily small distance of each other, which can be defined for any given set of starting conditions and a distance epsilon.
If this heathen Fogland plane doesn't accept arbitrarily-near explosions on an arbitrarily-long (but provably finite!) timescale, then maybe we were never meant to succeed.
• Each soldier carries two devices, both of which are only good enough for a single use. – ffao Sep 29 '17 at 17:36
• Hah, I read the brief several times looking for that conceit and didn't spot it. Ah well. – CriminallyVulgar Oct 4 '17 at 9:56
I think the method I found works with probability 1 (it can only fail with probability 0), for any number of soldiers n >= 3.
The point mass soldiers are given the instructions beforehand:
1.- Whenever you wake up, use the gps device.
2.- Calculate where to go(specific, about to be described).
3.- Travel to it.
4.- Use the detonator.
So, the location a point mass soldier should go to is determined as follows:
Calculate every point mass soldier's sum of distances from everyone else. For example, at n=3, each point mass soldier has a sum of two distances, and this sum is computed for all three point mass soldiers (including himself).
Identify the one whose sum is the minimum; if there is more than one minimum, choose randomly between them (let's call him the bomberman). If it is you, you can use the detonator right now.
Why it works?
The location where the detonation will happen stays the same, unless at some time there are at least 2 soldiers with the minimum sum who are not at the exact same location.
This is because no distance sum can decrease by more than the bomberman's while everyone is travelling towards him in a straight line (so the bomberman stays the same). (At most the same amount can be subtracted from the sum of another soldier, and only if he lies on the same line of travel, at least as far away as the bomberman.)
Because of this, it doesn't even matter if someone wakes up while someone else is travelling; the one called the bomberman cannot change.
Of course, after the first soldier uses the detonator, he will also be at the bomberman's location, but that won't matter for the ones who wake up later.
After the last point mass soldier wakes up and gets to the point to use the detonator, they win.
When can it fail?
Because they start at random locations, there is a p=0 chance that there will be more than one point mass soldier with the exact same minimum sum of distances.
For example, if they woke up at the corners of a square, they could still fail (meaning there would be two different locations where someone detonates).
If there aren't at least two minimum sums at the start, there won't be at any point during their mission.
• The bomberman can change. For example, suppose there's an almost right-angled triangle ABC with almost-right angle at A and AC>AB. Specifically, the angle at A is less than 90. Now C wakes up first. As C walks towards A, when C is just near the foot of the perpendicular from B to AC, B wakes up. The bomberman, from B's POV, is C. Thus B moves towards Cs new location, and the detonators explode in different places. – Wen1now Sep 30 '17 at 11:17
https://tex.stackexchange.com/questions/501322/glossaries-setting-unicode-character-as-entry-label
# glossaries: Setting unicode character as entry label
Why does the following compile with lualatex and fail with pdflatex when setting entry label to be Unicode character?
% arara: pdflatex: { options: [ '-synctex=1', '-shell-escape' ]}
% arara: makeglossaries
% arara: pdflatex: { options: [ '-synctex=1', '-shell-escape' ]}
\documentclass{elsarticle}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{booktabs}
\usepackage{calc,siunitx}
\usepackage[automake,stylemods,symbols,
abbreviations,
xindy={codepage=utf8, language=greek, glsnumbers=false}
]{glossaries-extra}
\makeglossaries
\glsnoexpandfields
\glsxtrnewsymbol[text={\alpha},description={spacing},symbol={[\si{\um}]},type=main]{α}{$\alpha$}
\begin{document}
\printglossary[title=Nomenclature]
\end{document}
• Probably for the same reason \documentclass{article}\begin{document}α\end{document} fails with pdflatex? It just doesn't know what to do with α. – schtandard Jul 24 '19 at 20:36
• @schtandard so, how to fix it while using a Unicode character in case of pdflatex? – Diaa Jul 24 '19 at 20:39
• I think you misunderstand, pdftex cannot do that. One of the big selling points of luatex and xetex is native Unicode support, which pdftex just does not have. Why do you insist both on using pdftex and typing α? – schtandard Jul 24 '19 at 20:48
• @schtandard that's rather misleading the inputenc support in (pdf)latex would allow an input α alpha to be defined, just as accented latin is defined. – David Carlisle Jul 24 '19 at 20:51
• @DavidCarlisle Ok, maybe my choice of words was a bit too absolute (after all, pdftex can do anything, even control a mars rover). What I meant was that pdftex is only set up with very limited Unicode support and the need for more would be reason enough to switch to luatex, I think. Of course, one could endeavor to extend inputenc towards completeness.. – schtandard Jul 24 '19 at 21:01
With lualatex, α is a plain, simple letter, no different from a.
With pdflatex α is a rather complicated command, and you can't use commands in such places.
To see the difference you can compile this:
\documentclass{article}
\begin{document}
\ExplSyntaxOn
\tl_analysis_show:n {α}
\ExplSyntaxOff
\end{document}
With lualatex you get:
The token list contains the tokens:
> α (the letter α).
http://www.shirpeled.com/2015/12/casting-to-double-is-faster-than-kahans.html
## Wednesday, December 23, 2015
### A floating problem
Floating point precision problems are notoriously annoying, and personally I was somewhat surprised they are still an issue in this day and age, but they are.
For instance, by the end of the following code:
float x = 0;
float f = 1e-6;
for (unsigned i = 0; i < 1e6; ++i)
x = x + f;
x's value will be 1.00904 and not 1.
Even much simpler code reveals the issue:
float x = 1e6;
float y = 1e-6;
float z = x + y;
z's value here will be exactly 1,000,000 and not 1,000,000.000001.
The reason for this is that 32 bits (the size of a float) can only represent a number to limited precision, so the format provides high precision for small numbers and gradually sacrifices that precision as the integer part of the number grows.
### Why use floats when you can use doubles?
Well, in many cases, doubles simply solve the problem. Double is also a finite representation, but is far more precise than float. The smallest positive number a float can represent is about 1e-38, whereas for a double it's something around 2e-308. To emphasize, the ratio between the two is something like 1e270 which is considerably more than the number of atoms in the universe raised to the third power.
The reason to still use floats is that they occupy half the space of doubles, and when writing code that is meant to be vectorized by the compiler, which one often does in high performance computing, this can translate to a factor of 2 speed up. Additionally, if your data is basically a bunch of real numbers, your data in floats is around half the size of your data in doubles, so transmitting it over a network may be twice as fast, and so on.
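(Aside, not in the original post.) You can check the ranges quoted above directly with std::numeric_limits:

```cpp
#include <cstdio>
#include <limits>

int main() {
    // Smallest positive *normalized* values -- roughly 1.2e-38 for float and 2.2e-308 for double.
    std::printf("smallest positive normal float : %g\n", std::numeric_limits<float>::min());
    std::printf("smallest positive normal double: %g\n", std::numeric_limits<double>::min());
    // Machine epsilon: the gap between 1.0 and the next representable value.
    std::printf("float epsilon : %g\n", std::numeric_limits<float>::epsilon());
    std::printf("double epsilon: %g\n", std::numeric_limits<double>::epsilon());
    return 0;
}
```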
### Kahan's summation algorithm
For running sums, William Kahan suggested the following algorithm, cleverly using a second variable to keep track of the deviation from the "true" sum (the following snippet is from Wikipedia, and not in C++, but straightforward nevertheless):
function KahanSum(input)
var sum = 0.0
var y,t // Temporary values.
var c = 0.0 // A running compensation for lost low-order bits.
for i = 1 to input.length do
y = input[i] - c // So far, so good: c is zero.
t = sum + y // Alas, sum is big, y small, so low-order digits of y are lost.
c = (t - sum) - y // (t - sum) recovers the high-order part of y; subtracting y recovers -(low part of y)
sum = t // Algebraically, c should always be zero. Beware overly-aggressive optimizing compilers!
next i // Next time around, the lost low part will be added to y in a fresh attempt.
return sum
And this is, in my opinion, really beautiful.
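For reference, here is how the same compensated summation might look in C++ (my own sketch, not from the original post). Note that aggressive floating-point optimizations such as -ffast-math can legally rewrite the algebra and silently remove the compensation, so compile and benchmark with care:

```cpp
#include <cstddef>

// Kahan (compensated) summation over an array of floats.
// c accumulates the low-order bits that plain summation would lose.
float kahan_sum(const float* input, std::size_t n) {
    float sum = 0.0f;
    float c = 0.0f;                 // running compensation
    for (std::size_t i = 0; i < n; ++i) {
        float y = input[i] - c;     // apply the correction carried from the previous step
        float t = sum + y;          // low-order digits of y are lost in this add
        c = (t - sum) - y;          // recover exactly what was lost
        sum = t;
    }
    return sum;
}
```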
However, as it turns out:
### Casting to double is faster
The dumber, less beautiful solution is faster:
double x = 0;
float f = 1e-6;
for (unsigned i = 0; i < 1e6; ++i) {
x = x + f; // f is implicitly cast to double here
}
I ran some experiments, and for a very large set of numbers (adding 1e10 numbers, each equal to 1e-7), casting to double was twice as fast as using Kahan's summation algorithm.
A cool algorithm, though.
https://www.physicsforums.com/threads/expected-value-of-random-sums-with-dependent-variables.410540/
# Expected value of random sums with dependent variables
1. Jun 16, 2010
### agarwalv
Hi all,
I have a question of computing the expectation of random sums.
E(sum_{k=1}^N X_k) = E(N)E(X) if N and X_1, X_2, ... are independent and the X_k's are iid. Here both N and the X_k's are r.v.s.
But the condition that N and X_1, X_2, ... are independent is not true in many cases.
How would you compute E(sum_{k=1}^N X_k) if N and X_1, X_2, ... are not independent (even weakly dependent)?
Can we use the law of iterated expectation? I am not sure what E(sum_{k=1}^N X_k) will equal.
Thank you
Regards
Agrawal V
2. Jun 16, 2010
### techmologist
Yes. Condition on the value of N, and then take the expectation with respect to N. If X_1, X_2, ... are identically distributed (but not necessarily independent), you still get E(X)E(N).
EDIT #1: if N is not independent of the X's, then $$E(\sum_{k=1}^N X_k) = E(N)E(X)$$ is not generally true. Instead it would be something like
$$E\left(\sum_{k=1}^N X_k\right) = \sum_{n} nP\{N=n\}E(X|N=n)$$
So you would need to know the conditional expectation E(X|N=n).
EDIT #2: And even that may not be quite general enough. It may happen that even though the X's are all identically distributed, E(X_i | N=n) does not equal E(X_j | N=n) for i and j different. So it would be
$$E\left(\sum_{k=1}^N X_k\right) = \sum_{n} P\{N=n\} \left (\sum_{k=1}^nE(X_k|N=n) \right )$$
Last edited: Jun 16, 2010
3. Jun 17, 2010
### agarwalv
Hi techmologist
Thanks for the reply. In my case, I'm considering N to be a stopping time and the X_i's to form a renewal process, i.e., each X_i is replaced by another X_j having a common distribution function F. So I was thinking more along the lines of renewal processes and stopping times.
I came across Wald's equality, where N depends upon the X_i's up to X_{n-1} and is independent of X_n, X_{n+1}, ..., because at X_n the condition (any stopping condition) is satisfied... which gives a similar expression, E(sum_{k=1}^N X_k) = E(N)E(X). Do you think this will address the issue of dependence between N and the X_i's?
Also, can I take the expectation with respect to N of this term, as per the law of iterated expectation? Please suggest:
$$\sum_{n} nP\{N=n\}E(X|N=n)$$
Thank you
4. Jun 17, 2010
### techmologist
Hi Agrawal,
I had to look up Wald's equation, and I think now I see what you are getting at. By the way, the current Wikipedia article on Wald's equation is very confusing. I would give that article time to "stabilize" before I paid any attention to it. Instead of that, I used Sheldon Ross's book Introduction to Probability Models, 8th edition. On pages 462-463, he talks about Wald's equation and stopping times.
So in the case you are talking about, the X_i 's are independent identically distributed random variables for a renewal process. To take an example from Ross's book, X could represent the time between arrivals of customers at a bank. But as you say, the stopping time N may depend on the X_i's. In the above example, the sequence could stop with the first customer to arrive after the bank has been open for an hour. Thus, if the twentieth customer arrived at 0:59:55, and the twenty-first customer arrived at 1:03:47, the stopping "time" would be N=21 and the sum of the waiting times would be 1:03:47.
Note: Ross's definition of stopping time is that the event N=n is independent of X_{n+1}, X_{n+2},...., but generally depends on X_1, ..., X_n. It might be that he is labelling the X_i's differently than you. In his book, X_i is the waiting time between the (i-1)st and the ith event.
I no longer think that conditioning on N is the way to do it, although it may be possible. That is what you meant by using the law of iterated expectations, right? In practice, finding E(X_i | N=n) is very difficult. Ross uses indicator variables to prove Wald's equation:
$$I_i=\left\{\begin{array}{cc}1,&\mbox{ if } i\leq N\\0, & \mbox{ if } i>N\end{array}\right.$$
Now note that I_i depends only on X_1, ..., X_{i-1}. You have observed the first i-1 events, and if you have stopped then N<i. If you have not stopped, then N is at least i.
$$E\left( \sum_{i=1}^N X_i \right) = E\left(\sum_{i=1}^{\infty}X_iI_i\right) = \sum_{i=1}^{\infty}E(X_iI_i)$$
$$E\left( \sum_{i=1}^N X_i\right) = \sum_{i=1}^{\infty}E(X_i)E(I_i) = E(X)\sum_{i=1}^{\infty}E(I_i)$$
Now use the fact that: $$\sum_{i=1}^{\infty}E(I_i) = E\left( \sum_{i=1}^{\infty}I_i\right) = E(N)$$
5. Jun 18, 2010
### agarwalv
Thank you techmologist......
6. Jun 18, 2010
### techmologist
You are welcome. I got to learn something out of it, too. Wald's equation helped me solve a problem I had been wondering about for a while. Suppose Peter and Paul bet one dollar on successive flips of a coin until one of them is ahead $5. How many flips, on average, will it take for their game to end? At least I think my approach using Wald's equation will work... it involves taking a limit.
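(Not part of the original thread.) For what it's worth, that last puzzle is easy to sanity-check numerically. A quick Monte Carlo estimate of the expected number of flips until one player is ahead by 5 comes out very close to 25, which matches the classical gambler's-ruin result a*b = 5*5 for a symmetric walk between barriers -5 and +5. A small C++ sketch:

```cpp
#include <cstdio>
#include <random>

// Estimate the expected number of fair coin flips until one player is ahead
// by `lead` dollars, i.e. a symmetric +/-1 random walk hitting +lead or -lead.
int main() {
    const int lead = 5;
    const int trials = 200000;
    std::mt19937 rng(12345);
    std::bernoulli_distribution coin(0.5);

    long long total_flips = 0;
    for (int t = 0; t < trials; ++t) {
        int s = 0;       // Peter's current net winnings
        long long n = 0; // flips so far in this game
        while (s != lead && s != -lead) {
            s += coin(rng) ? 1 : -1;
            ++n;
        }
        total_flips += n;
    }
    std::printf("average flips until someone is ahead by %d: %.3f\n",
                lead, double(total_flips) / trials);
    return 0;
}
```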
https://nl.mathworks.com/help/5g/ug/nr-cell-performance-evaluation-with-mimo.html
# NR Cell Performance Evaluation with MIMO
This example models a 5G New Radio (NR) cell with multiple-input multiple-output (MIMO) antenna configuration and evaluates the network performance. You can customize the scheduling strategy to leverage the MIMO capabilities and analyze the performance. This example performs downlink (DL) and uplink (UL) channel measurements using multi-port channel state information reference signals (CSI-RS) and sounding reference signals (SRS), respectively. The gNB uses the measured channel characteristics to make MIMO scheduling decisions.
### Introduction
MIMO improves network performance by improving the cell throughput and reliability. The example performs layer mapping and precoding to utilize MIMO in the DL and UL directions.
This example models:
• Single-codeword DL spatial multiplexing to perform multi-layer transmission. Single-codeword limits the number of transmission layers to 4.
• Single-codeword UL spatial multiplexing. The 3GPP specification allows only single-codeword in UL direction which limits the number of transmission layers to 4.
• Precoding to map the transmission layers to antenna ports. The example assumes one-to-one mapping from antenna ports to physical antennas.
• DL channel quality measurement by UEs based on the multi-port CSI-RS received from the gNB. The same CSI-RS configuration applies to all the UEs.
• UL channel quality measurement by gNB based on the multi-port SRS received from the UEs. The example does not support UL rank estimation and provides the rank to be used for estimating UL precoding matrix as a configuration parameter.
• DL rank indicator (RI), precoding matrix indicator (PMI), and channel quality indicator (CQI) reporting by UEs. The example supports Type-1 single-panel codebook for PMI.
• Free space path loss (FSPL), additive white Gaussian noise (AWGN), and clustered delay line (CDL) propagation channel model.
Nodes send the control packets (buffer status report (BSR), DL assignment, UL grants, PDSCH feedback, and CSI report) out of band, without the need of resources for transmission and assured error-free reception.
### MIMO
The key aspects of MIMO include spatial multiplexing, precoding, channel measurement and reporting.
#### Spatial multiplexing
Spatial multiplexing utilizes MIMO to perform multi-layer transmission. The minimum of the number of transmit and receive antennas limits the number of layers (or maximum rank). The layer mapping process maps the modulated symbols of the codeword onto the different layers in a round-robin fashion: with L layers, the i-th modulated symbol goes to layer i mod L. For instance, this figure shows the mapping of a codeword onto four layers.
Furthermore, in the DL direction, the NR specification also allows two codewords and up to a maximum of 8 transmission layers. The example currently supports only a single codeword for both DL and UL.
#### Precoding
Precoding, which follows the layer mapping, maps the transmission layers to antenna ports. Precoding applies a precoding matrix to the transmission layers and outputs data streams to the antenna ports.
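To make the layer mapping and precoding steps above concrete, here is a small, toolbox-independent sketch (written in C++ rather than MATLAB, deliberately not using any 5G Toolbox API). A single codeword is distributed round-robin over the layers, and each vector of layer symbols is then multiplied by an n_ports-by-n_layers precoding matrix W to obtain the antenna-port symbols. The matrix W and the dimensions are purely illustrative, not a 3GPP codebook entry.

```cpp
#include <complex>
#include <cstdio>
#include <vector>

using cd = std::complex<double>;

// Round-robin layer mapping of a single codeword onto nu layers:
// modulated symbol i goes to layer (i mod nu).
std::vector<std::vector<cd>> layer_map(const std::vector<cd>& codeword, int nu) {
    std::vector<std::vector<cd>> layers(nu);
    for (std::size_t i = 0; i < codeword.size(); ++i)
        layers[i % nu].push_back(codeword[i]);
    return layers;
}

// Precoding: map one symbol from each of the nu layers to n_ports antenna ports
// by multiplying with an (n_ports x nu) precoding matrix W.
std::vector<cd> precode(const std::vector<std::vector<cd>>& W, const std::vector<cd>& x) {
    std::vector<cd> y(W.size(), cd(0.0, 0.0));
    for (std::size_t p = 0; p < W.size(); ++p)
        for (std::size_t l = 0; l < x.size(); ++l)
            y[p] += W[p][l] * x[l];
    return y;
}

int main() {
    std::vector<cd> codeword = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}; // 4 modulated symbols
    auto layers = layer_map(codeword, 2);                          // 2 layers, 2 symbols each
    // Illustrative 4-port x 2-layer precoder (not from a 3GPP codebook)
    std::vector<std::vector<cd>> W = {{{0.5, 0}, {0.5, 0}},
                                      {{0.5, 0}, {-0.5, 0}},
                                      {{0.5, 0}, {0.5, 0}},
                                      {{0.5, 0}, {-0.5, 0}}};
    auto ports = precode(W, {layers[0][0], layers[1][0]}); // first symbol instant across both layers
    std::printf("port 0 output: (%.2f, %.2f)\n", ports[0].real(), ports[0].imag());
    return 0;
}
```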
#### Channel measurement and reporting
It consists of DL channel measurement and reporting by the UEs, and UL channel measurement by the gNB.
#### DL channel measurement and reporting
CSI reporting is the process by which a UE, for DL transmissions, advises a suitable number of transmission layers (rank), PMI, and CQI values to the gNB. The UE estimates these values by performing channel measurements on its configured CSI-RS resources. For more details, see the 5G NR Downlink CSI Reporting example. The gNB scheduler uses this advice to decide the number of transmission layers, precoding matrix, and modulation and coding scheme (MCS) for PDSCHs.
#### UL channel measurement
The gNB uses SRS to measure UL channel characteristics in a way analogous to the CSI-RS based DL channel measurements. The UL channel measurements serve as an important input to the scheduler for deciding the number of transmission layers, the precoding matrix, and the MCS for PUSCHs.
### NR Protocol Stack
A node (gNB or UE) is a composition of NR stack layers. The helper classes hNRGNB.m and hNRUE.m create gNB and UE nodes, respectively, containing the radio link control (RLC), medium access control (MAC), and physical layer (PHY). For more details, see the NR Cell Performance Evaluation with Physical Layer Integration example.
### Scenario Configuration
Configure simulation parameters in the simParameters structure.
rng('default'); % Reset the random number generator
simParameters = []; % Clear the simParameters variable
simParameters.NumFramesSim = 10; % Simulation time in terms of number of 10 ms frames
simParameters.SchedulingType = 0; % Set the value to 0 (slot based scheduling) or 1 (symbol based scheduling)
Specify the number of UEs in the cell, assuming that UEs have sequential radio network temporary identifiers (RNTIs) from 1 to simParameters.NumUEs. If you change the number of UEs, ensure that the number of rows in simParameters.UEPosition is equal to the value of simParameters.NumUEs.
simParameters.NumUEs = 4;
% Assign position to the UEs assuming that the gNB is at (0, 0, 0). N-by-3
% matrix where 'N' is the number of UEs. Each row has (x, y, z) position of a
% UE (in meters)
simParameters.UEPosition = [300 0 0;
700 0 0;
1200 0 0;
3000 0 0];
% Validate the UE positions
validateattributes(simParameters.UEPosition, {'numeric'}, {'nonempty', 'real', 'nrows', simParameters.NumUEs, 'ncols', 3, 'finite'}, 'simParameters.UEPosition', 'UEPosition');
Specify the antenna counts at the gNB and UEs.
simParameters.GNBTxAnts = 16;
simParameters.GNBRxAnts = 8;
simParameters.UETxAnts = 4*ones(simParameters.NumUEs, 1);
simParameters.UERxAnts = 2*ones(simParameters.NumUEs, 1);
% Validate the number of transmitter and receiver antennas at UE
validateattributes(simParameters.UETxAnts, {'numeric'}, {'nonempty', 'integer', 'nrows', simParameters.NumUEs, 'ncols', 1, 'finite'}, 'simParameters.UETxAnts', 'UETxAnts')
validateattributes(simParameters.UERxAnts, {'numeric'}, {'nonempty', 'integer', 'nrows', simParameters.NumUEs, 'ncols', 1, 'finite'}, 'simParameters.UERxAnts', 'UERxAnts')
Set the channel bandwidth to 5 MHz and the subcarrier spacing (SCS) to 15 kHz as defined in 3GPP TS 38.104 Section 5.3.2.
simParameters.NumRBs = 25;
simParameters.SCS = 15; % kHz
simParameters.DLCarrierFreq = 2.646e9; % Hz
simParameters.ULCarrierFreq = 2.535e9; % Hz
% The UL and DL carriers are assumed to have symmetric channel
% bandwidth
simParameters.DLBandwidth = 5e6; % Hz
simParameters.ULBandwidth = 5e6; % Hz
Specify the SRS configuration for each UE. The example assumes full-bandwidth SRS and a transmission comb number of 4, so up to 4 UEs are frequency multiplexed in the same SRS symbol by giving them different comb offsets. When the number of UEs is more than 4, they are assigned different SRS slot offsets.
srsConfig = cell(1, simParameters.NumUEs);
combNumber = 4; % SRS comb number
for ueIdx = 1:simParameters.NumUEs
% Ensure non-overlapping SRS resources when there are more than 4 UEs by giving different offset
srsPeriod = [10 3+floor((ueIdx-1)/4)];
srsBandwidthMapping = nrSRSConfig.BandwidthConfigurationTable{:,2};
csrs = find(srsBandwidthMapping <= simParameters.NumRBs, 1, 'last') - 1;
% Set full bandwidth SRS
srsConfig{ueIdx} = nrSRSConfig('NumSRSPorts', 4, 'SymbolStart', 13, 'SRSPeriod', srsPeriod, 'KTC', combNumber, 'KBarTC', mod(ueIdx-1, combNumber), 'BSRS', 0, 'CSRS', csrs);
end
simParameters.SRSConfig = srsConfig;
Specify the CSI-RS configuration.
csirs = nrCSIRSConfig('NID', 1, 'NumRB', simParameters.NumRBs, 'RowNumber', 11, 'SubcarrierLocations', [1 3 5 7], 'SymbolLocations', 0, 'CSIRSPeriod', [5 2]);
simParameters.CSIRSConfig = {csirs};
Specify the CSI report configuration.
csiReportConfig.PanelDimensions = [8 1]; % [N1 N2] as per 3GPP TS 38.214 Table 5.2.2.2.1-2
csiReportConfig.CQIMode = 'Subband'; % 'Wideband' or 'Subband'
csiReportConfig.PMIMode = 'Subband'; % 'Wideband' or 'Subband'
csiReportConfig.SubbandSize = 4; % Refer TS 38.214 Table 5.2.1.4-2 for valid subband sizes
% Set codebook mode as 1 or 2. It is applicable only when the number of transmission layers is 1 or 2 and
% number of CSI-RS ports is greater than 2
csiReportConfig.CodebookMode = 1;
simParameters.CSIReportConfig = {csiReportConfig};
Set the UL rank to be used for precoding matrix and MCS calculation. The example does not support UL rank estimation. For each UE, set a number less than or equal to the minimum of UE's transmit antennas and gNB's receive antennas.
simParameters.ULRankIndicator = 2*ones(1, simParameters.NumUEs);
Specify the signal-to-interference-plus-noise ratio (SINR) to a CQI index mapping table for a block error rate (BLER) of 0.1. The lookup table corresponds to the CQI table as per 3GPP TS 38.214 Table 5.2.2.1-3.
simParameters.DownlinkSINR90pc = [-3.4600 1.5400 6.5400 11.0500 13.5400 16.0400 17.5400 20.0400 22.0400 24.4300 26.9300 27.4300 29.4300 32.4300 35.4300];
simParameters.UplinkSINR90pc = [-5.4600 -0.4600 4.5400 9.0500 11.5400 14.0400 15.5400 18.0400 20.0400 22.4300 24.9300 25.4300 27.4300 30.4300 33.4300];
Specify the transmit power and antenna gain.
simParameters.UETxPower = 23; % Tx power for all the UEs in dBm
simParameters.GNBTxPower = 34; % Tx power for gNB in dBm
simParameters.GNBRxGain = 11; % Rx gain for gNB in dBi
Specify the scheduling strategy and the maximum limit on the RBs allotted for PDSCH and PUSCH. The transmission limit applies only to new transmissions and not to the retransmissions.
simParameters.SchedulerStrategy = 'PF'; % Supported scheduling strategies: 'PF', 'RR', and 'BestCQI'
simParameters.RBAllocationLimitUL = 15; % For PUSCH
simParameters.RBAllocationLimitDL = 15; % For PDSCH
#### Logging and visualization configuration
The CQIVisualization and RBVisualization parameters control the display of the CQI visualization and the RB assignment visualization respectively. To enable these visualization plots, set these parameters to true.
simParameters.CQIVisualization = false;
simParameters.RBVisualization = false;
Set the enableTraces to true to log the traces. If the enableTraces is set to false, then CQIVisualization and RBVisualization are disabled automatically and traces are not logged in the simulation. To speed up the simulation, set the enableTraces to false.
enableTraces = true;
The example updates the metrics plots periodically. Set the number of updates during the simulation.
simParameters.NumMetricsSteps = 10;
Write the logs to MAT-files. The example uses these logs for post-simulation analysis and visualization.
parametersLogFile = 'simParameters'; % For logging the simulation parameters
simulationLogFile = 'simulationLogs'; % For logging the simulation traces
simulationMetricsFile = 'simulationMetrics'; % For logging the simulation metrics
#### Application traffic configuration
Set the periodic DL and UL application traffic pattern for UEs.
dlAppDataRate = 40e3*ones(simParameters.NumUEs, 1); % DL application data rate in kilo bits per second (kbps)
ulAppDataRate = 40e3*ones(simParameters.NumUEs, 1); % UL application data rate in kbps
% Validate the DL application data rate
validateattributes(dlAppDataRate, {'numeric'}, {'nonempty', 'vector', 'numel', simParameters.NumUEs, 'finite', '>', 0}, 'dlAppDataRate', 'dlAppDataRate');
% Validate the UL application data rate
validateattributes(ulAppDataRate, {'numeric'}, {'nonempty', 'vector', 'numel', simParameters.NumUEs, 'finite', '>', 0}, 'ulAppDataRate', 'ulAppDataRate');
### Derived Parameters
Compute the derived parameters based on the primary configuration parameters specified in the previous section and set some example-specific constants.
simParameters.DuplexMode = 0; % FDD (Value as 0) or TDD (Value as 1)
simParameters.NCellID = 1; % Physical cell ID
simParameters.Position = [0 0 0]; % Position of gNB in (x,y,z) coordinates
Configure the channel model
channelModelUL = cell(1, simParameters.NumUEs);
channelModelDL = cell(1, simParameters.NumUEs);
waveformInfo = nrOFDMInfo(simParameters.NumRBs, simParameters.SCS);
for ueIdx = 1:simParameters.NumUEs
% Configure the uplink channel model
channel = nrCDLChannel;
channel.DelayProfile = 'CDL-C';
channel.Seed = 73 + (ueIdx - 1);
channel.CarrierFrequency = simParameters.ULCarrierFreq;
channel.SampleRate = waveformInfo.SampleRate;
channelModelUL{ueIdx} = channel;
% Configure the downlink channel model
channel = nrCDLChannel;
channel.DelayProfile = 'CDL-C';
channel.Seed = 73 + (ueIdx - 1);
channel.CarrierFrequency = simParameters.DLCarrierFreq;
channel.SampleRate = waveformInfo.SampleRate;
channelModelDL{ueIdx} = channel;
end
Compute the slot duration for the selected SCS and the number of slots in a 10 ms frame.
slotDuration = 1/(simParameters.SCS/15); % In ms
numSlotsFrame = 10/slotDuration; % Number of slots in a 10 ms frame
numSlotsSim = simParameters.NumFramesSim * numSlotsFrame; % Number of slots in the simulation
Set the interval at which the example updates metrics visualization in terms of number of slots. Because this example uses a time granularity of one slot, the MetricsStepSize field must be an integer.
simParameters.MetricsStepSize = ceil(numSlotsSim / simParameters.NumMetricsSteps);
if mod(numSlotsSim, simParameters.NumMetricsSteps) ~= 0
% Update the NumMetricsSteps parameter if NumSlotsSim is not
% completely divisible by it
simParameters.NumMetricsSteps = floor(numSlotsSim / simParameters.MetricsStepSize);
end
Specify one logical channel for each UE, and set the logical channel configuration for all nodes (UEs and gNBs) in the example.
numLogicalChannels = 1;
simParameters.LCHConfig.LCID = 4;
Specify the RLC entity type in the range [0, 3]. The values 0, 1, 2, and 3 indicate RLC UM unidirectional DL entity, RLC UM unidirectional UL entity, RLC UM bidirectional entity, and RLC AM entity, respectively.
simParameters.RLCConfig.EntityType = 2;
Create RLC channel configuration structure.
rlcChannelConfigStruct.LCGID = 1; % Mapping between logical channel and logical channel group ID
rlcChannelConfigStruct.Priority = 1; % Priority of each logical channel
rlcChannelConfigStruct.PBR = 8; % Prioritized bitrate (PBR), in kilobytes per second, of each logical channel
rlcChannelConfigStruct.BSD = 10; % Bucket size duration (BSD), in ms, of each logical channel
rlcChannelConfigStruct.EntityType = simParameters.RLCConfig.EntityType;
rlcChannelConfigStruct.LogicalChannelID = simParameters.LCHConfig.LCID;
Set the mapping type as per the configured scheduling type.
if ~isfield(simParameters, 'SchedulingType') || simParameters.SchedulingType == 0 % If no scheduling type is specified or slot based scheduling is specified
simParameters.PUSCHMappingType = 'A';
simParameters.PDSCHMappingType = 'A';
else % Symbol based scheduling
simParameters.PUSCHMappingType = 'B';
simParameters.PDSCHMappingType = 'B';
end
### gNB and UEs Setup
Create the gNB and UE objects, initialize the channel quality information for UEs, and set up the logical channel at the gNB and UE. The helper classes hNRGNB.m and hNRUE.m create the gNB node and the UE node, respectively, each containing the RLC, MAC and PHY.
gNB = hNRGNB(simParameters); % Create gNB node
% Create scheduler
switch(simParameters.SchedulerStrategy)
case 'RR' % Round-robin scheduler
scheduler = hNRSchedulerRoundRobin(simParameters);
case 'PF' % Proportional fair scheduler
scheduler = hNRSchedulerProportionalFair(simParameters);
case 'BestCQI' % Best CQI scheduler
scheduler = hNRSchedulerBestCQI(simParameters);
end
simParameters.ChannelModel = channelModelUL;
gNB.PhyEntity = hNRGNBPhy(simParameters); % Create the PHY instance
configurePhy(gNB, simParameters); % Configure the PHY
setPhyInterface(gNB); % Set the interface to PHY
% Create the set of UE nodes
UEs = cell(simParameters.NumUEs, 1);
ueParam = simParameters;
for ueIdx=1:simParameters.NumUEs
ueParam.Position = simParameters.UEPosition(ueIdx, :); % Position of the UE
ueParam.UERxAnts = simParameters.UERxAnts(ueIdx);
ueParam.UETxAnts = simParameters.UETxAnts(ueIdx);
ueParam.SRSConfig = simParameters.SRSConfig{ueIdx};
ueParam.CSIReportConfig = simParameters.CSIReportConfig{1}; % Assuming same CSI Report configuration for all UEs
ueParam.ChannelModel = channelModelDL{ueIdx};
UEs{ueIdx} = hNRUE(ueParam, ueIdx);
UEs{ueIdx}.PhyEntity = hNRUEPhy(ueParam, ueIdx); % Create the PHY instance
configurePhy(UEs{ueIdx}, ueParam); % Configure the PHY
setPhyInterface(UEs{ueIdx}); % Set up the interface to PHY
% Set up logical channel at gNB for the UE
configureLogicalChannel(gNB, ueIdx, rlcChannelConfigStruct);
% Set up logical channel at UE
configureLogicalChannel(UEs{ueIdx}, ueIdx, rlcChannelConfigStruct);
% Set up application traffic
% Create an object for on-off network traffic pattern for the specified
% UE and add it to the gNB. This object generates the downlink data
% traffic on the gNB for the UE
dlApp = networkTrafficOnOff('GeneratePacket', true, ...
'OnTime', simParameters.NumFramesSim*10e-3, 'OffTime', 0, 'DataRate', dlAppDataRate(ueIdx));
% Create an object for on-off network traffic pattern and add it to the
% specified UE. This object generates the uplink data traffic on the UE
ulApp = networkTrafficOnOff('GeneratePacket', true, ...
'OnTime', simParameters.NumFramesSim*10e-3, 'OffTime', 0, 'DataRate', ulAppDataRate(ueIdx));
end
Set up the packet distribution mechanism.
simParameters.MaxReceivers = simParameters.NumUEs + 1; % Number of nodes
% Create packet distribution object
packetDistributionObj = hNRPacketDistribution(simParameters);
hNRSetUpPacketDistribution(simParameters, gNB, UEs, packetDistributionObj);
### Processing Loop
Run the simulation symbol by symbol to execute these operations.
• Run the gNB.
• Run the UEs.
• Log and visualize metrics for each layer.
• Advance the timer for the nodes and send a trigger to application and RLC layers every millisecond. The application and RLC layers execute their scheduled operations based on a 1 ms timer trigger.
Create objects to log and visualize MAC traces and PHY traces.
if enableTraces
% Create an object for MAC traces logging
simSchedulingLogger = hNRSchedulingLogger(simParameters);
% Create an object for PHY traces logging
simPhyLogger = hNRPhyLogger(simParameters);
% Create an object for CQI and RB grid visualization
if simParameters.CQIVisualization || simParameters.RBVisualization
gridVisualizer = hNRGridVisualizer(simParameters, 'MACLogger', simSchedulingLogger);
end
end
Create an object for MAC and PHY metrics visualization.
nodes = struct('UEs', {UEs}, 'GNB', gNB);
metricsVisualizer = hNRMetricsVisualizer(simParameters, 'Nodes', nodes, 'EnableSchedulerMetricsPlots', true, 'EnablePhyMetricsPlots', true);
Run the processing loop.
slotNum = 0;
numSymbolsSim = numSlotsSim * 14; % Simulation time in units of symbol duration (assuming normal cyclic prefix)
tickGranularity = 1;
% Execute all the symbols in the simulation
for symbolNum = 1 : tickGranularity : numSymbolsSim
if mod(symbolNum - 1, 14) == 0
slotNum = slotNum + 1;
end
% Run the gNB
run(gNB);
% Run the UEs
for ueIdx = 1:simParameters.NumUEs
run(UEs{ueIdx});
end
if enableTraces
% MAC logging
logCellSchedulingStats(simSchedulingLogger, symbolNum, gNB, UEs);
% PHY logging
logCellPhyStats(simPhyLogger, symbolNum, gNB, UEs);
end
% Visualization
% Check slot boundary
if symbolNum > 1 && ((simParameters.SchedulingType == 1 && mod(symbolNum, 14) == 0) || (simParameters.SchedulingType == 0 && mod(symbolNum-1, 14) == 0))
% If the update periodicity is reached, plot scheduler metrics and PHY metrics at slot boundary
if mod(slotNum, simParameters.MetricsStepSize) == 0
plotLiveMetrics(metricsVisualizer);
end
end
% Advance timer ticks for gNB and UEs
% (advanceTimer is assumed here to be the node helper method from the original example's helper files)
advanceTimer(gNB, tickGranularity);
for ueIdx = 1:simParameters.NumUEs
advanceTimer(UEs{ueIdx}, tickGranularity);
end
end
Get the simulation metrics and save it in a MAT-file. The simulation metrics are saved in a MAT-file with the file name as simulationMetricsFile.
metrics = getMetrics(metricsVisualizer);
save(simulationMetricsFile, 'metrics');
At the end of the simulation, the achieved values of the system performance indicators are compared to their theoretical peak values (assuming zero overhead). The performance indicators displayed are the achieved data rate (UL and DL), the achieved spectral efficiency (UL and DL), and the BLER observed for the UEs (DL and UL). The peak values are calculated as per 3GPP TR 37.910. The number of layers used for the peak DL and UL data rate calculation is taken as the average, over the UEs, of the maximum number of layers possible for each UE in the respective direction. The maximum number of DL layers possible for a UE is the minimum of its Rx antennas and the gNB's Tx antennas. Similarly, the maximum number of UL layers possible for a UE is the minimum of its Tx antennas and the gNB's Rx antennas.
displayPerformanceIndicators(metricsVisualizer);
Peak UL Throughput: 124.42 Mbps. Achieved Cell UL Throughput: 26.59 Mbps
Achieved UL Throughput for each UE: [9.27 10 6.89 0.43]
Achieved Cell UL Goodput: 26.59 Mbps
Achieved UL Goodput for each UE: [9.27 10 6.89 0.43]
Peak UL spectral efficiency: 24.88 bits/s/Hz. Achieved UL spectral efficiency for cell: 5.32 bits/s/Hz
Peak DL Throughput: 62.21 Mbps. Achieved Cell DL Throughput: 29.72 Mbps
Achieved DL Throughput for each UE: [9.58 5.3 9.57 5.27]
Achieved Cell DL Goodput: 29.26 Mbps
Achieved DL Goodput for each UE: [9.58 4.85 9.57 5.27]
Peak DL spectral efficiency: 12.44 bits/s/Hz. Achieved DL spectral efficiency for cell: 5.85 bits/s/Hz
Block error rate for each UE in the uplink direction: [0 0 0 0]
Block error rate for each UE in the downlink direction: [0 0.038 0 0]
### Simulation Visualization
The five types of run-time visualization shown are:
• Display of CQI values for UEs over the PUSCH or PDSCH bandwidth: For details, see the 'Channel Quality Visualization' figure description in NR PUSCH FDD Scheduling example.
• Display of resource grid assignment to UEs: The 2D time-frequency grid shows the resource allocation to the UEs. You can enable this visualization in the 'Scenario Configuration' section. For details, see the 'Resource Grid Allocation' figure description in NR PUSCH FDD Scheduling example.
• Display of UL scheduling metrics plots: For details, see 'Uplink Scheduler Performance Metrics' figure description in NR FDD Scheduling Performance Evaluation example.
• Display of DL scheduling metrics plots: For details, see 'Downlink Scheduler Performance Metrics ' figure description in NR FDD Scheduling Performance Evaluation example.
• Display of DL and UL Block Error Rates: The two sub-plots displayed in 'Block Error Rate (BLER) Visualization' show the block error rate (for each UE) observed in the uplink and downlink directions as the simulation progresses. The plot is updated every metricsStepSize slots.
### Simulation Logs
The parameters used for simulation and the simulation logs are saved in MAT-files for post-simulation analysis and visualization. The simulation parameters are saved in a MAT-file with the file name as the value of configuration parameter parametersLogFile. The per time step logs, scheduling assignment logs, and BLER logs are saved in the MAT-file simulationLogFile. After the simulation, open the file to load DLTimeStepLogs, ULTimeStepLogs, SchedulingAssignmentLogs, BLERLogs in the workspace.
Time step logs: Both the DL and UL time step logs follow the same format. For details of log format, see the 'Simulation Logs' section of NR PUSCH FDD Scheduling example.
Scheduling assignment logs: Information of all the scheduling assignments and related information is logged in this file. For details of log format, see the 'Simulation Logs' section of NR FDD Scheduling Performance Evaluation example.
BLER logs: Block error information observed in the DL and UL direction. For details of log format, see the 'Simulation Logs' section in the NR Cell Performance Evaluation with Physical Layer Integration example.
if enableTraces
simulationLogs = cell(1,1);
if simParameters.DuplexMode == 0 % FDD
logInfo = struct('DLTimeStepLogs', [], 'ULTimeStepLogs', [], 'SchedulingAssignmentLogs', [], 'BLERLogs', [], 'AvgBLERLogs', []);
else % TDD
logInfo = struct('TimeStepLogs', [], 'SchedulingAssignmentLogs', [], 'BLERLogs', [], 'AvgBLERLogs', []);
end
logInfo.SchedulingAssignmentLogs = getGrantLogs(simSchedulingLogger); % Scheduling assignments log
save(parametersLogFile, 'simParameters'); % Save simulation parameters in a MAT-file
save(simulationLogFile, 'simulationLogs'); % Save simulation logs in a MAT-file
end
You can run the script NRPostSimVisualization to get a post-simulation visualization of logs. For more details about the options to run this script, refer to the NR FDD Scheduling Performance Evaluation example.
### Further Exploration
You can use this example to further explore custom scheduling.
#### Custom scheduling
You can modify the existing scheduling strategy to implement a custom one. Plug In Custom Scheduler in System-Level Simulation example explains how to create a custom scheduling strategy and plug it into system-level simulation. MIMO configuration appends more fields to the scheduling assignment structure. Populate the fields of scheduling assignments with values for precoding matrix, number of layers as per your custom scheduling strategy. For more information about the information fields of a scheduling assignment, see the description of the scheduleDLResourcesSlot and scheduleULResourcesSlot functions in the hNRScheduler.m helper file.
The DL scheduler in the example selects the rank and precoding matrix which a UE reports in the CSI. You can also customize this behavior to select any rank and precoding matrix by overriding the function selectDLRankAndPrecodingMatrix in your custom scheduler. For more details, see the description of the selectDLRankAndPrecodingMatrix function in the hNRScheduler.m file. You can do similar customization for UL direction by overriding selectULRankAndPrecodingMatrix function in the hNRScheduler.m.
#### Importing ray-traces
You can modify this example to customize the CDL channel model parameters by using the output of a ray tracing analysis. Refer to the CellPerformanceWithRayTrace.m script, which demonstrates this workflow. The script follows the CDL Channel Model Customization with Ray Tracing example to configure the 'Custom' delay profile of nrCDLChannel object.
## References
[1] 3GPP TS 38.104. “NR; Base Station (BS) radio transmission and reception.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[2] 3GPP TS 38.214. “NR; Physical layer procedures for data.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[3] 3GPP TS 38.321. “NR; Medium Access Control (MAC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[4] 3GPP TS 38.322. “NR; Radio Link Control (RLC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[5] 3GPP TS 38.323. “NR; Packet Data Convergence Protocol (PDCP) specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[6] 3GPP TS 38.331. “NR; Radio Resource Control (RRC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[7] 3GPP TR 37.910. “Study on self evaluation towards IMT-2020 submission.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
|
2022-08-16 01:22:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.384244829416275, "perplexity": 9256.762673769861}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00570.warc.gz"}
|
https://parasol-lab.gitlab.io/stapl-home/docs/sgl/creation/io/sharded.html
|
# Sharded Formats
As the single-file formats are inherently sequential, we also provide ways to read graphs that have been sharded across multiple files.
## Flat Sharded
The format assumes that all of the shards are in the same directory. This may not be the best method if you have thousands of shards, as that may stress your machine's parallel file system.
### Format
The file structure for a graph named "foo" would look like:
.
|-- foo.0
|-- foo.1
`-- foo.metadata
This particular graph is sharded across two files, which take the name "foo.XXX". The metadata file contains information about the graph, which is the number of vertices, the number of edges and the number of shards:
$ cat foo.metadata
64 112 2
Use the following function to read this kind of graph:
auto graph = sharded_graph_reader<graph_type>(metadata_filename, read_adj_list_line());
If the format for each line is an edge list, you can use read_edge_list_line instead of the adjacency list line reader.
## Nested Sharder
If the number of shards is large (for running on tens or hundreds of thousands of cores), it may be better to use the nested sharded format, which places files into a hierarchy of directories such that no single folder contains a large amount of shards.
### Format
#vertices #edges #shards
dim_0
dim_1
...
dim_k
The #shards value represents the total number of files that the file was split into. Each dimension represents how many files or directories are in a given level of the file system hierarchy.
All of the lines from the original graph will be split into new files, each following the path format i_0/i_1/.../i_k where each i represents a single number. These numbers shall not exceed their corresponding dimension limit from the metadata file.
For example, the files could be organized as follows:
├── 0
│ ├── 0
│ ├── 1
...
│ ├── 10
├── 1
...
├── 10
├── foo.md
Each leaf of this tree will be a sharded file that contains a subset of the lines of the original file. In this example, 0/10 is a file that contains a part of the graph. The foo.md file is the metadata file.
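For a layout like the one above (11 top-level directories, each hypothetically containing 11 shard files, for 121 shards in total), and reusing the vertex and edge counts from the flat example purely for illustration, foo.md might read:
64 112 121
11
11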
auto graph = nested_sharded_graph_reader<graph_type>(metadata_filename, read_adj_list_line());
If the format for each line is an edge list, you can use read_edge_list_line instead of the adjacency list line reader.
|
2019-03-21 07:42:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3472900688648224, "perplexity": 1831.7358828668366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202506.45/warc/CC-MAIN-20190321072128-20190321094128-00293.warc.gz"}
|
http://mathhelpforum.com/algebra/283004-solution-satisfies-inequalities.html
|
# Thread: solution that satisfies inequalities
1. ## solution that satisfies inequalities
I'm interested in the condition for existence of a solution $\displaystyle (x_1, x_2)$ given
$\displaystyle |a-x_1|\leq C_1$
$\displaystyle |x_1-x_2|\leq C_2$
$\displaystyle |b-x_2|\leq C_3$
$\displaystyle x_1,x_2\leq C_4$
This is my conclusion. Let
$\displaystyle I_1 = [a-C_1, a+C_1]$
$\displaystyle I_2 = [b-C_3, b+C_3]$
Now, $\displaystyle x_2 \in [x_1-C_2, x_1+C_2]$ and $\displaystyle x_1 \in [x_2-C_2,x_2+C_2]$
So, if
$\displaystyle I_3 = [a-C_1-C_2, a+C_1+C_2]$
$\displaystyle I_4 = [b-C_3-C_2, b+C_3+C_2]$
Then a solution exists if $\displaystyle I_A = I_1 \cap I_3 \cap [0, C_4] \neq \emptyset$ and $\displaystyle I_B = I_2 \cap I_4 \cap [0, C_4] \neq \emptyset$ and $\displaystyle x_1 \in I_A, x_2 \in I_B$
Is this correct? Does anyone see a problem? Thanks
2. ## Re: solution that satisfies inequalities
I'm sorry for the rushed post. I meant to write that I'm looking for a pair of numbers $\displaystyle (x_1,x_2)$ that satisfies the first four inequalities listed in original post.
If such a pair of numbers exists then I expect $\displaystyle x_1$ to be one of possibly many values in some interval $\displaystyle I_A$. Similarly, $\displaystyle x_2$ could be one of many values in some interval $\displaystyle I_B$ depending on the values of the constants $\displaystyle a, b, C_1, ..., C_4$
One mistake I made in my search for those intervals was mixing up $\displaystyle I_3$ and $\displaystyle I_4$.
It seems that $\displaystyle x_1 \in [a-C_1,a+C_1] = I_1$ and $\displaystyle x_1 \in [b-C_3-C_2, b+C_3+C_2] = I_3$ and also $\displaystyle x_2 \in [b-C_3, b+C_3] = I_2$ and $\displaystyle x_2 \in [a-C_1-C_2,a+C_1+C_2] = I_4$ and so the intervals I'm looking for are just the intersections of these but I'm not totally sure
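One quick way to sanity-check this is to compute the candidate intervals numerically for some made-up constants. A minimal Python sketch (the constants are arbitrary, chosen only for illustration):

def intersect(*intervals):
    # Intersect closed intervals given as (lo, hi) pairs; None means empty.
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

a, b = 2.0, 5.0                      # made-up values
C1, C2, C3, C4 = 1.0, 2.0, 1.0, 6.0  # made-up values

I_A = intersect((a - C1, a + C1), (b - C3 - C2, b + C3 + C2), (0, C4))  # candidates for x_1
I_B = intersect((b - C3, b + C3), (a - C1 - C2, a + C1 + C2), (0, C4))  # candidates for x_2
print(I_A, I_B)  # (2.0, 3.0) (4.0, 5.0) for these constants

Even when both intervals are non-empty, one still has to pick $\displaystyle x_1 \in I_A$ and $\displaystyle x_2 \in I_B$ with $\displaystyle |x_1 - x_2| \leq C_2$, which is exactly the point the question is asking about.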
|
2019-10-15 17:14:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935418426990509, "perplexity": 231.9406737696724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660067.26/warc/CC-MAIN-20191015155056-20191015182556-00134.warc.gz"}
|
https://www.futurelearn.com/courses/python-in-hpc/0/steps/65091
|
# Hands-on: Performance analysis of heat equation solver
It is time for our first hands-on exercise. In the subsequent weeks there will be many more of them.
In this exercise you can familiarize yourself with cProfile.
Start your virtual machine, log in, and open the Terminal. The code for this exercise is located under performance/cprofile in the git-repository you cloned:
~/hpc-python$ cd performance/cprofile/
The file heat_main.py contains a (very inefficient) implementation of the two-dimensional heat equation. Use cProfile to investigate where the time is spent in the program. Note that the execution time can be between 40 and 60 s depending on your hardware. (You can also see the results of the simulation in the heat_nnn.png output files.)
What is the most time-consuming part on your system? How long did it take? Please comment! During the course you will be able to bring the execution time down to under one second.
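If you have not used cProfile before, one possible workflow (a sketch only; the output file name heat.prof is an arbitrary choice) is to write the profile to a file and then inspect the most expensive calls with pstats:

$ python -m cProfile -o heat.prof heat_main.py
$ python -c "import pstats; pstats.Stats('heat.prof').sort_stats('cumulative').print_stats(10)"

The second command prints the ten entries with the largest cumulative time, which usually points straight at the hot spot.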
|
2020-09-18 15:59:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38485583662986755, "perplexity": 1342.0263621877245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00478.warc.gz"}
|
http://physics.stackexchange.com/questions/52724/the-definition-of-f-nl-and-transfer-function
|
# The definition of $f_{NL}$ and transfer function
To me there seem to be quite a few different definitions of $f_{NL}$ in cosmology, and I would like to know if or how they are equivalent. Let me cite at least 3 such:
• One can see the equation 6.71 on page 100 of the book "The Primordial Density Perturbation" by Liddle and Lyth. The closest one can make 6.71 look like the following two is through equations 25.21 and 25.22 - but it still doesn't really become equal to the following.
• One can see equation 2 (page 3) of this paper. In this equation I am hoping that the quantity $\Phi(x)$ called the "Bardeen curvature perturbation" is really what is called $\xi$ in the Liddle and Lyth book or in Dodelson's.
• One can see the definition at the bottom of page 1 in this very famous paper. Here curiously though the equation looks exactly the same as in the paper above, the quantity $\Phi(x)$ is called the "gravitational potential in the matter era"
I wonder if one can hope to somewhat translate between the last two definitions by using the relation $\xi = \frac{2}{3}\Phi \vert_{post\text{ }inflation}$ - but even then the last two definitions are not really "equal".
Further this "discrepancy" between the definitions of $f_{NL}$ seem to be kind of related to the same problem with the definition of the transfer function" ($T(k)$),
• In the first paper cited above, one sees a definition of $T(k)$ in equation 9 and 10.
• But the closest one can get to the above is if one eliminates $\Phi(\vec{k},a)$ between equation 7.7 and 7.5 in Dodelson and then replaces $\Phi_p$ as $\Phi \vert_{post\text{ }inflation}$ and converts that into $\xi$ through $\xi = \frac{2}{3}\Phi \vert_{post\text{ }inflation}$ (..and hope that $\Phi$ in equation 2 and 9 of of the paper is actually $\xi$..)
But still the two definitions of transfer function don't match!
• And the above two definitions are anyway still very far from the definition of $T(k)$ as in equation 8.52 (page 125) of the above cited book by Liddle and Lyth.
It would be very helpful if someone can help reconcile the above!
|
2014-04-21 15:28:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8629039525985718, "perplexity": 238.2525398847514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/discrete-math/66961-dedekind-cuts.html
|
1. ## Cuts
I assume you are all familiar with the idea of a "cut": A non empty subset A of $\mathbb{Q}$ with the following properties:
(1) A is bounded above
(2) A has no maximum (least upper bound)
(3) A is closed downwards: $x \in A$ and $y \leq x \Rightarrow y \in A$
I'm not sure with the following things:
(a) Prove that every cut is "dense" (Jason: just comes out of the closed downwards property?)
(b) Prove that the union of two cuts is always a cut.
(c) Let $\{ A_i : i \in I \}$ be any collection of cuts. Prove that their union
$U = \bigcup_{i \in I} A_i = \{x : x \in A_i \text{ for some } i \in I\}$
is either a cut or else is equal to $\mathbb{Q}$.
(d) [Define what it means for a linearly ordered set to be "complete". (I know this, if every subset of X which is bounded above has the least upper bound, X is complete) ]
Recall that the set $\mathbb{R}$ of real numbers is defined as the set of all cuts. Prove that $\mathbb{R}$ is complete.
(Jason: I know I have to use the previous part some how. Every subset of $\mathbb{R}$ is a set of cuts and these are all bounded above, so is the proof just that these cuts have the least upper bound in $\mathbb{R}$? And if so, how are they cuts?!)
(e) For a rational number r with the following definition
$\overline{r} = \{x \in \mathbb{Q} : x < r\}$
I need to prove that $\overline{r}$ is a cut.
(f) Let $\mathbb{Q}^+$ denote the set of all positive rational numbers. Describe the intersection:
$\bigcap_{r \in \mathbb{Q}^+} \overline{r}$
is it a cut? (Jason: I thought this might be something similar to the empty set? In which case it's not a cut.)
(g) Prove that every set $A \subseteq \mathbb{R}$ that has a lower bound, also has a greatest lower bound. (Jason: here I think because $\mathbb{R}$ is complete and then take the negative of all cuts?)
-----------------------
Help on any of this is much appreciated.
2. Originally Posted by Jason Bourne
I assume you are all familiar with the idea of a "cut": A non empty subset A of $\mathbb{Q}$ with the following properties:
(1) A is bounded above
(2) A has no maximum (least upper bound)
(3) A is closed downwards: $x \in A$ and $y \leq x \Rightarrow y \in A$
I'm not sure with the following things:
(a) Prove that every cut is "dense" (Jason: just comes out of the closed downwards property?)
What do you mean by "dense" here? I am familiar with the property of being dense in some set but what set do you intend here? No the set of real numbers or rational numbers - that's not true.
(b) Prove that the union of two cuts is always a cut.
(c) Let $\{ A_i : i \in I \}$ be any collection of cuts. Prove that their union
$U = \bigcup_{i \in I} A_i = \{x : x \in A_i \text{ for some } i \in I\}$
is either a cut or else is equal to $\mathbb{Q}$.
It might help to prove first: If A and B are cuts then one and only one is true: A= B, A is a proper subset of B, or B is a proper subset of A.
(d) [Define what it means for a linearly ordered set to be "complete". (I know this: if every subset of X which is bounded above has a least upper bound, X is complete.) ]
Recall that the set $\mathbb{R}$ of real numbers is defined as the set of all cuts. Prove that $\mathbb{R}$ is complete.
(Jason: I know I have to use the previous part somehow. Every subset of $\mathbb{R}$ is a set of cuts and these are all bounded above, so is the proof just that these cuts have the least upper bound in $\mathbb{R}$? And if so, how are they cuts?!)
If A is such a set of cuts, then, as above, the union of all cuts in A is itself a cut. (You need the upper bound on A to show this union is not the set of all rational numbers.) Show that that union is the least upper bound.
(e) For a rational number r with the following definition
$\overline{r} = \{x \in \mathbb{Q} : x < r\}$
I need to prove that $\overline{r}$ is a cut.
Well, that's straight forward. Just show that the three properties you listed originally are true for this set.
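(For instance, the checks amount to something like: $\overline{r}$ is bounded above by $r$ itself; it has no maximum, since for any $x < r$ the rational $\frac{x+r}{2}$ satisfies $x < \frac{x+r}{2} < r$; and it is closed downwards, since $y \leq x < r$ gives $y < r$.)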
(f) Let $\mathbb{Q}^+$ denote the set of all positive rational numbers. Describe the intersection:
$\bigcap_{r \in \mathbb{Q}^+} \overline{r}$
is it a cut? (Jason: I thought this might be something similar to the empty set? In which case it's not a cut.)
No, it's not the empty set. Because r > 0 and $\overline{r}$ contains every rational number less than 0, the intersection contains every rational number less than 0.
(g) Prove that every set $A \subseteq \mathbb{R}$ that has a lower bound, also has a greatest lower bound. (Jason: here I think because $\mathbb{R}$ is complete and then take the negative of all cuts?)
Yes. If a is a lower bound for set A then -a is an upper bound for {-x| x in A}.
-----------------------
Help on any of this is much appreciated.
3. Firstly, thank you very much for taking time to reply.
What do you mean by "dense" here? I am familiar with the property of being dense in some set but what set do you intend here? No the set of real numbers or rational numbers - that's not true.
A totally ordered set X is called dense if, for all $x,y \in X$ with $x < y$ there exists $z \in X$ such that $x < z < y$. From this definition, the sets $\mathbb{R}$ and $\mathbb{Q}$ would be dense. So does this mean that every cut is dense because every cut is closed downwards and every cut is composed of rationals (which are dense)?
It might help to prove first: If A and B are cuts then one and only one is true: A= B, A is a proper subset of B, or B is a proper subset of A.
Thanks, so I guess because every set is either a proper subset of one or the other, it follows immediately that the union of the two sets must satisfy the conditions of being a cut.
I'm not sure how $U = \bigcup_{i \in I} A_i = \{x : x \in A_i \text{ for some } i \in I\}$ could be equal to $\mathbb{Q}$. If the set $\{ A_i : i \in I \}$ is NOT bounded above, then $U=\mathbb{Q}$? So the union of a finite number of cuts is a cut, but the union of an infinite number of cuts is in fact the set of rationals?
But here I would have to prove that the set $U$ satisfies the three conditions to be a cut?
Show that that union is the least upper bound.
I think I understand what you are saying (this stuff does my head in ). The union must be the least upper bound for the set of cuts $\{ A_i : i \in I \}$, because otherwise you would have $\mathbb{Q}$. $\{ A_i : i \in I \} \subseteq \mathbb{R}$ and so $\mathbb{R}$ is complete.
How would I prove that $\overline{r}$ has no maximum? Because the set of rationals is dense?
|
2017-03-30 00:59:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 49, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785674571990967, "perplexity": 188.10884291447226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191444.45/warc/CC-MAIN-20170322212951-00393-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://opticsgirl.com/index-of-refraction-and-sellmeiers-equation/
|
# Index of Refraction and Sellmeier’s Equation
Now that we know how the classical dipole behaves, we can solve the equation of motion for the macroscopic polarization density, given as:
$$\frac{d^2P}{dt^2}+(\gamma + \frac{2}{T_2′})\frac{dP}{dt}+\omega_0^2P = \frac{Ne^2}{m}E$$
which we found using our heuristic model of the polarization.
|
2022-01-25 01:40:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.47827860713005066, "perplexity": 569.5190061792193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00106.warc.gz"}
|
https://www.physicsforums.com/threads/calculate-the-rmsd-for-the-odd-solutions.179150/
|
# Calculate the RMSD for the odd solutions
1. Aug 2, 2007
### ehrenfest
Calculate the RMSD for the odd solutions of the infinite symmetric well problem.
The odd solutions have the form:
$$u_n(x) = 1/\sqrt{a} \cdot sin(n \pi x/a)$$
So, I should multiply u_n by x, then take the integral over the bottom of the well to get <x>. Then I should multiply u_n by x^2, take the integral over the bottom of the well to get <x^2>. From this I can get delta x.
Is all that correct?
2. Aug 2, 2007
### Gokul43201
Staff Emeritus
How is the expectation value of an observable defined?
3. Aug 2, 2007
### ehrenfest
I should take the integral over x times the modulus squared of u_n to get <x>. That better.
4. Aug 3, 2007
### Gokul43201
Staff Emeritus
Better, but it doesn't look like your wavefunctions have been normalized.
5. Aug 3, 2007
### ehrenfest
They have been. Remember, it represents a probability amplitude, not a probability density, and your region of integration should be -a to a.
6. Aug 3, 2007
### Gokul43201
Staff Emeritus
In that case, the wavefunction looks wrong to me.
7. Aug 4, 2007
### ehrenfest
It's my book that does it oddly. Instead of breaking it up into cases of n = 0,2,4,.. and n = 1,3,5,... they used different arguments for sine and cosine so that they can use n = 1,2,3,... for both cases.
8. Aug 4, 2007
### Gokul43201
Staff Emeritus
I don't think that's particularly unusual.
If your well width is 2a I'd imagine your wavefunctions to be $\sqrt{1/a}~cos(n \pi x/2a)$, and if the well goes from 0 to a then you'd have $\phi_n(x) = \sqrt{2/a}~sin(n \pi x/a)$. The wavefunction you've written is neither of these.
Last edited: Aug 4, 2007
9. Aug 4, 2007
### ehrenfest
It goes from -a to a. The even solutions are of the form B*cos(k*x), where k = sqrt(2mE/hbar^2). The boundary conditions give us that cos(k*a) = 0, so set k_n*a = (n - 1/2)pi to get k_n = (n-1/2)pi/a. Plug that back into cosine and normalize.
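A minimal sketch of the resulting integrals for the odd states on (-a, a), assuming the normalized form quoted above (worth re-checking the algebra yourself):
$$\langle x \rangle = \frac{1}{a}\int_{-a}^{a} x \sin^2\left(\frac{n \pi x}{a}\right) dx = 0 \quad \text{(odd integrand)}$$
$$\langle x^2 \rangle = \frac{1}{a}\int_{-a}^{a} x^2 \sin^2\left(\frac{n \pi x}{a}\right) dx = \frac{a^2}{3} - \frac{a^2}{2 n^2 \pi^2}$$
so that $$\Delta x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2} = a \sqrt{\frac{1}{3} - \frac{1}{2 n^2 \pi^2}}$$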
|
2017-02-20 20:30:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8389256000518799, "perplexity": 2303.928746786114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170609.0/warc/CC-MAIN-20170219104610-00530-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0695.14003
|
# zbMATH — the first resource for mathematics
Connexions et classes caractéristiques de Beilinson. (Connections and Beilinson characteristic classes). (French) Zbl 0695.14003
Algebraic $$K$$-theory and algebraic number theory, Proc. Semin., Honolulu/Hawaii 1987, Contemp. Math. 83, 349-376 (1989).
[For the entire collection see Zbl 0655.00010.]
Denote by $$\Lambda$$ a subring of $${\mathbb{R}}$$ such that $$\Lambda \otimes_{{\mathbb{Z}}}{\mathbb{Q}}$$ is a field and write $$\Lambda$$ (i) for the group $$(2\pi i)^ i\Lambda \subset {\mathbb{C}}$$. Let $$X=(X_ n)$$ be a smooth pointed (strict) simplicial scheme of finite type (over $${\mathbb{C}})$$ with fat geometric realization $$| X|$$. Also, let $$\bar X$$ be a compactification of X such that $$Y=\bar X\setminus X$$ is a simplicial normal crossings divisor. Then one can associate to X (or to the pair $$(X,\bar X))$$ a complex $$A^{\bullet}(X)$$ of complex differential forms with filtration $$F^ iA^{\bullet}(X)$$, a subcomplex $$C_ m^{\bullet}(X)$$, $$m\in {\mathbb{N}}$$, and the complexes $$S^{\bullet}(X,\Lambda (i))$$, respectively $$S^{\bullet}(X)$$, of differentiable singular cochains with coefficients in $$\Lambda$$ (i), respectively $${\mathbb{C}}$$. Using on the one hand the $$F^ iA^{\bullet}(X)$$, and on the other hand the $$C_ m^{\bullet}(X)$$, one obtains two explicit morphisms $$\Phi$$, resp. $$\Phi_ m$$, with common source $$A^{\bullet}(X)\oplus S^{\bullet}(X,\Lambda (i))$$ and targets $$(A^{\bullet}(X)/F^ iA^{\bullet}(X))\oplus S^{\bullet}(X)$$, resp. $$(A^{\bullet}(X)/C_ m^{\bullet}(X))\oplus S^{\bullet}(X)$$. The cohomology spaces $$H^ k$$ of the corresponding cone complexes (twisted by [-1]) are mutually isomorphic, and besides, they are isomorphic with the usual Deligne-Beilinson cohomology $$H^ k_{{\mathcal D}}(X,\Lambda (i))$$ if $$m+k=2i$$. The data of $$A^{\bullet}(X)$$ is equivalent to the one of the simplicial de Rham complex $$A^{\bullet}(| X|)$$ on $$| X|$$, so a (pointed) differentiable map $$f:\quad | X| \to BU=\lim_{a} \lim_{b}({\mathfrak G}_{a,b}),$$ where $${\mathfrak G}_{a,b}$$ is the complex Graßmannian of a-dimensional subspaces of $${\mathbb{C}}^{a+b}$$, gives rise to elements $$ch_ i(f)\in A^{2i}(X)$$ induced by the Chern classes of the universal bundle with unitary connection on the $${\mathfrak G}_{a,b}$$. The group $${\mathcal K}_ 0(X)$$ is defined as the abelian group generated by pairs (f,u) with f: $$| X| \to BU$$ a (pointed) differentiable map and $$u=(u_ i)$$ is a family of elements $$u_ i\in A^{2i-1}(X)/(F^ iA^{2i-1}(X)\oplus dA^{2i-2}(A))$$ such that $$du_ i\equiv ch_ i(f)\quad mod\quad F^ iA^{2i}(X),$$ subject to two rather evident relations assuring homotopy invariance and giving the group structure. For a smooth projective variety X, $${\mathcal K}_ 0(X)$$ is just Karoubi’s multiplicative K-group consisting of triples (E,D,$$\omega)$$, where E is a complex differentiable vector bundle on X with differentiable connection D and $$\omega =(\omega_ i)$$, $$\omega_ i\in A^{2i-1}/C_ 0^{2i-1}(X)\oplus dA^{2i-2}$$, such that $$d\omega_ i\equiv ch_ i(D)\quad mod\quad C_ 0^{2i}(X),$$ subject to two relations of the kind mentioned above. Also, one may define higher $${\mathcal K}_ m(X)$$ by $${\mathcal K}_ m(X)={\mathcal K}_ 0(X\times S^ m)/{\mathcal K}_ 0(X)\oplus {\mathcal K}_ 0(S^ m)$$, $$m\geq 1$$, where $$S^ m$$ is the finite simplicial set obtained by triangulating the m-sphere $$\Sigma^ m$$. One has bilinear, associative and anticommutative products $${\mathcal K}_ m(X)\times {\mathcal K}_ n(X)\to {\mathcal K}_{m+n}(X)$$. The $${\mathcal K}_ m(X)$$, $$m\geq 1$$, can be interpreted as the homotopy groups $$\pi_ m$$ of a space that can be explicitly described. There is an Atiyah-Hirzebruch spectral sequence with $$E_ 2$$-terms equal to Deligne-Beilinson cohomology and abutting to the higher $${\mathcal K}'s$$ (for suitable indices). 
Using results of V. Schechtman on the representability of algebraic K-groups of schemes, one obtains canonical natural morphisms $$\rho_ m: K_ m(X)\to {\mathcal K}_ m(X)$$ from algebraic to multiplicative K-theory.
The main results can now be formulated as follows: One can construct explicitly natural maps $$c_ i:\quad {\mathcal K}_ 0(X)\to H_{{\mathcal D}}^{2i}(X,\Lambda (i)),$$ such that their composition with $$H_{{\mathcal D}}^{2i}(X,\Lambda (i))\to H^{2i}(X,\Lambda (i))$$ is equal to the composition of $${\mathcal K}_ 0(X)\to K_ c^{top}(| X|)$$ with the i-th topological Chern class of X. For the higher $${\mathcal K}'s$$ one can construct $$c_{i,k}:\quad {\mathcal K}_ m(X)\to H^ k_{{\mathcal D}}(X,\Lambda (i)),$$ $$k+m=2i$$, such that for a smooth scheme of finite type the composed morphism $$c_{i,k}\circ \rho_ m:\quad K_ m(X)\to H^ k_{{\mathcal D}}(X,\Lambda (i))$$ coincides with Beilinson’s Chern class map $$c^ B_{i,k}:\quad K_ m(X)\to H^ k_{{\mathcal D}}(X,\Lambda (i)).$$ For a smooth projective variety X with flat algebraic vector bundle E the secondary Chern classes of E (Chern, Cheeger, Simons) can be compared with $$c^ B_ i: K_ 0(X)\to H_{{\mathcal D}}^{2i}(X,\Lambda (i))$$.
Reviewer: W.Hulsbergen
##### MSC:
14C35 Applications of methods of algebraic $$K$$-theory in algebraic geometry 18F25 Algebraic $$K$$-theory and $$L$$-theory (category-theoretic aspects) 55N15 Topological $$K$$-theory 14F40 de Rham cohomology and algebraic geometry 53C05 Connections, general theory
|
2021-04-14 16:29:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936161994934082, "perplexity": 332.3893769384078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00191.warc.gz"}
|
https://codereview.stackexchange.com/questions/88487/accessing-nested-data-properties-in-handlebars
|
# Accessing nested data properties in Handlebars
I am using HandlebarsJS for my templating needs.
I have a nested object:
{
"amount": {
"preTax": 15.99,
"tax": 0.0,
"currencyCode": "USD",
"total": 15.99
}
}
And I have the following template:
{{#amount}}
<div>PreTax is {{preTax}}. Tax is {{tax}}. Currency Code is {{currencyCode}} and total is {{total}}</div>
{{/amount}}
Is this the best practice for accessing nested properties within a property? Like declaring the block for the data property, or do I need to use with?
Like so:
{{#with amount}}
<div>PreTax is {{preTax}}. Tax is {{tax}}. Currency Code is {{currencyCode}} and total is {{total}}</div>
{{/with}}
I know that both approaches work. And in mustache-js I always used the former approach.
The with form will be slightly faster than the generic block form due to some optimizations that can be made for known cases like that. You also have the option of making pathed lookups, {{amount.preTax}} for example. All work, and unless you have extremely hot code I wouldn't worry too much about the performance of one vs. the other; leave it to whichever feels stylistically the best for you.
So I guess there is no real winner, unless as Kevin points out, you have an extremely used template then with might be a better option.
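For completeness, the pathed-lookup form mentioned in the answer would look something like this (same data, no block helper):

<div>PreTax is {{amount.preTax}}. Tax is {{amount.tax}}. Currency Code is {{amount.currencyCode}} and total is {{amount.total}}</div>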
|
2020-01-29 03:37:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21112076938152313, "perplexity": 2439.2271339673557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00106.warc.gz"}
|
https://tex.stackexchange.com/questions/353653/subtotal-in-table-formatting
|
# subtotal in table formatting
I have a table similar to the one in multirow table formatting, part of which I can create using \makecell as suggested in the accepted answer, but there is a subtotal kind of row and then in the last 2 rows the last 2 columns need to be merged. I had done it like this but without horizontal and vertical lines, and I need to make it look like this image. Can someone guide me to format it like this (sorry for blacking out values, it's my thesis results table):
Here is partial code; the subtotal row is not yet added and the columns are not merged:
\documentclass{article}
\usepackage{graphicx}                % for \resizebox
\usepackage{booktabs,array,makecell} % rules, m{} columns, \makecell
\begin{document}
\begin{table}[htb]
\resizebox{\columnwidth}{!}{%
\centering\small\sffamily%
\setlength\aboverulesep{1ex}
\setlength\belowrulesep{1ex}
\renewcommand{\cellalign}{lc}
\setlength\arrayrulewidth{0.5pt}
\begin{tabular}{ >{\rule[-0.35cm]{0pt}{0cm}}m{1cm}|m{1cm}|m{2cm}}
\toprule[2pt]
Xxxxxx & Xxxxx & xxxxx xxxx xxxx x \\
\midrule[0.5pt]
Xxxxx & \makecell{Xxxxxx xxxxx \\ Xxxxxxxxxxxx \\ Xxxxxxxx \\ Xxxxxxxxxxxxxxxxx \\ Xxxxxxxxxxxxxxxxxx \\ Xxxxxxxxxxxxxxx } & \makecell{xxxxxx \\ xxxxxx \\ xxxxx \\ xxxxx \\ xxxxx \\ xxxxx} \\%
Xxxxxxxxxxxxxxxx & xxxxxx & \\
Xxxxxxxxxxxxxxxxx & xxxxxx & \\
\bottomrule[2pt]%
\end{tabular}%
}
\caption{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx.}
\label{tab:res}
\end{table}%
\end{document}
• Can you please replace your blacked out table with a real compilable example. That is, a minimal example of code starting with \documentclass and ending with \end{document}. In that way it is much easier to get started on your problem, and it is much more likely you will get answers. – StefanH Feb 13 '17 at 15:04
• @StefanH i edited my question to add partial code. – Miaon Feb 13 '17 at 15:25
If I understand it right you are looking for \multicolumn (and \cline).
\documentclass{article}
\usepackage{graphicx}
\usepackage{booktabs,array,makecell}
\begin{document}
\begin{table}[htb]
\resizebox{\columnwidth}{!}{%
\centering\small\sffamily%
\setlength\aboverulesep{1ex}
\setlength\belowrulesep{1ex}
\renewcommand{\cellalign}{lc}
\setlength\arrayrulewidth{0.5pt}
\begin{tabular}{ >{\rule[-0.35cm]{0pt}{0cm}}m{3cm}|m{3cm}|m{3cm}}
\toprule[2pt]
Xxxxxx & Xxxxx & xxxxx xxxx xxxx x \\
\midrule[0.5pt]
Xxxxx & \makecell{Xxxxxx xxxxx \\ Xxxxxxxxxxxx \\ Xxxxxxxx \\ Yyyyxxxxxxxxxxxxx \\ Xxxxxxxxxxxxxxxxxx \\ Xxxxxxxxxxxxxxx } & \makecell{xxxxxx \\ xxxxxx \\ xxxxx \\ xxxxx \\ xxxxx \\ xxxxx} \\\cline{2-3}
& \multicolumn{2}{c}{Zzzzzzz}\\\midrule %new row
Xxxxxxxxxxxxxxxx & \multicolumn{2}{c}{xxxxxx} \\
Xxxxxxxxxxxxxxxxx & \multicolumn{2}{c}{xxxxxx} \\
\bottomrule[2pt]%
\end{tabular}%
}
\caption{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx.}
\label{tab:res}
\end{table}%
\end{document}
Using multirow and tabularx, removing the vertical lines (which in my opinion are ugly and degrade the professional look of the table), and without resizing the table width, the table becomes:
\documentclass{article}
\usepackage{booktabs, makecell, multirow, tabularx}
\newcolumntype{L}{>{\raggedright\arraybackslash}X}
\newcommand\mc[1]{\multicolumn{2}{c}{#1}}
% if you like to see page layout, remove before next two lines
%\usepackage{showframe} %
%\renewcommand*\ShowFrameColor{\color{red}}
\begin{document}
\begin{table}[htb]
\renewcommand\tabularxcolumn[1]{m{#1}}
\centering
\sffamily%
\setlength\aboverulesep{1ex}
\setlength\belowrulesep{1ex}
\renewcommand{\cellalign}{lc}
\begin{tabularx}{\linewidth}{ L L L}
\toprule[2pt]
Xxxxxx & Xxxxx & xxxxx xxxx xxxx x \\
\midrule[0.5pt]
\multirow{7}{=}{Xxxxx}
& Xxxxxx xxxxx & yyyyyy \\
& Xxxxxxxxxxxx & yyyyyy \\
& Xxxxxxxx & yyyyyy \\
& Yyyyxxxxxxxxxxxxx & yyyyyy \\
& Xxxxxxxxxxxxxxxxxx & yyyyyy \\
& Xxxxxxxxxxxxxxx & yyyyyy \\
\cmidrule{2-3}
& \mc{Zzzzzzz}\\
\midrule
Xxxxxxxxxxxxxxxx & \mc{xxxxxx} \\
Xxxxxxxxxxxxxxxxx & \mc{xxxxxx} \\
\bottomrule[2pt]%
\end{tabularx}%
\end{table}%
\end{document}
|
2020-02-24 21:31:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 10628.674655363146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00165.warc.gz"}
|
https://asiahk16.kattis.com/problems/peaktram
|
Asia Hong Kong Regional Contest 2016
# Problem D: Peak Tram
Inspired by the Peak Tram in Hong Kong, you are going to build your own peak tram on a hill, together with the buildings near it. The peak tram and the buildings are aligned on a straight line. The peak tram is at position $0$ on the line. There are $n$ buildings, where building $i$ is at position $i$ on the line ($i=1,2,\dots ,n$). Let the height of building $i$ be $h_ i$. The peak tram starts ascending from height $0$, and the passengers would look at the building that first touches the horizontal ray from the peak tram at the current height. In other words, building $i$ is visible some time during the ascent if and only if $h_ i > h_ j$ for all $j < i$ (i.e., no other buildings are blocking the sight). Assume the hill is sufficiently tall so the peak tram can always ascend to at least the height of the tallest building.
You want the passengers to enjoy the view from the peak tram, so there should be at least $k$ buildings that are visible some time during the ascent. However, the problem is that you do not have any building yet, and you now have to build those $n$ buildings (i.e., you have to choose $h_ i$). Each building $i$ has a preferred height $p_ i$ and a cost per height difference $c_ i$. The cost for making the height of building $i$ to be $h_ i$ is given by $|h_ i - p_ i| \times c_ i$. Also the heights $h_ i$ you choose must be positive integers. Your task is to determine the minimum total cost so that at least $k$ buildings are visible during the ascent.
## Input
The first line of the input contains two integers $n$ and $k$ ($1 \leq k \leq n \leq 70$). Each of the following $n$ lines contains two integers $p_ i$ and $c_ i$ ($1 \leq p_ i \leq 10^9$, $1 \leq c_ i \leq 1\, 000$).
## Output
Output one integer, the minimum total cost so that at least $k$ buildings are visible during the ascent.
Sample Input 1
5 3
5 3
3 2
4 8
9 4
6 2
Sample Output 1
6
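As a quick illustration of the visibility rule only (not of the cost minimization), a building is visible exactly when its height exceeds the running maximum of the heights before it. A small Python sketch, using the preferred heights from the sample purely as example heights:

def visible_count(heights):
    # A building is visible iff its height exceeds every earlier height.
    count, prefix_max = 0, 0  # heights are positive integers, so 0 is a safe start
    for h in heights:
        if h > prefix_max:
            count += 1
            prefix_max = h
    return count

print(visible_count([5, 3, 4, 9, 6]))  # -> 2, so keeping the preferred heights would not reach k = 3

This is consistent with the sample: some heights must be changed, and by the sample output the cheapest way to do so costs 6.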
|
2018-07-16 04:17:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6515870094299316, "perplexity": 613.4522472137788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589179.32/warc/CC-MAIN-20180716041348-20180716061348-00372.warc.gz"}
|
https://maker.pro/forums/threads/suggested-applications-for-an-eye-tracking-system.239089/
|
# Suggested Applications for an Eye Tracking System?
#### fresheStF
Sep 19, 2011
3
I've just been assigned a project for school and I want to see what consumers demand. My task is to use the TM4 Eye Tracker to meet any kind of demand. Whether it be as simple as improving an aspect of the software provided with it or as complex as interfacing with the real world (control a robotic arm, turn a TV on, etc), I'd like to know what people want. Could it have military applications? Assist a quadriplegic? The possibilities are endless. The question is, which would help the most people?
http://www.eyetechds.com/assistive-technology/tm4
Jun 10, 2011
443
Why don't you write a précis of exactly what this device is, how it works, and why I'd want to use it. The web page was quite uninformative. I suggest if you want some good feedback, don't assume we know what it is you're talking about. I had a different vision of a device in mind when I read your post.
#### fresheStF
Sep 19, 2011
3
An introduction from the EyeTech Digitial Systems Hardware Manual:
"TM4 is a powerful tool, which gives complete access to computers. Our philosophy in designing this product was to give you the ability to operate a computer using only your eyes. With the EyeTech system and an on-screen keyboard you are able to:
-Communicate
-Access Windows menus, scroll bars, icons, and buttons
-Surf the Web
-Create sketches & CAD drawings with design software
-Perform any other task that requires a mouse"
How it works (also from the manual):
The EyeTech system uses IR lights to illuminate the eyes and provide reference points for the eye tracker. Incandescent lights or windows may degrade the operation of TM4, especially if the light source is behind the monitor or behind the user.
For the complete manual visit: http://www.eyetechds.com/wp-content/uploads/2011/02/TM4_Hardware_Manual1.pdf
For a brief video demonstration visit:
The EyeTech system comes complete with QuickGlance Software, which appears to be pretty well rounded, accounting for many different actions you'd expect from a mouse and keyboard.
Jun 10, 2011
443
OK, the video gives an introduction to what it is (I only watched it for about a minute).
You ought to tell what the thing costs to get set up on your computer. I'd imagine it's not terribly cheap.
Who is this thing aimed at? I suspect it would be of interest to people with repetitive motion injuries that preclude mouse/keyboard use or disabled people. I doubt many people who use a keyboard/mouse would be interested in using it. If this is true, then you've got a harder problem -- how do you get in contact with a goodly chunk of that injured/disabled population?
#### fresheStF
Sep 19, 2011
3
You're correct. It's not cheap. It's in the ballpark of $7,000. I agree, it's mostly aimed at disabled people. As I mentioned before, for those suffering from quadriplegia (paralysis resulting in the partial or total loss of use of all their limbs and torso), this would be life changing. The first $7,000 they spend isn't going to be on a car. It's going to be on an eye tracker (or whatever I can design to improve their quality of life). Perhaps it would even be covered under their insurance. However, I want to further its capability.
Eye tracking can apply to even more than disabilities. It's currently used in heads-up displays (HUD) for military aircraft. It's far more advanced, but still within the scope of my project.
All in all, I'm just looking for better uses of this technology, regardless of the price or who it was designed to target.
Last edited:
Jun 10, 2011
443
It would be fun to blue-sky the uses of such a device. The use to help quadriplegics is obvious, but there could be lots of others depending on the capabilities of the device (which, I suspect, is pretty limited). For example, one use could be a nanny in a car to monitor the driver. Perhaps inattention or sleepiness could be determined.
I'd love to have something like that to control the input to an OCR device (I often need data from scanned things, but a regular OCR tool either gets too much or doesn't work right).
I could also see it being used to e.g. control a process that displays the process' state on the screen in a bunch of dials.
Of course, there's also the canonical use of monitoring how people visually scan documents and pictures (IIRC, there's been quite a bit of research on such things).
Perhaps an artist could use it to e.g. help with doing a sculpting task to control some tool that cuts the raw material.
Keep us informed on what ideas you uncover...
|
2022-08-14 19:16:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26875606179237366, "perplexity": 1550.1933110291154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00462.warc.gz"}
|
https://www.math.ias.edu/seminars/abstract?event=43434
|
# The Tamagawa Number Formula via Chiral Homology
Joint IAS-PU Number Theory Seminar Topic: The Tamagawa Number Formula via Chiral Homology Speaker: Dennis Gaitsgory Affiliation: Harvard University Date: Thursday, March 1 Time/Room: 4:30pm - 5:30pm/S-101
Let X be a curve over F_q and G a semi-simple simply-connected group. The initial observation is that Weil's conjecture, which says that the volume of the adelic quotient of G with respect to the Tamagawa measure equals 1, is equivalent to the Atiyah-Bott formula for the cohomology of the moduli space Bun_G(X) of principal G-bundles on X. The latter formula makes sense over an arbitrary ground field and says that H^*(Bun_G(X)) is given by the chiral homology of the commutative chiral algebra corresponding to H^*(BG), where BG is the classifying space of G. When the ground field is C, the Atiyah-Bott formula can be easily proved by considerations from differential geometry, when we think of G-bundles as connections on the trivial bundle modulo gauge transformations. In algebraic geometry, we will give an alternative proof by approximating Bun_G(X) by means of the multi-point version of the affine Grassmannian of G, using a recent result on the contractibility of the space of rational maps from X to G. (This is joint work with J. Lurie.)
|
2018-09-24 22:08:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9169546961784363, "perplexity": 467.4139622540668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00024.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/help-needed-please-component-and-wiring.177281/
|
# Help needed please, component and wiring
#### JRthemoon
Joined Mar 11, 2021
9
I and my son are Autistic, Asperger's and he also has PTSD.
I am retired (62) and he is unemployed 23.
We have designed something that is being bought by a company, hand built in Wales.
It has a vibrating motor, 12v supply and a variable speed.
However the problem is that to change the batteries we have to remove the top, annoying to users.
We can not locate a similar size case with a battery compartment, 52 mm, 155 mm, 180 mm
We thought we could place an extra female input on the side that cuts off the internal power when connected. ANY ideas or advice as to what to use would be so very much appreciated
Thank you
#### LesJones
Joined Jan 8, 2017
3,476
A normal 2.5mm (Or 2.1mm) low voltage panel connector should do what you require. This is one example but you can probably find them cheaper. I chose this ebay entry as it has a picture of the connection side of the socket. The center pin is normally used as positive. The negative side includes a set of changeover contacts that disconnects the battery negative when the power plug is inserted and connects the load to the power from the connector. You will probably be able to find them cheaper. If you want them from a normal supplier you could try CPC, Farnell or RS components.
Les.
#### JRthemoon
Joined Mar 11, 2021
9
YEY many thanks,
Is "2.5mm (or 2.1mm) low voltage panel connector" what I should look for?
#### AnalogKid
Joined Aug 1, 2013
9,351
Is there a circuit board inside the case? If so, a possibility is getting a power connector that is pc mounted with the other components, and just peeks out through a hole rather than being physically mounted to the case.
ak
#### Marley
Joined Apr 4, 2016
416
What type of internal battery does it use?
#### LesJones
Joined Jan 8, 2017
3,476
I found them on Farnell's website by searching for "2.5mm power connector". I then selected Power Entry Connectors, which took me to this page. AK makes a good point about mounting them on your printed circuit board if you are using one. I imagine you are using PWM speed control, so you may be manufacturing that yourself. If this is the case it would save the step of hand soldering wires between the board and the connector. Although I said the center pin is usually used for positive, this is NOT ALWAYS the case. If you plan to provide a power supply with your product, make sure you wire the polarity to suit. Also mark the polarity, voltage and current on the case near the connector.
Les.
#### JRthemoon
Joined Mar 11, 2021
9
What type of internal battery does it use?
Idiot I am: 8 x AA in a battery pack
#### DickCappels
Joined Aug 21, 2008
7,689
Looks like a good choice for 8 volts
#### Tonyr1084
Joined Sep 24, 2015
6,243
Wouldn't 8 cells be 12 volts?
Question for JRthemoon: how long do the 8 batteries last?
#### MrSoftware
Joined Oct 29, 2013
2,015
If you want to go the external plug-in route, some female barrel connectors have the option to cut off a circuit when they are plugged in. The audio barrel connectors in particular have this, maybe some of the power ones do too. If you want a new enclosure, check out Polycase, or a place similar if you need something closer to home. They have a pile of enclosures to choose from, and they can machine and print them. I've used them for multiple products. They are not cheap but the result is excellent, which might help you sell the product.
#### BobaMosfet
Joined Jul 1, 2009
1,848
@JRthemoon You have 12V internal. Simply add a few components to allow you to use a 12V wall adapter to provide external power, and then the batteries will serve only as a mini UPS if wall-power occasionally drops.
Here's how:
Battery pack positive in series with a 1N5818 Schottky diode - anode towards the positive side of the battery pack. Wall adapter positive from a 5mm DC barrel jack attached to the cathode side of the 1N5818 (basically putting both power sources in parallel, but the diode prevents the wall power from charging the batteries). The diode also drops the battery power to less than the wall power, by about 0.33-0.55V, so that the wall power will overcome the battery power whenever it is connected. Then connect the positive input (where the two power sources connect after the diode) to your power switch (a cherry rocker switch or something), and the output from the switch then goes to your downleg voltage regulator. I also put a fuse in, but I don't know what your current requirements are, so you'd have to choose an appropriate fuse (F1) for that.
Incomplete Example to show it:
The above also has a green LED power indicator so that it can easily be seen whether the device is ON or OFF. I have a 5V regulator for PCB logic, but I don't know what your needs are, so you'd have to change it. If you get fancy, you can add a few extra components to let you test the battery and keep an eye on its state.
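Not part of the post above — a tiny back-of-the-envelope check in Python, with assumed numbers (12 V pack and adapter, ~0.4 V Schottky drop), of why the diode arrangement described above lets the wall adapter take over the load whenever it is plugged in:

```python
# Rough sketch (assumed values) of the diode-OR arrangement described above:
# the battery feeds the rail through a Schottky diode, the wall adapter feeds it directly.

V_BATTERY = 12.0       # 8 x AA alkaline, nominal (assumption)
V_ADAPTER = 12.0       # wall adapter output (assumption)
V_SCHOTTKY_DROP = 0.4  # typical 1N5818 forward drop at modest current (assumption)

def rail_voltage(battery_v, adapter_v=None, diode_drop=V_SCHOTTKY_DROP):
    """Voltage on the shared rail: the higher of (battery - diode drop) and adapter."""
    battery_side = battery_v - diode_drop
    if adapter_v is None:           # running on batteries only
        return battery_side
    return max(battery_side, adapter_v)

print(rail_voltage(V_BATTERY))             # ~11.6 V on batteries alone
print(rail_voltage(V_BATTERY, V_ADAPTER))  # 12.0 V: the adapter supplies the load,
                                           # and the diode blocks back-charging of the pack
```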
You can use an Uxcell (#A18050500UX0132) barrel jack mounted to your PCB, and a 6.xmm hole in the case:
Last edited:
xox
#### MisterBill2
Joined Jan 23, 2018
9,235
I consider a power connector soldered to and supported by those solder connections to be a very cheap and inferior arrangement. The reason is that bumping and wiggling the plug leads to the soldered connections breaking. FAR more reliable is an incoming power connector that uses jam nuts to hold it in place, with wire leads for the connections to the PC board.
#### JRthemoon
Joined Mar 11, 2021
9
Here is what my autistic son developed. The problem is that the case has to be PULLED apart to replace the batteries, which is hard for many disabled users, and we cannot find a case with a battery compartment. 180 x 150 x 55 mm approx.
#### LesJones
Joined Jan 8, 2017
3,476
I was interested to know what it was used for and I found out with an internet search. I was surprised that anyone still developed film. The last time I developed a film was about 60 years ago, when I was 16. It was Ferraniacolor reversal film. This took so much more time than black and white film that I decided it was much easier to buy colour reversal film and pay for processing.
Les.
#### MisterBill2
Joined Jan 23, 2018
9,235
One method of solving the battery replacement challenge will be rechargeable batteries and an external charging connector, such as those very standard barrel connectors. A regular audio style connector is a very poor choice because it is always short-circuited during insertion and removal.
Yes, I am aware that this may have been mentioned already, but it is the one option that is reasonable. The other would be to replace the battery pack with an internal mains-powered supply. That might not fit.
#### Sensacell
Joined Jun 19, 2012
2,890
Batteries are horrible, miserable things, why not just plug the device in?
You don't process film 'on the run' - so why does it need to be battery powered?
#### Yaakov
Joined Jan 27, 2019
3,464
A regular audio style connector is a very poor choice because it is always short-circuited during insertion and removal.
As an aside, I have a personal rule never to supply power through connectors that are expected to have low level signals on them, and never to use connectors expected to have power on them for low level signals.
There are so many options for connectors that using them in this way is not necessary and can lead to some pretty spectacular end of life events for devices.
#### MrSoftware
Joined Oct 29, 2013
2,015
Unless it needs to be super portable, I would agree that plug-in, or at least a plug-in option would be nice.
That said, if it's pre-production, why not choose a box with flip up latches and a hinge? Maybe not this exact box, but something that opens like this:
https://www.polycase.com/wq-48
#### dl324
Joined Mar 30, 2015
13,082
We thought we could place an extra female input on the side that cuts off the internal power when connected
Male connectors are more typical. I've never seen a female connector used for power.
Is " 2.5mm (Or 2.1mm) low voltage panel connector " what I look for
I've found that 2.1mm x 5.5mm is more common in power adapters. I've bought these on Ali Express for less than $0.15 each. Again, male connectors.
#### Yaakov
Joined Jan 27, 2019
3,464
Male connectors are more typical. I've never seen a female connector used for power.
I've found that 2.1mm x 5.5mm is more common in power adapters. I've bought these on Ali Express for less than $0.15 each. Again, male connectors.
Coaxial power connectors can be confusing. The male has a sticky-out bit that is hidden and the female has the sticky-in bit hidden. People don't know the female gets stuck into the male, which seems backwards if you don't know how the taxonomy of connectors works. XLR connectors are another example.
|
https://www.physicsforums.com/threads/how-to-train-myself-to-be-more-careful.933777/
|
# Featured How to train myself to be more careful?
1. Dec 8, 2017
### stephenkohnle53
I am not a very careful person and I am often overly ambitious. I often think of something that would be cool to have then try to design or build it without considering safety or my abilities. Do you guys know how I could train myself to not be overly ambitious and to be more careful?
2. Dec 8, 2017
### .Scott
This is really a matter of decision-making and choices.
Have you had any scary near-misses?
3. Dec 8, 2017
### stephenkohnle53
I have burned myself many times, usually relatively minor, and I get cuts and bruises basically every time I touch something that could hurt me. I have nearly cut myself on power tools, but luckily I haven't yet. Thus far I haven't gotten seriously injured at all, but considering I hurt myself relatively minorly all the time, I do need to be more careful. I also usually work far longer than I should, and I have nearly drowned before due to fatigue, after deciding it was a good idea to go swimming while extremely tired. At work I often carry things twice my size even though I am very weak, causing me a lot of pain.
4. Dec 8, 2017
### Staff: Mentor
Right.
@stephenkohnle53 your post isn't specific about the problem needing to be solved. Could you please clarify what negative consequences you've experienced or just missed? Can you give an example of a safety issue? What types of projects/dangers?
In general I strongly advocate in favor of ambition and failure if safety can be assured.
5. Dec 8, 2017
### stephenkohnle53
I do not pay attention to obvious signs of danger and often do not think about what could harm me. This may be things such as, if I am handling say a small rocket, not paying attention to it being clearly hot. My problem is I wish to know if there is a way to train myself to be more careful, similarly to how people are trained through rewards or otherwise to work harder. Does that clarify it?
6. Dec 8, 2017
### .Scott
Sounds like you have a medical condition called "being a young male human". The auto insurance companies often call this "driving while male".
I am guessing that you are no older than 25.
If so, your mission is to survive relatively intact to age 26.
You're pretty much doomed to having more accidents.
But consider this: when you tackle a project, what constitutes success? When you're using a power tool, make yourself skilled, and skilled means that both you and the tool come out the way they started. So the objective is not just to cut the board, but to operate the tool masterfully. It's not just to get to your destination, but to operate the car and yourself well within their limits. Every time you have a near miss or damage a tool, spend a moment to determine what you could have done to avoid that situation. The goal is mastery, not mere survival, because ignoring mastery reduces your value and risks your survival.
7. Dec 8, 2017
### stephenkohnle53
True, thanks for the feedback. By the way I am 17, so yeah, I see your point. I think from now on I will pick something simple and work on it until it becomes really good, and only then will I move on. I think I will start with a t-shirt slingshot I read about; that way it's simple and it will help me get better with my hands.
8. Dec 8, 2017
### Staff: Mentor
Agreed, though I'm 42 and wondering if I'll ever grow out of it!
I rarely do a project where I don't sustain a small injury. Heck, this spring I even found a creative new way to crack a rib!
Here's some ointment to ease the symptoms, since I don't think there's a cure for the disease: learn to recognize the difference between minor/unimportant danger and serious danger. Then ignore the minor danger as a cost of doing business and focus only on staying safe from major danger.
And keep a well stocked first aid kit and a couple of band-aids with you in your wallet.
9. Dec 8, 2017
### .Scott
Sounds like a great project. Part of the mastery in using a slingshot is hitting the target and not hitting what isn't the target. You want to develop a great slingshot, a great setting for using the slingshot, great slingshot skills, and hopefully some great friends to share the experience with.
10. Dec 8, 2017
### Staff: Mentor
Do you have a mentor or other person who is helping you with these projects? A discussion about safety considerations usually goes along with mentoring.
Also, have you taken Wood Shop or Metal Shop in school yet? I would encourage you to do that, since you will learn a LOT about safety considerations around power tools. There are a lot of non-obvious safety considerations when working around power tools. Like not wearing loose-fitting clothing, and if you have long hair, not working without putting it back or up in a hat...
11. Dec 8, 2017
### stephenkohnle53
I am currently taking Fundamentals of Engineering, which does use the shop, but my teacher rarely talks about safety and often is not in the shop when we are working, just nearby. As for the projects, no, I do not have a mentor, because the only people I know who know much about engineering are either unavailable or not willing to help. As for safety, the people I do know are even less careful than me and rarely wear protection no matter what they are doing.
12. Dec 8, 2017
### Staff: Mentor
Well, don't be like them. Please try to set a good example.
And wear protective equipment as much as you can -- you never know when it will pay off. I had leather gloves on a couple days ago when I was trying to loosen a frozen nut on a large piece of equipment. The nut broke free in a way that I didn't anticipate, and I bashed my hand on a sharp protrusion on the equipment. If I hadn't been wearing those gloves, I'm pretty sure my finger would have been lacerated and broken. Still hurt like hell for the rest of the day, though.
Be safe!
Last edited: Dec 10, 2017
13. Dec 8, 2017
### Bystander
Blood sacrifice? Mandatory. Loss of fingers? Bad form.
Bleeding into gloves is much more sterile than bare-handed bleeding.
14. Dec 8, 2017
### Staff: Mentor
That was my guess, as well. When you're young, you're "ten feet tall and bullet-proof, and will live forever."
Same here, although I've sustained enough minor injuries so that many of them don't happen as regularly now.
Some of the things I've learned:
If you're working on house wiring, make sure that the circuit breaker is off. If you happen to drill through a joist and into a 30-A feeder wire for the range, it makes a disconcertingly loud sound. I guess a corollary would be, if you're drilling through a joist, make sure there aren't any wires on the other side.
If you're using a cutoff wheel, keep hands and fingers and other extremities well away from the rotating disk.
If you're using an electrical tool like a hedge trimmer, keep the lead well away from the blade. (More of a problem for my wife, who managed to saw through the extension cord.)
Don't spray yourself with a pressure washer. At 2400 PSI, one of these can shred wood, let alone human skin.
15. Dec 8, 2017
### Choppy
One suggestion is to look up something called "safety culture."
The fact the you recognize a pattern of behaviour that you want to improve is a very good thing and this is certainly possible. One big idea behind a safety culture is that everyone has a role to play. You have to take responsibility for your own decisions and actions and ultimately your own safety as well as the safety of those around you.
It sounds like this is an issue with learning to recognize when you've pushed yourself too hard, or at least when you're tired. You won't always be able to just stop doing something because you're tired. But you can try to make decisions that can both mitigate fatigue and take fewer risks when you are fatigued. You can also do things to keep yourself from getting as tired - build up your resilience. These include getting enough sleep, exercising, eating well, leading a balanced life, etc. And if you know you have a hard physical day at work coming up, don't go on a 20 km run that morning before you start. Plan your day so that you can be the most effective at your tasks as they come.
Doing something that will cause you injury or pain is a bad idea and your employer should not be asking you to do such things. In fact, most employers will specifically NOT want you to over-exert yourself because an employee who is off sick or injured is only costing the company money.
If you're doing this by choice, you might want to think about why. I understand the need to show that you're able to pull your own weight. But generally this comes from people doing their fair share, not straining themselves beyond their capacity.
16. Dec 8, 2017
### stephenkohnle53
Thanks for all of the feedback
Yeah I'm not sure why I strain myself, my employer is very kind and doesn't want me to strain myself or work longer than I was booked for. At my job we have a large area devoted to employee rights and 2 areas designated for safety related things. It is a smallish building so for it's size we have a lot devoted to that. Speaking of which I have a fire safety and general safety test to take at work this Sunday.
Oh and I am usually good about wearing protection. I do forget gloves sometimes but I am getting better about it.
17. Dec 8, 2017
### NTL2009
I'd suggest a couple things that work for me. Learn to listen to that little voice in your head. And keep that voice talking to you ALL the time, about safety.
When that little voice says "I'm getting tired, I'll just cut these last two pieces of wood and go to bed", I listen to the first part, and just stop right there and then. I can finish later when I've rested. And it will be easier to finish with all my fingers.
I also nicked my finger once, reaching for the cutoff board after I turned the saw off (but the blade was still spinning). I barely got in the path of the blade, and the blade had slowed, but it very cleanly left a 1/8" deep notch in my finger tip. It made it very real just how easily it would go through flesh and bone if I had gone further and/or the blade was spinning faster.
From that point on, the entire time I run a saw, I say, over and over again in my head "hand-blade... hand-blade...hand-blade". That keeps me constantly aware of where my hand is in relation to that blade. So far, has worked for me.
As others have said, not all safety is obvious, table saws in particular have some dangers that you need to learn about. The very dangerous different kick-back modes were not obvious to me at least.
A smart man learns from his mistakes, a wise man learns from the mistakes of others. Learn safety from experts, and apply it. And have fun!
18. Dec 10, 2017
### Greg Bernhardt
Usually being more careful is about slowing down. When you rush, you make mistakes.
19. Dec 10, 2017
Much of this high tech stuff, whether it is mechanical, or electrical, or chemical, is stuff that you need to exercise some precaution with. We had a posting on here about a year ago where someone was asking how much 12 M $NH_3$ solution would be needed to neutralize 18 M sulfuric acid. From the sound of his post, he had no idea that mixing the two is not the recommended thing at all (it could spew all over the place, which is quite hazardous because of the potential for chemical burns), and that it is highly recommended to dilute both of them first, before mixing them, by pouring the acid or base slowly into a large volume of water (and always wear goggles, even when doing the slow dilution). It's called common sense. It is good that you @stephenkohnle53 are starting to recognize that it pays to be very careful with some of this stuff.
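(Added note, not part of the original post: the bookkeeping behind that question is straightforward. Each mole of sulfuric acid needs two moles of ammonia,
$$2\,NH_3 + H_2SO_4 \rightarrow (NH_4)_2SO_4,$$
so 1 L of 18 M $H_2SO_4$ contains 18 mol of acid and would require 36 mol of $NH_3$, i.e. about 3 L of the 12 M solution. The hazard is not the arithmetic but the heat and spattering released when the concentrates meet, which is exactly why both should be diluted slowly into water first.)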
20. Dec 10, 2017
### Staff: Mentor
Have a kid. Very soon you will find yourself noticing dangers you previously neglected.
|
http://danielgrin.net/2016/01/07/spectral-distortions-from-the-dissipation-of-tensor-perturbations/
|
# Spectral distortions from the dissipation of tensor perturbations
Left panel shows the temperature pattern seen by an electron in the metric field of a passing gravitational wave. Right panel shows spectrum of superposition of blackbodies compared with a blackbody at their mean temperature, as well as chemical potential and y-type spectral distortions. Shown as a function of dimensionless frequency (normalized relative to the temperature).
Acoustic waves generate CMB spectral distortions because the linear superposition of blackbodies seen by electrons as they rescatter CMB photons is not itself a blackbody (see right panel of above figure). Gravitational waves similarly produce CMB spectral distortions due to the induced quadrupole seen by electrons at the surface of last scattering (see left panel of above figure). With Jens Chluba, Liang Dai, Mustafa Amin, and Marc Kamionkowski, we computed the CMB spectral distortion resulting from a stochastic background of gravitational waves as a function of their spectral index. For very blue spectral indices, these spectral distortions are detectably large, as we can see in the figure below.
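Not from the paper itself — a minimal numerical sketch (Python/NumPy, with an arbitrary temperature split eps = 0.01) of the statement above that a superposition of blackbodies is not a blackbody: an equal mixture of two Planck spectra differs from a Planck spectrum at the mean temperature, and the difference is captured by a small temperature shift plus a y-type distortion.

```python
import numpy as np

def n_planck(x):
    """Blackbody photon occupation number, x = h*nu / (k_B * T)."""
    return 1.0 / np.expm1(x)

x = np.linspace(0.2, 15.0, 400)   # dimensionless frequency relative to the mean temperature
eps = 0.01                        # fractional temperature split between the two blackbodies (assumption)

# Equal mixture of blackbodies at T*(1+eps) and T*(1-eps) vs. one blackbody at the mean T
mix = 0.5 * (n_planck(x / (1 + eps)) + n_planck(x / (1 - eps)))
residual = mix - n_planck(x)

# Second-order expectation: a small temperature shift plus a y-type distortion with y = eps^2 / 2
G = x * np.exp(x) / np.expm1(x) ** 2   # dn/d(ln T): pure temperature-shift spectral shape
Y = G * (x / np.tanh(x / 2.0) - 4.0)   # standard y-distortion spectral shape
prediction = eps ** 2 * G + 0.5 * eps ** 2 * Y

print(np.abs(residual).max())               # nonzero: the mixture is not a blackbody at the mean T
print(np.abs(residual - prediction).max())  # much smaller: shift + y-distortion accounts for it at O(eps^2)
```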
Results published in arXiv: 1407.3653 (MNRAS 446 (3): 2871-2886)
Collaborators: Jens Chluba, Liang Dai, Mustafa Amin, Marc Kamionkowski.
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-5-logarithmic-exponential-and-other-transcendental-functions-5-5-exercises-page-363/80
|
Calculus 10th Edition
$\frac{32}{3ln(3)}$
Solve the indefinite integral first. Let $u=\frac{x}{4}$, so $\frac{1}{4}dx=du$ and $dx=4du$. $\int3^{\frac{x}{4}}dx$ $=\int 3^u\cdot4\,du$ $=\frac{4\cdot3^u}{ln(3)}+C$ $=\frac{4\cdot3^{\frac{x}{4}}}{ln(3)}+C$ ---- Evaluate over $[-4,4]$: $\frac{4\cdot3^{1}}{ln(3)}-\frac{4\cdot3^{-1}}{ln(3)}$ $=\frac{4(3-\frac{1}{3})}{ln(3)}$ $=\frac{32}{3ln(3)}$
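A quick numerical cross-check of the result (my addition, using SciPy's quad; not part of the textbook solution):

```python
# Numerically verify the definite integral of 3^(x/4) over [-4, 4].
from math import log
from scipy.integrate import quad

value, _ = quad(lambda x: 3 ** (x / 4), -4, 4)
closed_form = 32 / (3 * log(3))

print(value, closed_form)  # both ~9.71, so the antiderivative 4*3^(x/4)/ln(3) checks out
```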
|
https://mathoverflow.net/questions/193528/minimality-condition-in-a-certain-class-of-hypergraphs
|
# Minimality condition in a certain class of hypergraphs
A hypergraph is a pair $H=(V,E)$ such that $V$ is a (possibly infinite) set and $E\subseteq \mathcal{P}(V)$. $C\subseteq E$ is said to be a cover if $\bigcup C = V$, and $C$ is minimal if $C'\subseteq C$ and $C'\neq C$ imply $\bigcup C'\neq V$.
We call $H=(V,E)$ a flag complex if the following conditions are met:
1. $e\in E$ and $e'\subseteq e$ imply $e'\in E$;
2. $\bigcup E = V$;
3. $H$ is 2-determined, that is if $S\subseteq V$ and for all $s, t \in S$ we have $\{s,t\}\in E$ then $S\in E$.
A standard application of Zorn's Lemma shows that in a flag complex, every edge $e\in E$ is contained in a maximal edge $m\in E$ ($m$ being maximal in $E$ with respect to set inclusion). We denote the collection of maximal edges by $\text{Max}(E)$.
Question: Is there a flag complex $H=(V,E)$ and a cover $M\subseteq \text{Max}(E)$ such that for every cover $M'\subseteq M$ we have that $M'$ is not minimal?
(Note: the example given in the answer of Strongly minimal covers does not work here, as it is not 2-determined.)
• Is the present problem an end to itself or is it a part of a larger picture (or an intermediate step, or has some ready applications)? – Włodzimierz Holsztyński Jan 9 '15 at 9:46
• Just a short answer: the question is an end to itself; I am toying around with (strongly) minimal coverings in hypergraphs in general. – Dominic van der Zypen Jan 9 '15 at 9:49
• What is the difference in meaning between "$(V,E)$ is a flag complex" and "$E$ is the family of all independent sets for some graph on the vertex set $V$"? What am I missing? – bof Jan 9 '15 at 10:04
• Thanks Wlodimierz - I deleted my comment and have recorded your information, you can delete yours now too – Dominic van der Zypen Jan 9 '15 at 10:07
• That's correct bof: the family of all independent sets of some graph $G=(V,E)$ does form a flag complex. On the other hand, given a flag complex, I would have to think about the question whether there is a graph whose collection of independent sets gives back the edge set of the original flag complex. – Dominic van der Zypen Jan 9 '15 at 10:10
NOTATION: $\ \mathbb Z_+\$ is the set of all non-negative integers $\ 0\ 1\ \ldots$.
The answer to the Question is YES, i.e.
THEOREM There exists a flag complex $\ H=(V,E)\$ and a cover $\ M\subseteq Max(E)\$ such that for every cover $\ K\subseteq M\$ we have that K is not minimal.
PROOF Let $\ V\$ be the set of all functions $\ f:\{0\ \ldots\ n\}\rightarrow \{0\,\ 1\}\$ such that
• $\ f(0):=0$
for every $\ n\in \mathbb Z_+.\$ (Values $\ f(n)\$ for $\ n>0\$ can be arbitrarily equal $\ 0\$ or $\ 1).\$ Then $\ E\$ is defined as the set of all chains $\ S\$ of functions, meaning that
• $\ \forall_{f,\ g\ \in\ S}\ \ \big(\ f:\{0\ \ldots\ n\}\rightarrow \{0,\,1\}\ \ and\ \ g:\{0\ \ldots\ m\}\rightarrow \{0,\,1\}\ \ and\ \ n\le m\ \ \Rightarrow\ \ f=g|\{0\ldots n\}\ \big)$
Finally, let $\ M:=Max(E).\$ Obviously we truly have a flag complex $\ H,\$ and (as always) $\ Max(E)\$ is a cover. Thus let's consider an arbitrary cover $\ K\subseteq M\$ and let $\ F\in K.\$ I'll show that $\ K\setminus\{F\}\$ is still a cover.
Indeed, let $\ f\in F.\$ Then there exists a unique $\ f':\{0\ \ldots\ n\!+\!1\}\rightarrow\{0\,\ 1\}\$ such that $\ f'\in F\$ and $\ f=f'|\{0\ldots n\}.\$ Consider the unique $\ g:\{0\ \ldots\ n\!+\!1\}\rightarrow\{0\,\ 1\}\$ such that $\ f'\!\ne g\in V\$ and $\ f=g|\{0\ldots n\}.\$ Thus $\ g\notin F,\$ hence there exists $\ G\in K\setminus\{F\}\$ such that $\ g\in G.\$ But this means that also $\ f\in G.\$ Since $\ f\in F\$ was arbitrary, this means that $\ F\subseteq \bigcup\,(K\setminus\{F\}).$ END of proof
REMARK In my example the required cover $\ M\$ is special, it is the whole $\ Max(E)$.
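Not part of the answer above — a small Python sanity check, on a finite truncation of the binary-tree example (depth d = 6 is an arbitrary choice), of the key covering step in the proof: every vertex that still has an extension lies in at least two maximal chains, which is why deleting any single maximal chain from the cover leaves a cover in the infinite example. (The finite truncation itself has leaves, so it is not a counterexample on its own.)

```python
from itertools import product

# Vertices are functions f:{0..n} -> {0,1} with f(0)=0, encoded as tuples starting with 0.
# In the depth-d truncation, maximal chains correspond to 0/1 strings of length d+1 that
# start with 0; each chain is the set of all initial segments of its string.
d = 6  # truncation depth (arbitrary assumption)

branches = [(0,) + tail for tail in product((0, 1), repeat=d)]
max_chains = [{branch[:k] for k in range(1, d + 2)} for branch in branches]
vertices = {v for chain in max_chains for v in chain}

# Every non-leaf vertex of the truncation lies in at least two maximal chains, so removing
# any single maximal chain from the cover never uncovers such a vertex.  In the infinite
# example there are no leaves, so the argument applies to every vertex.
for v in vertices:
    if len(v) < d + 1:
        assert sum(v in chain for chain in max_chains) >= 2

print("checked", len(vertices), "vertices up to depth", d)
```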
• It seems to me that flag complexes are in a 1-1 canonical correspondence with the (possibly infinite) graphs $\ (V\ \binom V2).\$ The flags (hypergraph edges) correspond to cliques, and maximal flags to maximal cliques. – Włodzimierz Holsztyński Jan 12 '15 at 12:20
• Very nice example - thanks Wlodzimierz! – Dominic van der Zypen Jan 12 '15 at 12:58
|
https://www.geogebra.org/m/cKWyz4Y8
|
Exploring the Fundamental Theorem of Algebra
Topic:
Algebra
What do we see? To the left is a circle centred at the origin whose radius is set by the point A on the x-axis. The point z lies on this circle; we think of z as a complex number, and indeed z can be any complex number. To the right there appears to be another circle, but this is not so: the closed path is the locus of w = p(z), where p is some polynomial with complex coefficients (the specific polynomial is defined in the applet). Check the following: Rotating z through one revolution around the circle causes w to travel once around its locus. As the radius of the circle diminishes, the locus of w approximates a circle of radius r centred at 1. As this radius increases, the locus of w behaves strangely! Try A = 0.4, 0.6, 0.8 and 1. Describe what you see.
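The formulas in the activity text did not survive extraction, so the polynomial used in the applet is unknown. As a stand-in, here is a short matplotlib sketch of the same experiment with an assumed cubic p(z) = z^3 + z + 1, chosen only because p(0) = 1, matching the "circle centred at 1" behaviour described above. It traces z once around the circle of radius A and plots the locus of w = p(z) for the suggested values of A.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in polynomial -- the one in the original applet was lost to extraction.
def p(z):
    return z**3 + z + 1

theta = np.linspace(0, 2 * np.pi, 1000)

fig, axes = plt.subplots(1, 4, figsize=(14, 3.5))
for ax, A in zip(axes, (0.4, 0.6, 0.8, 1.0)):
    w = p(A * np.exp(1j * theta))   # image of the circle |z| = A under p
    ax.plot(w.real, w.imag)
    ax.plot(0, 0, "k+")             # the origin: winding around it signals a root inside |z| < A
    ax.set_title(f"A = {A}")
    ax.set_aspect("equal")
plt.show()
```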
|
https://oar.princeton.edu/handle/88435/pr1vx2d?mode=full
|
# Causes of lifetime fitness of Darwin's finches in a fluctuating environment
## Author(s): Grant, Peter R.; Grant, B. Rosemary
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1vx2d
DC Field / Value / Language:
- dc.contributor.author: Grant, Peter R.
- dc.contributor.author: Grant, B. Rosemary
- dc.date.accessioned: 2019-04-19T18:35:11Z
- dc.date.available: 2019-04-19T18:35:11Z
- dc.date.issued: 2011-01-11 (en_US)
- dc.identifier.citation: Grant, P.R., Grant, B.R. (2011). Causes of lifetime fitness of Darwin's finches in a fluctuating environment. Proceedings of the National Academy of Sciences, 108 (2), 674 - 679. doi:10.1073/pnas.1018080108 (en_US)
- dc.identifier.issn: 0027-8424
- dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1vx2d
- dc.description.abstract (en_US): The genetic basis of variation in fitness of many organisms has been studied in the laboratory, but relatively little is known of fitness variation in natural environments or its causes. Lifetime fitness (recruitment) may be determined solely by producing many offspring, modified by stochastic effects on their subsequent survival up to the point of breeding, or by an additional contribution made by the high quality of the offspring owing to nonrandom mate choice. To investigate the determinants of lifetime fitness, we measured offspring production, longevity, and lifetime number of mates in four cohorts of two long-lived species of socially monogamous Darwin’s finch species, Geospiza fortis and G. scandens, on the equatorial Galápagos Island of Daphne Major. Regression analysis showed that the lifetime production of fledglings was predicted by lifetime number of clutches and that recruitment was predicted by lifetime number of fledglings and longevity. There was little support for a hypothesis of selective mating by females. The offspring sired by extrapair mates were no more fit in terms of recruitment than were half-sibs sired by social mates. These findings provide insight into the evolution of life history strategies of tropical birds. Darwin’s finches deviate from the standard tropical pattern of a slow pace of life by combining tropical (long lifespan) and temperate (large clutch size) characteristics. Our study of fitness shows why this is so in terms of selective pressures (fledgling production and adult longevity) and ecological opportunities (pulsed food supply and relatively low predation).
- dc.format.extent: 674 - 679 (en_US)
- dc.language.iso: en_US
- dc.relation.ispartof: Proceedings of the National Academy of Sciences (en_US)
- dc.rights: Final published version. Article is made available in OAR by the publisher's permission or policy. (en_US)
- dc.title: Causes of lifetime fitness of Darwin's finches in a fluctuating environment (en_US)
- dc.type: Journal Article (en_US)
- dc.identifier.doi: doi:10.1073/pnas.1018080108
- dc.date.eissued: 2011-01-03 (en_US)
- dc.identifier.eissn: 1091-6490
- pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article (en_US)
|
https://www.techwhiff.com/learn/430-ma-current-is-carried-by-uniformly-wound/270384
|
# A 43.0 mA current is carried by a uniformly wound air-core solenoid with 420 turns, a...
###### Question:
A 43.0 mA current is carried by a uniformly wound air-core solenoid with 420 turns, a 13.0 mm diameter, and 11.0 cm length.
(a) Compute the magnetic field inside the solenoid.
(b) Compute the magnetic flux through each turn (in T·m²).
(c) Compute the inductance of the solenoid (in mH).
(d) Which of these quantities depends on the current? (Select all that apply.) magnetic field inside the solenoid / magnetic flux through each turn / inductance of the solenoid
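A worked numerical check (my addition, using the standard ideal-solenoid formulas B = μ0·N·I/ℓ, Φ = B·A and L = μ0·N²·A/ℓ) with the numbers given in the problem:

```python
from math import pi

mu0 = 4e-7 * pi      # permeability of free space, T*m/A
I = 43.0e-3          # current, A
N = 420              # turns
d = 13.0e-3          # diameter, m
length = 11.0e-2     # length, m

A = pi * (d / 2) ** 2            # cross-sectional area, m^2
B = mu0 * N / length * I         # (a) field inside an ideal solenoid
flux = B * A                     # (b) flux through one turn
L = mu0 * N ** 2 * A / length    # (c) inductance

print(f"B    = {B * 1e6:.1f} uT")    # ~206 uT
print(f"flux = {flux:.3e} T*m^2")    # ~2.74e-8 T*m^2
print(f"L    = {L * 1e3:.3f} mH")    # ~0.267 mH; of the three, only B and the flux depend on the current
```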
|