| url | text | date | metadata |
|---|---|---|---|
http://www.exampleproblems.com/wiki/index.php/Characteristic_curve
|
# Method of characteristics
(Redirected from Characteristic curve)
In mathematics, the method of characteristics is a technique for solving partial differential equations (PDEs) and systems of PDEs.
For a first-order PDE, a characteristic is a line in phase space (comprising the independents, the dependent, and the partial derivatives of the dependent with respect to the independents) along which the PDE degenerates into an ordinary differential equation.
In two dimensions, even a non-linear first-order PDE can always be written in the form
$F(x,y,u,u_{x},u_{y})=0,$
where x and y are the independents, u(x,y) is the unknown solution, and the remaining arguments are the partial derivatives of the solution u. In the method of characteristics the contour map of F, comprising the level curves where F is constant, is used to recover the solution. This procedure is exact as long as the solution is smooth and differentiable.
While level curves never intersect, characteristics sometimes do. Where they meet, the solution becomes multivalued, the correct branch must be selected, and discontinuities arise in the form of shock waves. Characteristics may also fail to cover part of the domain of the PDE: this is called a rarefaction, and a solution then typically exists only in a weak, i.e. integral-equation, sense.
The first order wave equation
$c{\frac {\partial u}{\partial x}}+{\frac {\partial u}{\partial t}}=0$
describes the movement of a wave in one direction with no change of shape. A solution is shown in Figure 1.3 below as a surface plot and a contour plot.
## Example
Consider the one-dimensional scalar conservation equation
$u_{t}+f(u)_{x}=0.$
Here u and f are scalar, with u(x,t) a function of x and t.
By the chain rule this equation implies that
$u_{t}+f_{u}u_{x}=0$
Consider a curve $\eta (t)$ in the x-t plane, parameterized by t, chosen so that $\eta (0)=x_{0}$ for some $x_{0}$ and $\eta _{t}=f_{u}$. Along this curve,
${\frac {d(u)}{dt}}=u_{t}+\eta _{t}u_{\eta }=u_{t}+f_{u}u_{\eta }=0$
$\int _{0}^{t}{\frac {d(u)}{dt}}dt=\int _{0}^{t}0\,dt$
$\int _{0}^{t}\,du=0$
$u[\eta (t),t]-u[\eta (0),0]=0$
$u[\eta (t),t]=u(x_{0},0)$
So anywhere along the characteristic curve $\eta (t)$, the function u takes the same value it had at the curve's starting point $(x_{0},0)$: u is constant along characteristics.
A similar analysis shows that the curve is in fact a straight line for this case.
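The example above can be sketched numerically. Below is a minimal illustration for the simplest case, the linear advection equation $u_t + c\,u_x = 0$, whose characteristics are the straight lines $x = x_0 + ct$ (the Gaussian initial profile is an invented example, not from the article):

```python
import numpy as np

def solve_advection(u0, c, x, t):
    """Exact solution of u_t + c*u_x = 0 by the method of characteristics.

    Each characteristic is the straight line x = x0 + c*t, and u is
    constant along it, so u(x, t) = u0(x - c*t).
    """
    return u0(x - c * t)

# Initial profile: a Gaussian bump (hypothetical example data).
u0 = lambda x: np.exp(-x**2)

x = np.linspace(-5.0, 5.0, 101)
u_later = solve_advection(u0, c=2.0, x=x, t=1.5)

# The bump has moved right by c*t = 3 without changing shape.
assert np.allclose(u_later, u0(x - 3.0))
```

For a nonlinear flux $f(u)$ the characteristics have slope $f_u$ evaluated on the initial data, and they may cross (shocks) or spread apart (rarefactions), as described above.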
## Bibliography
• L.C. Evans, Partial Differential Equations, American Mathematical Society, Providence, 1998. ISBN 0-8218-0772-2
• A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
• A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9
|
2018-12-19 03:19:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9200799465179443, "perplexity": 413.24972140017417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00311.warc.gz"}
|
http://tex.stackexchange.com/questions/37202/latex-does-not-read-figure-file
|
# Latex does not read figure file [duplicate]
Possible Duplicate:
Why does the image not appear?
I have a LaTeX file, which I compile with pdflatex and I have put some figure with the command
\resizebox{6.5cm}{!}{\includegraphics{DrifterPicture/Flux_Total.png}}
pdflatex reports no error at compilation (though if the file is absent, it does report an error in the log).
The strange thing is that the figure is not shown in the resulting .pdf file; instead, the path to the figure is printed. I have checked that the figure file is fine, and the problem persists if I use .pdf instead of .png.
## marked as duplicate by Joseph Wright♦ Dec 6 '11 at 8:05
Don't use the draft option. (Or use the final option for graphicx or the graphics itself). – Ulrike Fischer Dec 5 '11 at 16:37
Welcome to TeX.sx! A tip: If you indent lines by 4 spaces, they'll be marked as a code sample. Also, you can use backticks ` to mark your inline code as I did in my edit. You can also highlight the code and click the "code" button (with "{}" on it), or hit Ctrl + K. – Torbjørn T. Dec 5 '11 at 16:39
Any reason for using \resizebox instead of \includegraphics[width=6.5cm]{...}? – Torbjørn T. Dec 5 '11 at 16:41
Using the draft option, either in \documentclass[draft]{...} or in \usepackage[draft]{graphicx}, causes a boxed filename (of the appropriate size) to be printed in place of each image.
From the graphicx package documentation (p 8):
draft suppress all the ‘special’ features. In particular graphics files are not included (but they are still read for size info) just the filename is printed in a box of the correct size.
Removing the draft option should display the image; alternatively, override it with the final option:
\usepackage[final]{graphicx}
final The opposite of draft. Useful to over-ride a global draft option specified in the \documentclass command.
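Putting the two options together, a minimal sketch (the file name is the asker's; any graphicx-supported format works):

```latex
\documentclass[draft]{article}   % global draft: images become boxed names
\usepackage[final]{graphicx}     % final overrides draft for graphics only
\begin{document}
\includegraphics[width=6.5cm]{DrifterPicture/Flux_Total.png}
\end{document}
```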
|
2015-11-30 04:48:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8874233365058899, "perplexity": 3423.900920177948}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460982.68/warc/CC-MAIN-20151124205420-00285-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Extended_Real_Number_Space
|
# Definition:Topology on Extended Real Numbers
## Definition
Let $\overline \R$ denote the extended real numbers.
The (standard) topology on $\overline \R$ is the order topology $\tau$ associated to the ordering on $\overline \R$.
### Extended Real Number Space
The topological space $\left({\overline \R, \tau}\right)$ may be referred to as the extended real number space.
## Also see
• Results about the extended real number space can be found here.
|
2022-07-01 20:51:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8698515295982361, "perplexity": 540.4466545971125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00648.warc.gz"}
|
http://mathhelpforum.com/calculus/49653-finding-horizontal-asymptotes.html
|
# Math Help - finding horizontal asymptotes
1. ## finding horizontal asymptotes
Find the horizontal asymptote of arccot(x^2-x^4)
thanks so much!!
2. Originally Posted by henry5
Find the horizontal asymptote of arccot(x^2-x^4)
thanks so much!!
Horizontal Asymptotes occur when $f'(x)=0\text{ or }f'(x)$ is undefined.
$f'(x)=-\frac{2x-4x^3}{1+(x^2-x^4)^2}$
$f'(x)=0$ when $-2x+4x^3=0$. I leave it for you to find the values of x that cause this to be true.
I gave enough info to help get you through the problem. Try to finish this off.
--Chris
3. thanks for the reply
the x values that i get are +/- square root of 0.5 and 0.
the question though asks for the y values (like horizontal asymptote at y=2)
how would i find this?
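A closing note the thread leaves open: horizontal asymptotes are found from limits at $\pm\infty$ (the condition $f'(x)=0$ locates horizontal tangent lines instead). Under the common convention $\operatorname{arccot}: \mathbb{R} \to (0,\pi)$, since $x^2 - x^4 \to -\infty$ as $x \to \pm\infty$,

$$\lim_{x\to\pm\infty}\operatorname{arccot}(x^{2}-x^{4})=\lim_{t\to-\infty}\operatorname{arccot}(t)=\pi,$$

so the horizontal asymptote is $y=\pi$.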
|
2014-04-20 10:01:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9521108269691467, "perplexity": 1091.9692313881947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://ebrainanswer.com/mathematics/question13787741
|
, 08.11.2019 11:31, breannaasmith1122
# Which diagram shows how to correctly multiply 1,234 x 987?
### Other questions on the subject: Mathematics
Mathematics, 21.06.2019 14:30, lizisapenguin
Which of these people has balanced their checkbook correctly? a. Gary: the balance in his check register is $500 and the balance in his bank statement is $500. b. Gail: the balance in her check register is $400 and the balance in her bank statement is $500. c. Gavin: the balance in his check register is $500 and the balance in his bank statement is $510.
Mathematics, 21.06.2019 14:30, Carrchris021
Your favorite lemonade is $3.84 for 3 gallons. Write this as a unit rate.
Mathematics, 21.06.2019 15:00, suewignall
Need help! Give step-by-step solutions on how to solve: (1) $\frac{9-2\sqrt{3}}{12+\sqrt{3}}$, (2) $x+4=\sqrt{13x-20}$, (3) (domain and range) $f(x)=2\sqrt[3]{x}+1$
Mathematics, 21.06.2019 16:00, EstherAbuwaah
Which function is a quadratic function? p(x) = 2x(x² + 6) + 1; m(x) = −4(x + 3) − 2; t(x) = −8x²(x² − 6); h(x) = 3x(x − 2) − 4
|
2020-09-20 20:14:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7137407064437866, "perplexity": 10624.26688400053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198652.6/warc/CC-MAIN-20200920192131-20200920222131-00508.warc.gz"}
|
http://nbviewer.jupyter.org/url/taaviburns.ca/presentations/log_analysis_with_pandas/nb/2-Pandas%20Crash%20Course.ipynb
|
A Series is like an array: a 1-D list of homogeneously typed items.
In [1]:
import pandas as pd
a = pd.Series([1, 2, 3])
a
Out[1]:
0 1
1 2
2 3
In [2]:
a.dtype
Out[2]:
dtype('int64')
Here's a Series() of floating point numbers:
In [3]:
b = pd.Series([1, 2.3, 3])
b
Out[3]:
0 1.0
1 2.3
2 3.0
In [4]:
b.dtype
Out[4]:
dtype('float64')
Of course, if you mix types, you get the common fallback: everything in Python is an object in the end.
In [5]:
c = pd.Series(['a', None, 5])
c
Out[5]:
0 a
1 None
2 5
In [6]:
c.dtype
Out[6]:
dtype('object')
# Broadcasting operations across a Series
You can apply conditional expressions to a Series, and it will return another Series with the result of that expression applied to each value. NumPy calls this "broadcasting".
In [7]:
a
Out[7]:
0 1
1 2
2 3
In [8]:
a > 1
Out[8]:
0 False
1 True
2 True
In [9]:
a == 1
Out[9]:
0 True
1 False
2 False
It's also easy to broadcast your own callable by using Series.map():
In [10]:
a.map(lambda x: x % 2 == 0)
Out[10]:
0 False
1 True
2 False
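The map() call above also has a direct broadcast equivalent, since arithmetic operators broadcast too (a small self-contained sketch):

```python
import pandas as pd

a = pd.Series([1, 2, 3])

# a % 2 broadcasts the modulo over every element, then == 0 broadcasts
# the comparison, giving the same mask as a.map(lambda x: x % 2 == 0).
evens = a % 2 == 0
assert list(evens) == [False, True, False]
```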
# DataFrames
A DataFrame is essentially a set of Series objects (as columns) with a shared index (the row labels).
In [11]:
d = pd.DataFrame(
[
[1, 2.3, 'three'],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]],
columns=['Integers', 'Floats', 'Objects'],
index=[1, 2, 3, 4])
d
Out[11]:
Integers Floats Objects
1 1 2.3 three
2 4 5.0 6
3 7 8.0 9
4 10 11.0 12
In [12]:
d.dtypes
Out[12]:
Integers int64
Floats float64
Objects object
# Selecting data
Selecting by column by using a key lookup:
In [13]:
d['Floats']
Out[13]:
1 2.3
2 5.0
3 8.0
4 11.0
Name: Floats
You can look up two columns by indexing using a list of columns:
In [14]:
d[['Integers', 'Objects']]
Out[14]:
Integers Objects
1 1 three
2 4 6
3 7 9
4 10 12
You can select a range of rows using list slices. Note that this refers to the rows as if they were in a Python list()!
In [15]:
d[2:]
Out[15]:
Integers Floats Objects
3 7 8 9
4 10 11 12
You can also avoid the magic and just use DataFrame.xs() to access the rows by their indexed name:
In [16]:
d.xs(3, axis=0)
Out[16]:
Integers 7
Floats 8
Objects 9
Name: 3
Or specifying column names:
In [17]:
d.xs('Floats', axis=1)
Out[17]:
1 2.3
2 5.0
3 8.0
4 11.0
Name: Floats
Row indexing can also be done using a mask:
In [18]:
mask = [True, False, True, False]
d[mask]
Out[18]:
Integers Floats Objects
1 1 2.3 three
3 7 8.0 9
Combined with conditional expression broadcasting, and you can get some really interesting results:
In [19]:
where_Integers_is_gt_4 = d['Integers'] > 4
d[where_Integers_is_gt_4]
Out[19]:
Integers Floats Objects
3 7 8 9
4 10 11 12
In [20]:
import numpy as np
d[np.invert(where_Integers_is_gt_4)]
Out[20]:
Integers Floats Objects
1 1 2.3 three
2 4 5.0 6
To get a subset of the rows based on the index value:
In [21]:
d[d.index > 2]
Out[21]:
Integers Floats Objects
3 7 8 9
4 10 11 12
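Boolean masks can also be combined with `&`, `|` and `~` (a small sketch with hypothetical data, not from the notebook):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 4, 7, 10], 'b': [2.3, 5.0, 8.0, 11.0]},
                  index=[1, 2, 3, 4])

# Parenthesize each condition: & and | bind tighter than comparisons.
both = df[(df['a'] > 1) & (df['b'] < 9)]         # rows 2 and 3
either = df[(df['a'] == 1) | (df['b'] == 11.0)]  # rows 1 and 4

assert list(both.index) == [2, 3]
assert list(either.index) == [1, 4]
```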
|
2017-03-30 00:57:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2898833155632019, "perplexity": 4558.930426609748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191444.45/warc/CC-MAIN-20170322212951-00175-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://healthtech.samel.com.br/yellow-woodland-atd/1eede0-first-line-of-lyman-series
|
Be the first to write the explanation for this question by commenting below. Calculate the wavelengths of the first four members of the Lyman series i… Add To Playlist Add to Existing Playlist. The IE2 for X is? Be the first to write the explanation for this question by commenting below. Nov 09,2020 - If the wavelength of the first line of Lyman series of hydrogen is 1215 Å. the wavelength of the second line of the series isa)911Åb)1025Åc)1097Åd)1008ÅCorrect answer is option 'B'. The wavelength of first line of lyman series i.e the electron will jump from n=1 to n=2. The wavelength of first line of Balmer series is 6563Å. Brackett of the United States and Friedrich Paschen of Germany. 1. Currently only available for. The Lyman series lies in the ultraviolet, whereas the Paschen, Brackett, and Pfund series … Electrons are falling to the 1-level to produce lines in the Lyman series. α line of Lyman series p = 1 and n = 2; α line of Lyman series p = 1 and n = 3; γ line of Lyman series p = 1 and n = 4; the longest line of Lyman series p = 1 and n = 2; the shortest line of Lyman series p = 1 and n = ∞ Lyman series is obtained when an electron jumps from n>1 to n = 1 energy level of hydrogen atom. If the interaction between radiation and the electron is V = eE:r = e(Ecx + Eyy + E,z), which (n, €, m) states mix with the state (1,0,0) to give this absorption line, called Lyman a? 712.2 Å. This formula gives a wavelength of lines in the Lyman series of the hydrogen spectrum. asked Dec 23, … The wavelength of the first line of Lyman series of hydrogen atom is equal to that of the second line of Balmer series of a hydrogen like ion . Create a New Plyalist. For example, in the Lyman series, n 1 is always 1. And, this energy level is the lowest energy level of the hydrogen atom. The wavelength of the second line of the same series will be. 2. Explanation: No explanation available. OR. R = Rydberg constant = 1.097 × 10 +7 m. n 1 = 1 n 2 = 2. 
The first line in Lyman series has wavelength λ. Related Questions: A stationary ion emitted a photon corresponding to a first line of the Lyman series. The wavelength of the first line of Lyman series for hydrogen atom is equal to that of the second line of Balmer series tor a hydrogen like ion. Share Question. Let v 1 be the frequency of series limit of Lyman series, v 2 the frequency of the first line of Lyman series, and v 3 the frequency of series limit of Balmer series. New questions in Chemistry. Explanation: No explanation available. The so-called Lyman series of lines in the emission spectrum of hydrogen corresponds to transitions from various excited states to the n = 1 orbit. As En = - 13.6n3 eVAt ground level (n = 1), E1 = - 13.612 = - 13.6 eVAt first excited state (n= 2), E2 = - 13.622 = - 3.4 eVAs hv = E2 - E1 = - 3.4 + 13.6 = 10.2 eV = 1.6 × 10-19 × 10.2 = 1.63 × 10-18 JAlso, c = vλSo λ = cv = chE2 - E1 = (3 x 108) x (6.63 x 10-34)1.63 x 10-18 = 1.22 × 10-7 m ≈ 122 nm The first line in the spectrum of the Lyman series was discovered in 1906 by Harvard physicist Theodore Lyman, who was studying the ultraviolet spectrum of electrically excited hydrogen gas. 4. Atoms. Option A is correct. Rutherfords experiment on scattering of particles showed for the first time that the atom has (a) electrons (b) protons (c) nucleus (d) neutrons Further, you can put the value of Rh to get the numerical values Q. What is the… Zigya App. We have step-by-step solutions for your textbooks written by Bartleby experts! Calculate the wavelength of the first line in the Lyman series and show that… 02:05. a. The formation of this line series is due to the ultraviolet emission lines of … Where, = Wavelength of radiation = Rydberg's Constant = Higher energy level = 2 = Lower energy level = 1 (Lyman series) Putting the values, in above equation, we get Thus . Download the PDF Question Papers Free for off line practice and view the Solutions online. 
The photon liberated a photoelectron from a stationary H atom in ground state. Find the ratio of series limit wavelength of Balmer series to wavelength of first time line of paschen series. 911.2 Å. For example, the 2 → 1 line is called "Lyman-alpha" (Ly-α), while the 7 → 3 line is called "Paschen-delta” (Pa-δ). So , for max value of 1/wavelength , first line of Lyman series , that is n1=1 and n2=infinity . Calculate the wavelength of the lowest-energy line in the Lyman series to three significant figures. Solution for The first line of the Lyman series of the hydrogen atom emission results from a transition from the n = 2 level to the n = 1 level. Example $$\PageIndex{1}$$: The Lyman Series. If $\upsilon_{1}$ is the frequency of the series limit of Lyman series, $\upsilon_{2}$ is the frequency of the first line of Lyman series and $\upsilon_{3}$ is the frequency of the series limit of the Balmer series… Wave length λ = 0.8227 × 10 7 = 8.227 × 10 6 m-1 Then which of the following is correct? Calculate the wavelength corresponding to series limit of Lyman series. 17. Can you explain this answer? Maximum wave length corresponds to minimum frequency i.e., n 1 = 1, n 2 = 2. The rest of the lines of the spectrum (all in the ultraviolet) were discovered by Lyman from 1906-1914. 1. The wavelength of the first line of Lyman series in hydrogen atom is 1216. The Lyman series of the Hydrogen Spectral Emissions is the first level where n' = 1. The atomic number ‘Z’ of hydrogen like ion is _____ | EduRev GATE Question is disucussed on EduRev Study Group by 133 GATE Students. The Lyman series means that the final energy level is 1 which is the minimum energy level, the ground state, in other words. It is the transitions from higher electron orbitals to this level that release photons in the UltraViolet band of the ElectroMagnetic Spectrum. 3.4k SHARES. 
The first line in the Lyman series in the spectrum of hydrogen atom occurs at a wavelength of 1215 Å and the limit for Balmer series is 3645 Å. Create. What is Lyman Series? Class 10 Class 12. The first line in each series is the transition from the next lowest number in the series to the lowest (so in the Lyman series the first line would be from n=2 to n=1) and the second line would be from from the third lowest to the lowest (in Lyman it would be n=3 to n=1) etc etc. Doubtnut is better on App. Different lines of Lyman series are . Related Questions: Energy, ΔE=13.6( n 1 2 1 − n 2 2 1 ) eV For the first line of Lyman series: n 1 =1, n 2 =2 ΔE=13.6( 1 2 1 − 2 2 1 ) eV=10.2 eV and energy decreases as we move on to the next series. 3.4k VIEWS. Add to playlist. Lyman series is a hydrogen spectral line series that forms when an excited electron comes to the n=1 energy level. Correct Answer: 27/5 λ. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. Options (a) 1215.4Å (b) 2500Å (c) 7500Å (d) 600Å. The wavelength of first line of Lyman series will be . Paiye sabhi sawalon ka Video solution sirf photo khinch kar. The wavelengths in the hydrogen spectrum with m=1 form a series of spectral lines called the Lyman series. First line is Lyman Series, where n 1 = 1, n 2 = 2. 1:25 16.5k LIKES. n 2 is the level being jumped from. Options (a) 2/9 λ (b) 9/2 λ (c) 5/27 λ (d) 27/5 λ. The atomic number Z of hydrogen-like ion is. Ans: (a) Sol: Series Limit means Shortest possible wavelength . The Questions and Answers of The wavelength of the first line of lyman series of hydrogen is identical to that of second line of balmer series for same hydrogen like ion 'X'. The wavelength of the first line of Lyman series for 20 times ionized sodium atom will be added 0.1 A˚ (b) Identify the region of the electromagnetic spectrum in which these lines appear. OR. 
The four other spectral line series, in addition to the Balmer series, are named after their discoverers, Theodore Lyman, A.H. Pfund, and F.S. The wavelength of the first line in Balmer series is . 3.6k VIEWS. The wavelength of the first line of Lyman series of hydrogen is 1216 A. And this initial energy level has to be higher than this one in order to have a transition down to it and so the first line is gonna have an initial equal to 2. As per formula , 1/wavelength = Rh ( 1/n1^2 —1/n2^2) , and E=hc/wavelength , for energy to be max , 1/wavelength must max . 3. The wavelength of the first line of Lyman series for hydrogen atom is equal to that of the second line of Balmer series for a hydrogen-like ion. 678.4 Å The wavelength of first line of Lyman series will be 5:26 42.9k LIKES. are solved by group of students and teacher of JEE, which is also the largest student community of JEE. Textbook solution for Modern Physics 3rd Edition Raymond A. Serway Chapter 4 Problem 12P. For the Balmer series, n 1 is always 2, because electrons are falling to the 2-level. … 6.8 The first line in the Lyman series for the H atom corresponds to the n = 1 → n = 2 transition. Assuming f to be frequency of first line in Balmer series, the frequency of the immediate next( ie, second) line is a) 0.50 / b)1.35 / c)2.05 / d)2.70 / The spectral lines are grouped into series according to n′. Copy Link. 812.2 Å . (a) v 1 – v 2 = v 3 (b) v 2 – v 1 = v 3 (c) v 3 = ½ (v 1 + v 2) (d) v 2 + v 1 = v 3. 3.6k SHARES. The wavelengths of the Lyman series for hydrogen are given by $$\frac{1}{\lambda}=R_{\mathrm{H}}\left(1-\frac{1}{n^{2}}\right) \qquad n=2,3,4, \ldots$$ (a) Calculate the wavelengths of the first three lines in this series. Correct Answer: 1215.4Å. What is the velocity of photoelectron? This Question by commenting below λ ( b ) Identify the region of the series! Hydrogen atom n=1 energy level is the lowest energy level is the transitions from higher orbitals. 
= Rydberg constant = 1.097 × 10 +7 m. n 1 is always 2, because are! Is always 2, because electrons are falling to the n=1 energy level of the electromagnetic spectrum Raymond... The second line of Lyman series is a hydrogen spectral line series that forms when excited... ( all in the Lyman series is line series that forms when excited! Of series limit wavelength of first line in Lyman series photon liberated photoelectron. Solutions online = 1 → n = 1 n 2 = 2 is 2... To write the explanation for this Question by commenting below have step-by-step for. Max, 1/wavelength = Rh ( 1/n1^2 —1/n2^2 ), and Pfund series … what is Lyman is... Teacher of JEE, which is also the largest student community of JEE, which also! Series and show that… 02:05. a the Lyman series to three significant figures \ ) the... United States and Friedrich Paschen of Germany sirf photo khinch kar … what the…. Hydrogen-Like ion is of Germany first four members of the first to write the for. ) 600Å the Lyman series is show that… 02:05. a orbitals to this level that release photons in Lyman... Community of JEE limit wavelength of first line of the lowest-energy line in Lyman.! \Pageindex { 1 } \ ): the Lyman series of the first four members of the series that. Photo khinch kar GATE Students sabhi sawalon ka Video solution sirf photo khinch kar n1=1 and.... M. n 1 = 1 solved by Group of Students and teacher of JEE of! Hydrogen is 1216 a ( 1/n1^2 —1/n2^2 ), and E=hc/wavelength, for max value of,! ) 1215.4Å ( b ) 9/2 λ ( c ) 5/27 λ ( d ).. Lyman from 1906-1914 band of the lines of the first four members of the line... Into series according to n′ energy to be max, 1/wavelength = (... ) 1215.4Å ( b ) Identify the region of the first line of series! = 1 States and Friedrich Paschen of Germany solution for Modern Physics 3rd Edition Raymond A. Serway Chapter Problem. Shortest possible wavelength to Playlist Add to Playlist Add to Existing Playlist 3rd Edition Raymond A. Serway Chapter Problem! 
The Lyman series is the hydrogen spectral line series formed when an excited electron falls to the n = 1 energy level, the lowest energy level of the atom; its lines all lie in the ultraviolet band of the electromagnetic spectrum and were discovered by Theodore Lyman between 1906 and 1914. (By contrast, the Balmer series forms when electrons fall to the 2-level, and the Paschen, Brackett, and Pfund series correspond to still higher final levels; the wavelengths are grouped into series according to n′, labeled with Greek letters within each series.) The first line of the Lyman series corresponds to the n = 2 → n = 1 transition, and the wavelength corresponding to the series limit means the shortest possible wavelength (n = ∞ → n = 1). Taking the Rydberg constant R = 1.097 × 10⁷ m⁻¹ with n₁ = 1 and n₂ = 2 gives a wavelength of 1215.4 Å (about 1216 Å) for the first line of the Lyman series of hydrogen, which liberates a photoelectron from a stationary H atom in the ground state. If the first line of the Lyman series has wavelength λ, the first line of the Balmer series (n = 3 → n = 2) has wavelength 27/5 λ.
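A quick numerical check of the quoted wavelength via the Rydberg formula, using the constants above (the code itself is added here and is not part of the source page):

```python
# Rydberg formula: 1/lambda = R * (1/n1^2 - 1/n2^2)
R = 1.097e7          # Rydberg constant, m^-1 (as quoted in the text)
n1, n2 = 1, 2        # first line of the Lyman series: n = 2 -> n = 1

inv_wavelength = R * (1 / n1**2 - 1 / n2**2)       # m^-1
wavelength_angstrom = 1e10 / inv_wavelength        # 1 m = 1e10 angstrom
# wavelength_angstrom comes out near 1215.4, matching the quoted value
```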
|
2021-09-24 06:07:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7002501487731934, "perplexity": 1646.566320910333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00328.warc.gz"}
|
http://math.stackexchange.com/questions/496397/why-does-the-empty-diagram-exist
|
# Why does the empty diagram exist?
Let $\mathcal C$ be a category. Why does the functor $\mathcal F:\emptyset \to \mathcal C$ exist? In general $Ob(\mathcal C)$ is not a set, so $\mathcal F:\emptyset \to Ob(\mathcal C)$ is not a function, and I don't have the vacuous condition from the set-theoretic case, right?
-
## 1 Answer
When the set of objects is not a set for you, what is it then? A class, perhaps? Note that for every class $T$ there is still a unique map $\emptyset \to T$. Recall that a map $S \to T$ between classes is just a formula $\phi$ (actually a subclass of $S \times T$) such that $\forall s\, (s \in S \Longrightarrow \exists ! t \in T\; \phi(s,t))$. If $S=\emptyset$, this is satisfied for every $\phi$ (but all of them define the same map).
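The same vacuous-truth mechanism can be seen concretely. As a toy sketch in Python (my own illustration, not part of the answer), the "for every s there is a unique t" condition over an empty domain holds for any graph:

```python
def is_function(graph, domain, codomain):
    """Does `graph` (a set of pairs) define a map domain -> codomain?

    Encodes the condition: for all s in domain there exists a unique
    t in codomain with (s, t) in graph.
    """
    return all(
        sum(1 for (s, t) in graph if s == s0 and t in codomain) == 1
        for s0 in domain
    )

# Over the empty domain the condition is vacuously true for ANY graph,
# so there is exactly one map out of the empty set (the empty graph):
assert is_function(set(), domain=set(), codomain={1, 2, 3})
```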
-
Sorry, I was dazzled by the class notion, which is very new to me, but it's no sorcery at all :) – user83496 Sep 17 '13 at 13:42
|
2015-01-27 18:54:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789433836936951, "perplexity": 204.77842457370747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115856041.43/warc/CC-MAIN-20150124161056-00011-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/number-of-subspaces-of-a-vector-space-over-a-finite-field.332498/
|
# Number of subspaces of a vector space over a finite field
1. Aug 25, 2009
### winter85
1. The problem statement, all variables and given/known data
Prove: If V is an n-dimensional vector space over a finite field, and if 0 <= m <= n, then the number of m-dimensional subspaces of V is the same as the number of (n-m)-dimensional subspaces.
3. The attempt at a solution
Well, here's a sketch of my argument. Let U be an m-dimensional subspace of V; then the annihilator of U, U^0, is an (n-m)-dimensional subspace of V*, the dual space of V. Let W be the subspace of V whose dual space is U^0. I plan to show that W is in one-to-one correspondence with U, so there is an injection from the set of m-dimensional subspaces of V into the set of (n-m)-dimensional subspaces. Since the situation is symmetric, it follows that those sets are in bijection and therefore have the same cardinality.
Now, before I work out the details, I want to ask about one thing: this argument nowhere uses the fact that V is a vector space over a finite field (except perhaps at the very end, to substitute "cardinality" with "number of elements"). So is there something wrong with it? Why is the problem specifically about vector spaces over finite fields if it works in the general case?
Thanks.
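A quick numerical check of the claimed symmetry (added here, not part of the original post): over a field with q elements, the number of m-dimensional subspaces of an n-dimensional space is the Gaussian binomial coefficient, and it is symmetric under m ↦ n−m:

```python
def gaussian_binomial(n, m, q):
    """Number of m-dimensional subspaces of an n-dimensional space over F_q."""
    if not 0 <= m <= n:
        return 0
    num = den = 1
    for i in range(m):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # the full product always divides exactly

# Symmetry check: as many m-dimensional as (n - m)-dimensional subspaces.
for q in (2, 3, 5):
    for n in range(7):
        for m in range(n + 1):
            assert gaussian_binomial(n, m, q) == gaussian_binomial(n, n - m, q)
```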
|
2018-03-18 02:36:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8156532049179077, "perplexity": 193.28534815380777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645413.2/warc/CC-MAIN-20180318013134-20180318033134-00528.warc.gz"}
|
http://www.zentralblatt-math.org/ioport/en/?q=au:Ward%2C%20R*
|
Result 1 to 20 of 240 total
Sparse Legendre expansions via $\ell_1$-minimization. (English)
J. Approx. Theory 164, No. 5, 517-533 (2012).
1
Lower bounds for the error decay incurred by coarse quantization schemes. (English)
Appl. Comput. Harmon. Anal. 32, No. 1, 131-138 (2012).
2
Quantifier elimination in the theory of $L_p(L_q)$-Banach lattices. (English)
J. Log. Anal. 3, Article 11, 29 p., electronic only (2011).
3
Low-rank matrix recovery via iteratively reweighted least squares minimization. (English)
SIAM J. Optim. 21, No. 4, 1614-1640 (2011).
4
New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property. (English)
SIAM J. Math. Anal. 43, No. 3, 1269-1281 (2011).
5
Trust in human-computer interactions as measured by frustration, surprise, and workload. (English)
Schmorrow, Dylan D. (ed.) et al., Foundations of augmented cognition. Directing the future of adaptive systems. 6th international conference, FAC 2011, held as Part of HCI international 2011, Orlando, FL, USA, July 9‒14, 2011. Proceedings. Berlin: Springer (ISBN 978-3-642-21851-4/pbk). Lecture Notes in Computer Science 6780. Lecture Notes in Artificial Intelligence, 507-516 (2011).
6
Increasing energy efficiency in sensor networks: blue noise sampling and non-convex matrix completion. (English)
Int. J. Sens. Netw. 9, No. 3-4, 158-169 (2011).
7
Some empirical advances in matrix completion. (English)
Signal Process. 91, No. 5, 1334-1338 (2011).
8
Computing the confidence levels for a root-mean-square test of goodness-of-fit. (English)
Appl. Math. Comput. 217, No. 22, 9072-9084 (2011).
9
Security analysis and complexity comparison of some recent lightweight RFID protocols. (English)
Herrero, Álvaro (ed.) et al., Computational intelligence in security for information systems. 4th international conference, CISIS 2011, held at IWANN 2011, Torremolinos-Málaga, Spain, June 8‒10, 2011. Proceedings. Berlin: Springer (ISBN 978-3-642-21322-9/pbk). Lecture Notes in Computer Science 6694, 92-99 (2011).
10
New and improved Johnson-lindenstrauss embeddings via the restricted isometry property (English)
SIAM J. Math. Analysis 43, No. 3, 1269-1281 (2011).
11
Robust image watermarking based on multiscale gradient direction quantization (English)
IEEE Transactions on Information Forensics and Security 6, No. 4, 1200-1213 (2011).
12
Probabilistic analysis of blocking attack in RFID systems (English)
IEEE Transactions on Information Forensics and Security 6, No. 3-1, 803-817 (2011).
13
A robust and fast video copy detection system using content-based fingerprinting (English)
IEEE Transactions on Information Forensics and Security 6, No. 1, 213-226 (2011).
14
A new scheme for robust gradient vector estimation in color images (English)
IEEE Transactions on Image Processing 20, No. 8, 2211-2220 (2011).
15
Optimizing a tone curve for backward-compatible high dynamic range image and video compression (English)
IEEE Transactions on Image Processing 20, No. 6, 1558-1571 (2011).
16
Computing the confidence levels for a root-mean-square test of goodness-of-fit (English)
Applied Mathematics and Computation 217, No. 22, 9072-9084 (2011).
17
Probabilistic analysis and correction of Chen’s tag estimate method (English)
IEEE T. Automation Science and Engineering 8, No. 3, 659-663 (2011).
18
A new data hiding method using angle quantization index modulation in gradient domain (English)
ICASSP, 2440-2443 (2011).
19
Compressed sensing based MR image reconstruction from multiple partial K-space scans (English)
SiPS, 340-343 (2011).
20
|
2013-06-18 20:55:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21131065487861633, "perplexity": 3602.7788511324543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707187122/warc/CC-MAIN-20130516122627-00057-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://ai.stackexchange.com/questions/12469/the-problem-with-the-gamblers-problem-in-rl/12473
|
# The problem with the Gambler's Problem in RL
Recently I simulated the Gambler's Problem in RL:
Now, the problem is that the curve does not appear at all the way it is given in the book. The "best policy" curve appears far more undulating than shown, depending on the following factors:
• Sensitivity (i.e. the threshold at which you decide the state values have converged).
• Depending on the value of the sensitivity, the result also changes with whether I find the policy by selecting the action (bet) that gives the maximum return using $$>$$ or using $$>=$$ in the following code, i.e.:
initialize maximum = -inf
best_action = None
loop over states:
    loop over actions of the state:
        if action_reward > maximum:
            maximum = action_reward
            best_action = action
Also note that if we make the final reward 101 instead of 100, the curve becomes more uniform. This problem has also been noted in the following thread.
So what is the actual intuitive explanation behind such behaviour of the solution? Also, here is the thread where this problem is discussed.
As Neil notes, for low values of $$p$$, the probability that you win a gamble, it is the case that there is a unique optimal policy.
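A minimal value-iteration sketch of the setup (my own code, based on the standard formulation of the Gambler's Problem, not taken from the question): the greedy step below uses a strict `>`, so the first maximizing bet wins ties; switching the comparison changes which of several equally good bets gets plotted, which is exactly the wobble being discussed.

```python
import numpy as np

def gambler(p=0.4, goal=100, theta=1e-9):
    """Value iteration for the Gambler's Problem; returns (values, policy)."""
    V = np.zeros(goal + 1)
    V[goal] = 1.0  # reaching the goal is worth reward 1; ruin is worth 0
    while True:
        delta = 0.0
        for s in range(1, goal):
            old = V[s]
            V[s] = max(p * V[s + a] + (1 - p) * V[s - a]
                       for a in range(1, min(s, goal - s) + 1))
            delta = max(delta, abs(V[s] - old))
        if delta < theta:
            break
    # Greedy policy with strict '>': first maximizing bet is kept.
    policy = [0] * (goal + 1)
    for s in range(1, goal):
        best_q, best_a = -1.0, 0
        for a in range(1, min(s, goal - s) + 1):
            q = p * V[s + a] + (1 - p) * V[s - a]
            if q > best_q + 1e-12:  # loosen to '>=' to pick the last tie instead
                best_q, best_a = q, a
        policy[s] = best_a
    return V, policy
```

Because many bets are exactly optimal at many states, the tie-breaking rule and the convergence threshold `theta` select different members of the optimal-policy family, producing different-looking but equally optimal plots.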
|
2020-05-31 04:24:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404189348220825, "perplexity": 492.65214673893286}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410745.37/warc/CC-MAIN-20200531023023-20200531053023-00023.warc.gz"}
|
https://www.studyxapp.com/homework-help/problems-resolution-of-forces-into-x-and-y-components-26-determine-the-x-and-y-q1564839735036743681
|
# Question (Solved, 1 Answer): Resolution of forces into $$x$$ and $$y$$ components. 2.6 Determine the $$x$$ and $$y$$ components of the force, $$F$$, shown. $$F_x=$$ $$F_y=$$
4ZZI5S The Asker · Civil Engineering
Transcribed Image Text: Problems Resolution of forces into x and y components. 2.6 Determine the x and y components of the force, F, shown. 2.6 Fx= Fy=
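The figure giving the magnitude and direction of F is not reproduced on this page, so as a hedged sketch with assumed values (F = 100 N at 30° above the +x axis, both invented for illustration), the components follow from Fx = F cos θ and Fy = F sin θ:

```python
import math

# Assumed values -- the original figure is missing, so these are
# illustrative only, not the actual problem data.
F = 100.0          # force magnitude, N (assumed)
theta_deg = 30.0   # angle above the +x axis, degrees (assumed)

theta = math.radians(theta_deg)
Fx = F * math.cos(theta)
Fy = F * math.sin(theta)
```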
|
2023-01-27 18:17:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17517825961112976, "perplexity": 1269.3533680431044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00840.warc.gz"}
|
https://cs.stackexchange.com/questions/53266/how-to-handle-horizontal-lines-in-the-polyfill-algorithm
|
# How to handle horizontal lines in the Polyfill Algorithm?
When I look at polyfill algorithm tutorials, articles, or examples, nothing is mentioned about how to handle horizontal lines. Does anyone have an idea how horizontal lines should be handled?
For instance, consider the following image:
For intersections, whether I count a line lying on the scan line as one point, as two points, or as no point at all, none of these choices helps. Any idea how I should handle the horizontal line?
Edit: here is a more complex example of a horizontal-line intersection; let's take a look at line 3:
Alright: if we look at the next point and the previous point at each end of the horizontal line and they are on the same side, I can count the line as in or out; it doesn't affect the whole calculation.
But if they are on opposite sides, I have to count the line in, plus one extra point on one side.
What do I mean?
Take InterS1 and InterS2 as an example: when I get this case, I have to count that horizontal line as 3 points:
point1 = InterS1
point2 = InterS2
point3 = InterS2
Why do I need to do that?
1. When the next point and the previous point lie in different directions, it tells us that the intersection line crosses from the outside to the inside of the shape.
2. Also, because a line needs a start point and an end point, I need an even number of points to draw the fill spans.
So now I have no idea on what basis I should add the extra point to one side.
Do I make sense?
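The convention most raster fill implementations use, sketched below (my own illustration, not quoted from any particular tutorial), is to make each edge half-open in y: an edge owns its lower endpoint but not its upper one, and horizontal edges are skipped entirely. Every scan line then meets the boundary an even number of times, and the horizontal-line ambiguity disappears:

```python
def scanline_intersections(poly, y):
    """X-coordinates where scan line `y` crosses the polygon boundary.

    Uses the half-open edge convention [min(y0, y1), max(y0, y1)):
    each edge owns its lower endpoint but not its upper one, and
    horizontal edges (y0 == y1) are skipped.  This always yields an
    even number of crossings, so span filling pairs them up cleanly.
    """
    xs = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edge: contributes nothing
        lo, hi = (y0, y1) if y0 < y1 else (y1, y0)
        if lo <= y < hi:  # half-open: lower endpoint counted exactly once
            t = (y - y0) / (y1 - y0)
            xs.append(x0 + t * (x1 - x0))
    return sorted(xs)
```

With this rule there is never a "count as 1, 2, or 3 points" decision: a scan line coinciding with a horizontal edge picks up exactly one crossing from each of the two non-horizontal edges that meet it.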
• What are you doing when you have a V shaped corner? Either you have to ignore it or consider it as two vertices, right!! – Shreesh Feb 18 '16 at 7:24
• @Shreesh in that case I will look at the next and previous vertex; if they are not on the same side, I will count it as an intersection, and if they are on the same side, I will just ignore it, like this: drive.google.com/file/d/0BwoMn9VKDw-taXlNendza09BQXM/… – Bear Feb 18 '16 at 7:38
• Do the same for horizontal line. – Shreesh Feb 18 '16 at 10:16
• yeah, I did that; this is what I get: drive.google.com/file/d/0BwoMn9VKDw-tRElHSElrejcwd0k/… I haven't applied ignoring the horizontal yet. – Bear Feb 18 '16 at 10:44
• @Shreesh nah, not really. Okay, here is what I did: for horizontal lines, if the previous point and the next point are in the same direction, they are in the shape; but for those horizontal lines whose next and previous points go in different directions, I have to count those 2 points as 3 points, and I have no idea on what basis I should add the extra one to which side. Okay, let me draw it; maybe it will be clearer what I mean. – Bear Feb 22 '16 at 1:34
|
2020-08-13 11:46:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.493930459022522, "perplexity": 653.3854358394835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00314.warc.gz"}
|
https://homework.cpm.org/category/CCI_CT/textbook/int3/chapter/9/lesson/9.1.7/problem/9-88
|
### Home > INT3 > Chapter 9 > Lesson 9.1.7 > Problem9-88
9-88.
1. What is the measure in degrees of a central angle measuring $\frac{7\pi}{3}$ radians? Homework Help ✎
1. What other angles correspond to the same point on the circle?
2. Make a sketch of the unit circle showing the resulting right triangle.
3. What are sin, cos and tan exactly?
$\text{What\:number\:of\:degrees\:is\:equivalent\:to}\:\frac{\pi}{3}\:\text{radians?}$
If you can't remember, calculate.
$\pi=180^\circ \ \ \ \ \ \ \ \ \frac{\pi}{3} = 60^\circ$
Now multiply the angle by 7.
420°
The distance around the unit circle is 2π, no matter what point you start from.
$\frac{\pi}{3} \:\pm\:2\pi n$
The angle you are working with, 420°, is more than 360°. How much more?
Notice that the triangle formed is a 30°-60°-90° triangle.
$\sin\left(\frac{7\pi}{3}\right)=\frac{\sqrt{3}}{2}$
$\cos\left(\frac{7\pi}{3}\right)=\frac{1}{2}$
$\tan\left(\frac{7\pi}{3}\right)=\sqrt{3}$
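A quick numerical sanity check of these exact values (added here, not part of the original lesson):

```python
import math

# 7*pi/3 is coterminal with pi/3 (420 degrees = 360 + 60 degrees),
# so its sine, cosine, and tangent match the 30-60-90 triangle values.
theta = 7 * math.pi / 3

sin_val = math.sin(theta)   # sqrt(3)/2
cos_val = math.cos(theta)   # 1/2
tan_val = math.tan(theta)   # sqrt(3)
```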
|
2019-10-14 01:20:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5485000014305115, "perplexity": 1780.7295403894784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00013.warc.gz"}
|
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.17/share/doc/Macaulay2/Macaulay2Doc/html/_to__Sequence.html
|
# toSequence -- convert to sequence
## Description
toSequence x -- yields the elements of a list x as a sequence.
If x is a sequence, then x is returned.
i1 : toSequence {1,2,3}

o1 = (1, 2, 3)

o1 : Sequence
## Ways to use toSequence :
• "toSequence(BasicList)"
## For the programmer
The object toSequence is a compiled function.
|
2021-06-18 11:48:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39523932337760925, "perplexity": 7948.79188288103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00074.warc.gz"}
|
http://sourcemage.org/Spell/Book
|
# Spell writing handbook
## Introduction
This is a technical reference document.
The first part describes all of the files that are checked for in a spell, their purpose, and documented attributes.
The second part describes the standards we use in the grimoire.
## Spell files
These are the files used during a cast, in their execution order:
• PREPARE – Useful for those rare times when the DETAILS file needs to be modified or the spell or package needs to be configured before the DETAILS file is executed.
• DETAILS – Informational file, required for all spells.
• CONFIGURE – Used to select and modify a package's compile time options.
• DEPENDS – Lists all other spells that are required or optionally required to be cast.
• UP_TRIGGERS – Provides the opportunity to use runtime registration of on_cast triggers.
• SUB_DEPENDS – Used to make a spell depend on another spell with certain features enabled.
• PRE_SUB_DEPENDS – Tells Sorcery whether or not the sub-dependee is providing the given sub-depends.
• TRIGGER_CHECK – Used to inspect each trigger in a spell.
(…Processing of the depended on spells…)
(…DETAILS is run again…)
Build API begin —
— Build API end —
(…The triggered spells are cast now…)
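To make the file roles above concrete, here is a hedged sketch of the one required file, DETAILS; the field names follow the grimoire convention, but the spell name, version, and URLs below are invented for illustration, not a real grimoire entry:

```shell
# Hypothetical minimal DETAILS file (illustrative values only).
SPELL=hello
VERSION=2.12
SOURCE=$SPELL-$VERSION.tar.gz
SOURCE_DIRECTORY=$BUILD_DIRECTORY/$SPELL-$VERSION
SOURCE_URL[0]=https://ftp.gnu.org/gnu/hello/$SOURCE
WEB_SITE=https://www.gnu.org/software/hello/
LICENSE[0]=GPL
ENTERED=20240101
SHORT="GNU greeting program (example spell)"
cat << EOF
A minimal example: the DETAILS file names the spell, pins the version and
source tarball, and carries the long description between the EOF markers.
EOF
```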
These are the files used during the download, in their execution order:
These are the files used during a dispel, in their execution order:
These are the files used during a resurrect, in their execution order:
These files are processed during a scribe update, cleanse --tablet, or cleanse --tablet_spell, in their execution order:
These files are known as spell filters:
• excluded
• protected
• volatiles
• configs
These are the other files:
• HISTORY
• PROVIDES
• EXPORTS
• SOLO
• services
• init.d directory
• pam.d directory
• xinetd.d directory
• desktop directory
|
2021-07-28 01:37:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2462516874074936, "perplexity": 11959.471278051533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00139.warc.gz"}
|
http://mathematica.stackexchange.com/questions/6269/creating-a-plot-without-an-x-axis-and-automatic-ticks-for-the-y-axis?answertab=votes
|
# Creating a plot without an x-axis and automatic ticks for the y-axis
I am trying to plot a function with specific options for the appearance of the plot. I want the x-axis to not show up, and the y-axis to be labeled normally (i.e. numbers outside the plot). Yet I don't get any numbers on the y-axis ticks:
The code I am using is the following (the function to be plotted is different, but still returns this behavior):
Plot[5 t + 3, {t, 0, 10},
PlotRange -> All,
AxesOrigin -> {10, 0},
LabelStyle -> {FontFamily -> "Arial", Bold, 40},
Axes -> {False, True},
PlotStyle -> {{Thickness[0.007], Darker[Blue, 0.5]}},
AxesStyle -> AbsoluteThickness[3], ImageSize -> {640, 480}
]
-
That doesn't help me. Either way I get blank space within the plot when I change AxesOrigin, and resizing makes no change to the plot :S – Sosi May 31 '12 at 14:23
The problem with your plot is that you are using the y-axis on the left, but positioning it on the right by shifting the origin. However, its behaviour is still that of a left-axis (i.e., ticks are to the left of the line). What you need is a right-axis, where the ticks are to the right (with the origin remaining where it should be).
You can do this by using Frame and setting only the right frame to be visible. Note that all the Axes* options will now be named Frame*.
Plot[5 t + 3, {t, 0, 10},
LabelStyle -> {FontFamily -> "Arial", Bold, 40},
Frame -> {{False, True}, {False, False}},
FrameTicks -> {{None, All}, {None, None}},
PlotStyle -> {{Thickness[0.007], Darker[Blue, 0.5]}},
FrameStyle -> AbsoluteThickness[3], ImageSize -> {640, 480},
PlotRange -> All
]
-
Oh, I see!! Thanks, I will check it out now! Looks great! Edit:Indeed, it works great! Thanks again! – Sosi May 31 '12 at 14:31
AxesOrigin is unnecessary. Only nitpicking because you beat me by 1 sec :) – István Zachar May 31 '12 at 14:33
You don't need to switch from axes to frame. And you may not be able to use a frame if the axis happens to be inside the plot instead of at the border.
So the way to get what you want with the minimum changes to your original code is this:
Plot[5 t + 3, {t, 0, 10},
PlotRange -> All,
AxesOrigin -> {10, 0},
LabelStyle -> {FontFamily -> "Arial", Bold, 40},
PlotStyle -> {{Thickness[0.007], Darker[Blue, 0.5]}},
AxesStyle -> {Opacity[0], AbsoluteThickness[3]},
ImageSize -> {640, 480}]
All I did is to remove Axes -> {False, True}, and then modified the AxesStyle to make the x axis invisible.
To see how this differs from the solution using Frame, I'll move the axis into the middle:
Plot[5 t + 3, {t, 0, 10},
PlotRange -> All,
AxesOrigin -> {5, 0},
LabelStyle -> {FontFamily -> "Arial", Bold, 40},
PlotStyle -> {{Thickness[0.007], Darker[Blue, 0.5]}},
AxesStyle -> {Opacity[0], AbsoluteThickness[3]},
ImageSize -> {640, 480}]
-
"... Y axis to be labeled normally (i.e. numbers outside the plot)..." That's why I used a frame. Yours still puts the numbers inside the plot. Of course, you could possibly control it with a tick function, but then it isn't minimal anymore :) – rm -rf May 31 '12 at 19:39
@R.M Well, if you're going to take things literally: I could just change mine to AxesOrigin -> {0, 0}... – Jens May 31 '12 at 20:02
Nah, I interpreted their question as wanting to place the axis to the right, but since they didn't know how to, they simply moved it as far right as possible. I do agree that yours is useful, since you can't have frames in arbitrary positions – rm -rf May 31 '12 at 20:18
Indeed I wanted to have the axis on the right-hand side. Yet, thanks for your help Jens, I learnt something now! :) – Sosi Jun 1 '12 at 9:54
https://physics.stackexchange.com/questions/170113/custodial-symmetry-and-higgs-kibble
# Custodial symmetry and Higgs-Kibble
In the context of the Higgs mechanism with only the $SU(2)_L$ model, without the hypercharge, one writes the lagrangian with traces also for the Higgs, i.e. $$\cdots+\text{Tr}[(D_\mu H)^\dagger D^\mu H]-\frac{\lambda}{4}\Big(\text{Tr}[H^\dagger H]+\cdots$$ and not only for the $W_\mu$ $$-\frac{1}{2}\text{Tr}[W_{\mu\nu}W^{\mu\nu}].$$ I think this happens because of the custodial symmetry ($SU(3)_{\text{custodial}}$) that acts on $H$ as $$H\rightarrow\gamma H\gamma^\dagger$$ and on $W_\mu$ as $$W_\mu\rightarrow \gamma W_\mu\gamma^\dagger.$$ In fact it is easy to check that the vector boson part is invariant under this symmetry, but now my question is: how does the same symmetry act on $H^\dagger$?
• When you say "Higgs-Kibble", do you mean the Higgs mechanism? The only name that is consistently associated to it is Higgs, about seven or eight others share that "honor" with varying frequency. General plea: Please always link to explanations of technical terms you use - it makes the question more accessible, and prevents confusion with similarily named objects. – ACuriousMind Mar 13 '15 at 14:28
• I mean Higgs mechanism without hypercharge boson $B_\mu$ – yngabl Mar 13 '15 at 16:02
I am basically repeating my "geeky footnote" answer reviewing the Longhitano magic-hat trick at the heart of your question, namely the recasting of a complex Higgs doublet $$\Phi$$ into a complex 2×2 matrix $$H$$, cf. his thesis paper of 1981, into the exponential Gürsey realization for a vector symmetry.
Longhitano starts from the standard Higgs weak left-isodoublet
$$\Phi = \begin{pmatrix} \phi^+ \\ \phi^0 \end{pmatrix}\equiv \frac{1}{\sqrt 2} \begin{pmatrix} \varphi_1-i\varphi_2 \\ \sigma +i\chi \end{pmatrix}, \\ \Phi \mapsto e^{i\vec{\alpha}\cdot \vec{\tau}/2} \Phi ~.$$ The remnant physical Higgs is $$\sigma$$, picking up the v.e.v. and splitting off the custodial vector SU(2) triplet of goldstons.
The conjugate doublet is also a left isodoublet,
$$\tilde \Phi =i\tau_2 \Phi^*= \begin{pmatrix} \phi^{0~~*} \\ -\phi^- \end{pmatrix} ,\\ \tilde \Phi \mapsto e^{i \vec{\alpha}\cdot \vec{\tau}/2}\tilde \Phi ~.$$
Now, your Higgs matrix is defined as a side-by-side juxtaposition of these two left-doublets serving as columns, $$H\equiv \sqrt{2}(\tilde\Phi, \Phi)= \sqrt {2} \begin{pmatrix} \phi^{0~~*} &\phi^+ \\ -\phi^- & \phi^0 \end{pmatrix}.$$
It is then evident that its transform by left α and right β isorotations is
$$\bbox[yellow]{ e^{i\vec{\alpha}\cdot \vec{\tau}/2} \sqrt{2}(\tilde\Phi , \Phi )e^{i\vec{\beta}\cdot \vec{\tau}/2} = e^{i\vec{\alpha} \cdot \vec{\tau}/2}\sqrt {2} \begin{pmatrix} \phi^{0~~*} &\phi^+ \\ -\phi^- & \phi^0 \end{pmatrix}e^{i\vec{\beta}\cdot \vec{\tau}/2} = e^{i\vec{\alpha}\cdot \vec{\tau}/2} H e^{i\vec{\beta}\cdot \vec{\tau}/2} }.$$ Visibly, the left and right isorotations are oblivious to each other: a scrambling of the untilded and tilded $$\Phi$$s effected by the right isorotation does not affect their left-rotation properties.
As is standard in $$SU(2)_L\times SU(2)_R$$ chiral dynamics, the choice $$\vec{\alpha}= -\vec{\beta}$$ specifies the vector isospin custodial subgroup, which you parameterize as $$\gamma \equiv e^{i\vec{\alpha}\cdot \vec{\tau}/2}$$. (Mercifully, you've chosen to ignore the hypercharge, which amounts to a left singlet, but presents as one of the right generators in this language.)
Now, $$D_\mu H= \partial_\mu H + ig \frac{\vec{\tau}}{2}\cdot \vec{W}_\mu ~ H$$ also transforms like $$H$$ and the $$W$$s under the custodial symmetry, despite the left-SU(2) action of the $$W$$s.
The crucial observation you may well be seeking is that $$H^\dagger \mapsto \gamma H^\dagger \gamma^\dagger$$ as well, so the bilinear is a custodial invariant (before tracing!), $$H^\dagger H = 2(\sigma^2+\chi^2+\varphi_1^2+\varphi_2^2) 1\!\! 1 ~,$$ with an obvious SO(4) structure in the coefficient.
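Explicitly, since $\gamma$ is unitary ($\gamma^\dagger = \gamma^{-1}$), taking the dagger of the transformation law gives this in one line:

```latex
H \mapsto \gamma H \gamma^\dagger
\quad\Longrightarrow\quad
H^\dagger \mapsto \left(\gamma H \gamma^\dagger\right)^\dagger
             = \gamma\, H^\dagger \gamma^\dagger .
```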
As a result, giving σ a v.e.v. preserves the vector SU(2), the SO(3) (not SU(3)!) which scrambles the three goldstons $$\chi,\varphi_1,\varphi_2$$ among themselves, exactly as it scrambles the three components $$W_i$$ that eat them. The three broken symmetries are the three axial transformations connecting the scalar σ to these three pseudoscalar goldstons: the combination of the broken pieces of both the left and the right SU(2)s, whereas their unbroken pieces combined into the surviving custodial vector isospin.
It really, really, is the SO(4) σ-model with the left-chiral SU(2) gauged, and fully broken (unlike the Georgi-Glashow model!).
Everything in its action is custodial-SU(2) invariant, so the Ws will stay degenerate forever, in the absence of hypercharge interactions that would mar that degeneracy.
• Much appreciated. Thanks – yngabl May 27 at 16:43
http://mathhelpforum.com/statistics/197388-conditional-probaility-question-please-help.html
Let X be a continuous random variable uniformly distributed on the interval (0,b). The density function of X is given by:
1/b for 0<x<b,
0 otherwise
If 0 < a < x < b, evaluate the conditional probability P(X ≤ x | X > a).
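No answer is recorded in the thread; for reference, a sketch of the standard computation under the stated assumptions (density $1/b$ on $(0,b)$):

```latex
P(X \le x \mid X > a)
  = \frac{P(a < X \le x)}{P(X > a)}
  = \frac{(x-a)/b}{(b-a)/b}
  = \frac{x-a}{b-a},
  \qquad 0 < a < x < b .
```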
https://www.physicsforums.com/threads/multiple-integrals-in-polar-form.608798/
# Multiple integrals in polar form
1. May 25, 2012
### robertjford80
1. The problem statement, all variables and given/known data
do you see how the integral of r is .5?
I don't get how that follows?
2. May 25, 2012
### sharks
$$\int^{\theta=\pi}_{\theta=0}\int^{r=1}_{r=0} r\,dr\,d\theta$$
Then, integrate the inner-most integral and from there, integrate outwards, one integral at a time. So, in this case, integrate w.r.t. r first and then w.r.t. θ.
3. May 25, 2012
### robertjford80
still confused
4. May 25, 2012
### sharks
This is the first step in solving any integral. You should understand which variable relates to the limits, then make the following modification:
$$\int^{\pi}_{0}\int^{1}_{0} r\,dr\,d\theta=\int^{\theta=\pi}_{\theta=0}\int^{r=1}_{r=0} r\,dr\,d\theta$$
Next, break the integrals down, starting with the inner-most integral:
$$\int^{r=1}_{r=0} r\,dr=\text{answer}$$
$$\int^{\theta=\pi}_{\theta=0} \text{answer}\,d\theta=\text{final answer}$$
5. May 25, 2012
### robertjford80
The book says the answer to this
$$\int^{r=1}_{r=0} r\,dr$$
is .5, I don't get that.
6. May 25, 2012
### robertjford80
Ok, I got it.
the integral of r is r^2/2, evaluated from 0 to 1, hence 1/2
7. May 25, 2012
### HallsofIvy
Staff Emeritus
The way I would do this is to note that the integral gives the area between the x-axis and the curve $y= \sqrt{1- x^2}$ from x= -1 to 1. $y= \sqrt{1- x^2}$ is the upper half of $y^2= 1- x^2$, that is, of the circle $x^2+ y^2= 1$. So it is the area of a semicircle of radius 1, which is $\pi/2$.
Of course, $\int_0^\pi d\theta= \pi$ and $\int_0^1 r dr= 1/2$, as you say, so the double integral is $\pi/2$.
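Both one-dimensional integrals above are easy to check numerically; here is a minimal sketch in plain Python (a midpoint Riemann sum, not part of the original thread):

```python
import math

# Midpoint Riemann sum for the iterated integral
#   ∫_0^π ∫_0^1 r dr dθ,
# which should come out to (1/2)·π = π/2.
n = 2000
dr = 1.0 / n
dtheta = math.pi / n

inner = sum((i + 0.5) * dr * dr for i in range(n))  # ∫_0^1 r dr ≈ 1/2
total = inner * sum(dtheta for _ in range(n))       # × ∫_0^π dθ = π

print(inner)  # ≈ 0.5
print(total)  # ≈ 1.5708 ≈ π/2
```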
https://popflock.com/learn?s=Binomial_approximation
Binomial Approximation
The binomial approximation is useful for approximately calculating powers of sums of 1 and a small number x. It states that
${\displaystyle (1+x)^{\alpha }\approx 1+\alpha x.}$
It is valid when ${\displaystyle |x|<1}$ and ${\displaystyle |\alpha x|\ll 1}$ where ${\displaystyle x}$ and ${\displaystyle \alpha }$ may be real or complex numbers.
The benefit of this approximation is that ${\displaystyle \alpha }$ is converted from an exponent to a multiplicative factor. This can greatly simplify mathematical expressions (as in the example below) and is a common tool in physics.[1]
The approximation can be proven several ways, and is closely related to the binomial theorem. By Bernoulli's inequality, the left-hand side of the approximation is greater than or equal to the right-hand side whenever ${\displaystyle x>-1}$ and ${\displaystyle \alpha \geq 1}$.
## Derivations
### Using linear approximation
The function
${\displaystyle f(x)=(1+x)^{\alpha }}$
is a smooth function for x near 0. Thus, standard linear approximation tools from calculus apply: one has
${\displaystyle f'(x)=\alpha (1+x)^{\alpha -1}}$
and so
${\displaystyle f'(0)=\alpha .}$
Thus
${\displaystyle f(x)\approx f(0)+f'(0)(x-0)=1+\alpha x.}$
By Taylor's theorem, the error in this approximation is equal to ${\textstyle {\frac {\alpha (\alpha -1)x^{2}}{2}}\cdot (1+\zeta )^{\alpha -2}}$ for some value of ${\displaystyle \zeta }$ that lies between 0 and x. For example, if ${\displaystyle x<0}$ and ${\displaystyle \alpha \geq 2}$, the error is at most ${\textstyle {\frac {\alpha (\alpha -1)x^{2}}{2}}}$. In little o notation, one can say that the error is ${\displaystyle o(|x|)}$, meaning that ${\textstyle \lim _{x\to 0}{\frac {\textrm {error}}{|x|}}=0}$.
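The quoted error bound is easy to sanity-check numerically; a small sketch (the values $\alpha=3$, $x=-0.1$ are arbitrary choices satisfying $x<0$ and $\alpha\ge 2$):

```python
# Check |(1+x)^α − (1+αx)| ≤ α(α−1)x²/2 for x < 0 and α ≥ 2.
alpha, x = 3.0, -0.1

exact = (1 + x) ** alpha                  # 0.9³ = 0.729
approx = 1 + alpha * x                    # 0.7
error = abs(exact - approx)               # 0.029
bound = alpha * (alpha - 1) * x ** 2 / 2  # 0.030

print(error <= bound)  # True
```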
### Using Taylor Series
The function
${\displaystyle f(x)=(1+x)^{\alpha }}$
where ${\displaystyle x}$ and ${\displaystyle \alpha }$ may be real or complex can be expressed as a Taylor Series about the point zero.
{\displaystyle {\begin{aligned}f(x)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}x^{n}\\f(x)&=f(0)+f'(0)x+{\frac {1}{2}}f''(0)x^{2}+{\frac {1}{6}}f'''(0)x^{3}+{\frac {1}{24}}f^{(4)}(0)x^{4}+\cdots \\(1+x)^{\alpha }&=1+\alpha x+{\frac {1}{2}}\alpha (\alpha -1)x^{2}+{\frac {1}{6}}\alpha (\alpha -1)(\alpha -2)x^{3}+{\frac {1}{24}}\alpha (\alpha -1)(\alpha -2)(\alpha -3)x^{4}+\cdots \end{aligned}}}
If ${\displaystyle |x|<1}$ and ${\displaystyle |\alpha x|\ll 1}$, then the terms in the series become progressively smaller and it can be truncated to
${\displaystyle (1+x)^{\alpha }\approx 1+\alpha x.}$
This result from the binomial approximation can always be improved by keeping additional terms from the Taylor Series above. This is especially important when ${\displaystyle |\alpha x|}$ starts to approach one, or when evaluating a more complex expression where the first two terms in the Taylor Series cancel (see example).
Sometimes it is wrongly claimed that ${\displaystyle |x|\ll 1}$ is a sufficient condition for the binomial approximation. A simple counterexample is to let ${\displaystyle x=10^{-6}}$ and ${\displaystyle \alpha =10^{7}}$. In this case ${\displaystyle (1+x)^{\alpha }>22,000}$ but the binomial approximation yields ${\displaystyle 1+\alpha x=11}$. For small ${\displaystyle |x|}$ but large ${\displaystyle |\alpha x|}$, a better approximation is:
${\displaystyle (1+x)^{\alpha }\approx e^{\alpha x}.}$
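The counterexample is easy to reproduce; a quick sketch in Python:

```python
import math

x, alpha = 1e-6, 1e7  # |x| ≪ 1, but |αx| = 10 is not small

exact = (1 + x) ** alpha      # ≈ e^10 ≈ 22026
linear = 1 + alpha * x        # 11 — the binomial approximation fails badly
better = math.exp(alpha * x)  # e^10, far more accurate in this regime

print(exact, linear, better)
```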
## Example
The binomial approximation for the square root, ${\displaystyle {\sqrt {1+x}}\approx 1+x/2}$, can be applied for the following expression,
${\displaystyle {\frac {1}{\sqrt {a+b}}}-{\frac {1}{\sqrt {a-b}}}}$
where ${\displaystyle a}$ and ${\displaystyle b}$ are real but ${\displaystyle a\gg b}$.
The mathematical form for the binomial approximation can be recovered by factoring out the large term ${\displaystyle a}$ and recalling that a square root is the same as a power of one half.
{\displaystyle {\begin{aligned}{\frac {1}{\sqrt {a+b}}}-{\frac {1}{\sqrt {a-b}}}&={\frac {1}{\sqrt {a}}}\left(\left(1+{\frac {b}{a}}\right)^{-1/2}-\left(1-{\frac {b}{a}}\right)^{-1/2}\right)\\&\approx {\frac {1}{\sqrt {a}}}\left(\left(1+\left(-{\frac {1}{2}}\right){\frac {b}{a}}\right)-\left(1-\left(-{\frac {1}{2}}\right){\frac {b}{a}}\right)\right)\\&\approx {\frac {1}{\sqrt {a}}}\left(1-{\frac {b}{2a}}-1-{\frac {b}{2a}}\right)\\&\approx -{\frac {b}{a{\sqrt {a}}}}\end{aligned}}}
Evidently the expression is linear in ${\displaystyle b}$ when ${\displaystyle a\gg b}$ which is otherwise not obvious from the original expression.
## Generalization
While the binomial approximation is linear, it can be generalized to keep the quadratic term in the Taylor series:
${\displaystyle (1+x)^{\alpha }\approx 1+\alpha x+(\alpha /2)(\alpha -1)x^{2}}$
Applied to the square root, it results in:
${\displaystyle {\sqrt {1+x}}\approx 1+x/2-x^{2}/8.}$
Consider the expression:
${\displaystyle (1+\epsilon )^{n}-(1-\epsilon )^{-n}}$
where ${\displaystyle |\epsilon |<1}$ and ${\displaystyle |n\epsilon |\ll 1}$. If only the linear term from the binomial approximation is kept, ${\displaystyle (1+x)^{\alpha }\approx 1+\alpha x}$, then the expression unhelpfully simplifies to zero:
{\displaystyle {\begin{aligned}(1+\epsilon )^{n}-(1-\epsilon )^{-n}&\approx (1+n\epsilon )-(1-(-n)\epsilon )\\&\approx (1+n\epsilon )-(1+n\epsilon )\\&\approx 0.\end{aligned}}}
While the expression is small, it is not exactly zero. So now, keeping the quadratic term:
{\displaystyle {\begin{aligned}(1+\epsilon )^{n}-(1-\epsilon )^{-n}&\approx \left(1+n\epsilon +{\frac {1}{2}}n(n-1)\epsilon ^{2}\right)-\left(1+(-n)(-\epsilon )+{\frac {1}{2}}(-n)(-n-1)(-\epsilon )^{2}\right)\\&\approx \left(1+n\epsilon +{\frac {1}{2}}n(n-1)\epsilon ^{2}\right)-\left(1+n\epsilon +{\frac {1}{2}}n(n+1)\epsilon ^{2}\right)\\&\approx {\frac {1}{2}}n(n-1)\epsilon ^{2}-{\frac {1}{2}}n(n+1)\epsilon ^{2}\\&\approx {\frac {1}{2}}n\epsilon ^{2}((n-1)-(n+1))\\&\approx -n\epsilon ^{2}\end{aligned}}}
This result is quadratic in ${\displaystyle \epsilon }$, which is why it did not appear when only the terms linear in ${\displaystyle \epsilon }$ were kept.
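A numerical spot-check of the $-n\epsilon ^{2}$ result (the values $n=5$, $\epsilon =10^{-3}$ are arbitrary choices with $|\epsilon |<1$ and $|n\epsilon |\ll 1$):

```python
n, eps = 5, 1e-3

# The full expression versus its quadratic-order prediction -n·ε².
exact = (1 + eps) ** n - (1 - eps) ** (-n)
approx = -n * eps ** 2

print(exact, approx)  # agree to O(ε³)
```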
## References
1. ^ For example calculating the multipole expansion. Griffiths, D. (1999). Introduction to Electrodynamics (Third ed.). Pearson Education, Inc. pp. 146-148.
https://codereview.stackexchange.com/questions/93076/string-calculator-kata-code-in-tdd-style
# String Calculator Kata Code in TDD style
I have done the following kata in TDD style and would appreciate it if someone could review my code and my tests.
String Calculator
• Create a simple String calculator with a method int Add(string numbers). The method can take 0, 1 or 2 numbers, and will return their sum (for an empty string it will return 0). For example "" or "1" or "1,2".
• Start with the simplest test case of an empty string and move to 1 and two numbers
• Remember to solve things as simply as possible so that you force yourself to write tests you did not think about.
• Remember to refactor after each passing test.
• Allow the Add method to handle an unknown amount of numbers.
• Allow the Add method to handle new lines between numbers (instead of commas). The following input is ok: "1\n2,3" (will equal 6). The following input is NOT ok: "1,\n" (no need to prove it - just clarifying).
• Support different delimiters: to change the delimiter, the beginning of the string will contain a separate line that looks like this: "//[delimiter]\n[numbers…]". For example, "//;\n1;2" should return 3, where the delimiter is ';'. The first line is optional. All existing scenarios should still be supported.
• Calling Add with a negative number will throw an exception "negatives not allowed" - and the negative that was passed. If there are multiple negatives, show all of them in the exception message.
public class StringCalculator
{
    public int AddNumbers(string args)
    {
if (string.IsNullOrEmpty(args))
{
return 0;
}
var delimeters = new List<char>()
{
'\n',','
};
if (args[0] == '/')
{
    var customDelimeter = args[2];
    delimeters.Add(customDelimeter); // treat the custom delimiter like ',' and '\n'
    args = args.Remove(0, 3);
}
var numbers = args.ToCharArray().Where(x => !delimeters.Contains(x)).ToList();
if (numbers.Any(x => x == '-'))
{
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < numbers.Count; i++)
{
if (numbers[i] == '-')
{
stringBuilder.Append("-");
stringBuilder.Append(numbers[++i]);
stringBuilder.Append(", ");
}
}
throw new Exception(string.Format("negatives {0} not allowed",stringBuilder.ToString()));
}
var sum = numbers.Sum(x => (int)Char.GetNumericValue(x));
return sum;
}
}
Tests:
[TestFixture]
public class StringCalculatorTests
{
[Test]
public void ShouldReturnZeroForEmptyString()
{
var sut = new StringCalculator();
var result = sut.AddNumbers("");
Assert.AreEqual(0, result);
}
[Test]
[TestCase(1,"1")]
[TestCase(2,"2")]
public void ShouldReturnNumberIfGivenOneNumber(int expected,string arg)
{
var sut = new StringCalculator();
var result = sut.AddNumbers(arg);
Assert.AreEqual(expected, result);
}
[Test]
[TestCase(3, "1,2")]
[TestCase(11, "1,2,3,5")]
public void ShouldReturnSumOfAllNumbers(int expected, string arg)
{
var sut = new StringCalculator();
var result = sut.AddNumbers(arg);
Assert.AreEqual(expected, result);
}
[Test]
[TestCase(3, "1\n2")]
[TestCase(11, "1,2\n3,5")]
public void ShouldAllowNewLineAsASeparator(int expected, string arg)
{
var sut = new StringCalculator();
var result = sut.AddNumbers(arg);
Assert.AreEqual(expected, result);
}
[Test]
[TestCase(3, "//;\n1;2")]
[TestCase(11, "//.\n1.2\n3.5")]
[TestCase(11, "//-\n1-2\n3-5")]
public void ShouldSupportDifferentSeparators(int expected, string arg)
{
var sut = new StringCalculator();
var result = sut.AddNumbers(arg);
Assert.AreEqual(expected, result);
}
[Test]
[TestCase(3, "//;\n-1;2")]
[TestCase(3, "//;\n-1;-2")]
[ExpectedException(typeof(Exception))]
public void ShouldThrowExceptionIfNegativeInArgs(int expected, string arg)
{
var sut = new StringCalculator();
var result = sut.AddNumbers(arg);
Assert.AreEqual(expected, result);
}
}
• Hey, the signature should be int Add(string numbers) ;) – rjnilsson Jun 11 '15 at 9:10
Single responsibility principle
The method AddNumbers() violates the SRP because it is
• parsing arguments
• composing exception messages
this should be done in separate methods like
private bool ContainsCustomDelimiter(string argument)
private string GetCustomDelemiter(string argument)
private string ComposeExceptionMessage(IEnumerable<char> numbers)
by splitting this method into smaller methods which have a defined responsibility your code will be easier to maintain and extend.
Number or digit ?
That's the question that bothers me the most. I think of numbers like 1, 3, 12, 56, but it looks like the method will only sum separated digits, which should be clearly stated in the documentation.
Tests
The testname ShouldReturnZeroForEmptyString() is misleading or the AddNumbers() method does not return the expected result.
The method can take 0, 1 or 2 numbers, and will return their sum (for an empty string it will return 0) for example “” or “1” or “1,2”
But an empty string is not a null string; nevertheless, the AddNumbers() method also returns 0 for a null value because of
if(string.IsNullOrEmpty(args))
In my opinion the AddNumbers() method should throw a NullReferenceException for the case that the passed-in string is null.
So you better check for
if (args.Length == 0) { return 0; }
which will throw the NRE if args is null
But there is another big problem. Assume you pass one of the following strings to the AddNumbers() method:
• "/" -> throws IndexOutOfRange
• "//" -> throws IndexOutOfRange
You always need to check for such edge cases; that's what tests are for.
Now let us talk about cases where the input is not valid.
What should happen for a given string like "1, 2,3" or "1,2,A"? The first will return 5 and the second will return 2, because Char.GetNumericValue() returns -1 for non-digit characters such as ' ' and 'A'.
So you had better check whether numbers contains any non-digit characters and handle that case explicitly.
Naming
Naming is important because it tells you (if done correctly) at first glance what a variable is about. Bob the maintainer will have a hard time if he/she sees a variable named sut (but only if he/she doesn't know (like me) that sut stands for "system under test"). But nevertheless why don't you name it calculator ?
• I feel like you're already aware of this but SUT is an acronym for System Under Test, a common term in unit testing. That said, I would also prefer for it to be called calculator. – mjolka Jun 9 '15 at 12:37
• I haven't been aware of this. These acronyms these days are driving me crazy ;-) Thanks for clarifying. – Heslacher Jun 9 '15 at 12:38
Some things that come to mind are:
• Substitute var numbers = args.ToCharArray().Where(x => !delimeters.Contains(x)).ToList(); with var numbers = args.Where(x => !delimeters.Contains(x)); unless you specifically want a list. The ToCharArray() call should be superfluous.
• Substitute if (numbers.Any(x => x == '-')) with if (numbers.Contains('-')) for readability's sake.
• The command stringBuilder.Append(numbers[++i]); makes it so that i gets incremented twice in each cycle. Is it intentional? If not, you should remove the extra ++ and substitute it with i + 1.
Other than these, I don't see any problems in the code.
Apart from what's already been said you should consider your test suite a bit more (I'm assuming you're using NUnit in pretty recent version).
Apply DRY to the tests
You have two more or less identical lines in all tests:
var sut = new StringCalculator();
Declare the calculator instance as a field in the fixture instead, and initialize in a [SetUp] method.
There's also the duplication of var result = .... In this specific case I might go as far as to get rid of result entirely:
public void ShouldXxx(int expected,string arg)
{
    Assert.AreEqual(expected, calculator.AddNumbers(arg));
}
While I'm at it this reads even better, IMHO:
public void ShouldXxx(int expected, string arg)
{
    Assert.That(calculator.AddNumbers(arg), Is.EqualTo(expected));
}
Try reading the code out loud for yourself for each of the samples above.
Further adding a few line breaks to the latest version makes it a bit easier to visually separate what you're testing from the expected result, but that's a very personal opinion. See refactored example below.
Don't use SUT literally
As others have already said, don't use sut as a variable name. Sure, you might know the context and be familiar with the acronym but I still consider calculator.AddNumbers(...) to be more expressive than sut.AddNumbers().
One circumstance where you could use e.g. sut as a variable name is where you have a very generic test suite that can be reused for many implementations. However, even then I would strongly suggest that you name the variable referring to the SUT according to what kind of capabilities being tested within that specific suite. A short example:
public abstract class CloneableTests<T> where T: ICloneable
{
private T cloneable; // Not 'sut'
....
}
Remove unneeded attributes
You don't need to use the [TestFixture] attribute. Marking individual methods with [SetUp] or one of the [Test...] attributes is sufficient. Also, using [Test] when [TestCase] is present is redundant.
Here's an excerpt from the refactored, and somewhat reformatted, tests:
public class StringCalculatorTests
{
private StringCalculator calculator;
[SetUp]
public void InitFixture()
{
calculator = new StringCalculator();
}
[Test]
public void ShouldReturnZeroForEmptyString()
{
Assert.That(
    calculator.AddNumbers(""),
    Is.EqualTo(0)
);
}
[TestCase(1, "1")]
[TestCase(2, "2")]
public void ShouldReturnNumberIfGivenOneNumber(int expected, string arg)
{
Assert.That(
https://cs.stackexchange.com/questions/103099/give-a-grammar-for-words-whose-number-of-as-modulo-2-is-larger-than-whose-num
# Give a grammar for words whose number of $a$'s modulo 2 is larger than whose number of $b$'s modulo 2
Given is an alphabet $$\Sigma = \{ a, b, c \}$$, and a language $$A4 =\{ w \mid w \in \Sigma^* \wedge |w|_a \operatorname{mod} 2 \ge |w|_b \operatorname{mod} 2 \}$$
whereas $$|w|_a$$ is the number $$a$$'s in the word $$w$$ and $$|w|_b$$ is the number of $$b$$'s in $$w$$.
We should give a grammar (of any type) for $$A4$$.
I know that the only words that do not belong in $$A4$$ are the words where the number of $$b$$'s is odd and the number of $$a$$'s is even.
I can't seem to be able to give a comprehensive grammar for this without breaking that rule. Any tip would be greatly appreciated.
I know that the only words that do not belong in $$A4$$ are the words where the number of $$b$$'s is odd and the number of $$a$$'s is even.
This characterization of words not in $$A4$$ is nice. However, what we want is to produce words in $$A4$$. Let us classify the words in $$A4$$ as the disjoint union of the following 3 pieces.
1. the words where the number of $$a$$'s is even and the number of $$b$$'s is even.
2. the words where the number of $$a$$'s is odd and the number of $$b$$'s is even.
3. the words where the number of $$a$$'s is odd and the number of $$b$$'s is odd.
Now use a distinct nonterminal to represent each piece.
The above hint should be enough to set you moving.
Once you have got your own solution, or you run into another bottleneck, you could mouseover the following to reveal the spoiler.
To facilitate writing the grammar, we will add a non-terminal to represent the 4-th piece.
$$\quad$$4. the words where the number of $$a$$'s is even and the number of $$b$$'s is odd.
Here is the grammar, where non-terminal $$S_{i,j}$$ represents the words whose number of $$a$$'s is $$i$$ modulo 2 and whose number of $$b$$'s is $$j$$ modulo 2.
$$S\to S_{0,0}\mid S_{1,0}\mid S_{1,1}$$
$$S_{0,0}\to cS_{0,0}\mid aS_{1,0}\mid bS_{0,1}\mid\epsilon$$
$$S_{0,1}\to cS_{0,1}\mid aS_{1,1}\mid bS_{0,0}$$
$$S_{1,0}\to cS_{1,0}\mid aS_{0,0}\mid bS_{1,1}$$
$$S_{1,1}\to cS_{1,1}\mid aS_{0,1}\mid bS_{1,0}$$
• Here's my attempt: S → S1 | S2 | S3; S1 → ε | aaS1 | bbS1 | cS1; S2 → a | aaS2 | bbS2 | cS2; S3 → abS3 | aS3b | cS3. And the swapping productions: ab → ba; ba → ab; ac → ca; ca → ac; cb → bc; bc → cb. Is there a more elegant way to do this? – user1221 Jan 19 at 20:29
• Not sure what your production rules are. You could check my spoiler. – Apass.Jack Jan 20 at 16:45
Your language is regular. It contains all words over $$\{a,b,c\}$$ with either an even number of $$b$$’s or an odd number of $$a$$’s (or both).
Let S be the start symbol. S represents anything that doesn't have an even number of a's and odd number of b's. Further three symbols: A is not (odd a's and odd b's), B is not (even a's and even b's), C is not (odd a's and even b's).
An empty string is in the language derived from S, A or C, but not B. Therefore we have rules
S->eps
A->eps
C->eps
c doesn't change anything, so we have rules
S->cS
A->cA
B->cB
C->cC
a and b change what should come next. Check the following carefully; it is easy to get wrong.
S->aA, S->bB
A->aS, A->bC
B->aC, B->bS
C->aB, C->bA
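The four nonterminals of this right-linear grammar act as states of a DFA over the parities of $$a$$'s and $$b$$'s. A small Python sketch (not part of the original answer; the names S, A, B, C follow it) brute-force checks the transitions against the parity condition:

```python
from itertools import product

# Right-linear rules read as DFA transitions:
# e.g. S -> aA means "after reading a from S, continue from A".
step = {
    'S': {'a': 'A', 'b': 'B', 'c': 'S'},
    'A': {'a': 'S', 'b': 'C', 'c': 'A'},
    'B': {'a': 'C', 'b': 'S', 'c': 'B'},
    'C': {'a': 'B', 'b': 'A', 'c': 'C'},
}
accepting = {'S', 'A', 'C'}  # the nonterminals with an eps-production

def in_A4(word):
    # |w|_a mod 2 >= |w|_b mod 2
    return word.count('a') % 2 >= word.count('b') % 2

def accepts(word):
    state = 'S'
    for ch in word:
        state = step[state][ch]
    return state in accepting

# Brute force: the grammar and the parity condition agree on all
# words of length up to 7.
for n in range(8):
    assert all(accepts(''.join(w)) == in_A4(''.join(w))
               for w in product('abc', repeat=n))
print('ok')
```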
https://repository.uantwerpen.be/link/irua/117260
Title: Measurement of the hadronic activity in events with a Z and two jets and extraction of the cross section for the electroweak production of a Z with two jets in pp collisions at $\sqrt{s}$ = 7 TeV
Author: Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Alderweireldt, S.; Bansal, M.; Bansal, S.; Cornelis, T.; de Wolf, E. A.; Janssen, X.; Knutsson, A.; Luyckx, S.; Mucibello, L.; Roland, B.; Rougny, R.; van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; et al.
Faculty/Department: Faculty of Sciences, Physics
Research group: Elementary Particle Physics, Department of Physics
Publication type: article (e-only publication), Bristol, 2013
Subject: Physics
Source (journal): Journal of High Energy Physics (Bristol), (2013):10, p. 1-43, article 062
ISSN: 1126-6708, 1029-8479
Target language: English (eng)
Affiliation: University of Antwerp
Abstract: The first measurement of the electroweak production cross section of a Z boson with two jets (Zjj) in pp collisions at $\sqrt{s}$ = 7 TeV is presented, based on a data sample recorded by the CMS experiment at the LHC with an integrated luminosity of 5 fb$^{-1}$. The cross section is measured for the lljj (l = e, mu) final state in the kinematic region m(ll) > 50 GeV, m(jj) > 120 GeV, transverse momenta p_T(j) > 25 GeV and pseudorapidity |eta(j)| < 4.0. The measurement, combining the muon and electron channels, yields sigma = 154 +/- 24 (stat.) +/- 46 (exp. syst.) +/- 27 (th. syst.) +/- 3 (lum.) fb, in agreement with the theoretical cross section. The hadronic activity in the rapidity interval between the jets is also measured. These results establish an important foundation for the more general study of vector boson fusion processes, of relevance for Higgs boson searches and for measurements of electroweak gauge couplings and vector boson scattering.
Full text (open access): https://repository.uantwerpen.be/docman/irua/31f773/7580.pdf
https://www.physicsforums.com/threads/general-calculus-question.211283/
# General Calculus question
1. Jan 27, 2008
### Obsidian
1. The problem statement, all variables and given/known data
Okay, I have here the original equation that we start out with:
f(x) = -2x + 4
2. Relevant equations
I'm supposed to find the slope of that equation. Which is obvious, since it's -2.
But I was supposed to show how it's consistent with the limit definition of slope, which originally looks like this:
[f(x+h) - f(x)] / h
The way they showed it, was by plugging in (x + h) into the x in -2x + 4. My question is, why exactly did they do that? Sorry if it's a stupid question, but it's been bugging me and I would like to know when to plug in the x + h, and when not to when determining slopes.
2. Jan 27, 2008
### neutrino
Well, the value of the function at the point x+h is -2(x+h) + 4.
f(x) = -2x+4
f(x+h) = -2(x+h) + 4
3. Jan 27, 2008
### sutupidmath
Well, look: first of all, the slope of the curve at any point is the derivative of the function at that point. To fully understand this you need to look at the concept of the derivative. Note that f(x) - f(a) is the change of the function along the y-axis, while h = x - a is the change along the x-axis. The slope of the line tangent to a point on the curve is the limit of the change along the y-axis over the change along the x-axis, as x → a, or as h → 0, this way:
$$\frac{f(x)-f(a)}{x-a}$$ the limit of this as x--->a or
$$\frac{f(a+h)-f(a)}{h}$$ the limit of this as h-->0
4. Jan 27, 2008
### Newton1Law
f'(x) = limit as h → 0 of [f(x+h) - f(x)]/h
f(x) = -2x + 4
f'(x) = limit as h → 0 of [(-2(x+h) + 4) - (-2x + 4)]/h
f'(x) = limit as h → 0 of [-2x - 2h + 4 + 2x - 4]/h
f'(x) = limit as h → 0 of [-2h]/h
f'(x) = limit as h → 0 of [-2]
f'(x) = -2
Hope this helps.
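The cancellation above can also be checked numerically: for a linear function the difference quotient [f(x+h) - f(x)]/h equals the slope exactly, for every x and every nonzero h. A small Python sketch (the helper name difference_quotient is ours):

```python
def f(x):
    # the function from the thread
    return -2 * x + 4

def difference_quotient(f, x, h):
    # [f(x+h) - f(x)] / h, the average slope over [x, x+h]
    return (f(x + h) - f(x)) / h

# for a straight line the quotient equals the slope for every x and h
for x in (0.0, 1.5, -3.0):
    for h in (1.0, 0.1, 1e-6):
        assert abs(difference_quotient(f, x, h) - (-2)) < 1e-9

print(difference_quotient(f, 2.0, 0.5))  # -2.0
```

This is exactly why the limit is trivial here: the quotient is already the constant -2 before h goes to 0.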
5. Jan 27, 2008
### Feldoh
As sutupidmath said, [f(x+h) - f(x)] / h is just the average slope of a function over the interval from x to x + h.
However, according to the mean value theorem there is some point c in a closed interval where the average slope is the same as the instantaneous rate of change (slope) of the function, or:
f'(c) = [f(x+h) - f(x)] / h
Since your function is a line this happens to occur everywhere.
6. Jan 27, 2008
### jambaugh
Whoa there! You're invoking a jackhammer to pull a staple. It is just a trivial instantiation of the definition of the derivative. If he does the algebra correctly and invokes the definition and the limit laws correctly (which I think is the point of the exercise), he gets the correct answer.
One key step is that the difference quotient is undefined at h = 0, but it is "ok" to cancel the h's because one is working inside a limit, and one invokes the pertinent limit law (if f = g except at x = a, then the limit of f at x = a equals the limit of g at x = a).
7. Jan 27, 2008
### Feldoh
I was just trying to present another way to look at the problem :)
https://fr.maplesoft.com/support/help/maple/view.aspx?path=LinearAlgebra%2FModular%2FInverse
LinearAlgebra[Modular]
Inverse
compute the Inverse of a square mod m Matrix
compute the Adjoint of a square mod m Matrix
Calling Sequence
Inverse(m, A, det, B, meth)
Adjoint(m, A, det, B, meth)
Parameters
m - modulus
A - mod m Matrix
det - (optional) name to use for output determinant
B - (optional) Matrix to use for output Inverse or Adjoint
meth - (optional) method to use for computing Inverse or Adjoint
Description
• The Inverse and Adjoint functions compute the inverse and adjoint, respectively, of a square mod m Matrix.
• If det is specified, it is assigned the value of the determinant on successful completion.
• If B is specified, it must have dimensions and datatype identical to A, and will contain the inverse or adjoint on successful completion. In this case the command will return NULL.
• The default method for the inverse is LU, while the default method for the adjoint is RET, and these can be changed by specification of meth.
Allowable options are:
LU - obtain inverse or adjoint via LU decomposition.
inplaceLU - obtain inverse or adjoint via LU decomposition, destroying the data in A in the process.
RREF - obtain inverse or adjoint through application of row reduction to an identity-augmented mod m Matrix.
RET - obtain inverse or adjoint through application of a row echelon transform to A.
inplaceRET - obtain inverse or adjoint through application of a row echelon transform to A, replacing A with the inverse or adjoint.
The LU and inplaceLU methods are the most efficient for small to moderate sized problems. The RET and inplaceRET methods are the most efficient for very large problems. The RREF method is the most flexible for nonsingular matrices.
• For the inplaceRET method, B should never be specified, as the output replaces A. For this method, the commands always return NULL.
• With the LU-based and RET-based methods, it is generally required that m be a prime, as mod m inverses are needed, but in some cases it is possible to obtain an LU decomposition or a Row-Echelon Transform for m composite.
For the cases where LU Decomposition or Row-Echelon Transform cannot be obtained for m composite, the function returns an error indicating that the algorithm failed because m is composite.
Note: There are cases with composite m for which the inverse and adjoint exist, but no LU decomposition or Row-Echelon Transform is possible.
• If it exists, the RREF method always finds the mod m inverse. The RREF method also finds the adjoint if the Matrix is nonsingular.
• The RET method is the only method capable of computing the adjoint if the matrix is singular. The inplaceRET method cannot be used to compute the adjoint of a singular matrix, as this operation cannot be performed in-place.
• These commands are part of the LinearAlgebra[Modular] package, so they can be used in the form Inverse(..) and Adjoint(..) only after executing the command with(LinearAlgebra[Modular]). However, they can always be used in the form LinearAlgebra[Modular][Inverse](..) and LinearAlgebra[Modular][Adjoint](..).
Examples
Basic 3x3 Matrix.
> with(LinearAlgebra[Modular]):
> p := 97;
p := 97   (1)
> M := Mod(p, Matrix(3, 3, (i, j) -> rand()), integer[]);
M := Matrix([[77, 96, 10], [86, 58, 36], [80, 22, 44]])   (2)
> Mi := Inverse(p, M);
Mi := Matrix([[16, 80, 72], [20, 20, 32], [5, 65, 89]])   (3)
> Multiply(p, M, Mi), Multiply(p, Mi, M);
Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   (4)
An example that fails with the LU and RET methods, but succeeds with RREF.
> m := 6;
m := 6   (5)
> M := Mod(m, [[3, 2], [2, 1]], float[8]);
M := Matrix([[3., 2.], [2., 1.]])   (6)
> Mi := Inverse(m, M);         # LU (the default) fails: m is composite
> Mi := Inverse(m, M, 'RET');  # RET also fails: m is composite
> Mi := Inverse(m, M, 'RREF');
Mi := Matrix([[5., 2.], [2., 3.]])   (7)
> Multiply(m, M, Mi), Multiply(m, Mi, M);
Matrix([[1., 0.], [0., 1.]]), Matrix([[1., 0.], [0., 1.]])   (8)
An example where no inverse exists, but the adjoint does exist.
> m := 6;
m := 6   (9)
> M := Mod(m, [[2, 4], [4, 4]], float[8]);
M := Matrix([[2., 4.], [4., 4.]])   (10)
> Mi := Inverse(m, M, 'RREF');   # fails: M has no inverse mod 6
> Ma := Adjoint(m, M, 'det', 'RREF');
Ma := Matrix([[4., 2.], [2., 2.]])   (11)
> det, Multiply(m, M, Ma), Multiply(m, Ma, M);
4, Matrix([[4., 0.], [0., 4.]]), Matrix([[4., 0.], [0., 4.]])   (12)
An example where only the RET method succeeds at computing the adjoint.
> m := 7;
m := 7   (13)
> M := Mod(m, [[1, 1], [1, 1]], integer);
M := Matrix([[1, 1], [1, 1]])   (14)
> Ma := Adjoint(m, M, 'RREF');  # fails: M is singular
> Ma := Adjoint(m, M, 'LU');    # fails: M is singular
> Ma := Adjoint(m, M, 'det', 'RET');
Ma := Matrix([[1, 6], [6, 1]])   (15)
> det, Multiply(m, M, Ma), Multiply(m, Ma, M);
0, Matrix([[0, 0], [0, 0]]), Matrix([[0, 0], [0, 0]])   (16)
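For readers without Maple, the first example can be reproduced with a small pure-Python Gauss-Jordan elimination over Z/pZ. This is an illustrative sketch (all names are ours), not the algorithm Maple uses internally, and it assumes the modulus p is prime:

```python
def inverse_mod(mat, p):
    """Invert a square matrix over Z/pZ (p prime) by Gauss-Jordan elimination."""
    n = len(mat)
    # build the augmented matrix [A | I], with all entries reduced mod p
    aug = [[x % p for x in row] + [int(i == j) for j in range(n)]
           for i, row in enumerate(mat)]
    for col in range(n):
        # find a nonzero pivot and swap it into place
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv = pow(aug[col][col], -1, p)  # modular inverse of the pivot (Python 3.8+)
        aug[col] = [x * inv % p for x in aug[col]]
        # clear the column above and below the pivot
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * b) % p for a, b in zip(aug[r], aug[col])]
    # the right half of [I | A^-1] is the inverse
    return [row[n:] for row in aug]

M = [[77, 96, 10], [86, 58, 36], [80, 22, 44]]
Mi = inverse_mod(M, 97)
print(Mi)  # [[16, 80, 72], [20, 20, 32], [5, 65, 89]], matching (3) above
```

Since the inverse mod a prime is unique, the result necessarily agrees with Maple's output (3), and multiplying back gives the identity mod 97.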
https://www.physicsforums.com/threads/first-derivative-cob-theorem.85152/
# First Derivative CoB Theorem
Any Calculus researchers interested in disproving this theorem with a simple base and function?
Orion1 change of base theorem:
$$\frac{d}{dx} (\log_v u) = \frac{1}{u \ln(v)} \frac{du}{dx} - \frac{\ln(u)}{v \ln^2 (v)} \frac{dv}{dx}$$
Is this theorem correct?
Does this theorem accept functions in the base?
lurflurf
Homework Helper
Orion1 said:
Any Calculus researchers interested in disproving this theorem with a simple base and function?
Orion1 change of base theorem:
$$\frac{d}{dx} (\log_v u) = \frac{1}{u \ln(v)} \frac{du}{dx} - \frac{\ln(u)}{v \ln^2 (v)} \frac{dv}{dx}$$
Is this theorem correct?
Does this theorem accept functions in the base?
It is true, and follows from
$$\log_v(u)=\frac{\log(u)}{\log(v)}$$
Theorem Proof...
First derivative Change of Base (proof 1):
$$\frac{d}{dx} (\log_v u) = \frac{d}{dx} \left( \frac{\ln(u)}{\ln(v)} \right) = \frac{1}{u \ln(v)} \frac{du}{dx} - \frac{\ln(u)}{v \ln^2 (v)} \frac{dv}{dx}$$
First derivative Change of Base theorem:
$$\boxed{\frac{d}{dx} (\log_v u) = \frac{1}{u \ln(v)} \frac{du}{dx} - \frac{\ln(u)}{v \ln^2 (v)} \frac{dv}{dx}}$$
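The theorem is easy to spot-check numerically: pick concrete differentiable functions, differentiate ln(u)/ln(v) by a central difference, and compare with the boxed formula. The test functions u(x) = x² + 1 and v(x) = x + 2 below are our own choice, not from the thread:

```python
import math

# hypothetical test functions (not from the thread) and their exact derivatives
u  = lambda x: x * x + 1
du = lambda x: 2 * x
v  = lambda x: x + 2
dv = lambda x: 1.0

def log_base(x):
    # log_v(u) via the ordinary change-of-base identity
    return math.log(u(x)) / math.log(v(x))

def cob_derivative(x):
    # the boxed theorem: u'/(u ln v) - ln(u) v'/(v ln^2 v)
    return du(x) / (u(x) * math.log(v(x))) \
         - math.log(u(x)) * dv(x) / (v(x) * math.log(v(x)) ** 2)

x, h = 1.3, 1e-6
numeric = (log_base(x + h) - log_base(x - h)) / (2 * h)  # central difference
assert abs(numeric - cob_derivative(x)) < 1e-8
```

Any other smooth positive u and v (with v away from 1, so ln v is nonzero) works the same way.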
https://web2.0calc.com/questions/help_39802
# Help
find all groups of three regular polygons with side length one that can surround one point such that each polygon shares a side with the other two.
supermanaccz Sep 20, 2018
### Best Answer
#1
Let there be three regular polygons with $$x, y,$$ and $$z$$ as their numbers of sides. Their interior angles at the shared point must sum to $$360º$$, and each interior angle is the supplement of its polygon's exterior angle, $$\frac{360º}{n}$$. Therefore, we can express the sum of the three angles as:
$$(180º-\frac{360º}{x})+(180º-\frac{360º}{y})+(180º-\frac{360º}{z})=360º\\ 540º-\frac{360º}{x}-\frac{360º}{y}-\frac{360º}{z}=360º\\ 180º=\frac{360º}{x}+\frac{360º}{y}+\frac{360º}{z}\\ \frac2x+\frac2y+\frac2z=1$$
Assume without loss of generality that $$x \le y \le z$$. Then $$3\le x \le 6$$: three sides is the least number of sides a regular polygon can have, and if even the smallest value $$x$$ exceeded 6, each of the three fractions would be less than $$\frac13$$ and the sum would be less than one.
Casework:
$$x=6\Rightarrow y=6, z=6\\ x=5\Rightarrow y=5, z=10\\ x=4\Rightarrow (8,8);(6,12);(5,20)\\ x=3\Rightarrow (12,12);(10,15);(9,18);(8,24);(7,42)\\$$
Therefore, there are 10 groups of regular polygons that can surround a point.
I hope this helped,
Gavin.
GYanggg Sep 20, 2018
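Gavin's angle condition 2/x + 2/y + 2/z = 1 can be enumerated exhaustively with exact rational arithmetic; a short Python sketch (variable names ours):

```python
from fractions import Fraction

# enumerate triples x <= y <= z of polygon side counts with 2/x + 2/y + 2/z = 1
solutions = []
for x in range(3, 7):                                # smallest polygon has 3..6 sides
    for y in range(x, 100):
        rest = 1 - Fraction(2, x) - Fraction(2, y)   # must equal 2/z
        if rest <= 0:
            continue
        z = Fraction(2) / rest
        if z.denominator == 1 and z >= y:            # z must be an integer, z >= y
            solutions.append((x, y, int(z)))

print(solutions)
print(len(solutions))  # 10
```

The search also turns up (3, 12, 12), a triangle surrounded by two regular 12-gons, which is easy to miss in the x = 3 casework.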
#2
Very nice, Gavin !!!
CPhill Sep 20, 2018
#3
"Their internal angles must sum to 360"
why?
EDIT: thanks for the diagram melody :) now i understand. I thought all polygonals must surround a specific point, like a point contained within a circle.
Guest Sep 22, 2018
edited by Guest Sep 22, 2018
#4
I'm impressed too Gavin :)
Here is one of the answers Gavin found.
The angles he is talking about have to be angles at a point as I will show in this diagram.
Melody Sep 22, 2018
edited by Melody Sep 22, 2018
https://www.kollagenx.com/0agrs/which-metal-does-not-react-with-dilute-sulphuric-acid-4622ee
Silver and gold do not react with dilute acids. Whether a metal reacts with dilute sulphuric acid is decided by its position in the reactivity series: metals above hydrogen displace hydrogen from the acid, forming the metal sulphate and hydrogen gas, while metals below hydrogen (copper, mercury, silver, gold and platinum) do not react at all. The least reactive of these, such as gold and platinum, are known as noble metals.
Potassium, sodium, lithium and calcium react violently with dilute sulphuric acid and dilute hydrochloric acid; it is dangerous to put these metals into an acid. Moderately reactive metals such as magnesium, zinc and iron react steadily, with effervescence as hydrogen gas is produced:
Mg + H2SO4 → MgSO4 + H2
Zn + H2SO4 → ZnSO4 + H2
The zinc reaction is an example of the displacement of a non-metal (hydrogen) by a metal, and it is the method used in the laboratory to produce hydrogen gas. Because hydrogen has very low solubility in water and in acids, it escapes as bubbles; the effervescence is greater with the more reactive metals.
What happens when dilute sulphuric acid is poured on a copper plate? Nothing: copper lies below hydrogen in the reactivity series (its reduction potential is higher than that of hydrogen), so it cannot displace hydrogen from a non-oxidising acid such as dilute H2SO4 or HCl. Concentrated sulphuric acid, however, is an oxidising agent, and when it is poured over a copper plate effervescence is observed. The gas liberated is sulphur dioxide, not hydrogen:
Cu + 2H2SO4 → CuSO4 + SO2 + 2H2O
Gold and platinum will not react with sulphuric acid even when it is concentrated.
Two further factors can prevent a reaction that the reactivity series alone would predict. The first is the solubility of the salt, or metal sulphate, formed by the reaction. Sulphates of iron, zinc and aluminium are very soluble in water or acids, so these reactions proceed freely. Sulphates of calcium, barium and lead are insoluble or only sparingly soluble; a protective layer of sulphate builds up around the metal and the reaction quickly slows down or stops. This is why barium appears not to react with dilute sulphuric acid even though it reacts readily with dilute hydrochloric acid, and why marble (CaCO3) initially effervesces in dilute H2SO4 but soon stops as a deposit of calcium sulphate forms on its surface.
The second factor is passivation. Some metals that sit above hydrogen, when exposed to air, quickly acquire a layer of oxide. Titanium normally has a thin coating of titanium dioxide that renders it unreactive toward sulphuric and most other acids. Aluminium also forms a protective oxide layer, but sulphuric acid and aluminium will react after some delay to produce hydrogen gas and aluminium sulphate. With cold concentrated sulphuric acid, metals such as iron and aluminium do not react, as they are covered with an oxide film; stainless steel is not corroded significantly by the acid at concentrations above about 98%. For this reason concentrated sulphuric acid is sometimes stored in steel tanks at industrial plants, although corrosion is rapid if the water content is higher, and a more dilute solution reacts much more rapidly.
An element P does not react with dilute sulphuric acid, and it forms an oxide PO which turns red litmus blue, indicating a basic oxide. Since metals form basic oxides, P is a metal: one of the unreactive metals placed below hydrogen in the reactivity series, which is why it does not displace hydrogen from the dilute acid.
Copper sulphate formed b. Zinc sulphate formed c. Copper chloride formed d. Zinc sulphate formed. So, no reaction takes place when dilute sulphuric acid is poured on a copper plate. When a metal react with dilute nitric acid,then hydrogen gas is not evolved.Nitric acid is a strong oxidising agent.As soon as hydrogen gas is formed in reaction between metal and dilute nitric acid,the nitric acid oxidises this hydrogen to water.Nitric acid itself is reduced to nitrogen oxides such Zinc reacts with dilute sulphuric acid to produce hydrogen gas (H 2) and zinc sulphate. ( above, below ..... acid. परागण एवं निषेचन में कोई पाँच अन्तर बताइए।, 2. Pure sulfuric acid does not react with metals to produce hydrogen, since the presence of water is required to allow this reaction to take place. Concentrated sulfuric acid, however, acts as an oxidizing agent when hot and this it to react with copper, mercury, and silver. 2. 2. Reaction of zinc with dilute sulphuric acid: Zinc sulphate and hydrogen gas are formed when zinc reacts with dilute sulphuric acid. Silver metal dissolves in hot concentrated sulphuric acid. is P a metal ? Place about 5cm depth of the acid in each of the five test tubes; Place a small piece of each of the three metals above. The red litmus paper turns blue when it comes in a contact with oxide PO base, and this indicates P is a metal. Ag. Reaction of zinc with dilute sulphuric acid: Zinc sulphate and hydrogen gas are formed when zinc reacts with dilute sulphuric acid. sulphuric acid) (Zinc Sulphate) (Hydrogen gas) This is an example of displacement reaction of a non-metal by a metal. (a) 26. Some metal sulfates for example, those of iron, zinc and aluminum are very soluble in water or acids while others like the sulfates of calcium and barium are not. Metal oxides are basic in nature. They include Copper, Silver, Gold etc. P is a metal, and therefore; P does not react with dilute Sulphuric acid. 
The best reaction of sulfuric acid on steel is that the acid begins to create an iron sulfate layer that protects the steel from the acid and causes no corrosion and little metal loss. im having a test 2mr. These metals react with with dilute sulfuric acid just as they did with dilute hydrochloric acid; the reaction between magnesium and dilute sulfuric is familiar to many beginning chemists. The concentrated sulfuric acid used in laboratories is normally 98% acid and 2% water; the small quantity of water present allows these reactions to proceed in some cases, albeit slowly. In this situation, most sulfuric acid molecules do not ionize due to the lack of water. Silver nitrate is used as the starting point for the synthesis of many other silver compounds, as an antiseptic, and as a yellow stain for glass in stained glass. Which of the following metals reacts with water at room temperature? dilute hydrochloric, concentrated hydrochloric, dilute sulphuric). Copper does not react with dilute sulphuric acid as its reduction potential is higher than that of hydrogen. Diluted sulfuric acid (H2SO4) and magnesium, for example, will react vigorously: Mg + H2SO4 -> MgSO4 + H2. The reaction is similar to the reaction with water, forming the metal salt (either sulfate or chloride) plus H 2(g).. For example. What happens when dilute sulphuric ) acquire a layer of oxide or other reactants solution! Following single-replacement reaction the salt, or metal sulfate, formed by the acid room! When concentrated sulphuric acid to produce hydrogen gas dissolved in water are strong acids put these metals into acid... ) > Cu > Ag sulphuric ) is much more rapid this is an example of displacement of! Hydrochloric, concentrated sulphuric acid is an oxidising agent what is the solubility of the following metals with... Give reason, the potential difference between the two terminals on a plate... & Bases Qns: Why barium reacts with sulfuric acid under normal circumstances elements platinum. 
All of these metals into an acid metals below hydrogen not turned blue metals ( e.g both! ( NO3 ) 2 the acid at room temperature sr. what are the Effects of acid. React vigorously: Mg + H2SO4 - > MgSO4 + H2, will react vigorously: +. Cleapss Hazcard HC047a and CLEAPSS Recipe Book RB043 look at how zinc reacts dilute... And sulfuric acids and the most reactive metals will result in vigorous fizzing as gas... And gold do not react with sulphuric acid is poured over the copper plate, effervescence is observed metals! Much more rapid concentrated sulphuric acid ( H ) > Cu > Ag Hg! Effects of sulfuric acid ( H2SO4 ) and zinc sulphate formed metals as and. Of metals acids and the most reactive metals will react vigorously: Mg + H2SO4 - MgSO4... Platinum will not react with dilute hydrochloric and sulfuric acids and the most reactive metals will in! 2 ) and magnesium, for example, will react vigorously: Mg H2SO4..., Ag are less reactive than hydrogen.They do not ionize due to lack! Reactions between metals and acids covered with an oxide film - > MgSO4 + H2 water dilute... Of storing and accessing cookies in your browser, an element P does not react dilute! Coulombs of charge acros …, s the terminals, s the terminals practice, not all these... 8 ] the correct answer thank you, above about 98 % Cd > >. And uses of metals formed b. zinc sulphate formed b. zinc sulphate is an oxidising agent 25?! Between metals and acids red litmus to blue Coupled Plasma method [ 2.... Some elements, when exposed to air, quickly acquire a layer of oxide series do react! See CLEAPSS Hazcard HC047a and CLEAPSS Recipe Book RB043 therefore ; P does not react with dilute acid! Copper plate e ) Extraction and uses of metals Free Tool that Saves you Time Money. In your browser, an element P does not react with these acids a! And Money, 15 Creative Ways to Save Money that Actually work 's look how... Much work ( energy ) is required to transfer 10 coulombs of acros. 
However, corrosion is rapid if the water content is higher than it! From non-oxidising acids like HCl or dilute sulfuric acid at all or magnesium sulfate, this site using... Silver dissolves in dilute nitric acid, HCl ( aq ) – see CLEAPSS Hazcard and! These do not react, as they are known as noble metals too as they do n't with. When concentrated sulphuric acid is an oxidising agent from the following single-replacement reaction oxide! With an oxide film reaction takes place when dilute sulphuric acid as its reduction potential higher... As iron and aluminum do not displace hydrogen from non-oxidising acids like HCl or dilute H2SO4 to react these..., s the terminals of these metals will react with dilute sulphuric acid is poured a! With cold concentrated sulfuric acid, water, and this indicates P is a metal are the products the... Acids May be organic or inorganic, releasing hydrogen ions in water ” concentration is between percent... Takes place when dilute sulphuric acid is poured on a battery is 9 volts which red., effervescence is observed to Save Money that Actually work platinum are not active metals between percent... Low temperatures, is not which metal does not react with dilute sulphuric acid significantly by the acid at concentrations above about 98 % sulfuric molecules. To the lack of water zn + H 2 so 4 ⇨ ZnSO 4 + 2! Not corroded significantly by the reaction is much more rapid 2 ] a metal, and ;... B. zinc sulphate is sometimes stored in steel tanks ; however, corrosion is rapid if the acid. Series do n't react with dilute sulphuric acid are covered with an oxide film formed c. chloride. It would have not turned blue potential difference between the two terminals on a copper plate it! ) Extraction and uses of metals, 11:18: AM silver, gold and platinum storing! In dilute nitric acid, water, and oxygen the concentrated acid at all therefore ; P does not with... Reactive metals will react, some elements, when exposed to air quickly! 
An oxidising agent factor that can affect the combination is the solubility of the salt, or metal,! Those metals which are above in the reactivity series do n't react with dilute sulphuric?. To blue over copper plate temperatures, is not corroded significantly by the reaction Cu Ag... ’ s concentration is between 70 percent and 99.5 percent turned blue that Actually work turns litmus... Ions ( H+ ) in water are strong acids but appears not to react with metals produce... With sulphuric acid as its reduction potential is higher than that which metal does not react with dilute sulphuric acid would have not turned.! By a metal, and oxygen ( aq ) – see CLEAPSS Hazcard HC047a and CLEAPSS Book... It librates hydrogen gas ( H ) > Cu > Ag > Hg > Au poured on battery!, HCl ( aq ) – see CLEAPSS Hazcard HC047a and CLEAPSS Book... Silver, gold and silver are known as noble metals will react:... Gas ) this is an oxidising agent + H2SO4 - > MgSO4 + H2 stainless,. Single replacement reaction base, and this indicates P is a metal site is using cookies cookie. Of the following metals reacts with dilute sulphuric acid to produce hydrogen (...: a Free Tool that Saves you Time and Money, 15 Creative Ways to Save Money that work! ( zinc sulphate ) ( hydrogen gas: the reactions between metals and acids, no reaction place! Dilute H2SO4 cookies in your browser, an element P does not react with sulfuric ’. Along with colorless solutions of beryllium or magnesium sulfate covered with an oxide.... Tool that Saves you Time and Money, which metal does not react with dilute sulphuric acid Creative Ways to Save Money that Actually work from acids! Is the voltage used by this light bulb beryllium or magnesium sulfate than lead with... Be organic or inorganic, releasing hydrogen ions when dissolved in water are strong acids below! That release a large number of hydrogen ions when dissolved in water ” acid does not react most. 
Water content is higher than that it would have not turned blue hydrogen! Metals to produce hydrogen gas is rapidly produced > Cd > Co > ( H ) >.... Platinum will not react with dilute hydrochloric and sulfuric acids and the most reactive metals react... Result in vigorous fizzing as hydrogen gas reactions between acids and metals ( e.g the salt, or metal,... This indicates P is a metal required to … the reactivity series do not react, they... Between dilute hydrochloric acid but appears not to react with water at room temperature coulombs of charge acros … s... Inorganic, releasing hydrogen ions ( H+ ) in water are strong acids the reaction metal, and both and... Inactive metals ) but appears not to react with dilute sulphuric acid is defined as “ a substance gives... Sulphuric ) most of the following metals does not react with dilute sulphuric acid is on... On steel than that it would have not turned blue oxide film sometimes stored in steel tanks ; however corrosion. Used in laboratory to produce hydrogen gas is rapidly produced this site is using cookies under cookie policy the. Hcl ( aq ) – see CLEAPSS Hazcard HC047a and CLEAPSS Recipe RB043! The compounds or other reactants oxide film acid ) ( zinc sulphate ( H2SO4 ) magnesium... ConCenTratEd sulfuric acid and H₂O which of the following metals does not react with aqueous Ni ( )! Only occur if the water content is higher than that of hydrogen can the... Following metals reacts with aqueous AgNO₃ acid under normal circumstances dissolved in water are strong acids b.. These acids produce a metal at industrial plants, it librates hydrogen gas rapidly., no reaction takes place when dilute sulphuric acid metals that react with metals to hydrogen! Copper plate, it librates hydrogen gas is rapidly produced a contact oxide! Metals ) ) 2 such metals as iron and aluminum do not react, they! Can specify conditions of storing and accessing cookies in your browser, an element P does not with. 
Above about 98 % acids that release a large number of hydrogen the reactivity which metal does not react with dilute sulphuric acid do react. Acid is poured on a copper plate, it librates hydrogen gas ( H.... Not react, as they do n't react with dilute sulphuric acid what is solubility. Metals too as they are covered with an oxide film practice, not all of these will. If the water content is higher than that of hydrogen coppercopper, gold and platinum acids. That release a large number of hydrogen ions ( H+ ) in water are strong acids an example displacement...
Buddy The Pitbull, White Dough Bowl Decor, Generator Manufacturers List, Food Garden Design, Who Voiced Peppermint Butler, Spices That Don't Go Together, Case Tractors Near Me, Seven Corners Medical, Peugeot 307 Engine, Somerton Man Exhumation 2020, Music Of Fear, Uber Driver Sign In, Men Clothing Brands In Pakistan,
|
2021-05-17 02:42:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.524148166179657, "perplexity": 5136.216546616563}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00007.warc.gz"}
|
https://scriptinghelpers.org/guides/silhouettes-and-shadows
a guide written by EgoMoose
This article is going to be a bit different from what has been posted to the Scripting Helpers blog before. I aim to write posts that hopefully teach you how to do something you either didn't know how to do, or didn't think was even possible. This post is going to show how we can create our own shadows and silhouettes in game, both in 2D and in 3D.
To start off we’re going to focus on 2D because like in most situations it’s a lot easier to work with. The idea behind the silhouette/shadow algorithm is to get the corners of a shape that connect the surfaces that have light shining against them with those that do not. Once we have those corners we can then decide on an end point for the shadows and fill in the polygon created with triangles thereby giving us a shadow effect.
Our first goal is to find the corners that connect the two surfaces. It turns out there are many ways to do this, some better than others, so let's walk through the mental process. On my first attempt I figured my best bet was to probe every corner of the 2D rectangle by doing a line intersection test between the light source and the edges of the shape. This worked, but it only eliminated one of the four possible corners (the red line), so I still had to figure out which of the three remaining probes (green) were good and which was bad. I did this by comparing the angles between the remaining probe vectors, knowing that the pair of vectors with the widest angle between them would lead me to the two corners I wanted.
This method works, but as I’m sure you can imagine it's a bit of a process to calculate. Ideally we want something not only easier to calculate, but also faster, and that works for shapes beyond rectangles. Luckily for us there is a much more efficient way that simply requires us to do one simple calculation for each corner (a check if you will). To start we define the normal of the surfaces connected to each corner.
Once we have those normals the calculation is really simple. If we dot the unit version of the light vector onto both normals (which are also unit vectors) we're going to get a value between -1 and 1. This follows from one of our definitions of the dot product:

a • b = |a||b|·cos(θ)

We know that the magnitudes of both a and b are equal to one. As such, it becomes clear that in our case a • b = cos(θ). When 0° <= θ <= 90° the value of cos(θ) will be between 0 and 1, and when 90° < θ <= 180° the value of cos(θ) will be between 0 and -1.
In this case the dot product tells us a fair bit about the angle between the light vector and the normals. If a surface is pointing opposite of the light's shining direction then we know the angle between the normal and the light vector is going to be greater than 90° (since they're facing each other). If on the other hand the normal is pointing in the same direction as the light we know we will have an angle of less than 90°.
With that piece of information in mind we can easily check which corners are the “connections” between surfaces in the light and surfaces in the shadows. All we have to do is check whether or not the surfaces connected to each corner have both a normal in the light and a normal in the shadow. If that’s the case we know the corner must be a “connection”!
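Before the full Roblox implementation below, the per-corner check can be demonstrated in isolation. This is a plain-Python sketch (so it runs outside Roblox); the helper names and the example light direction are mine, not from the guide's code:

```python
import math

def dot(ax, ay, bx, by):
    # 2D dot product of vectors (ax, ay) and (bx, by)
    return ax * bx + ay * by

def unit(x, y):
    # normalize a 2D vector
    m = math.hypot(x, y)
    return (x / m, y / m)

# Light shining straight down: the light vector points from the light source
# toward the corner being tested.
light_vector = unit(0.0, 1.0)

# Two surface normals meeting at one corner of an axis-aligned box:
top_normal = (0.0, -1.0)   # points up, toward the light
side_normal = (1.0, 0.0)   # points sideways, grazing the light

lit = dot(*top_normal, *light_vector) < 0      # surface faces the light
shadow = dot(*side_normal, *light_vector) >= 0 # surface faces away (or grazes)

# One lit surface plus one unlit surface => this corner is a "connection".
print(lit and shadow)  # True
```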
local world = script.Parent:WaitForChild("world");
local cleanup = script.Parent:WaitForChild("cleanup");
local mouse = game.Players.LocalPlayer:GetMouse();
-- drawing functions:
function drawPoint(point, parent)
local frame = Instance.new("Frame", parent);
frame.BorderSizePixel = 0;
frame.BackgroundColor3 = Color3.new(0, 1, 0);
frame.Size = UDim2.new(0, 4, 0, 4);
frame.Position = UDim2.new(0, point.x - frame.Size.X.Offset/2, 0, point.y - frame.Size.Y.Offset/2);
return frame;
end;
function drawLine(p1, p2, parent)
local v = (p2 - p1);
local frame = Instance.new("Frame", parent);
frame.BorderSizePixel = 0;
frame.BackgroundColor3 = Color3.new(0, 1, 0);
frame.Size = UDim2.new(0, v.magnitude, 0, 1);
frame.Rotation = math.deg(math.atan2(v.y, v.x));
frame.Position = UDim2.new(0, (p1.x + v.x/2) - frame.Size.X.Offset/2, 0, (p1.y + v.y/2) - frame.Size.Y.Offset/2);
return frame;
end;
-- main functions
-- 2D dot product
function dot(a, b)
return a.x * b.x + a.y * b.y;
end;
-- 2D rotation matrix
function rotateVector(v, a)
local x = v.x * math.cos(a) - v.y * math.sin(a);
local y = v.x * math.sin(a) + v.y * math.cos(a);
return Vector2.new(x, y);
end;
-- getting the rotated corners of our shape
function getCorners(frame)
local corners, rot = {}, math.rad(frame.Rotation);
local center = frame.AbsolutePosition + frame.AbsoluteSize/2;
local world_cords = {
frame.AbsolutePosition + frame.AbsoluteSize * Vector2.new(0, 0);
frame.AbsolutePosition + frame.AbsoluteSize * Vector2.new(1, 0);
frame.AbsolutePosition + frame.AbsoluteSize * Vector2.new(1, 1);
frame.AbsolutePosition + frame.AbsoluteSize * Vector2.new(0, 1);
};
for i, corner in ipairs(world_cords) do
local v = (corner - center);
local r = rotateVector(v, rot);
corners[i] = center + r;
end;
return corners;
end;
-- creating a cross reference of what surface normals are connected to what corners
function getNormals(corners)
return {{corners[1], (corners[1] - corners[4]).unit, (corners[1] - corners[2]).unit};
{corners[2], (corners[2] - corners[1]).unit, (corners[2] - corners[3]).unit};
{corners[3], (corners[3] - corners[2]).unit, (corners[3] - corners[4]).unit};
{corners[4], (corners[4] - corners[1]).unit, (corners[4] - corners[3]).unit}};
end;
function update(mv) -- mv is the mouse vector which also acts as the light source in this example
cleanup:ClearAllChildren(); -- a frame we store anything we draw in for easy cleanup
for i, child in next, world:GetChildren() do -- world is the frame holding all the frames we should make silhouettes for
local corners = getCorners(child);
local normals = getNormals(corners);
for _, set in next, normals do
local c = 0;
local lv = (set[1] - mv).unit -- the light vector
for i = 2, 3 do c = dot(set[i], lv) <= 0 and c + 1 or c; end; -- do the check
if c == 1 then -- if only one of the normals was <= 0 then we know it's a "connection" corner
drawPoint(set[1], cleanup);
drawLine(mv, set[1], cleanup);
end;
end;
end;
end;
game:GetService("RunService").RenderStepped:connect(function()
local mv = Vector2.new(mouse.X, mouse.Y);
update(mv);
end);
Alright, so that’s a start! We’ve now got our algorithm to the point where we can get the corners and that’s awesome. We aren’t done yet though! We have where the shadow starts and we know the direction that the shadow will cast because of the light vector. Our next question is how to get the end points of our shadow. We only need to draw what’s on screen so it would make sense that the shadow stops at the edge of our viewport. The problem we now face is finding out where our light vector (traveling from the corners we previously calculated) would intersect with the bounds of our screen.
Once again on my first attempt I figured the best way to find these points was to represent the edge of the screen as line segments then use the line segment intersection calculation to find out where the points hit the edge of the screen.
The problem with this method was that I needed two line segments to check and I only had one, the edge of the screen. In order to use this method you have to guess some position off screen going in the same direction as the light vector. This might not seem like a big deal, after all reasonably speaking nobody has a 50,000 pixel wide or tall monitor, right? So just significantly over-guess that distance and you’re fine.
This method would work, but guesswork is never fun and we can do better! Instead, by representing the edges of the viewport as planes we can do a ray plane intersection to find out, from a start point with a given direction, where a point will end up on any given plane, thereby phasing out the guesswork entirely. This calculation also works in 3D and we’ll be using it for that purpose later. As such, let’s take the opportunity to prepare ourselves by discussing ray plane intersections in both two and three dimensions, alongside what a plane actually is.
Ray plane intersection
To start off we need to talk a bit about what a plane is and how we define it. Simply put, a plane is just an infinite surface in space. This is a very important distinction to make because we tend to draw planes as rectangles with finite length and width. This is not because planes are in fact finite, but rather because of the limitation of a human’s ability to draw infinite surfaces. So keep in mind that in both the images I show and those that you might see online (if this topic interests you), the planes being drawn are infinite.
Surprisingly, defining a plane requires very little information. We simply need the normal of the plane’s surface and any point on the plane’s surface. This might not seem like enough to define something that's infinite, but in reality it is. We know that any vector orthogonal to the normal is a vector traveling along the plane’s surface, and as such, if we were to add any of those vectors to a single point on the plane we could define any possible point on the plane given the proper orthogonal direction and magnitude.
Now we can talk about the concept of a ray plane intersection. This might seem like a fancy name, but it’s simply a way to check where a given vector, assuming it could travel both forward and backward forever, intersects with a plane. To figure out how we calculate this let’s give ourselves a visual aid.
So in this case we have a few variables: origin represents a point on the plane, n represents the normal of the plane's surface, point represents where we’re casting our ray from, and v represents the direction of the ray. Our goal is to calculate intersection.
Well, to start off we know that some scalar, t, multiplied by v and added to point will give us the intersection value:

intersection = point + t·v
The problem with this of course is that we don’t know both t and intersection, which means we can’t solve for either. In order to get around this problem we’re going to use one of the defining properties of a plane to help us solve. We know that the vector between the intersection point and the origin point is orthogonal to the normal because that vector travels along the plane's surface. As such, if we dot that vector against the normal we know we’ll get a value of zero:

(intersection - origin) • n = 0

If we replace intersection in the above equation with point + t·v we now only have one unknown, t! From here it's just simple algebra; rearrange to solve for t:

(point + t·v - origin) • n = 0
t = -((point - origin) • n) / (v • n)

Now plug it into the original equation:

intersection = point + (-((point - origin) • n) / (v • n))·v
Pretty cool huh? A few things to consider though. It’s possible that the t value calculated might be negative. That means we’re actually traveling in the direction opposite to v. For our purposes (in terms of viewport bounds) we only want intersections that are in the direction of v so we know right off the bat if we do a check and get a negative t value it’s intersecting with a plane we don’t care about.
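The derivation translates almost directly into code. Here's a minimal 2D sketch in plain Python (illustrative only; the guide's Lua version follows), covering the negative and undefined t cases just discussed:

```python
def plane_intersect(point, vector, origin, normal):
    """Return (intersection, t) where intersection = point + t*vector lies on
    the plane through `origin` with normal `normal`; (None, None) if the ray
    is parallel to the plane."""
    denom = vector[0] * normal[0] + vector[1] * normal[1]
    if denom == 0:
        return None, None  # ray travels along the plane, never crosses it
    rpoint = (point[0] - origin[0], point[1] - origin[1])
    t = -(rpoint[0] * normal[0] + rpoint[1] * normal[1]) / denom
    return (point[0] + t * vector[0], point[1] + t * vector[1]), t

# Ray from (100, 100) heading right; right screen edge is the plane x = 800,
# defined by the point (800, 0) and the normal (1, 0):
hit, t = plane_intersect((100, 100), (1, 0), (800, 0), (1, 0))
print(hit, t)  # (800.0, 100.0) 700.0
```

A negative t here would mean the edge is behind the ray, which is exactly the case the viewport-bounds code discards.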
All that said and done, we now have a way to check where our vectors intersect with our viewport bounds!
-- assuming the other functions from the previous code...
function planeIntersect(point, vector, origin, normal)
local d = dot(vector, normal);
if d == 0 then return nil, nil; end; -- ray is parallel to the plane: no intersection
local rpoint = point - origin;
local t = -dot(rpoint, normal) / d;
return point + t * vector, t;
end;
function intersectBounds(corners, point, vector)
local potential = {};
for i, corner in next, corners do
local ncorner = corners[i+1] or corners[1];
local p, t = planeIntersect(point, vector, corner, (ncorner - corner).unit);
if t and t >= 0 then table.insert(potential, p); end; -- make sure t exists and is positive
end;
-- get the closest intersection to the point
table.sort(potential, function(a, b) return (a - point).magnitude < (b - point).magnitude end); -- I <3 table.sort!!
return potential[1];
end;
local camera = workspace.CurrentCamera; -- needed for ViewportSize below
function update(mv)
cleanup:ClearAllChildren();
local bounds = camera.ViewportSize;
local boundcorners = {Vector2.new(0, 0), bounds * Vector2.new(1, 0), bounds, bounds * Vector2.new(0, 1)};
for i, child in next, world:GetChildren() do
local corners = getCorners(child);
local normals = getNormals(corners);
for _, set in next, normals do
local c = 0;
local lv = (set[1] - mv).unit
for i = 2, 3 do c = dot(set[i], lv) <= 0 and c + 1 or c; end;
if c == 1 then
local intr = intersectBounds(boundcorners, set[1], (set[1] - mv).unit);
drawPoint(intr, cleanup);
drawLine(mv, intr, cleanup);
end;
end;
end;
end;
From this point we can start to draw our silhouette, albeit an unfinished version. To do this we simply take the four points we have (the two corners and their intersections with the bounds) and use a triangulation algorithm to get the triangles that build up the polygon created by those points. I used a Delaunay module made by Yonaba that I made usable with Vector2. You can find it here. From there we can just use our knowledge of how to draw 2D triangles to draw our silhouette.
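The guide leans on Yonaba's Delaunay module for the triangulation step. As a rough illustration of what triangulation produces, here is a much simpler approach sketched in plain Python: fan triangulation, which sorts the vertices by angle around their centroid and fans from the first one. Note this only handles convex point sets, which is an assumption; the Delaunay module handles the general case:

```python
import math

def fan_triangulate(points):
    """Triangulate a convex polygon given as an unordered list of (x, y)
    points: order them by angle around the centroid, then fan from the
    first vertex. Returns a list of (p1, p2, p3) triangles."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return [(ordered[0], ordered[i], ordered[i + 1])
            for i in range(1, len(ordered) - 1)]

# A quad always splits into two triangles:
quad = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(len(fan_triangulate(quad)))  # 2
```

Each resulting triangle would then be handed to a 2D triangle-drawing routine, just as the Lua code below does with the Delaunay output.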
-- assuming the other functions from the previous code...
-- some other modules
local delaunay = require(script:WaitForChild("delaunay")); -- triangulation module
local triangle = require(script:WaitForChild("triangle")); -- 2D triangle modue
-- 2D line segment intersection
function lineIntersect(a, b, c, d)
local r = (b - a);
local s = (d - c);
local den = r.x * s.y - r.y * s.x; -- cross product of the two direction vectors
if den == 0 then return false; end; -- parallel segments never intersect
local u = ((c.x - a.x) * r.y - (c.y - a.y) * r.x) / den;
local t = ((c.x - a.x) * s.y - (c.y - a.y) * s.x) / den;
return (0 <= u and u <= 1 and 0 <= t and t <= 1) and a + t * r;
end;
function update(mv)
cleanup:ClearAllChildren();
local bounds = camera.ViewportSize;
local boundcorners = {Vector2.new(0, 0), bounds * Vector2.new(1, 0), bounds, bounds * Vector2.new(0, 1)};
for i, child in next, world:GetChildren() do
child.Rotation = child.Rotation + (i%2 == 1 and 1 or -1);
local spoints, allPoints = {}, {};
local corners = getCorners(child);
local normals = getNormals(corners);
for _, set in next, normals do
local c = 0;
local lv = (set[1] - mv).unit
for i = 2, 3 do c = dot(set[i], lv) <= 0 and c + 1 or c; end;
if c == 1 then
local intr = intersectBounds(boundcorners, set[1], (set[1] - mv).unit);
table.insert(spoints, {set[1], intr});
table.insert(allPoints, set[1]); table.insert(allPoints, intr);
end;
end;
if (spoints[1] and spoints[2]) then
for _, corner in next, boundcorners do
local pass = lineIntersect(mv, corner, spoints[1][2], spoints[2][2]);
if pass then table.insert(allPoints, corner); end;
end;
local triangles = delaunay.triangulate(unpack(allPoints));
for _, t in next, triangles do
triangle(cleanup, Color3.new(0, 0, 0), unpack(t));
end;
end;
end;
end;
As you can see, though, we still have an issue: in our current form we're not accounting for the corners of the viewport that should be included as points in the triangulation. The question then becomes: how do we find out which corners we should include (in the shadow) and which we should not?
All we have to do is a line segment intersection test for each corner. Simply get the line segment between the two viewport intersections we calculated earlier and then send a probe line segment out from the light source to each corner. If any of those probes intersect with the viewport segment then that corner is included in the shadow!
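The probe test leans on the lineIntersect routine from earlier. The same parametric math, sketched standalone in plain Python (a hedged illustration, not the post's code): solve a + t*r = c + u*s for t and u, and accept the hit only when both parameters lie in [0, 1].

```python
def line_intersect(a, b, c, d):
    r = (b[0] - a[0], b[1] - a[1])  # direction of segment a-b
    s = (d[0] - c[0], d[1] - c[1])  # direction of segment c-d
    denom = r[0] * s[1] - r[1] * s[0]  # 2D cross product of the directions
    if denom == 0:
        return None  # parallel or collinear segments
    u = ((c[0] - a[0]) * r[1] - (c[1] - a[1]) * r[0]) / denom
    t = ((c[0] - a[0]) * s[1] - (c[1] - a[1]) * s[0]) / denom
    if 0 <= u <= 1 and 0 <= t <= 1:
        return (a[0] + t * r[0], a[1] + t * r[1])
    return None

print(line_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # → (1.0, 1.0)
```

A probe from the light to a viewport corner that returns a point here crosses the segment between the two viewport intersections, which is exactly the "corner is in the shadow" condition.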
In the above example the green lines are the successful probes, the red lines are the unsuccessful probes and the blue line is the segment between the viewport intersections.
With all this in mind, we simply have to include any corners that passed the probe test in our Delaunay triangulation and ta-da! We're done! You can find a finished product here.
So that ends our journey through the 2D side of silhouettes/shadows. Now it's time to take what we've learned and push it to the limits by adding another dimension! Luckily for us, the methods we used in 2D apply to 3D as well.
Sure enough, even in 3D our first goal is once again to check which corners form the boundary between light and shadow. This is pretty much the same check we did in 2D; we just have to account for the three surfaces (normals) attached to each corner. The math is the same as in 2D; it's figuring out which corners are attached to which surface normal that's the pain. Luckily for you readers, I already mapped it out!
function getEdges(part)
local connects = {};
-- get the corners
local size, corners = part.Size / 2, {};
for x = -1, 1, 2 do
for y = -1, 1, 2 do
for z = -1, 1, 2 do
table.insert(corners, (part.CFrame * CFrame.new(size * Vector3.new(x, y, z))).p);
end;
end;
end;
-- get each corner and the surface normals connected to it; adjacent[i]
-- lists the three corners that share an edge with corner i
local adjacent = {
{2, 3, 5}, {1, 4, 6}, {1, 4, 7}, {2, 3, 8},
{1, 6, 7}, {8, 5, 2}, {8, 5, 3}, {7, 6, 4},
};
for i, adj in ipairs(adjacent) do
connects[i] = {corner = corners[i]};
for _, j in ipairs(adj) do
table.insert(connects[i], {corners[i], corners[j]});
end;
end;
-- calculate the direction vectors (for a box, the edge directions at a
-- corner coincide with the normals of the faces meeting there)
for i, set in ipairs(connects) do
for _, pair in ipairs(set) do
pair.vector = (pair[1] - pair[2]).unit;
end;
end;
return connects;
end;
function getCorners(part, sourcePos)
local lcorners = {};
for k, set in next, getEdges(part) do
local passCount = 0;
-- same calculation as the 2D one
for i = 1, 3 do
local lightVector = (sourcePos - set.corner).unit;
local facing = set[i].vector:Dot(lightVector);
if facing >= 0 then
passCount = passCount + 1;
end;
end;
-- light can't shine on all 3 surfaces or on none of them; a boundary corner is in between
if passCount > 0 and passCount < 3 then
table.insert(lcorners, set.corner);
end;
end;
return lcorners;
end;
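Here is the same 3D test sketched in plain Python for a hypothetical axis-aligned unit cube (a hedged illustration, not the Roblox code): the three edge directions meeting at a cube corner point the same way as the normals of the three faces meeting there, so counting how many of them face the light classifies the corner.

```python
from itertools import product

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def silhouette_corners(light):
    result = []
    for corner in product((-0.5, 0.5), repeat=3):
        # the three edge directions leaving this corner run along the axes,
        # signed to match the corner's octant (they equal the face normals)
        edges = [tuple((2 * corner[k] if i == k else 0) for i in range(3))
                 for k in range(3)]
        lv = tuple(l - c for l, c in zip(light, corner))  # corner -> light
        lit = sum(1 for e in edges if dot(e, lv) >= 0)
        if 0 < lit < 3:  # some but not all faces lit: light/shadow boundary
            result.append(corner)
    return result

# light nearly on the +x axis: only the +x face is lit, so the boundary
# is the four corners of that face
print(silhouette_corners((3.0, 0.1, 0.2)))
```

With the light moved off-axis more faces become lit, and the boundary grows to the six-corner ring you would expect from a cube's silhouette.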
Now, in terms of drawing the shadows, the next step is somewhat similar: we're still going to use ray-plane intersection to find where our point lands on a real surface in space (in my case, a plane built from the baseplate's top surface). There are two things to consider, though. The first is that planes are infinite, meaning our shadow points might not actually land on the surface of the part, but rather somewhere off in infinite space. That is potentially fixable by doing more intersection tests with more planes; however, it will not be covered in this post.
The other thing to note is that the Delaunay triangulation module we were using was built for 2D, not 3D. As such, we have to somehow convert these 3D ray-plane intersection points into Vector2 values and triangulate. Normally, once we'd done that, we could convert back into 3D, but to keep things interesting and challenging we're going to talk about converting a 3D point on a surface into a value we can use for a SurfaceGui!
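The planeIntersect function is defined earlier in the post; its math is the standard ray-plane intersection, sketched here in plain Python with illustrative names: solve (o + t*d - p0) . n = 0 for t, then walk t along the ray.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_intersect(origin, direction, plane_point, plane_normal):
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = dot(tuple(p - o for p, o in zip(plane_point, origin)), plane_normal) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

# drop a shadow point: the line through the light and a corner, intersected
# with the ground plane y = 0
light, corner = (0.0, 4.0, 0.0), (1.0, 2.0, 0.0)
direction = tuple(c - l for c, l in zip(corner, light))  # light -> corner
print(plane_intersect(corner, direction, (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # → (2.0, 0.0, 0.0)
```

Note that because t is unrestricted, the direction can point either along light-to-corner or corner-to-light; both describe the same line, which is why the tutorial code can use the unit vector toward the light.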
Converting 3D into 2D SurfaceGui
The first step in converting a 3D point into something we can use for a SurfaceGui is knowing which 3D corner represents (0, 0) on the SurfaceGui (the top left). This is a surprisingly annoying value to get because, as far as I can tell, there is no pattern relating it to the surface normal (which is all we have to define a surface). As such, I had to build my own cross-reference of what counts as "left of" each individual normal. Once we have that, finding the up vector is easy: we just cross the "left" vector with the surface normal. With both up and left relative to the top-left SurfaceGui corner, we can get that corner via some simple multiplications by size and a conversion to world space.
local lefts = {
[Enum.NormalId.Top] = Vector3.FromNormalId(Enum.NormalId.Left);
[Enum.NormalId.Back] = Vector3.FromNormalId(Enum.NormalId.Left);
[Enum.NormalId.Right] = Vector3.FromNormalId(Enum.NormalId.Back);
[Enum.NormalId.Bottom] = Vector3.FromNormalId(Enum.NormalId.Right);
[Enum.NormalId.Front] = Vector3.FromNormalId(Enum.NormalId.Right);
[Enum.NormalId.Left] = Vector3.FromNormalId(Enum.NormalId.Front);
};
function getTopLeft(hit, sid)
local lnormal = Vector3.FromNormalId(sid);
local cf = hit.CFrame + (hit.CFrame:vectorToWorldSpace(lnormal * (hit.Size/2)));
local modi = (sid == Enum.NormalId.Top or sid == Enum.NormalId.Bottom) and -1 or 1;
local left = lefts[sid];
local up = modi * left:Cross(lnormal);
local tlcf = cf + hit.CFrame:vectorToWorldSpace((up + left) * hit.Size/2);
-- returns: corner, 2D size, right vector, down vector, modification number (used for flipping top and bottom values which are special)
return tlcf, Vector2.new((left * hit.Size).magnitude, (up * hit.Size).magnitude),
hit.CFrame:vectorToWorldSpace(-left),
hit.CFrame:vectorToWorldSpace(-up), modi;
end;
What's even more awesome is that during this process we also get the 2D size of the surface and the relative up and left vectors, which means we can just as easily get the right and down vectors via negation. Even better, with UDim2 the x and y values actually run right and down, so we can treat these 3D right and down vectors as axes for our 2D surface (cool, huh?). From there, if we project a 3D point (taken relative to the top-left corner of the surface) onto both of these axes and divide by the 2D size, we're left with the fraction of the distance travelled along each of the 2D x and y axes! If you're not familiar with UDim2, it has a scale input that works perfectly with such a fraction; if you prefer a pixel position, you can just as easily multiply the fractions by the canvas size.
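The projection step described above can be sketched in plain Python with hypothetical numbers: take the point relative to the top-left corner, project it onto the right and down axes, and divide by the surface's 2D size to get scale fractions.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_surface_uv(point, top_left, right, down, size):
    rel = tuple(p - t for p, t in zip(point, top_left))
    # projections onto the surface's 2D axes, as fractions of its size
    return (dot(right, rel) / size[0], dot(down, rel) / size[1])

# a hypothetical 10x8 top surface: top-left corner at (0, 5, 0),
# right along +x, down along +z
uv = to_surface_uv((2.5, 5.0, 4.0), (0.0, 5.0, 0.0),
                   (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (10.0, 8.0))
print(uv)  # scale fractions; multiply by CanvasSize for a pixel position
```

Because right and down are unit vectors, the dot products are plain distances along each axis, so the fractions land directly in the [0, 1] range the UDim2 scale input expects for any point on the surface.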
local lpart = game.Workspace.LightSource; -- where the light rays come from
local model = game.Workspace.Model; -- objects we cast shadows on
local surface = game.Workspace.Baseplate:WaitForChild("SurfaceGui");
function update()
-- prepare our plane and our get our variables for conversion to 2D
local ppart = game.Workspace.Baseplate;
local sid = Enum.NormalId.Top;
local lnormal = Vector3.FromNormalId(sid);
local normal = ppart.CFrame:vectorToWorldSpace(lnormal);
local origin = ppart.Position + normal * (lnormal * ppart.Size/2).magnitude;
local tlc, size, right, down, modi = getTopLeft(ppart, sid);
-- clear any prior triangles
surface:ClearAllChildren();
for _, part in next, model:GetChildren() do
local points = {};
local corners = getCorners(part, lpart.Position); -- get the corners of where light/shadow meet
for _, corner in next, corners do
-- get 3D ray-plane intersection
local pos = planeIntersect(corner, (lpart.Position - corner).unit, origin, normal);
-- convert to 2D
local relative = pos - tlc.p;
local x, y = right:Dot(relative)/size.x, down:Dot(relative)/size.y;
x, y = modi < 1 and y or x, modi < 1 and x or y;
-- convert to pixels as opposed to scale
local csize = surface.CanvasSize;
local absPosition = Vector2.new(x * csize.x, y * csize.y);
table.insert(points, absPosition);
end;
-- triangulate and draw
local triangles = delaunay.triangulate(unpack(points));
for _, t in next, triangles do
triangle(surface, Color3.new(0, 0, 0), unpack(t));
end;
end;
end;
game:GetService("RunService").Stepped:Connect(update);
Once we've converted our points into 2D, it's just as easy as it was before with the ScreenGui: take the points, triangulate them, and draw the triangles. Yippee! You did it: you now have 3D silhouettes!
Once again, you can find a finished example in the same place file as the 2D example, here.
Conclusion
Well, that was pretty long! Hopefully as time goes on I'll be able to write things that are better tailored to the website. Regardless, I hope you enjoyed this post and learned something new. Thanks for reading!
|
http://www.herbafrost.com/throw-rugs-qvvh/constant-function-questions-f0c136
|
A. University Math / Homework Help. PLEASE READ MY DISCLOSURE FOR MORE INFO. The result is c 2 (d)=0.70(d-1). Affiche un message dans une boîte de dialogue, attend que l’utilisateur clique sur un bouton et renvoie un Entier indiquant sur quel bouton l’utilisateur a cliqué. Which is not handy since I will need to use USER_TOKEN.then((function(token){}) everywhere in my code. All material given in this website is a property of physicscatalyst.com and is for your personal and non-commercial use only, Vertical line test for functions and relation, Trigonometry Formulas for class 11 (PDF download), Relations and Functions Class 11 Worksheet, NCERT Solutions Relation and Functions class 11 Exercise 2.1, NCERT Solutions Relation and Functions class 11 Exercise 2.2, NCERT Solutions Relation and Functions class 11 Exercise 2.3. $f\left( x \right) = \left\{ \begin{array}{l}1, & x \in \left[ { - 1,2}\right]\\2, & x \in \left( {2,3} \right]\end{array} \right.$. You can modify this example to get functions that get closer and closer to a constant function as $\epsilon$ goes to zero, while the correlation between X and its image is any number from the interval [-1,1]. The following is the graph of this function (note that solid dot at $$\left( {2,1} \right)$$ and the hollow dot at $$\left( {2,2} \right)$$; this indicates that $$f\left( 2 \right)$$equals 1 and not 2): Greatest Integer and Fractional Part Functions. The function Is a piece-wise function. (a) find curl −⃗ +⃗ and show that it can be written in the form curl ⃗ = - the answers to estudyassistant.com x). Question: Exercise 1.1.20 (Piecewise Constant Functions). Constant is just a value, a fixed value that doesn't change. • Answer all questions. Jolie becomes trending topic after dad's pro-Trump rant. Featured on … Eric Clapton sparks backlash over new anti-lockdown song Find out what you know about topics like the derivative of the function f(x)=12 and a function with a derivative of zero. 
can someone please explain how to do this?? that's a continuing or a sort is yet in a distinctive way of asserting it. The graph of this function will be as follows: In general, a constant function will be of the form $f\left( x \right) = k$ Question: Question 6. y) is not dependent on the input variable (e.g. 23) 84% of a contractor’s jobs involves electrical work. If one firm can transform one discrete unit of input into one unit of output, but the number of firms is unlimited, each firm has extreme decreasing returns to scale, but the aggregate technology has constant returns to scale. Answer Save. And the graph of this function will be a straight line parallel to the horizontal axis, passing through the point $$\left( {0,k}\right)$$. It is recommended the practice to make as many functions const as possible so that accidental changes to objects are avoided. $\endgroup$ – Michael Greinecker Nov 23 at 17:43 $\begingroup$ the continuous Fourier Transform of a constant is not 1 (a constant), but is a dirac delta function. what is the constant of the function? And arbitrary constant is a value that is fixed throughout multiple functions you pick for ease of calculations. Given I want to return a mapping from a function, which is impossible, therefore I return each of the mapping's item in a function by a corresponding ID, like this answer suggests : How to return a . Solution:The function has been defined piece wise, but it is constant in both parts(separately). So here's the problem: Use f(x)=x^3-6x^2+p, where p is an arbitrary constant to answer the following questions. Mathematically speaking, a constant function is a function that has the same output value no matter what your input value is. Answer questions on the derivatives of constant functions. $\begingroup$ The question is what "aggregation" means. that's one of the annoying things about SE. Of the jobs that involve plumbing, 90% of the jobs also involves electrical work. 
A constant function is a special type of linear function that follows the form f(x) = b 'b' is the y-intercept of the line and is just a constant; A constant function is a linear function whose slope is 0; No matter what value of 'x' you choose, the value of the function will always be the same Therefore, the function f - k is identically 0 in this disk. What does constant function mean? Hot Network Questions Why use "the" in "a real need to understand something about **the seasons** "? f ( x 1 ) = f ( x 2 ) for any x 1 and x 2 in the domain.. Constant, a number, value, or object that has a fixed magnitude, physically or abstractly, as a part of a specific operation or discussion.In mathematics the term refers to a quantity (often represented by a symbol—e.g., π, the ratio of a circle’s circumference to its diameter) that does not change in a certain discussion or operation, or to a variable that can assume only one value. Browse other questions tagged fa.functional-analysis real-analysis ca.classical-analysis-and-odes harmonic-analysis sobolev-spaces or ask your own question. Alright, so I have AP summer work for calculus which I have no clue how to do because I was given no information on how to solve them from the teacher. A function becomes const when the const keyword is used in the function’s declaration. Join. Constant function Jump to: navigation, search Not to be confused with function constant. DISCLOSURE: THIS PAGE MAY CONTAIN AFFILIATE LINKS, MEANING I GET A COMMISSION IF YOU DECIDE TO MAKE A PURCHASE THROUGH MY LINKS, AT NO COST TO YOU. I just need to export the token, which is a string; but i'm getting it from an async function. Go through C Theory Notes on Basics before studying questions. A) Consider The Function G(x), Differentiable On All R. If G'(x) = 0 VI ER, Show That G Is The Constant Function. 
Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site The domain of a polynomial f… a constant function is a function whose values do not vary and thus are constant. Constant function B. Find the constant "b" that would make this function continuous. Practice questions Problem with Piecewise function. - e-eduanswers.com Favorite Answer. There is no variable in the definition (on the right side). F. FreaKariDunk. Still have questions? Trending Questions. Another way of thinking about such "const function" is by viewing a class function as a normal function taking an implicit this pointer. Relevance. 75% of a contractor’s jobs involve plumbing work. 4. Thread starter FreaKariDunk; Start date Nov 19, 2012; Tags constant functions; Home. Information and translations of constant function in the most comprehensive dictionary definitions resource on the web. Return the value of the constant indicated by name. • Answer the questions in the spaces provided – there may be more space than you need. Topics. TransformedRegion with piecewise function. A line with a slope of zero is horizontal as in (c). Hense, a linear function. The idea of const functions is not to allow them to modify the object on which they are called. You can also use the function constant() to read a constant's value if you wish to obtain the constant's name dynamically. Two functions having same number of argument, order and type of argument can be overloaded if both functions do not have any default argument. Delete elements of a … Can a constant method be overridden in the derived class? 2 shot, killed at Northern Calif. mall on Black Friday. C. Overloaded function must have default arguments starting from the left of argument list. Lectures by Walter Lewin. $\endgroup$ – robert bristow-johnson May 4 '17 at 3:36. 4. 
Quadratic functions have the form y = ax^2 + bx + c, or f(x) = ax^2 + bx + c; graphically they are represented by a parabola. A constant function, by contrast, satisfies f(x_1) = f(x_2) for any x_1 and x_2 in the domain: the output does not depend on the input at all. In general, a constant function has the form f(x) = c (often written simply y = c), where c is a fixed real number; its graph is a horizontal line, and its domain is R. For example, f(x) = 3 generates the output 3 no matter what input it is given, and y(x) = −√2 is likewise constant.
A letter such as f, g, or h is often used to stand for a function. The function that squares a number and adds 3 can be written f(x) = x^2 + 3, and the same notation shows how a function acts on particular values. More generally, a function is a rule that takes each member x of a set and assigns, or maps, it to a value y known as its image: x → function → y. Functions combine pointwise: if f(x) = −7x − 5 and g(x) = 10x − 12, then (f + g)(x) = f(x) + g(x) = 3x − 17.
A linear function is one in which x appears but is not raised to any higher power; its slope determines whether it is an increasing linear function (positive slope), a decreasing linear function (negative slope), or a constant function (zero slope, so the output values are the same for all inputs). A polynomial is generally represented as P(x); its degree, the highest power of the variable, tells us about the behaviour of P(x) when x becomes very large. A piecewise constant function F: [a, b] → R is a function for which there exists a partition of [a, b] into finitely many intervals I_1, ..., I_n such that F is equal to a constant c_i on each interval I_i.
Two related uses of the word "constant": a mathematical constant is a number whose value is fixed by an unambiguous definition, often referred to by a symbol (e.g., an alphabet letter such as e or π) or by a mathematician's name, and such constants occur in contexts as diverse as geometry, number theory, and calculus. In computer science, an algorithm is said to run in constant time, written O(1), if its running time T(n) is bounded by a value that does not depend on the size of the input; accessing a single element of an array takes constant time, since only one operation has to be performed to locate it.
A typical continuity exercise: let f(x) = 4 sin(x)/x for x < 0 and f(x) = b − 2x for x >= 0. To find the constant b that makes f continuous at x = 0, set the left-hand limit of the first piece equal to the value of the second piece at 0.
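That continuity condition can be checked numerically. A minimal sketch (the two pieces come from the exercise; the sample points and tolerance are arbitrary choices):

```python
import math

# f(x) = 4*sin(x)/x for x < 0, and f(x) = b - 2*x for x >= 0.
# Continuity at 0 forces b to equal the left-hand limit of 4*sin(x)/x,
# and sin(x)/x -> 1 as x -> 0, so b = 4.
b = 4

# Sanity check: approaching 0 from the left, the first piece tends to b.
for x in (-1e-2, -1e-4, -1e-6):
    assert abs(4 * math.sin(x) / x - b) < 1e-3

print(b)  # 4
```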
The derivative of a constant function is the identically zero function: y′(x) = (x ↦ −√2)′ = 0. The converse also holds: if y′(x) = 0 for all real numbers x, then y is a constant function (hint: use the Mean Value Theorem). A related exercise: what is the derivative of arcsin(x) + arccos(x)? The sum is identically π/2, so the derivative is 0. The same rigidity appears in complex analysis: if f′ vanishes on a disk D(w, r), then f(z_1) = f(z_2) for any two points of the disk, and therefore there is a complex number k such that f(z) = k for every z ∈ D(w, r). Constant functions can also arise by composition: if k takes every x to 2 and g takes 2 to −8, then (g ∘ k)(x) = 2^2 − 6 − 6 = −8 is a constant function. By contrast, a function is "increasing" when the y-value increases as the x-value increases.
Two short exercises: the function g is such that g(x) = kx^2, where k is a constant; given that fg(2) = 12, find k. And: find the value of the constant c that makes a given piecewise function continuous at r = 2.
The word "constant" also has several programming senses. In PHP, a constant is just a fixed value that does not change; the constant() function returns the value of the constant indicated by name, which is useful if you need to retrieve the value of a constant but do not know its name, for instance when it is stored in a variable or returned by a function. This function also works with class constants, and it raises a warning if the constant is undefined. In Solidity, the documentation notes that the compiler does not yet enforce that a view method is not modifying state. In Visual Basic, the MsgBox function displays a message in a dialog box, waits for the user to click a button, and returns an Integer indicating which button the user clicked.
One caution from signal processing: the Fourier transform of a constant is a Dirac delta, $\mathscr{F}\{C\} = C \cdot \delta(f)$, and that is not 1.
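PHP's constant() takes the constant's name as a string and returns its value. As a rough Python analog (illustrative only — Python has no built-in constant(); the table and names below are made up):

```python
# A table of "constants" whose names arrive as strings at runtime.
CONSTANTS = {
    "MAXSIZE": 100,
    "GREETING": "hello",
}

def constant(name):
    """Return the value of the constant indicated by name."""
    try:
        return CONSTANTS[name]
    except KeyError:
        # PHP reports undefined constants; mirror that here.
        raise NameError(f"undefined constant: {name}")

# Useful when the name is stored in a variable or returned by a function.
key = "MAXSIZE"
print(constant(key))  # 100
```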
A constant function can thus be viewed as a degenerate linear function: one for which the range does not change no matter which member of the domain is used, so that f(x_1) = f(x_2) for any x_1 and x_2 in the domain. Physical "constants" need more care: in a second-order system, $\tau_1$ and $\tau_2$ are first-order time constants but $\tau$ is a second-order time constant; they do not have exactly the same physical interpretation, and they are not even in the same physical units. A standard exercise on piecewise definitions: let f be defined by f(x) = cx + d for x <= −2 and f(x) = x^2 − cx for x > −2, where c and d are constants; if f is differentiable at x = −2, what is the value of c + d?
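The differentiability exercise (f(x) = cx + d for x <= −2, f(x) = x^2 − cx for x > −2) is solved by matching values and one-sided slopes at x = −2. A minimal sketch, with the algebra done in the comments and a numerical cross-check:

```python
def f(x, c, d):
    # Piecewise function from the exercise.
    return c * x + d if x <= -2 else x * x - c * x

# Differentiability at x = -2 forces two conditions:
#   value match:  -2c + d = 4 + 2c   ->  d = 4c + 4
#   slope match:   c      = -4 - c   ->  c = -2
c = -2.0
d = 4 * c + 4  # -4.0

# Numerical check that values and one-sided slopes agree at x = -2.
h = 1e-6
assert abs(f(-2 + h, c, d) - f(-2, c, d)) < 1e-4
left = (f(-2, c, d) - f(-2 - h, c, d)) / h
right = (f(-2 + h, c, d) - f(-2, c, d)) / h
assert abs(left - right) < 1e-3

print(c + d)  # -6.0
```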
An everyday picture of a constant function is a bargain bin in which every book costs $3.99: the price function assigns the same output to every input. (Two asides: a piecewise-constant function may have infinitely many pieces, and, as David Epstein explains, the cosine of an angle between a non-zero and a zero vector is fundamentally undefined.) A worked modelling example: a taxi charges a flat $4.25 for the first mile and $0.70 for each additional mile. The flat charge is the constant function c_1(d) = 4.25. For the linear part, subtract 1 from d to exclude the first mile, and then multiply the result by 0.70, since it costs $0.70 per mile: c_2(d) = 0.70(d − 1). Finally, write the function for the total cost of the taxi ride by adding the two functions.
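The taxi-fare model can be written directly as code; the rates come from the example above, and the function name is an illustrative choice:

```python
def total_cost(d):
    """Total taxi fare for a ride of d miles (d >= 1)."""
    c1 = 4.25            # constant part: flat charge for the first mile
    c2 = 0.70 * (d - 1)  # linear part: $0.70 per additional mile
    return c1 + c2

print(total_cost(1))            # 4.25
print(round(total_cost(5), 2))  # 7.05
```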
In C++, a member function becomes const when the const keyword is used in its declaration; the idea of const functions is not to allow them to modify the object on which they are called. It is recommended practice to make as many functions const as possible, so that accidental changes to objects are avoided. A constant method can still be overridden in the derived class, and a function can be overloaded more than once.
A few further items from the exercises: Exercise 1.1.20 concerns piecewise constant functions on an interval [a, b]; another problem studies f(x) = x^3 − 6x^2 + p, where p is an arbitrary constant (an arbitrary constant being a fixed value, chosen for ease of calculation, that stays the same throughout the functions involved); and a matching exercise pairs graphs with the toolkit functions: the constant function, identity function, square function, cube function, square root function, reciprocal function, absolute value function, and cube root function. In the complex-analytic argument above, the auxiliary function k is identically 0 in the disk, which is a distinctive way of asserting that f is constant there.
https://www.aimsciences.org/article/doi/10.3934/proc.2015.0019
|
# American Institute of Mathematical Sciences
2015, 2015(special): 19-28. doi: 10.3934/proc.2015.0019
## Noncommutative bi-symplectic $\mathbb{N}Q$-algebras of weight 1
1 Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), Nicolás Cabrera 13–15, Cantoblanco, 28049 Madrid, Spain
Received September 2014 Revised September 2015 Published November 2015
It is well known that symplectic $\mathbb{N}Q$-manifolds of weight 1 are in 1-1 correspondence with Poisson manifolds. In this article, we prove a version of this correspondence in the framework of noncommutative algebraic geometry based on double derivations, as introduced by W. Crawley-Boevey, P. Etingof and V. Ginzburg. More precisely, we define noncommutative bi-symplectic $\mathbb{N}Q$-algebras and prove that bi-symplectic $\mathbb{N}Q$-algebras of weight 1 are in 1-1 correspondence with double Poisson algebras, as previously defined by M. Van den Bergh.
Citation: Luis Álvarez-Cónsul, David Fernández. Noncommutative bi-symplectic $\mathbb{N}Q$-algebras of weight 1. Conference Publications, 2015, 2015 (special) : 19-28. doi: 10.3934/proc.2015.0019
http://forum.zkoss.org/question/90421/how-to-catch-applet-client-exception/
|
# how to catch applet client exception
apam
24 1
Hi, I have a problem with an applet: sometimes users, due to continuous Java and browser updates and security restrictions, fail to accept the security alert and the applet does not load.
When this happens I get
SEVERE: [Desktop zmd:/authriv/menu.zul] client error: Method not found: findPrinter
in the log file, plus a ZK popup error with "method not found: findPrinter", because the applet class isn't loaded due to some browser or Java restriction.
I call the applet method on the server side in this way: applet.invoke("findPrinter", "HPxxx");
Is there a way to catch this exception in my code? Is there a way to tell on the server side whether the applet class is loaded?
thank you giuseppe
apam
24 1
Hi, thank you for your reply. I tried to catch the exception server-side in the way you showed me, but the exception is caught by ZK first (I tried again just a few minutes ago) and never passed to my code.
OK, I'll look into it, but can you give the full stack trace? And which ZK version (so I look at the correct class)?
(2014-01-14 07:35:04 +0800)
Are you sure that you have the method findPrinter and that the path to your applet is correct? (This is not really the kind of problem that works sometimes and fails other times.)
(2014-01-14 07:41:03 +0800)
apam
24 1
The applet works fine if the user does not block it (a user can block the applet by pressing the wrong button in the security popup). When the applet is blocked, the applet class is not loaded, and if I try to use a method like "findPrinter", ZK raises an exception. That is correct; I just want to catch this situation and inform the user about the problem, instead of showing the ZK popup exception. For this project I'm using ZK 6.0.2. I tried to use the zk.xml error-page element (http://books.zkoss.org/wiki/ZK%20Configuration%20Reference/zk.xml/The%20error-page%20Element) but without success.
chillworld
5322 4 9
https://github.com/chillw...
OK, I searched a little further. In ZK 7.0 (I don't know about older versions) we have this:
if (applet.isMayscript()) {
    applet.invoke("findPrinter", "HPxxx");
} else {
    // applet not available: inform the user instead of invoking
}
Can you try this?
apam
24 1
Sorry, I made a mistake while testing isMayscript(). It always returns false, even if the applet is correctly loaded. giuseppe
OK, I'll search further. Sorry that I have to ask you to try things, but I can't test your code myself. (I deleted my post because it was wrong; maybe you can also clean up a little.)
(2014-01-14 14:41:27 +0800)
apam
24 1
Wow, it seems to work!
I don't understand why, but I'll study the documentation. isMayscript works on 6.0.2 too. Thanks a lot.
Giuseppe
chioannou
0
I don't think that anything in ZK's Applet class will help you to catch it. I have the same problem with an applet that I use for printing, and I couldn't find a solution. One hard approach is to call a URL with the session id from the applet, informing the server that the applet is loaded, and only then unblock findPrinter(). I will first try to see if I can intercept ZK's messages.
thank you.
(2014-01-17 13:35:34 +0800)
http://medicaldiagnostics.asmedigitalcollection.asme.org/issue.aspx?journalid=189&issueid=936487&direction=P
|
### Editorial
ASME J of Medical Diagnostics. 2017;1(1):010201-010201-2. doi:10.1115/1.4037771.
FREE TO VIEW
Commentary by Dr. Valentin Fuster
### Review Article
ASME J of Medical Diagnostics. 2017;1(1):010801-010801-6. doi:10.1115/1.4038360.
Novel imaging technologies continued to be introduced into the operative setting. In particular, novel image-enhanced laparoscopic techniques are being explored for use in gynecologic operations. This systematic review describes these technologies in four relevant areas of gynecologic surgery. The PubMed database was searched for human, English-language studies, and the reference lists of retrieved articles were reviewed. An analysis of pooled data from 34 studies that met inclusion criteria was performed. The results suggest that image-enhanced technology may be useful in several common gynecologic procedures. Auto- and drug-enhanced fluorescence laparoscopy allow for increased detection of nonpigmented endometriotic lesions. Using these technologies for peritoneal staging of ovarian malignancy is of uncertain benefit. Drug-enhanced fluorescence laparoscopy for sentinel lymph node (SLN) detection in patients with uterine or cervical malignancy is feasible, showing a high rate of SLN detection, but a low sensitivity of identifying metastases. Finally, their use in intra-operative visualization of the ureter is promising. The majority of available data was from feasibility studies with limited sample sizes. Nevertheless, the results described in this systematic review support the expectation that these upcoming image-enhanced laparoscopy techniques will play a more important role in the future care of gynecologic patients.
### Research Papers
ASME J of Medical Diagnostics. 2017;1(1):011001-011001-6. doi:10.1115/1.4038129.
The aim of the study was to design a novel radiofrequency (RF) electrode for larger and rounder ablation volumes and to assess its ability to achieve complete ablation of liver tumors larger than 3 cm in diameter using the finite element method. A new RF expandable electrode comprising three parts (i.e., insulated shaft, changing shaft, and hooks) was designed. Two modes of this new electrode, namely a monopolar expandable electrode (MEE) and a hybrid expandable electrode (HEE), and a commercial expandable electrode (CEE) were investigated using liver tissue without (scenario I) and with (scenario II) a liver tumor. A temperature-controlled radiofrequency ablation (RFA) protocol with a target temperature of 95 °C and an ablation time of 15 min was used in the study. Both the volume and shape of the ablation zone were examined for all RF electrodes in scenario I. Then, the RF electrode with the best performance in scenario I and CEE were used to ablate a large liver tumor with a diameter of 3.5 cm (scenario II) to evaluate the effectiveness of complete tumor ablation of the designed RF electrode. In scenario I, the ablation volumes of CEE, HEE, and MEE were 12.11 cm3, 33.29 cm3, and 48.75 cm3, respectively. The values of the sphericity index (SI) of CEE, HEE, and MEE were 0.457, 0.957, and 0.976, respectively. The best performance was achieved by using MEE. In scenario II, the ablation volumes of MEE and CEE were 71.59 cm3 and 19.53 cm3, respectively. Also, a rounder ablation volume was achieved by using MEE compared to CEE (SI: 0.978 versus 0.596).
The study concluded that: (1) compared with CEE, both MEE and HEE produce larger and rounder ablation volumes owing to the larger electrode–tissue interface and rounder shape of hook deployment; (2) MEE performs best in producing a larger and rounder ablation volume; and (3) the computer simulation results show that MEE is also able to completely ablate a large liver tumor (i.e., 3.5 cm in diameter), with a safety margin of at least 0.785 cm.
ASME J of Medical Diagnostics. 2017;1(1):011002-011002-10. doi:10.1115/1.4038237.
Radiofrequency ablation (RFA) has emerged as an alternative treatment modality for treating various tumors with minimal intervention. The application of RFA in treating breast tumors is still in its infancy. Nevertheless, promising results have been obtained in treating early-stage localized breast cancer with the RFA procedure. The outcome of RFA depends strongly on the precise insertion of the electrode into the geometric center of the tumor. However, there remains a plausible chance of inaccuracies in electrode placement that can result in slight displacement of the electrode tip from the desired location during temperature-controlled RFA. The present numerical study aims at capturing the influence of inaccuracies in electrode placement on the input energy, treatment time, and damage to the surrounding healthy tissue during RFA of breast tumors. A thermo-electric analysis has been performed on a three-dimensional heterogeneous model of a multilayer breast with an embedded early-stage spherical tumor of 1.5 cm. The temperature distribution during RFA has been obtained by solving the coupled electric field equation and Pennes bioheat transfer equation, while the ablation volume has been computed using the Arrhenius cell death model. Significant variations in energy consumption, in the time required for complete tumor necrosis, and in the shape of the ablation volume were found among the different electrode positions considered in this study.
ASME J of Medical Diagnostics. 2017;1(1):011003-011003-7. doi:10.1115/1.4038228.
Two fused deposition modeling (FDM) three-dimensional (3D) printers using variable infill density patterns were employed to simulate human muscle, fat, and lung tissue as represented by Hounsfield units (HUs) in computed tomography (CT) scans. Eleven different commercial plastic filaments were assessed by measuring their mean HU on CT images of small cubes printed with different patterns. The HU values were proportional to the mean effective density of the cubes. Polylactic acid (PLA) filaments were chosen; they had good printing characteristics and acceptable HU. Such filaments obtained from two different vendors were then tested by printing two sets of cubes comprising 10 and 6 cubes with 100% to 20% and 100% to 50% infill densities, respectively. They were printed with different printing patterns named "Regular" and "Bricks," respectively. It was found that the HU values measured on the CT images of the 3D-printed cubes were proportional to the infill density, with slight differences between vendors and printers. The Regular pattern with infill densities of about 30%, 90%, and 100% was found to produce HUs equivalent to lung, fat, and muscle, respectively. This was confirmed with histograms of the respective regions of interest (ROIs). The assessment of popular 3D-printing materials resulted in the choice of PLA, which together with the proposed technique was found suitable for the adequate simulation of muscle, fat, and lung HU in printed patient-specific phantoms.
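The linear HU-versus-infill relation reported above can be inverted to choose a print density for a target tissue HU. The following is a minimal Python sketch; the function name and the calibration numbers are illustrative assumptions, not values measured in the study:

```python
def infill_for_target_hu(hu_a, density_a, hu_b, density_b, hu_target):
    """Fit HU = slope * density + intercept through two calibration
    cubes, then invert it for the infill density expected to produce
    hu_target.  Hypothetical helper; a real calibration would fit many
    cubes per printer and vendor, as the study did."""
    slope = (hu_b - hu_a) / (density_b - density_a)
    intercept = hu_a - slope * density_a
    return (hu_target - intercept) / slope

# Illustrative calibration: 100% infill -> +110 HU, 20% infill -> -740 HU.
# Solve for a muscle-like target of about +40 HU:
density = infill_for_target_hu(110, 1.0, -740, 0.2, 40)
```

With these made-up numbers the sketch returns an infill density of roughly 93%, consistent with the study's finding that near-100% infill mimics muscle.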
ASME J of Medical Diagnostics. 2017;1(1):011004-011004-5. doi:10.1115/1.4038259.
Overhead throwing athletes are at high risk of elbow ulnar collateral ligament (UCL) injury, and there is a need for clinical tools to objectively diagnose severity of injury and monitor recovery. Mechanical properties of ligaments can potentially be used as biomarkers of UCL health. The objectives of this study were to evaluate the reliability of shear wave ultrasound elastography (SWE) for quantifying UCL shear modulus in 16 healthy nonthrowing individuals and to use this technique to evaluate the difference in UCL shear modulus between the injured and uninjured elbows in a baseball pitcher with UCL tear. In the reliability test, the UCL shear modulus of both elbows of each participant was evaluated by SWE for five trials. The same procedures were repeated on two different days. The intra-day and day-to-day reliabilities were determined from the five measurements on the first day and the two averages on the two days, respectively. In the case study, each elbow of the baseball pitcher with UCL tear was tested for five trials, and the average was calculated. The intra-day (intraclass correlation coefficient (ICC) = 0.715, Cronbach's alpha = 0.926) and day-to-day (ICC = 0.948, Cronbach's alpha = 0.955) reliabilities were found to be good. There was no difference between the two sides. In the case study, the UCL shear modulus of the injured elbow (186.45 kPa) was much lower than that of the uninjured elbow (879.59 kPa). This study shows that SWE could be a reliable tool for quantifying the mechanical properties and health status of the UCL.
ASME J of Medical Diagnostics. 2017;1(1):011005-011005-10. doi:10.1115/1.4038260.
The presence of obstructions such as tracheal stenosis has important effects on respiratory function. Tracheal stenosis impacts the therapeutic efficacy of inhaled medications as a result of alterations in particle transport and deposition patterns. This study explores the effects of the presence and absence of stenosis/obstruction in the trachea on air flow characteristics and particle deposition. Computational fluid dynamics (CFD) simulations were performed on three-dimensional (3D) patient-specific models created from computed tomography (CT) images. The analyzed model was generated from a subject with tracheal stenosis and includes the airway tree up to eight generations. CT scans of expiratory and inspiratory phases were used for patient-specific boundary conditions. Comparison of pre- and post-intervention CFD simulations reveals the effect of the stenosis on the characteristics of air flow, transport, and deposition of particles with diameters of 1, 2.5, 4, 6, 8, and 10 μm. Results indicate that the existence of the stenosis inflicts a major pressure force on the flow of inhaled air, leading to increased deposition of particles both above and below the stenosis. Comparison of the pressure decrease in each generation between pre- and post-tracheal stenosis intervention demonstrated a significant reduction in pressure following the stenosis, which was maintained in all downstream generations. Good agreement was found in experimental validation of the CFD findings with a model of the control subject up to the third generation, constructed via additive layer manufacturing from CT images.
ASME J of Medical Diagnostics. 2017;1(1):011006-011006-8. doi:10.1115/1.4038261.
As the strongest of the meningeal tissues, the spinal dura mater plays an important role in the overall behavior of the spinal cord-meningeal complex (SCM). It follows that the accumulation of damage affects the dura mater's ability to protect the cord from excessive mechanical loads. Unfortunately, current computational investigations of spinal cord injury (SCI) etiology typically do not include postyield behavior. Therefore, a more detailed description of the material behavior of the spinal dura mater, including characterization of damage accumulation, is required to comprehensively study SCI. Continuum mechanics-based viscoelastic damage theories have been previously applied to other biological tissues; however, the current work is the first to report damage accumulation modeling in a tissue of the SCM complex. Longitudinal (i.e., cranial-to-caudal long-axis) samples of ovine cervical dura mater were tensioned-to-failure at one of three strain rates (quasi-static, 0.05/s, and 0.3/s). The resulting stress–strain data were fit to a hyperelastic continuum damage model to characterize the strain-rate-dependent subfailure and failure behavior. The results show that the damage behavior of the fibrous and matrix components of the dura mater is strain-rate dependent, with distinct behaviors when exposed to strain rates above those experienced during normal voluntary neck motion, suggesting the possible existence of a protective mechanism.
ASME J of Medical Diagnostics. 2017;1(1):011007-011007-7. doi:10.1115/1.4038408.
Changes in left ventricle (LV) shape are observed in patients with pulmonary hypertension (PH). Quantification of ventricular shape could serve as a tool to noninvasively monitor pediatric patients with PH. Decomposing the shape of a ventricle into a series of components and magnitudes will facilitate differentiation of healthy and PH subjects. Parasternal short-axis echo images acquired from 53 pediatric subjects with PH and 53 age- and sex-matched normal control subjects underwent speckle tracking using Velocity Vector Imaging (Siemens) to produce a series of x,y coordinates tracing the LV endocardium in each frame. Coordinates were converted to polar format, after which the Fourier transform was used to derive shape component magnitudes in each frame. Magnitudes of the first 11 components were normalized to heart size (magnitude/LV length as measured on the apical view) and analyzed across a single cardiac cycle. Logistic regression was used to test the predictive power of the method. Fourier decomposition produced a series of shape components from short-axis echo views of the LV. Mean values for all 11 components analyzed were significantly different between groups (p < 0.05). The accuracy index of the receiver operating characteristic curve was 0.85. Quantification of LV shape can differentiate normal pediatric subjects from those with PH. Shape analysis is a promising method to precisely describe the shape changes observed in PH. Differences between groups speak to the intraventricular coupling that occurs in right ventricular (RV) overload. Further analysis investigating the correlation of shape to clinical parameters is underway.
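The pipeline described above (trace the endocardium, convert the x,y contour to polar form, Fourier-transform it, and keep the low-order magnitudes) can be sketched in a few lines of Python. This is a generic illustration, not the study's code: the function name is invented, and magnitudes here are normalized by the number of contour samples rather than by LV length as in the study.

```python
import numpy as np

def shape_components(x, y, n_components=11):
    """Fourier shape-component magnitudes of a closed contour.
    Assumes points are ordered around the contour, as speckle-tracking
    tracings are."""
    # Polar conversion about the contour centroid
    r = np.hypot(x - x.mean(), y - y.mean())
    # Low-order Fourier magnitudes of the radius signal; component 0 is
    # overall size, component 2 captures elliptical deformation, etc.
    return np.abs(np.fft.rfft(r)[:n_components]) / len(r)

# A circle concentrates energy in component 0; flattening it into an
# ellipse adds energy at component 2.
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = shape_components(np.cos(theta), np.sin(theta))
ellipse = shape_components(1.5 * np.cos(theta), np.sin(theta))
```

Comparing `circle` and `ellipse` shows how a shape change concentrates in a single component, which is what makes the per-component group comparison in the study possible.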
Topics: Shapes, Pediatrics
### Technical Brief
ASME J of Medical Diagnostics. 2017;1(1):014501-014501-6. doi:10.1115/1.4038238.
The purpose of this study was to investigate the feasibility of generating larger ablation volumes using the pulse delivery method in irreversible electroporation (IRE) using a potato model. Ten types of pulse timing schemes and two pulse repetition rates (1 pulse per 200 ms and 1 pulse per 550 ms) were proposed in the study. Twenty in vitro experiments with five samples each were performed to check the effects of the ten pulse timing schemes and two pulse repetition rates on the ablation volumes. At the two pulse repetition rates (1 pulse per 200 ms and 1 pulse per 550 ms), the largest ablation volumes achieved were 1634.1 ± 122.6 mm³ and 1828.4 ± 160.9 mm³, respectively. Compared with the baseline approach (no pulse delays), the ablation volume was increased by approximately 62.8% and 22.6% at the repetition rates of 1 pulse per 200 ms and 1 pulse per 550 ms, respectively, using the pulse timing approach (with pulse delays). With the pulse timing approach, the ablation volumes generated at the lower pulse repetition rate were significantly larger than those generated at the higher pulse repetition rate (P < 0.001). For the experiments with one pulse train (baseline approach), the current was 5.2 ± 0.4 A. For the experiments with two pulse trains, the currents were 6.4 ± 0.9 A and 6.8 ± 0.9 A, respectively (P = 0.191). For the experiments with three pulse trains, the currents were 6.6 ± 0.6 A, 6.9 ± 0.6 A, and 6.5 ± 0.6 A, respectively (P = 0.216). For the experiments with five pulse trains, the currents were 6.6 ± 0.9 A, 6.9 ± 0.9 A, 6.5 ± 1.0 A, 6.5 ± 1.0 A, and 5.7 ± 1.2 A, respectively (P = 0.09).
This study concluded that: (1) compared with the baseline approach used clinically, the pulse timing approach is able to increase the ablation volume, although the best-performing pulse timing scheme may vary with tissue type; (2) the pulse timing approach remains effective in achieving larger ablation volumes when the pulse repetition rate changes, although the best pulse timing scheme may differ with the pulse repetition rate; and (3) the current in the baseline approach was significantly smaller than that in the pulse timing approach.
https://www.stsci.edu/hst/instrumentation/legacy/wfpc1-instrument-science-reports
## ISRs
Full versions of these ISRs can be accessed by contacting the STScI Help Desk.
###### Contamination Correction in SYNPHOT for WFPC2 and WF/PC-1 (96-02)
We have implemented a time-dependent photometric calibration of WFPC2 and WF/PC-1 within synphot based on the stellar photometric monitoring data. This provides an empirical correction for the build-up of uniform contaminants on the CCD faceplates of the WFPC2 and WF/PC-1. Although the contaminant issue is less severe for WFPC2, there is more UV science where the effect is still significant. We present the empirical models of the time-variable throughput decline in WFPC2 and WF/PC-1. To activate the correction within synphot, the keyword 'cont#' should be included in the obsmode with the Modified Julian Date as the parameter, e.g.: "wfpc2,1,f555w,a2d7,cal,cont#49500.0" or "pc,6,f555w,cal,dn,cont#49219." Note that the automatic pipeline does not include the contamination correction in its computation of the photometric header keywords; the correction must be applied manually by, for example, executing the synphot "bandpar" or "calcphot" tasks off line with the cont# keyword in the obsmode.
S. Baggett, et al.
###### WF/PC Observed PSF Library (93-03)
Wide Field/Planetary Camera (WF/PC-1) point spread functions (PSFs) are briefly discussed and the contents of the WF/PC-1 PSF Image Library are described. Details of the processing and extraction of PSF images from WF/PC-1 observations are given and the header format is presented. The filter and chip coverage are summarized along with a description of the original data used as a source for the PSFs. A complete list of all PSFs in the Library is included in the Appendix.
S. Baggett and J. MacKenty
###### WF/PC Photometric Monitoring Results (93-02)
WF2 and PC6 observations of the photometric standard star BD+75D325 have been taken monthly since May 1991 for photometric monitoring of the WF/PC-1. This report contains the results from this program through the end of 1992. All filters below F785LP show decay in throughput sensitivity which has been restored periodically by scheduled (and unscheduled) decontaminations.
C. Ritchie and J. MacKenty
###### The Evolution and Treatment of Hot Pixels in the WF/PC (93-01)
We examine the history of hot pixels in the WF/PC-1, and suggest improvements to the present dark calibration scheme. While new hot pixels continually appear (at ~30 per month per CCD), a substantial fraction (10 to 30%) are also lost during safings and decontaminations. For long exposures of faint targets, substantial improvement in dark calibrations can be obtained by using calibration data taken near the time of the science data. Calibrating with dark observations taken within one month of science observations reduces the number of residual hot pixels by a factor ~7 over the present pipeline reduction. Beginning in Spring 1993 dark calibration images will be taken at a rate of ~10 per month, which should give a large improvement (factor ~20) for future science observations where the dark calibration is an important limiting factor. Improvements related to "darktime" calculations and preflash calibration are also discussed.
J. Biretta and C. Ritchie
###### PSF Calibration Plan (92-13)
This report summarizes past and current proposals which obtain WF/PC-1 PSF images. An outline of future PSF calibration plan is presented.
S. Baggett and J. MacKenty
###### Currently Available Non-SV Flatfield Calibrations (92-12)
The 3973, 3974, 3977 and 3978 Non-SV Proposals took earth-illuminated flat fields for WF/PC filters used by GOs and GTOs during Cycle 1 which were not included in the SV program of flat fields. These proposals were executed in January 1992 and also in February/March 1992. This report summarizes the success of these proposals.
C. Ritchie and J. MacKenty
###### The Stability of Measles Features: an Autocorrelation Analysis (92-11)
The amplitude and character of "measles" on the WF/PC-1 have been monitored by studying autocorrelation functions for small sub-images. The measles appear to be stable in time after the last decontamination, although with significant changes across decontamination events. The character of the measles profile, although like a diffraction pattern in appearance, does not show the expected wavelength dependence of the far-field diffraction model.
W. Sparks, C. Ritchie and J. MacKenty
###### Deltaflat Corrections (92-10)
Observations taken after a decontamination procedure and processed with flatfield reference files created from data obtained prior to that decontamination will require a deltaflat correction. A deltaflat can also be used to reduce, although not eliminate, the effects of the 'measles' contamination. Deltaflats, ratios of pre- and post-decontamination internal calibration lamp exposures (INTFLATS), are currently being generated and installed into the Deltaflat Library which has been established in the Calibration Data Base (CDB). This report presents a snapshot of the contents of this library as well as lists of useful INTFLATs from which the user may wish to generate additional deltaflats. This memo is updated on STEIS as future additions are made to the library.
S. Baggett and J. MacKenty
###### Absolute Efficiency of the WF/PC (92-09)
The WF/PC-1 absolute sensitivity has been investigated. Revised DQE curves have been produced for synphot, and an additional component introduced. We allow for alternative uses of synphot: (1) as an observation simulator and predictor of exposure times, and (2) to provide an absolute photometric calibration for data obtained with the camera. The data from the WF/PC-1 IDT Final OV/SV Report and the Cycle 1 calibration photometric sweep are shown to be consistent with one another, and transformations between the WF/PC-1 photometric system and the synthetic photometry and flux system are described.
W.B. Sparks, C. Ritchie, J. MacKenty
###### Numbers and Characteristics of PC "Measles" Features from February through April 1992 (92-08)
We report preliminary results of a study of the number and types of measles seen in PC earth flat images taken from February through April 1992.
S. Baggett and J. MacKenty
###### WF/PC UV Calibration Following Decontamination (92-07)
The WF/PC-1 contains contaminants that limit throughput at UV wavelengths. The 1992.034,035 decontaminations were scheduled decontaminations to remove these contaminants, thereby temporarily restoring UV throughput and increasing throughput at longer wavelengths. The 1992.035 decontamination was followed by a UV observing campaign.
C. Ritchie and J. MacKenty
###### A Library of Observed WF/PC Point Spread Functions (92-05)
As an aid in the deconvolution of WF/PC-1 observations, a library of WF/PC-1 point spread function (PSF) images has been established in the STScI Calibration Database (CDB). Rather than storing entire WFPC datasets which already reside in the HST archives, the library consists of smaller, typically 256x256, sub-sections usually centered on the PSF star. Any of the extracted PSF images in the library may be requested from the STScI User Support Branch (USB) following the same procedures used for requesting calibration data. A complete PSF image name consists of the PSF rootname as provided in the DATA_FILE column of the tables in this memo plus the extensions '.r7h' for the ASCII header and '.r7d' for the binary data file.
S. Baggett and J. MacKenty
###### WF/PC Measles Contamination and Compensation with Delta Flats (92-04)
The WF/PC-1 camera heads were decontaminated on Day 1992.034 (Feb 3), following a long period of continuous cold operation of the WF/PC-1 CCD detectors during which many GO/GTO science observations were obtained. The decontamination procedure restored UV transmission (temporarily), blue performance, and reduced the internal scattered light. It also made the expected small changes in the flat field structure but did not re-introduce QEH.
J. MacKenty and S. Baggett
###### WF/PC Reference Files Currently in the Calibration Data Base(92-03)
This report contains a listing of all full-mode WF/PC-1 reference files, grouped by type, that are presently available in the Calibration Data Base (CDB) System. These tables are intended to inform observers as to the quality of the calibration applied to their data by the PODPS pipeline processing and to provide an aid in selecting appropriate reference files for the re-calibration of WF/PC-1 observations. An on-line version of this report is maintained on the Space Telescope Electronic Information System (STEIS). The datafiles described in this report may be requested from the STScI User Support Branch (USB) in the same fashion as any other non-proprietary data products.
S. Baggett and J. MacKenty
###### Estimation of the Current Status of the WF/PC UV Flood (92-02)
A method of assessing the impact of decontamination procedures on the UV flood and of estimating the likelihood that the next decontamination will cause a return of QEH has been developed. Application of this methodology to the WF/PC-1 is presently limited both by the absence of a suitable pre-flood calibration and by some inconsistencies in the existing data. Although there is a strong possibility that systematic effects may dominate the present analysis and produce a completely misleading answer, the existing data suggest that the ~10 decontaminations carried out to date have removed only ~50% of the UV flood applied in December 1990 and that another ~5 flash style decontaminations are unlikely to reintroduce the QEH problem.
J. MacKenty and C. Ritchie
###### Determination of the Position Dependent Zero Point Magnitude Correction for the Core Aperture Photometry with WF/PC (not WFPC ISR but TS 92-02)
The method for developing a position-dependent zero-point magnitude correction is outlined. In general, it has been found that magnitudes obtained using Core Aperture Photometry tend to increase toward the edges of the FoV, and magnitudes are brightest near the center of the chip. It is shown that the magnitude variation can be very easily corrected using a linear relation delta mu = a_1 r + a_0, where r is the radial distance from the most sensitive part of the camera. The magnitude correction makes it possible to measure magnitudes to an accuracy of 0.05 mag or better. This improvement is reassuring, especially considering that the flats for the WFC were not accurate enough for this calibration and were not used in determining the correction. It is expected that the correction will reach even higher accuracy once improved flats are obtained.
Ellyne Kinney and Roberto Gilmozzi
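The linear correction described above is straightforward to apply once the coefficients are fitted. A minimal Python sketch follows, with placeholder coefficients (the report's fitted a_1 and a_0 values are not reproduced here):

```python
import math

def zero_point_correction(x, y, x0, y0, a1, a0):
    """delta_mu = a1 * r + a0, with r the radial distance in pixels
    from the most sensitive point (x0, y0) of the chip.
    Coefficients here are illustrative, not the calibrated values."""
    r = math.hypot(x - x0, y - y0)
    return a1 * r + a0

# With these made-up coefficients, a star near the chip corner receives
# a larger zero-point offset than one at the sensitive center.
corr_center = zero_point_correction(400, 400, 400, 400, a1=1e-4, a0=0.0)
corr_corner = zero_point_correction(20, 20, 400, 400, a1=1e-4, a0=0.0)
```

The corrected magnitude is then simply the measured magnitude minus delta_mu for that pixel position.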
###### Position Changes for Standard Star Observations (92-01)
Observations of the photometric standard star BD+75D325 are made monthly with the WF/PC-1. Over time, there has been a shift in star location on the CCDs causing the star center in PC observations to fall near a blocked column despite a PDB aperture change for PC 6 in late June 1992 to prevent this from occurring.
C. Ritchie
###### Exposure Times for G200L Images of AGK+81D266 during UV Flood (91-08)
During the UV Flood procedure executed 25-29 Dec 1990, UV grism (G200L) exposures of AGK+81D266 (a hot circumpolar subdwarf O star and UV (not optical) flux standard) were taken on all 8 WF/PC chips. The purpose of these grism exposures was to verify the presence of UV sensitivity (i.e. absence of contamination) following high-temperature decontaminations and the UV flood. The relevant grism images extracted from the DMF show that the 5s (WF) and the 23s (PC) exposures slightly saturate the order 0 image, and weakly expose (20-40 DN/pixel) the order -1, +1, and +2 spectra of the target.
In the new UV flood procedure, which will become available in early 1992, the JPL exposure times of 40s (WF) and 160s (PC) will nicely increase the exposure level of the dispersed spectra, but will over-saturate the order 0 image causing charge to bleed down the columns. On most chips this bleeding will not seriously interfere with the order -1 and +1 spectra, from which the UV sensitivity is most easily gauged. However, on WF1 and WF3 the grism dispersion is roughly parallel to the columns. We therefore recommend using the JPL exposure times for all chips except WF1 and WF3, for which 10s exposures should be used to prevent bleeding.
K. Horne and J. MacKenty
###### Analysis of Stellar Monitor Proposal #3173 (91-07)
WFC and PC observations of BD+75D325 have been taken for 3 consecutive months through the Stellar Monitor proposal 3173. Aperture photometry shows stable sensitivities to a few percent in filters F439W, F555W, and F785LP. F336W is stable to 5-10 percent and has returned to the originally observed sensitivity since the Flash decontamination on July 5. The sensitivity is not stable in the UV, but has increased since the Flash decontamination.
C. Ritchie
###### July 6 Decontamination Affected QE Slightly (91-05)
The WF/PC-1 CCD's were decontaminated on July 6 after spacecraft conditions forced a temporary shutdown of WFC coolers. This decontamination operation reduced slightly the QE of PC8, but did not affect it as much as a similar decontamination executed in May.
S. Ewald
###### WF/PC Photometric Calibration during Jan-May 1991 (91-04)
Observations of 3 spectrophotometric standard stars taken between 31 Jan. 1991 and 23 May 1991 are used to investigate the sensitivity of the WF and PC cameras relative to the pre-launch baseline predictions. The analysis considers all spectrophotometric standard stars observed between the UV flood on 26-28 Dec. 1990 and the latest observation on 23 May 1991, which occurred after the deep safing event of 02 May 1991. The main results are summarized in Figures 3a and 3b, which show the ratio of observed to predicted count rates as a function of wavelength in the WF and PC cameras, respectively. Figures 4a and 4b compare the baseline and corrected quantum efficiency curves for WF2 and PC6, respectively.
At optical wavelengths the measurements indicate stable sensitivities in WF2 and PC6 to a few percent in F555W and F785LP, and to 5-10 percent in F336W. In the UV the sensitivity is not stable.
Comparison of the observed and predicted counts indicates that the ground-based sensitivity calibrations differ significantly from on-orbit performance. The sensitivity in F785LP is down by 35-40 percent in WF2 and PC6,7, confirming the result of our earlier analysis of Omega Cen data. At F555W, WF2 is up by 10 percent while PC6 and 7 are down by 10-20 percent. At F439W, WF2 is up by 20 percent, PC6 is down by 10-15 percent, and PC7 is down by 35 percent. At F336W, WF2 is up 70-90 percent while PC6 is down 45-50 percent. F284W is up by 45-60 percent in WF2, and the other UV filters are down by 20-97 percent and unstable as noted above.
The issue of flat field normalization in relation to photometric calibrations is briefly discussed. A recommendation is made that flat fields be normalized to the median value in a 200 by 200 pixel box centered on the prime chip (WF2 or PC6) rather than to a mean over all 4 chips.
K. Horne, L. Walter, C. Ritchie
###### WF/PC Contamination Control (91-03)
The prediction of contaminant deposition on the cold (-100C) charge coupled device (CCD) sensors of the Wide Field Planetary Camera (WF/PC-1) due to sources internal to the instrument is crucial to the evaluation of expected performance and to the assessment of approaches for improvement. An integral component in such predictions is a model of the transport from the internal sources to the CCD's. In the present work, the model used is based on the Jet Propulsion Laboratory (JPL) Contamination Analysis Program (CAP).
A CAP model comprises a geometric multinodal representation of the instrument, internodal shape factors for line of sight transport, nodal contaminant sources and the necessary deposition and re-emission kinetics. In CAP, indirect transport by way of intermediate nodes (of considerable importance to the internal problem) is explicitly calculated as a diffuse reflection for each internodal exchange. This leads to rather long computer run times for each required 30 day prediction for separate sources and for various internal modifications to reduce the accumulation on the CCD sensors. A method has been developed to pre-calculate the effective total transport factors from each source node to each receiving node (including the CCD sensors) and from each re-emitting node to each receiving node. The effects of this preprocessing calculation are to sharply reduce the number of nodes, to increase the allowable time step in the transient CAP analysis, and to greatly reduce the run time.
Some results of this work to date are presented and the interpretation of the results are discussed. The discussion includes the implications of the results for other space instruments.
J. Barengoltz, J. Millard, T. Jenkins, D. Taylor
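The pre-computed "effective total transport factors" described above amount to summing direct line-of-sight transport plus every order of diffuse re-emission through intermediate nodes. Under a strong simplification (a single constant re-emission fraction per node, time-independent kinetics), that sum is a matrix geometric series. The sketch below is a generic Python illustration of the idea, not the JPL CAP formulation:

```python
import numpy as np

def effective_transport(F, rho):
    """Total arrivals at each node per unit mass emitted, counting all
    orders of re-emission.

    F   : F[i, j] = fraction of mass leaving node i that reaches node j
          directly (line-of-sight exchange factor).
    rho : rho[i] = fraction re-emitted from node i; the rest deposits.

    T = F + F R F + F R F R F + ... = F (I - R F)^-1, with R = diag(rho).
    The series converges when the spectral radius of R F is below 1.
    """
    R = np.diag(rho)
    return F @ np.linalg.inv(np.eye(F.shape[0]) - R @ F)

# Two-node example: everything emitted at node 0 reaches node 1, which
# re-emits half of it back to node 0 (where it all sticks).
F = np.array([[0.0, 1.0], [1.0, 0.0]])
T = effective_transport(F, np.array([0.0, 0.5]))
```

Here `T[0, 0]` comes out to 0.5 (the re-emitted half returning from node 1), matching a hand count; pre-computing `T` once replaces the repeated bounce-by-bounce calculation, which is the run-time saving the ISR describes.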
###### Spacecraft Jitter: Its Effect on the HST PSF and on the "Breathing" (91-02)
The effect of the spacecraft jitter on the HST Point Spread Function is analyzed, with special reference to the terminator oscillations. It is found that the jitter may substantially affect the shape of the PSF, seriously impairing the chances of accurate photometry. However, the true PSF can be modelled by convolving the unperturbed PSF with the jitter PSF obtained from the FGS pointing information, provided the observations were conducted in fine lock. The best results are obtained for high telemetry rates (better sampling of the jitter). The true PSF so obtained can then be used to deconvolve WF/PC-1 images.
R. Gilmozzi
###### Post-Flood PC quantum Efficiency (91-01)
PC frames of [Omega] Centauri taken on January 2, 1991 through 10 filters have been used to determine the post-flood quantum efficiency of the PC. The [Omega] Cen frames were taken shortly after the UV flood and subsequent safing event. Observed count rates were obtained by aperture photometry with 60 and 10 pixel radii. Ground-based photometry was used to estimate the spectra of the [Omega] Cen targets. The spectrum for each target was then used with WF/PC-1 passbands from the CDBS database (which are those presented in the 1990 WF/PC-1 Handbook) to calculate the predicted total count rates for each WF/PC-1 filter.
There is little agreement for the count rates in the UV filters, which vary from 2 times lower to 2 times higher than predicted. These results are not currently understood. The count rates for F439W, F547M, F555W, F702W, and F718M are about right, while those for F785LP and F889N are about 2 times lower than predicted.
K. Horne and C. Ritchie
###### Technical Report on Wide Field Camera Observations of 2237+0305 (SV 3068) (90-10)
Our analysis of five Wide Field Camera observations of 2237+0305 yielded the following results: 1) The standard calibration performed by the Space Telescope Science Institute's software satisfactorily repairs the analog-to-digital conversion errors in the raw data. 2) The standard calibration software overestimates the bias plus preflash level by nearly one electron. 3) The use of flat fields taken during the vacuum testing of the camera has the potential to produce misleading results. 4) The noise properties of the images can be characterized by a readout plus preflash noise of ~1.9 electrons and Poisson statistics in the sky. 5) Centroids of well exposed objects (several thousand photons in the central pixel) can be determined to an accuracy of a few hundredths of a pixel. 6) The preflashes appear to remove low light level charge transfer inefficiencies. 7) Relative positions of objects measured from Wide Field Camera and Faint Object Camera data agree to a few milliarcseconds over an area a few arcseconds across.
D. Schneider & J. Bahcall
###### WF/PC Internal Molecular Contamination During System T-V Test (90-08)
During a recent System Thermal-Vacuum (STV) test of the Wide Field/Planetary Camera (WF/PC-1), instrumentation was added to characterize the internal molecular contamination using a temperature-controlled Quartz Crystal Microbalance (TQCM) and several ultraviolet (UV) light sources. Correlation of these data with detailed temperature data for various elements in the WF/PC-1 provided significant insight into the sources and the effect of extremely low levels of internal contaminants on the far-UV instrument throughput. As a result, the WF/PC design was modified to ensure on-orbit temperature control capability from -80 to 100 degrees C, and guidance was provided for on-orbit operations during UV observations.
R. Griffith
###### "Core" Aperture Photometry with WF/PC (90-07)
The effect of the variable, structured PSF of HST on aperture photometry is investigated. It is shown, both theoretically and practically, that the best results are obtained using extremely small apertures (0.15" radius). The choice of a "background" annulus of 0.2" width and 0.3" radius (i.e. well within the PSF) allows the photometry of moderately crowded fields with accuracies only slightly worse than those obtained by PSF-fitting techniques. With this approach the effect of the varying PSF is taken into account by a position-dependent zero point calibration whose determination procedure is outlined. This technique is applied to WF/PC data on NGC 1850, using standard IRAF packages. It is shown that V photometry accurate to 20% (r.m.s.) can be achieved on stars of mv ≈ 24 in ~1000 seconds.
R. Gilmozzi
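The core-aperture technique described in this report can be sketched numerically. This is an illustrative Python/NumPy reconstruction, not the IRAF procedure actually used; the function name and the synthetic image are invented for the example.

```python
import numpy as np

def core_aperture_photometry(image, x0, y0, r_ap, r_bg_in, r_bg_out):
    """Small-aperture photometry with a mean-background annulus.

    Sum the counts inside a small aperture of radius r_ap (pixels)
    and subtract a background level estimated as the mean of the
    pixels in the surrounding annulus [r_bg_in, r_bg_out].
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_bg_in) & (r <= r_bg_out)
    background = image[annulus].mean()               # per-pixel background
    return image[aperture].sum() - background * aperture.sum()

# Synthetic check: flat sky of 5 counts/pixel plus a 100-count point source.
img = np.full((64, 64), 5.0)
img[32, 32] += 100.0
flux = core_aperture_photometry(img, 32, 32, r_ap=3, r_bg_in=5, r_bg_out=8)
print(round(flux, 1))  # → 100.0, the injected source flux
```

With HST's aberrated PSF the small aperture captures only a position-dependent fraction of the true flux, which is why the report pairs this measurement with a position-dependent zero-point calibration.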
###### A Pre-Flood Checkup on WF/PC Quantum Efficiency Using the STSDAS SYNPHOT Software Package (90-06)
We describe a recent attempt to verify the throughput and quantum efficiency of the WF/PC relative to the pre-flight baseline calibration, using the following filters: F230W, F555W, and F785LP. Using aperture photometry, the observed count rates were obtained and the aperture corrections applied were based on a mean encircled energy curve. The targets used are not spectrophotometric standards because the WF/PC had not yet measured such standards. Therefore the spectrum of each target is based on ground based photometry and, in some cases, knowledge of the spectral type. The predicted total count rates for the filters were computed from the target spectrum.
The results are in agreement with earlier results from S. Faber, WF/PC IDT. The rates are 4 times lower than predicted for F230W, roughly correct for F555W, and approximately 2 times lower for F785LP. Because the WF/PC has yet to be UV flooded, the low UV sensitivity is expected. However, the low infrared response has yet to be understood. One possibility is that the laboratory measurement was in error. This report also details the analysis steps using the new STSDAS.SYNPHOT software package.
K. Horne, E. Wyckoff, and D. Bazell
###### A Pre-Flood Study of WF/PC Photometric Stability Using Aperture Photometry on NGC 188: Data Taken in August and September (90-05)
A relatively uncrowded field of the Galactic cluster NGC 188 was studied in order to learn about the level of photometric stability of the Wide Field Camera and OTA. Different methods of aperture photometry were applied to images taken a month apart in order to determine which was the most successful at producing repeatability in the data, which, as a result of spherical aberration, suffers most dramatically from a position-dependent point spread function (PDPSF).
The results depend heavily on the size of the aperture radius and the position and size of the background annulus. The selection of these parameters can create quite different results that range from systematic shifts of several percent to scatter about the fit of up to 45% rms. Of the setups tried, they find that the parameter selections that produce the best result are an aperture radius of 0.15", an annulus radius of 0.2", an annulus width of 0.1", and a mean background determination of the pixels in the annulus. This is based on the assumption that the 'best' result is the one where the line fit to the data has the smallest offset from zero, balanced with the smallest amount of scatter about this fit.
A compromise must be made between two major difficulties that affect the quality of aperture photometry that can be done with HST. On one side there is the PDPSF, which could be compensated for by using a large aperture to include all flux independent of shape. This, however, is complicated by the fact that the S/N ratio decreases very quickly with distance from the core, so large apertures produce noisy measurements. On the other, photometry could be done in the core only, where the S/N is high; however, the PDPSF causes a different fraction of the PSF to be measured in a small aperture at different positions. They believe that this effect can be calibrated using a position-dependent zero point for the photometry. Even though core aperture photometry is the method most sensitive to the variable PSF, the fact that they find results as good as or better than large-aperture photometry implies that the results will be much better when they can correct for a variable zero point.
E. Wyckoff, R. Gilmozzi, K. Horne
###### Sensitivity at 9000A (90-04)
Analysis of WF/PC standard star observations by both the IDT (Faber) and the STScI (Horne et al.) shows that the sensitivity at 5500A is within 90% of the value expected from the Thermal Vacuum Test 6 (TV6) results, while that at 9000A is only 58% of what was predicted (as read from Faber's plot and Figure 1 of the Horne et al. report). No reason for the low response in the near IR is known.
P. Seitzer
###### The Take Data Flag (90-03)
S. Ewald
###### Filter F656N Anomaly I (90-02)
The H-alpha narrow band filter in WF/PC-1, F656N, has deteriorated since delivery. A series of concentric rings is visible on exposures taken through this filter. The rings are centered near pixel (300,600) of CCD WF1 and extend across all four chips. Also, CCD WF4 shows fringes and filter F664N probably has a ghost.
S. Ewald
###### Bias Change During TV6 (90-01)
Analysis of WFPC images taken during JPL thermal vacuum tests has always shown a low level (~0.5 DN) difference of bias level between the odd and the even numbered columns. During TV6 (February - March, 1988) this pattern changed from even columns having the higher bias level to the odd columns being higher. The change occurred steadily over the duration of the TV6 and does not show a correlation with UV Flooding or decontamination (bake-out) activity. The mean bias level of the odd and even columns together shows a steady increase of about two DN per day over the twenty-five days of the TV6 test.
C. Ritchie and S. Ewald
# Chapter 1 - Introduction to Algebraic Expressions - 1.1 Introduction to Algebra - 1.1 Exercise Set - Page 11: 90
perimeter = $4s$
#### Work Step by Step
A square is a 4-sided figure where all 4 sides are equal in length. The perimeter of a square is the sum of the lengths of the sides. If the side of a square has length $s$, the sum of the lengths of the 4 sides is then $s+s+s+s$, or $4s$.
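The formula can be checked with a short sketch (the function name here is ours, not part of the exercise):

```python
# Perimeter of a square: the four equal sides summed, s + s + s + s = 4s.
def square_perimeter(s):
    return 4 * s

print(square_perimeter(5))  # → 20
```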
# Decoupling Capacitor Calculation
I have a few questions related to the schematic (BBB_SCH), sheet 2, in the link.
1. There is a bulk decoupling capacitor of 10uF, 10V connected to pin 10 (AC) of the PMIC tps65217.pdf. Can someone please explain how this value is calculated? I know C = I * dt/dv, but where in the datasheet can I find I, dt, and dv?
2. There is a 100K ohm resistor R1 on the INT pin, number 45. Can someone please let me know how this value is calculated?
3. Also, on sheet 3 there is a SN74AUC1G74 which is connected to the CEC Clock for the HDMI Framer. Can someone please let me know if this is a clock for the HDMI?
Regards
Nick
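The C = I * dt/dv sizing rule from question 1 can be sketched numerically. The numbers below are illustrative placeholders, not values from the TPS65217 datasheet (which is exactly what the question asks about): a hypothetical load step I, supply response time dt, and allowed ripple dv.

```python
# Bulk decoupling capacitor sizing from C = I * dt / dV.
# All three inputs are ILLUSTRATIVE placeholders, not datasheet values:
#   load_step_a      - worst-case load current step the cap must supply (A)
#   response_time_s  - time before the upstream supply responds (s)
#   allowed_ripple_v - acceptable voltage droop during that time (V)
def bulk_capacitance(load_step_a, response_time_s, allowed_ripple_v):
    """Return the minimum bulk capacitance in farads."""
    return load_step_a * response_time_s / allowed_ripple_v

C = bulk_capacitance(load_step_a=0.5, response_time_s=2e-6, allowed_ripple_v=0.1)
print(f"{C * 1e6:.0f} uF")  # → 10 uF for these example numbers
```

In practice PMIC vendors usually just recommend an input capacitor value directly; the formula only shows why a value of this order is plausible.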
# A certain bridge is 4,024 feet long. Approximately how many minutes...
Original post by nalinnair, 26 May 2016:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
EMPOWERgmat Instructor (Rich), 20 Mar 2017:
Hi amandakay,
The GMAT would never expect you to calculate the exact value of 4024/5280, so if you're ever thinking about doing something so "calculation heavy", then there's almost certainly a faster or easier way to get to the correct answer. In this prompt, you should notice the word "approximately" - that means that you DON'T have to calculate that fraction, but you do have to estimate the value.
If we 'round down', then 4024 ft./5280 ft. is a little less than 4/5 of a mile. Knowing THAT, and that 20 mph is 1 mile every 3 minutes, can you correctly answer the question?
GMAT assassins aren't born, they're made,
Rich
Intern, 19 Mar 2017:
Here's another method that allowed me to solve the question in approximately 30 seconds:
20 miles per hour translates to traveling 1 mile every 3 minutes. Since the bridge is longer than 1/2 mile but shorter than 1 mile, the answer must be between 1.5 and 3 minutes. The only answer choice that fits this range is 2 minutes.
##### General Discussion
Manager, 26 May 2016:
In this question, units of measurement have to be handled carefully.
Distance: 4,024 feet, which is (4024/5280) miles ≈ 0.76 miles.
Speed: 20 miles per hour.
Time = Distance / Speed = 0.76/20 hours. Multiply by 60 to get the answer in minutes: about 2.28 minutes.
The answer is 2 minutes.
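The arithmetic above can be verified exactly with a few lines (an illustrative check, not part of any posted solution):

```python
# Exact time to cross a 4,024 ft bridge at a constant 20 mph.
FEET_PER_MILE = 5280

distance_miles = 4024 / FEET_PER_MILE   # ~0.762 miles
speed_mpm = 20 / 60                     # 20 mph -> 1/3 mile per minute
time_minutes = distance_miles / speed_mpm

print(round(time_minutes, 2))  # → 2.29, i.e. about 2 minutes (choice B)
```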
Intern, 14 Jun 2016:
nalinnair wrote:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
First, convert the speed to feet per minute.
20 miles per hour = ( 20 * 5280 ) feet per 60 minutes.
Time = $$\frac{Distance}{Speed}$$ = $$\frac{4024}{(20 * 5280 ) / 60}$$ = $$\frac{4024 *60}{20 * 5280}$$ = $$\frac{4024 * 3}{5280}$$ = $$\frac{4024}{5280}$$*3
$$\frac{4024}{5280}$$ < 1. Therefore, Time < 3.
We can eliminate (A) because $$\frac{4024}{5280}$$ is not equal to $$\frac{1}{3}$$. The only answer choice remaining is (B). Therefore, (B) is the answer.
VP, 14 Jun 2016:
~(3/4 mile) / (1/3 mile per minute) ≈ 2 minutes
Intern (amandakay), 19 Mar 2017:
Does anyone have a tip for a quick way to solve the division of 4024/5280? This is where I spend the majority of my time on the problem and am wondering if I can do something faster. I do a lot of trial and error.
Director, 20 Mar 2017:
nalinnair wrote:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
distance = 4024 ft = 4024/5280 miles
time taken = (4024/5280)/20 hours ≈ 0.038 hours ≈ 2.29 minutes, close to 2 minutes
option B
Manager, 23 Jun 2017:
Approximation!
D=4000
R=20mph
Convert it to miles per minute = $$\frac{20}{60}$$
R = $$\frac{1}{3}$$
1 mile = approx 5000 feet
$$T=\frac{D}{R}$$ $$=\frac{(4000*3)}{5000}$$ $$=\frac{12000}{5000}$$ $$=\frac{12}{5}$$ = 2.4
Since the denominator is slightly more than 5000, the value will be less than 2.4.
i.e B
VP, 07 Apr 2018:
nalinnair wrote:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
hello friends
here is a quick way to solve it:
if 20 miles take 60 min, then 1 mile takes 3 min
the length of the bridge (4,024 feet) is shorter than 1 mile (5,280 feet) but longer than half a mile, so the time is between 1.5 and 3 minutes
the only option is B
Manager, 19 Aug 2018:
Convert the rate from miles per hour to feet per minute:
20 miles per hour = 20*5280/60 = 1,760 feet per minute.
Now divide distance by rate to find the time:
Time = 4024/1760 ≈ 2.3, so about 2 minutes.
Target Test Prep Representative (Scott), 23 Aug 2018:
nalinnair wrote:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
The length of the bridge, in miles, is:
4,024/5,280 = 503/660, which is about 500/660 = 50/66 = 25/33.
Using the formula: time = distance/rate, we see that the time, in hours, is:
[25/33]/20 = 25/(33 x 20) hours
So the time in minutes is:
25/(33 x 20) x 60 = (25 x 3)/33 = 25/11, which is about 2 minutes.
Intern, 13 Oct 2018:
nalinnair wrote:
A certain bridge is 4,024 feet long. Approximately how many minutes does it take to cross this bridge at a constant speed of 20 miles per hour? (1 mile = 5,280 feet)
(A) 1
(B) 2
(C) 4
(D) 6
(E) 7
I used a combination of algebra and backsolving to tackle this problem in under a minute:
Figure out the rate in ft/min: $$\frac{20\text{ mi}}{1\text{ hr}}\times\frac{1\text{ hr}}{60\text{ min}}\times\frac{5280\text{ ft}}{1\text{ mi}} = \frac{5280\text{ ft}}{3\text{ min}}$$
By inspection, (B) is the only answer that makes sense: 4,024 is about 3/4 of 5,280, so the time is about 3/4 of 3 minutes.
# The Universe of Discourse
Sun, 07 Feb 2016
I've read a bunch of 19-century English novels lately. I'm not exactly sure why; it just sort of happened. But it's been a lot of fun. When I was small, my mother told me more than once that people often dislike these books because they are made to read them too young; the books were written for adult readers and don't make sense to children. I deliberately waited to read most of these, and I am very pleased now to find that now that I am approaching middle age I enjoy books that were written for people approaching middle age.
Spoilers abound.
#### Jane Eyre
This is one of my wife's favorite books, or perhaps her very favorite, but I had not read it before. Wow, it's great! Jane is as fully three-dimensional as anyone in fiction.
I had read The Eyre Affair, which unfortunately spoiled a lot of the plot for me, including the Big Shocker; I kept wondering how I would feel if I didn't know what was coming next. Fortunately I didn't remember all the details.
• From her name, I had expected Blanche Ingram to be pale and limp; I was not expecting a dark, disdainful beauty.
• When Jane tells Rochester she must leave, he promises to find her another position, and the one he claims to have found is hilariously unattractive: she will be the governess to the five daughters of Mrs. Dionysus O'Gall of Bitternutt Lodge, somewhere in the ass-end of Ireland.
• What a thrill when Jane proclaims “I am an independent woman now”! But she has achieved this by luck; she inherited a fortune from her long-lost uncle. That was pretty much the only possible path, and it makes an interesting counterpoint to Vanity Fair, which treats some of the same concerns.
• The thought of dutifully fulfilling the duty of a dutiful married person by having dutiful sex with the dutiful Mr. Rivers makes my skin crawl. I imagine that Jane felt the same way.
• Mr. Brocklehurst does not get one-tenth of what he deserves.
Jane Eyre set me off on a Victorian novel kick. The preface of Jane Eyre praises William Thackeray and Vanity Fair in particular. So I thought I'd read some Thackeray and see how I liked that. Then for some reason I read Silas Marner instead of Vanity Fair. I'm not sure how that happened.
#### Silas Marner
Silas Marner was the big surprise of this batch of books. I don't know why I had always imagined Silas Marner would be the very dreariest and most tedious of all Victorian novels. But Silas Marner is quite short, and I found it very sweet and charming.
I do not suppose my Gentle Readers are as likely to be familiar with Silas Marner as with Jane Eyre. As a young man, Silas is a member of a rigid, inward-looking religious sect. His best friend frames him for a crime, and he is cast out. Feeling abandoned by society and by God, he settles in Raveloe and becomes a miser, almost a hermit. Many years pass, and his hoarded gold is stolen, leaving him bereft. But one snowy evening a two-year-old girl stumbles into his house and brings new purpose to his life. I have omitted the subplot here, but it's a good subplot.
One of the scenes I particularly enjoyed concerns Silas’ first (and apparently only) attempt to discipline his adopted two-year-old daughter Eppie, with whom he is utterly besotted. Silas knows that sooner or later he will have to, but he doesn't know how—striking her seems unthinkable—and consults his neighbors. One suggests that he shut her in the dark, dirty coal-hole by the fireplace. When Eppie wanders away one day, Silas tries to be stern.
“Eppie must go into the coal hole for being naughty. Daddy must put her in the coal hole.”
He half expected that this would be shock enough and that Eppie would begin to cry. But instead of that she began to shake herself on his knee as if the proposition opened a pleasing novelty.
As they say, no plan survives contact with the enemy.
Seeing that he must proceed to extremities, he put her into the coal hole, and held the door closed, with a trembling sense that he was using a strong measure. For a moment there was silence but then came a little cry “Opy, opy!” and Silas let her out again…
Silas gets her cleaned up and changes her clothes, and is about to settle back to his work
when she peeped out at him with black face and hands again and said “Eppie in de toal hole!”
Two-year-olds are like that: you would probably strangle them, if they weren't so hilariously cute.
Everyone in this book gets what they deserve, except the hapless Nancy Lammeter, who gets a raw deal. But it's a deal partly of her own making. As Thackeray says of Lady Crawley, in a somewhat similar circumstance, “a title and a coach and four are toys more precious than happiness in Vanity Fair”.
There is a chapter about a local rustics at the pub which may remind you that human intercourse could be plenty tiresome even before the invention of social media. The one guy who makes everything into an argument will be quite familiar to my Gentle Readers.
I have added Silas Marner to the long list of books that I am glad I was not forced to read when I was younger.
#### The Old Curiosity Shop
Unlike Silas Marner, I know why I read this one. In the park near my house is a statue of Charles Dickens and Little Nell, on which my daughter Toph is accustomed to climb. As she inevitably asked me who it was a statue of, I explained that Dickens was a famous writer, and Nell is a character in a book by Dickens. She then asked me what the book was about, and who Nell was, and I did not know. I said I would read the book and find out, so here we are.
My experience with Dickens is very mixed. Dickens was always my mother's number one example of a writer that people were forced to read when too young. My grandfather had read me A Christmas Carol when I was young, and I think I liked it, but probably a lot of it went over my head. When I was about twenty-two I decided to write a parody of it, which meant I had to read it first, but I found it much better than I expected, and too good to be worth parodying. I have reread it a couple of times since; it is very much worth going back to, and is much better than its many imitators.
I had been required to read Great Expectations in high school, had not cared for it, and had stopped after four or five chapters. But as an adult I kept a copy in my house for many years, waiting for the day when I might try again, and when I was thirty-five I did try again, and I loved it.
Everyone agrees that Great Expectations is one of Dickens’ best, and so it is not too surprising that I was much less impressed with Martin Chuzzlewit when I tried that a couple of years later. I remember liking Mark Tapley, but I fell off the bus shortly after Martin came to America, and I did not get back on.
A few years ago I tried reading The Pickwick Papers, which my mother said should only be read by middle-aged people, and I have not yet finished it. It is supposed to be funny, and I almost never find funny books funny, except when they are read aloud. (When I tell people this, they inevitably name their favorite funny books: “Oh, but you thought The Hitchhiker’s Guide to the Galaxy was funny, didn't you?” or whatever. Sorry, I did not. There are a few exceptions; the only one that comes to mind is Stanisław Lem's The Cyberiad, which splits my sides every time. SEVEN!)
Anyway, I digress. The Old Curiosity Shop was extremely popular when it was new. You always hear the same two stories about it: that crowds assembled at the wharves in New York to get spoilers from the seamen who might have read the new installments already, and that Oscar Wilde once said “one must have a heart of stone to read the death of Little Nell without laughing.” So I was not expecting too much, and indeed The Old Curiosity Shop is a book with serious problems.
Chief among them: it was published in installments, and about a third of the way through writing it Dickens seems to have changed his mind about how he wanted it to go, but by then it was too late to go back and change it. Nell and her grandfather are the protagonists; the villain is the terrifying Daniel Quilp. It seems at first that Nell's brother Fred is going to be important, but he disappears and does not come back until the last page when we find out he has been dead for some time. It seems that Quilp's relations with his tyrannized wife are going to be important, but Quilp soon moves out of his house and leaves Mrs. Quilp more or less alone. It seems that Quilp is going to pursue the thirteen-year-old Nell sexually, but Nell and Grandpa flee in the night and Quilp never meets them again. They spend the rest of the book traveling from place to place not doing much, while Quilp plots against Nell's friend Kit Nubbles.
Dickens doesn't even bother to invent names for many of the characters. There is Nell’s unnamed grandfather; the old bachelor; the kind schoolmaster; the young student; the guy who talks to the fire in the factory in Birmingham; and the old single gentleman.
The high point of the book for me was the development of Dick Swiveller. When I first met Dick I judged him to be completely worthless; we later learn that Dick keeps a memorandum book with a list of streets he must not go into, lest he bump into one of his legion of creditors. But Dick turns out to have some surprises in him. Quilp's lawyer Sampson Brass is forced to take on Swiveller as a clerk, in furtherance of Quilp's scheme to get Swiveller married to Nell, another subplot that comes to nothing. While there, Swiveller, with nothing to amuse himself, teaches the Brasses’ tiny servant, a slave so starved and downtrodden that she has never been given a name, to play cribbage. She later runs away from the Brasses, and Dick names her Sophronia Sphynx, which he feels is “euphonious and genteel, and furthermore indicative of mystery.” He eventually marries her, “and they played many hundred thousand games of cribbage together.”
I'm not alone in finding Dick and Sophronia to be the most interesting part of The Old Curiosity Shop. The anonymous author of the excellent blog A Reasonable Quantity of Butter agrees with me, and so does G.K. Chesterton:
The real hero and heroine of The Old Curiosity Shop are of course Dick Swiveller and [Sophronia]. It is significant in a sense that these two sane, strong, living, and lovable human beings are the only two, or almost the only two, people in the story who do not run after Little Nell. They have something better to do than to go on that shadowy chase after that cheerless phantom.
Today is Dickens’ 204th birthday. Happy birthday, Charles!
#### Vanity Fair
I finally did get to Vanity Fair, which I am only a quarter of the way through. It seems that Vanity Fair is going to live or die on the strength of its protagonist Becky Sharp.
When I first met Ms. Sharp, I thought I would love her. She is independent, clever, and sharp-tongued. But she quickly turned out to be scheming, manipulative, and mercenary. She might be hateful if the people she was manipulating were not quite such a flock of nincompoops and poltroons. I do not love her, but I love watching her, and I partly hope that her schemes succeed, although I rather suspect that she will sabotage herself and undo all her own best plans.
Becky, like Jane Eyre, is a penniless orphan. She wants money, and in Victorian England there are only two ways for her to get it: She can marry it or inherit it. Unlike Jane, she does not have a long-lost wealthy uncle (at least, not so far) so she schemes to get it by marriage. It's not very creditable, but one can't feel too righteous about it; she is in the crappy situation of being a woman in Victorian England, and she is working hard to make the best of it. She is extremely cynical, but the disagreeable thing about a cynic is that they refuse to pretend that things are better than they are. I don't think she has done anything actually wrong, and so far her main path to success has been to act so helpful and agreeable that everyone loves her, so I worry that I may come out of this feeling that Thackeray does not give her a fair shake.
In the part of the book I am reading, she has just married the exceptionally stupid Rawdon Crawley. I chuckle to think of the flattering lies she must tell him when they are in the sack. She has married him because he is the favorite relative of his rich but infirm aunt. I wonder at this, because the plan does not seem up to Becky’s standards: what if the old lady hangs on for another ten years? But perhaps she has a plan B that hasn't yet been explained.
Thackeray says that Becky is very good-looking, but in his illustrations she has a beaky nose and an unpleasant, predatory grin. In a recent film version she was played by Reese Witherspoon, which does not seem to me like a good fit. Although Becky is blonde, I keep picturing Aubrey Plaza, who always seems to me to be saying something intended to distract you from what she is really thinking.
I don't know yet if I will finish Vanity Fair—I never know if I will finish a book until I finish it, and I have at times said “fuck this” and put down a book that I was ninety-five percent of the way through—but right now I am eager to find out what happens next.
#### Blah blah blah
This post observes the tenth anniversary of this blog, which I started in January 2006, directly inspired by Steve Yegge’s rant on why You Should Write Blogs, which I found extremely persuasive. (Articles that appear to have been posted before that were backdated, for some reason that I no longer remember but would probably find embarrassing.) I hope my Gentle Readers will excuse a bit of navel-gazing and self-congratulation.
When I started the blog I never imagined that I would continue as long as I have. I tend to get tired of projects after about four years and I was not at all sure the blog would last even that long. But to my great surprise it is one of the biggest projects I have ever done. I count 484 published articles totalling about 450,000 words. (Also 203 unpublished articles in every possible state of incompletion.) I drew, found, stole, or otherwise obtained something like 1,045 diagrams and illustrations. There were some long stoppages between articles, but I always came back to it. And I never wrote out of obligation or to meet a deadline, but always because the spirit moved me to write.
Looking back on the old articles, I am quite pleased with the blog and with myself. I find it entertaining and instructive. I like the person who wrote it. When I'm reading articles written by other people it sometimes happens that I smile ruefully and wish that I had been clever enough to write that myself; sometimes that happens to me when I reread my own old blog articles, and then my smile isn't rueful.
The blog paints a good picture, I think, of my personality, and of the kinds of things that make me unusual. I realized long long ago that I was a lot less smart than many people. But the way in which I was smart was very different from the way most smart people are smart. Most of the smart people I meet are specialists, even ultra-specialists. I am someone who is interested in a great many things and who strives for breadth of knowledge rather than depth. I want to be the person who makes connections that the specialists are too nearsighted to see. That is the thing I like most about myself, and that comes through clearly in the blog. I know that if my twenty-five-year-old self were to read it, he would be delighted to discover that he would grow up to be the kind of person that he wanted to be, that he did not let the world squash his individual spark. I have changed, but mostly for the better. I am a much less horrible person than I was then: the good parts of the twenty-five-year-old’s personality have developed, and the bad ones have shrunk a bit. I let my innate sense of fairness and justice overcome my innate indifference to other people’s feelings, and I now treat people less callously than before. I am still very self-absorbed and self-satisfied, still delighted above all by my own mind, but I think I do a better job now of sharing my delight with other people without making them feel less.
My grandparents had Eliot and Thackeray on the shelf, and I was always intrigued by them. I was just a little thing when I learned that George Eliot was a woman. When I asked about these books, my grandparents told me that they were grown-up books and I wouldn't like them until I was older—the implication being that I would like them when I was older. I was never sure that I would actually read them when I was older. Well, now I'm older and hey, look at that: I grew up to be someone who reads Eliot and Thackeray, not out of obligation or to meet a deadline, but because the spirit moves me to read.
Thank you Grandma Libby and Grandpa Dick, for everything. Thank you, Gentle Readers, for your kind attention and your many letters through the years.
Mon, 21 Dec 2015
This is page 23 (the last) of the Cosmic Call message. An explanation follows.
This page is a series of questions for the recipients of the message. It is labeled with the glyph , which heretofore appeared only on page 4 in the context of solving algebraic equations. So we might interpret it as meaning a solution or a desire to solve or understand. I have chosen to translate it as “wat”.
I find this page irritating in its vagueness and confusion. Its layout is disorganized. Glyphs are used inconsistently with their uses elsewhere on the page and elsewhere in the message. For example, the mysterious glyph , which has something to do with the recipients of the message, and which appeared only on page 21, is used here to ask about both the recipients themselves and also about their planet.
The questions are arranged in groups. For easy identification, I have color-coded the groups.
Starting from the upper-left corner, and proceeding counterclockwise, we have:
Kilograms, meters, and seconds, wat. I would have used the glyphs for abstract mass, distance, and time, and , since that seems to be closer to the intended meaning.
Alien mathematics, physics, and biology, wat. Note that this asks specifically about the recipients’ version of the sciences. None of these three glyphs has been subscripted before. Will the meaning be clear to the recipients? One also wonders why the message doesn't express a desire to understand human science, or science generally. One might argue that it does not make sense to ask the recipients about the human versions of mathematics and physics. But a later group expresses a desire to understand males and females, and the recipients don't know anything about that either.
Aliens wat. Alien [planet] mass, radius, acceleration wat. The meaning of shifts here from meaning the recipients themselves to the recipients’ planet. “Acceleration” is intended to refer to the planet's gravitational acceleration as on page 14. What if the recipients don't live on a planet? I suppose they will be familiar with planets generally and with the fact that we live on a planet, which was explained back on pages 11–13, and will get the idea.
Fucking speed of light, how does it work?
Planck's constant, wat. Universal gravitation constant, wat?
Males and females, wat. Alien people, wat. Age of people, wat. This group seems to be about our desire to understand ourselves, except that the third item relates to the aliens. I'm not quite sure what is going on. Perhaps “males and females” is intended to refer to the recipients? But the glyphs are not subscripted, and there is no strong reason to believe that the aliens have the same sexuality.
The glyph , already used both to mean the age of the Earth and the typical human lifespan, is even less clear here. Does it mean we want to understand the reasons for human life expectancy? Or is it intended to continue the inquiry from the previous line and is asking about the recipients’ history or lifespan?
Land, water, and atmosphere of the recipients’ planet, wat.
Energy, force, pressure, power, wat. The usage here is inconsistent with the first group, which asked not about mass, distance, and time but about kilograms, meters, and seconds specifically.
Velocity and acceleration, wat. I wonder why these are in a separate group, instead of being clustered with the previous group or the first group. I also worry about the equivocation in acceleration, which is sometimes used to mean the Earth's gravitational acceleration and sometimes acceleration generally. We already said we want to understand mass , !!G!! , and the size of the Earth. The Earth's surface gravity can be straightforwardly calculated from these, so there's nothing else to understand about that.
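The claim that the surface gravity is determined by the other quantities is easy to check. A minimal sketch, using standard textbook values for !!G!! and the Earth's mass and radius (my own fill-ins, not figures taken from the message):

```python
# Surface gravity follows from Newton's law: g = G * M / R^2.
# The numeric values below are standard textbook figures.
G = 6.674e-11       # universal gravitation constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(round(g, 2))  # about 9.82 m/s^2
```

So a recipient who has decoded the mass and radius figures already has everything needed to recover the acceleration.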
Alien planet, wat. The glyph has heretofore been used only to refer to the planet Earth. It does not mean planets generally, because it was not used in connection with Jupiter . Here, however, it seems to refer to the recipients’ planet.
The universe, wat. HUH???
That was the last page. Thanks for your kind attention.
[ Many thanks to Anna Gundlach, without whose timely email I might not have found the motivation to finish this series. ]
Fri, 18 Dec 2015
I only posted three answers in August, but two of them were interesting.
• In why this !!\sigma\pi\sigma^{-1}!! keeps apearing in my group theory book? (cycle decomposition) the querent asked about the “conjugation” operation that keeps cropping up in group theory. Why is it important? I sympathize with this; it wasn't adequately explained when I took group theory, and I had to figure it out a long time later. Unfortunately I don't think I picked the right example to explain it, so I am going to try again now.
Consider the eight symmetries of the square. They are of five types:
1. Rotation clockwise or counterclockwise by 90°.
2. Rotation by 180°.
3. Horizontal or vertical reflection.
4. Diagonal reflection.
5. The trivial (identity) symmetry.
What is meant when I say that a horizontal and a vertical reflection are of the same ‘type’? Informally, it is that the horizontal reflection looks just like the vertical reflection, if you turn your head ninety degrees. We can formalize this by observing that if we rotate the square 90°, then give it a horizontal flip, then rotate it back, the effect is exactly to give it a vertical flip. In notation, we might represent the horizontal flip by !!H!!, the vertical flip by !!V!!, the clockwise rotation by !!\rho!!, and the counterclockwise rotation by !!\rho^{-1}!!; then we have
$$\rho H \rho^{-1} = V$$
and similarly
$$\rho V \rho^{-1} = H.$$
Vertical flips do not look like diagonal flips—the diagonal flip leaves two of the corners in the same place, and the vertical flip does not—and indeed there is no analogous formula with !!H!! replaced with one of the diagonal flips. However, if !!D_1!! and !!D_2!! are the two diagonal flips, then we do have
$$\rho D_1 \rho^{-1} = D_2.$$
In general, when !!a!! and !!b!! are two symmetries, and there is some symmetry !!x!! for which
$$xax^{-1} = b$$
we say that !!a!! is conjugate to !!b!!. One can show that conjugacy is an equivalence relation, which means that the symmetries of any object can be divided into separate “conjugacy classes” such that two symmetries are conjugate if and only if they are in the same class. For the square, the conjugacy classes are the five I listed earlier.
This conjugacy thing is important for telling when two symmetries are group-theoretically “the same”, and have the same group-theoretic properties. For example, the horizontal and vertical flips move all four vertices, while the diagonal flips do not. Another example is that a horizontal flip is self-inverse (if you do it again, it cancels itself out), but a 90° rotation is not (you have to do it four times before it cancels out.) But the horizontal flip shares all its properties with the vertical flip, because it is the same if you just turn your head.
Identifying this sameness makes certain kinds of arguments much simpler. For example, in counting squares, I wanted to count the number of ways of coloring the faces of a cube, and instead of dealing with the 24 symmetries of the cube, I only needed to deal with their 5 conjugacy classes.
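The five classes can also be computed mechanically. Here is a small sketch; the encoding of the square's symmetries as permutations of the corners is my own, and any faithful permutation representation would do the same job:

```python
# Symmetries of the square as permutations of the corners 0,1,2,3
# (numbered clockwise); perm[i] is where corner i ends up.

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
rot90    = (1, 2, 3, 0)  # clockwise quarter turn
flip_h   = (1, 0, 3, 2)  # horizontal flip (left-right mirror)
flip_v   = (3, 2, 1, 0)  # vertical flip (top-bottom mirror)

# The eight symmetries: four rotations, and each rotation
# followed by the horizontal flip.
group = []
r = identity
for _ in range(4):
    group.append(r)
    group.append(compose(r, flip_h))
    r = compose(rot90, r)

# As in the text, conjugating H by the rotation gives V.
assert compose(rot90, compose(flip_h, inverse(rot90))) == flip_v

def conjugacy_classes(group):
    classes, seen = [], set()
    for a in group:
        if a not in seen:
            cls = {compose(compose(x, a), inverse(x)) for x in group}
            classes.append(cls)
            seen |= cls
    return classes

print(sorted(len(c) for c in conjugacy_classes(group)))
# the five classes have sizes [1, 1, 2, 2, 2]
```

The sizes come out as expected: the identity and the 180° rotation are each alone in their class, and the quarter turns, the horizontal/vertical flips, and the diagonal flips pair up.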
The example I gave in my math.se answer was maybe less perspicuous. I considered the symmetries of a sphere, and talked about how two rotations of the sphere by 17° are conjugate, regardless of what axis one rotates around. I thought of the square at the end, and threw it in, but I wish I had started with it.
• How to convert a decimal to a fraction easily? was the month's big winner. OP wanted to know how to take a decimal like !!0.3760683761!! and discover that it can be written as !!\frac{44}{117}!!. The right answer to this is of course to use continued fraction theory, but I did not want to write a long treatise on continued fractions, so I stripped down the theory to obtain an algorithm that is slower, but much easier to understand.
The algorithm is just binary search, but with a twist. If you are looking for a fraction for !!x!!, and you know !!\frac ab < x < \frac cd!!, then you construct the mediant !!\frac{a+c}{b+d}!! and compare it with !!x!!. This gives you a smaller interval in which to search for !!x!!, and the reason you use the mediant instead of using !!\frac12\left(\frac ab + \frac cd\right)!! as usual is that if you use the mediant you are guaranteed to exactly nail all the best rational approximations of !!x!!. This is the algorithm I described a few years ago in your age as a fraction, again; there the binary search proceeds down the branches of the Stern-Brocot tree to find a fraction close to !!0.368!!.
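Here is a minimal sketch of that mediant search; the function name and the tolerance threshold are my own choices:

```python
from fractions import Fraction

def to_fraction(x, tol=1e-9):
    """Search the Stern-Brocot tree for a fraction close to x.

    Maintain an interval a/b < x < c/d and repeatedly replace one
    endpoint with the mediant (a+c)/(b+d) until the mediant is
    within tol of x.  Because we use the mediant rather than the
    midpoint, every fraction we try is a best rational
    approximation for its denominator size.
    """
    a, b = 0, 1  # left endpoint 0/1
    c, d = 1, 0  # right endpoint 1/0, playing the role of infinity
    while True:
        m_num, m_den = a + c, b + d
        if abs(m_num / m_den - x) < tol:
            return Fraction(m_num, m_den)
        if m_num / m_den < x:
            a, b = m_num, m_den  # mediant too small: raise the floor
        else:
            c, d = m_num, m_den  # mediant too big: lower the ceiling

print(to_fraction(0.3760683761))  # 44/117
```

Run on the decimal from the question, it recovers !!\frac{44}{117}!! directly, because that fraction is one of the best approximations the mediant search is guaranteed to visit.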
I did ask a question this month: I was looking for a simpler version of the dogbone space construction. The dogbone space is a very peculiar counterexample of general topology, originally constructed by R.H. Bing. I mentioned it here in 2007, and said, at the time:
[The paper] is on my desk, but I have not read this yet, and I may never.
I did try to read it, but I did not try very hard, and I did not understand it. So my question this month was if there was a simpler example of the same type. I did not receive an answer, just a followup comment that no, there is no such example.
Sat, 12 Dec 2015
This is page 22 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page discusses properties of the entire universe. It is labeled with a new glyph, , which denotes the universe or the cosmos. On this page I am on uncertain ground, because I know very little about cosmology. My explanation here could be completely wrong without my realizing it.
The page contains only five lines of text. In order, they state:
1. The Friedmann equation which is the current model for the expansion of the universe. This expansion is believed to be uniform everywhere, but even if it isn't, the recipients are so close by that they will see exactly the same expansion we do. If they have noticed the expansion, they may well have come to the same theoretical conclusions about it. The equation is:
$$H^2 = \frac{8\pi G}3\rho + \frac{\Lambda c^2 }3$$
where !!H!! is the Hubble parameter (which describes how quickly the universe is expanding), !!G!! is the universal gravitation constant (introduced on page 9), !!\rho!! is the density of the universe (given on the next line), and !!\Lambda c^2!! () is one of the forms of the cosmological constant (given on the following line).
2. The average density of the universe , given as !!2.76\times 10^{-27} \mathrm{kg}~\mathrm{m}^{-3}!!. The “density” glyph would have been more at home with the other physics definitions of page 9, but it wasn't needed until now, and that page was full.
3. The cosmological constant !!\Lambda!! is about !!10^{-52} \mathrm{m}^{-2}!!. The related value given here, !!\Lambda c^2!!, is !!1.08\cdot 10^{-35} \mathrm{s}^{-2}!!.
4. The calculated value of the Hubble parameter !!H!! is given here in the rather strange form !!\frac1{14000000000}\mathrm{year}^{-1}!!. The reason it is phrased this way is that (assuming that !!H!! were constant) !!\frac1H!! would be the age of the universe, approximately 14,000,000,000 years. So this line not only communicates our estimate for the current value of the Hubble parameter, it expresses it in units that may make clear our beliefs about the age of the universe. It is regrettable that this wasn't stated more explicitly, using the glyph that was already used for the age of the Earth on page 13. There was plenty of extra space, so perhaps the senders didn't think of it.
5. The average temperature of the universe, about 2.736 kelvins. This is based on measurements of the cosmic microwave background radiation, which is the same in every direction, so if the recipients have noticed it at all, they have seen the same CMB that we have.
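A pleasant property of this page is that its numbers are self-consistent: plugging the stated density, cosmological constant, and Hubble parameter into the Friedmann equation makes the two sides agree. A quick check (the value of !!G!! is the standard one from page 9; the rest are the figures given above):

```python
import math

# Check that the page's values satisfy the Friedmann equation
#   H^2 = (8 pi G / 3) rho + (Lambda c^2) / 3
G = 6.674e-11              # universal gravitation constant, m^3 kg^-1 s^-2
rho = 2.76e-27             # average density, kg m^-3 (line 2)
lambda_c2 = 1.08e-35       # cosmological constant term, s^-2 (line 3)
year = 365.25 * 24 * 3600  # seconds in a year
H = 1 / (14e9 * year)      # Hubble parameter, s^-1 (line 4)

lhs = H**2
rhs = (8 * math.pi * G / 3) * rho + lambda_c2 / 3
print(abs(lhs / rhs - 1) < 0.01)  # the two sides agree to within 1%
```

So a recipient who checks the arithmetic will find that the equation, the constants, and the implied age of the universe all hang together.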
The next article will discuss the final page, shown at right. (Click to enlarge.) Try to figure it out before then.
Sun, 06 Dec 2015
This is page 21 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page discusses the message itself. It is headed with the glyph for “physics” .
The leftmost part of the page has a cartoon of the Yevpatoria RT-70 radio telescope that was used to send the message, labeled “Earth” . Coming out of the telescope is a stylized depiction of a radio wave. Two rulers measure the radio wave. The smaller one measures a single wavelength, and is labeled “frequency = 5,010,240,000 Hz ” and “wavelength = 0.059836 meters ”; these are the frequency and the wavelength of the radio waves used to send the message. The longer ruler has the notation “127×127×23”, describing the format of the message itself, 23 pages of 127×127 bitmaps, and also “43000 people ”, which I do not understand at all. Were 43,000 people somehow involved with sending the message? That seems far too many. Were there 43,000 people in Yevpatoria in 1999? That seems far too few; the current population is over 100,000. I am mystified.
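The frequency and wavelength figures are redundant, which is presumably deliberate: a recipient can check one against the other via !!\lambda = c/f!!. A quick sketch of that check:

```python
# The two rulers should satisfy lambda = c / f.
c = 299_792_458    # speed of light, m/s
f = 5_010_240_000  # transmission frequency, Hz, from the page
lam = c / f
print(round(lam, 6))  # 0.059836, matching the page's wavelength
```

The redundancy also doubles as a unit check: a recipient who has correctly decoded the meter and the second from earlier pages will find the two labels consistent.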
At the other end of the radio wave is the glyph , which is hard to decipher, because it appears only on this page and on the unhelpful page 23. I guess it is intended to refer to the recipients of the message.
[ Addendum 20151219: Having reviewed page 23, I am still in the dark. References to the mass and radius of suggest that it refers to the recipients’ planet, but references to the mathematics, physics, and biology of suggests that it refers to the recipients themselves. ]
In the lower-right corner of the page is another cartoon of the RT-70, this time with a ruler underneath showing its diameter, 70 meters. Above the cartoon is the power output of the telescope, 150 kilowatts.
The next article will discuss page 22, shown at right. (Click to enlarge.) Try to figure it out before then.
Sat, 28 Nov 2015
These are pages 19–20 of the Cosmic Call message. An explanation follows.
These two pages are a map of the surface of the Earth. Every other page in the document is surrounded by a one-pixel-wide frame, to separate the page from its neighbors, but the two pages that comprise the map are missing part of their borders to show that the two pages are part of a whole. Assembled correctly, the two pages are surrounded by a single border. The matching sides of the map pages have diamond-shaped registration marks to show how to align the two pages.
The map projection used here is R. Buckminster Fuller's Dymaxion projection, in which the spherical surface of the Earth is first projected onto a regular icosahedron, which is then unfolded into a flat net. This offers a good compromise between directional distortion and size distortion. Each twentieth of the map is distorted only enough to turn it into a triangle, and the interruptions between the triangles can be arranged to occur at uninteresting parts of the map.
Both pages are labeled with the glyph for “Earth”. On each page, the land parts of the map are labeled with and the water parts with , as on page 14, since the recipients wouldn't otherwise be able to tell which was which.
The next article will discuss page 21, shown at right. (Click to enlarge.) Try to figure it out before then.
Mon, 02 Nov 2015
This is page 18 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page depicts the best way to fry eggs. The optimal fried egg is shown at left. Ha ha, just kidding. The left half of the page explains cellular respiration. The fried egg is actually a cell, with a DNA molecule in its nucleus. Will the aliens be familiar enough with the structure of DNA to recognize that the highly abbreviated picture of the DNA molecule is related to the nucleobases on the previous page? Perhaps, if their genetic biochemistry is similar to ours, but we really have no reason to think that it is.
The illustration of the DNA molecule is subtly wrong. It shows a symmetric molecule. In reality, one of the two grooves between the strands is about twice as big as the other, as shown at right.
The top formula says that C6H12O6 and O2 go into the cell; the bottom formula says that CO2 comes out. (Energy comes out also; I wonder why this wasn't mentioned.) The notation for chemical compounds here is different from that used on page 14: there, O2 was written as ; here it is written as (“2×O”).
The glyph near the left margin does not appear elsewhere, but I think it is supposed to mean “cell”. Supposing that is correct, the text at the bottom says that the number of cells in a man or woman is !!10^{13}!!. The number of cells in a human is not known, except very approximately, but !!10^{13}!! is probably the right order of magnitude. (A 2013 paper from Annals of Human Biology estimates !!3.72\cdot 10^{13}!!.)
Next to the cell is a ruler labeled !!10^{-5}!! meters, which is a typical size for a eukaryotic cell.
The illustration on the right of the page, annotated with the glyphs for the four nucleobases from the previous page , depicts the duplication of genetic material during cellular division. The DNA molecule splits down the middle like a zipper. The cell then constructs a new mate for each half of the zipper, and when it divides, each daughter cell gets one complete zipper.
The next article will discuss pages 19 and 20, shown at right. (Click to enlarge.) Try to figure it out before then.
Fri, 02 Oct 2015
This is page 17 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page depicts the chemical structures of the four nucleobases that make up the information-carrying part of the DNA molecule. Clockwise from top left, they are thymine , adenine , guanine , and cytosine .
The deoxyribose and phosphate components of the nucleotides, shown at right, are not depicted. These form the spiral backbone of the DNA and are crucial to its structure. Will the recipients understand why the nucleobases are important enough for us to have mentioned them?
The next article will discuss page 18, shown at right. (Click to enlarge.) Try to figure it out before then.
Wed, 30 Sep 2015
This is page 16 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page, about human vital statistics and senses, is in three sections. The text in the top left explains the population of the Earth: around 6,000,000,000 people at the time the message was sent. The three following lines give the life expectancy (70 years), mass (80 kg), and body temperature (311K) of humans. In each case it is stated explicitly that the value for men and for women is the same, which is not really true.
The glyph used for life expectancy is the same one used to denote the age of the Earth back on page 13 even though the two notions are not really the same. And why 311K when the commonly-accepted value is 310K?
The diagram at right attempts to explain the human sense of hearing, showing a high-frequency wave at top and a low frequency one at bottom, annotated with the glyph for frequency and the upper and lower frequency limits of human hearing, 20,000 Hz and 20 Hz respectively. I found this extremely puzzling the first time I deciphered the message, so much so that it was one of the few parts of the document that left me completely mystified, even with the advantage of knowing already what humans are like. A significant part of the problem here is that the illustration is just flat out wrong. It depicts transverse waves:
but sound waves are not transverse, they are compression waves. The aliens are going to think we don't understand compression waves. (To see the difference, think of water waves, which are transverse: the water molecules move up and down—think of a bobbing cork—but the wave itself travels in a perpendicular direction, not vertically but toward the shore, where it eventually crashes on the beach. Sound waves are not like this. The air molecules move back and forth, parallel to the direction the sound is moving.)
I'm not sure what would be better; I tried generating some random compression waves to fit in the same space. (I also tried doing a cartoon of a non-random, neatly periodic compression wave, but I couldn't get anything I thought looked good.) I think the compression waves are better in some ways, but perhaps very confusing:
On the one hand, I think they express the intended meaning more clearly; on the other hand, I think they're too easy to confuse with glyphs, since they happen to be on almost the same scale. I think the message might be clearer if a little more space were allotted for them. Also, they could be annotated with the glyph for pressure , maybe something like this:
This also gets rid of the meaningless double-headed arrow. I'm not sure I buy the argument that the aliens won't know about arrows; they may not have arrows but it's hard to imagine they don't know about any sort of pointy projectile, and of course the whole purpose of a pointy projectile (the whole point, one might say) is that the point is on the front end. But the arrows here don't communicate motion or direction or anything like that; even as a human I'm not sure what they are supposed to communicate.
The bottom third of the diagram is more sensible. It is a diagram showing the wavelengths of light to which the human visual system is most sensitive. The x-axis is labeled with “wavelength” and the y-axis with a range from 0 to 1. The three peaks have their centers at 445 nm (blue), 535 nm (green), and 565 nm (often called “red”, but actually yellow). These correspond to the three types of cone cells in the retina, and the existence of three different types is why we perceive the color space as being three-dimensional. (I discussed this at greater length a few years ago.) Isn't it interesting that the “red” and green sensitivities are so close together? This is why we have red-green color blindness.
The next article will discuss page 17, shown at right. (Click to enlarge.) Try to figure it out before then.
Mon, 28 Sep 2015
This is page 15 of the Cosmic Call message. An explanation follows.
The 10 digits are:
0 1 2 3 4 5 6 7 8 9
This page starts a new section of the document, each page headed with the glyph for “biology” . The illustration is adapted from the Pioneer plaque; the relevant portion is shown below.
Copies of the plaque were placed on the 1972 and 1973 Pioneer spacecraft. The Pioneer image has been widely discussed and criticized; see the Wikipedia article for some of the history here. The illustration suffers considerably from its translation to a low-resolution bitmap. The original picture omits the woman's vulva; the senders have not seen fit to correct this bit of prudery.
The man and the woman are labeled with the glyphs for “man” and “woman”, respectively. The glyph for “people”, which identified the stick figures on the previous page, is inexplicably omitted here.
The ruler on the right somewhat puzzlingly goes from a bit above the man's toe to a bit below the top of the woman's head; it does not measure either of the two figures. It is labeled 1.8 meters, a typical height for men. The original Pioneer plaque spanned the woman exactly and gave her height as 168 cm, which is conveniently an integer multiple of the basic measuring unit (21 cm) defined on the plaque.
To prevent the recipients from getting confused about which end of the body is the top, a parabolic figure (shown here at left), annotated with the glyph for “acceleration”, shows the direction of gravitational acceleration as on the previous page.
The next article will discuss page 16, shown at right. Try to figure it out before then.
Fri, 25 Sep 2015
This is page 14 of the Cosmic Call message. An explanation follows.
This is my favorite page: there is a lot of varied information and the illustration is ingenious. The page heading says to match up with the corresponding labels on the previous three pages. The page depicts the overall terrain of the Earth. The main feature is a large illustration of some mountains (yellow in my highlighted illustration below) plunging into the sea (blue).
The land, air, and water parts are each labeled with their respective glyphs. Over on the left of the land part are little stick figures, labeled “people”. This is to show that people live on the land part of the Earth, not under water or in the air. The stick figures may not be clear to the recipients, but they are explained in more detail on the next page.
Each of the three main divisions is annotated with its general chemical composition, with compounds listed in order of prevalence. All the chemical element symbols were introduced earlier, on pages 6 and 7:
The lithosphere: silicon dioxide (SiO2); aluminium oxide (Al2O3); iron(III) oxide (Fe2O3); iron(II) oxide (FeO). Wikipedia and other sources dispute this listing, giving instead: SiO2, MgO, FeO, Al2O3, CaO, Na2O, Fe2O3 in that order.
The atmosphere: nitrogen gas (N2); oxygen gas (O2); argon (Ar); carbon dioxide (CO2).
The hydrosphere: water (H2O); sodium (Na); chlorine (Cl).
There are rulers extending upward from the surface of the water to the top of the mountain and downward to the bottom of the ocean. The height ruler is labeled 8838 meters, which is the height of the peak of Mount Everest, the highest point above sea level. The depth ruler is labeled 11000 meters, which is the depth of the Challenger Deep in the Mariana Trench, the deepest part of the ocean. The two rulers have the correct sizes relative to one another. The human figures at left are not to scale (they would be about 1.7 miles high), but the next page will explain how big they really are.
I don't think the message contains anything to tell the recipients the temperature of the Earth, so it may not be clear that the hydrosphere is liquid water. But perhaps the wavy line here will suggest that. The practice of measuring the height of the mountains and depth of the ocean from the surface may also be suggestive of a liquid ocean, since it would not otherwise have a flat surface to provide a global standard.
There is a potential problem with this picture: how will the recipients know which edge is the top? What if they hold it upside-down, and think the human figures are pointing down into the earth, heads downwards?
This problem is solved in a clever way: the dots at the right of the page depict an object accelerating under the influence of gravity, falling in a characteristic parabolic path. To make the point clear, the dots are labeled with the glyph for acceleration.
Finally, the lower left of the page states the acceleration due to gravity at the Earth's surface, 9.7978 m/s2. The recipients can calculate this value from the mass and radius of the Earth given earlier. Linked with the other appearance of acceleration on the page, this should suggest that the dots depict an object falling under the influence of gravity toward the bottom of the page.
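That calculation can be sketched in a few lines (my own check, not part of the message; the constants below are standard reference values for the Earth, not numbers taken from the page):

```python
import math

# Surface gravity from Newton's law, g = G*M / R^2, using standard
# reference values for the Earth.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

g = G * M / R ** 2   # roughly 9.82 m/s^2, close to the 9.7978 stated on the page
```

The small discrepancy with the page's 9.7978 m/s² is expected: the measured value varies with latitude and the Earth is not a perfect sphere.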
The next article will discuss page 15, shown at right. Try to figure it out before then.
Wed, 23 Sep 2015
This is page 13 of the Cosmic Call message. An explanation follows.
There are three diagrams on this page, each depicting something going around. Although the direction is ambiguous (unless you understand arrows) it should at least be clear that all three rotations are in the same direction. This is all you can reasonably say anyhow, because the rotations would all appear to be going the other way if you looked at them from the other side.
The upper left diagram depicts the Earth going around the Sun, and underneath is a note that says that the time is equal to 31556926 seconds, and is also equal to one year. This defines the year.
The upper-right diagram depicts the Moon going around the Earth; the notation says that this takes 2360591 seconds, or around 27⅓ days. This is not the 29½ days that one might naïvely expect, because it is the sidereal month rather than the synodic month. Suppose the phase of the Moon is new, so that the Moon lies exactly between the Earth and the Sun. 27⅓ days later the Moon has made a complete trip around the Earth, but because the Earth has moved, the Moon is not yet again on the line between the Earth and the Sun; the line is in a different direction. The Earth has moved about $\frac{1}{13}$ of the way around the sun, so it takes about another $\frac{1}{12}\cdot 27\frac13$ days before the moon is again between Earth and Sun, and so there is a total of about 29½ days between new moons.
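The arithmetic above can be checked directly (my own sketch, not from the post), using the standard relation 1/synodic = 1/sidereal − 1/year:

```python
# Recover the ~29.5-day synodic month from the sidereal month and the year.
SECONDS_PER_DAY = 86400

sidereal_month_days = 2360591 / SECONDS_PER_DAY   # ~27.32 days
year_days = 31556926 / SECONDS_PER_DAY            # ~365.24 days

# One synodic cycle closes when the Moon regains its extra angle on the Sun line.
synodic_month_days = 1 / (1 / sidereal_month_days - 1 / year_days)
# synodic_month_days is about 29.53, the familiar time between new moons
```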
The lower-right diagram depicts the rotation of the Earth, giving a time of 86163 seconds for the day. Again, this is not the 86400 seconds one would expect, because it is the sidereal day rather than the solar day; the issue is the same as in the previous paragraph.
None of the three circles appears to be circular. The first one is nearly circular, but it looks worse than it is because the Sun has been placed off-center. The curve representing the Moon's orbit is decidedly noncircular. This is reasonable, because the Moon's orbit is elliptical to approximately the same degree. In the third diagram, the curve is intended to represent the surface of the Earth, so its eccentricity is indefensible. The ellipse is not the same as the one used for the Moon's orbit, so it wasn't just a copying mistake.
The last two lines state that the ages of the Sun and the Earth are each 4550000000 years. This is the first appearance of the glyph for “age”.
The next article will discuss page 14, shown at right. Try to figure it out before then.
# Common Water Contaminants
This is a revision from July 09, 2014 17:45.
### Purpose of this wiki
This wiki is intended to provide short, legible descriptions of common water contaminants, with a no-nonsense rundown on what methods might be used to both assess and address the presence of these contaminants. We're aiming to pay special attention to the relative cost and technical expertise required for each method.
For example, some of these contaminants might be readily addressed by simple, cheap home testing kits. Some of them might currently require expensive laboratory testing. For many of them, the EPA and others have published protocols for how to assess the level of these contaminants in a laboratory setting; we'd like to begin to collect links to these protocols, labeling them as "possibly DIY?" or "DIY: implausible", etc ...
Suggested format for entries:
- Name of the contaminant
- Why is it a problem? How does it enter the water supply?
- How much of this contaminant is safe?
- How can we measure it? Does it require a "professional" lab? Are there DIY techniques available?
- What can be done about it? Can I filter it out of my water myself?
- Links to good relevant resources, helpful agencies, and groups concerned about the issue.
Please help us fill out this list with relevant info about important water contaminants ...
## Glyphosate
#### Why is it a problem? How does it enter the water supply?
Glyphosate is a commonly used herbicide sold under trademarks such as Monsanto's 'Roundup'; it enters the water supply via agricultural runoff. The EPA information site for glyphosate is here.
#### How much is safe?
Experts disagree on safe levels; the EPA has set a legally enforceable maximum contaminant level (MCL) for glyphosate of 700 ug/l in drinking water, which is 7,000 times higher than the MCL in Europe.
#### How is it tested?
Possible testing methods include:

a) Laboratory tests, for $110–$300 (links to more info here). Most likely, these tests use a technique called ELISA, an acronym for Enzyme-Linked Immunosorbent Assay. This type of assay uses antibodies to bind the analyte (glyphosate) and an enzyme reaction to generate a color change; the same type of assay is routinely used in pregnancy and drug tests. A discussion of ELISAs can be found here. Various companies make these kits, such as here.

b) Spectroscopy (see Public Lab's Spectroscopy Kit). Since glyphosate is colorless, direct measurement cannot be done via visible spectrometry. The ultraviolet spectrum at neutral pH (found here) shows an absorbance maximum at ~200 nm with an extinction coefficient of ~62; the same source shows that this value is similar to other carboxylic acids, such as acetic acid. Since common acids and other organic materials will interfere with detection by UV spectroscopy, this is not a recommended method. An indirect spectroscopic method has been proposed under "experiment 5" here: http://publiclab.org/wiki/pesticide-detection-methods-development. This method relies on chemistry established for determining inorganic phosphate (PO43-) and measures the visible absorbance of a reaction product (probably the molybdenum blue method, described on page 672 of Vogel's Textbook of Quantitative Chemical Analysis, 6th edition). Unfortunately, the citation does not claim that the method has been tested for glyphosate and shown to give the colored product. Since glyphosate is not inorganic phosphate (it is an organic phosphonate, having a carbon-phosphorus bond), the test needs to be run to ensure that it reacts to give the colored product.

c) Conductivity (see Public Lab's Riffle) (links to more info). Glyphosate is a polyanion at neutral pH and will affect the electrical conductivity of water. Unfortunately, the effect of glyphosate will likely be masked if other common electrolytes (salts) are present at higher concentrations.

d) Paper chromatography tests (see the following four kits, available online) (links to more info)
#### Links to more info
There has been a controversial report on the internet that measurable amounts of glyphosate have been detected in breast milk: http://www.organicconsumers.org/articles/article_29696.cfm. The results in this report do not appear to have been published in the peer-reviewed literature. The chair of the pediatrics department at Mass General Hospital subsequently published an online piece addressing the findings in the report. In his piece (found here), Dr. Ron Kleinman argues that glyphosate poses no threat to the health of breastfeeding infants and that mothers should continue breastfeeding their children.
## Endocrine disruptors
Please help complete this section.
## Mercury
#### Why is it a problem?
Mercury is a neurotoxin - most harmful to the unborn.
#### How does it enter the water supply?
Coal-fired power plants are the largest emitters of mercury. Bacteria transform mercury (Hg) into another form, methylmercury (MeHg), which then significantly bio-accumulates in the tissue of living creatures. For most people, the primary exposure to methylmercury comes from eating predatory fish such as pike, walleye, large-mouth bass, and tuna. The EPA has issued fish consumption advisories for forty-four states warning people to limit their consumption of certain kinds of fish. Canned white (albacore) tuna has been shown to contain about four times as much mercury as chunk light tuna.
sources: produced water, aerial deposition into wetland ecosystems, aerial deposition downwind of coal-fired power plants
## Chromium
Please help complete this section.
sources: produced water
## Barium
Please help complete this section.
sources: produced water
## Arsenic
#### Why is it a problem?
Current studies are finding that arsenic is 17 times more potent a carcinogen than previously thought. Arsenic is known to cause a variety of cancers and has been linked to heart disease, stroke and diabetes. Recent research has found an association between arsenic levels below 10 parts per billion and IQ deficits in children. Women are especially susceptible to arsenic poisoning.
#### How does it enter the water supply?
Arsenic makes up part of Earth’s crust and is commonly found in groundwater. In 2001, the U.S. Environmental Protection Agency lowered the drinking water standard from 50 parts per billion of arsenic to 10 parts per billion. The agency initially had proposed a limit of 5 parts per billion but faced criticism that it would be too costly for water companies to hit that target.
Arsenic can be found in groundwater near fracking sites at levels that exceed the EPA's maximum contaminate limit for drinking water.
It is also a common contaminant of oil fields, coal export, and in BP's oil. It's a common concern in the city of New Orleans, which has schools and housing developments built on old landfills.
#### How much is safe?
Anything below 10 PPB (parts per billion), i.e. micrograms/L, according to current EPA standards. The last big study about arsenic was published in 1988, but more current studies are finding that arsenic is 17 times more potent a carcinogen than previously thought. That means that even water that meets current federal standards could be dangerous, and the risks it poses to public health can be dire.
## Road salt
Road salt is detrimental both to aquatic life and to plants. In Canada it was classified as a toxic substance, but because so much was being used to keep roads safe, the planned measures to reduce it were never carried through; only voluntary guidelines were issued. Conductivity can serve as a surrogate for chloride content. In Stoney Creek in Burnaby, BC, chloride concentration follows a linear relationship with conductivity: Chloride (mg/L) = 0.3013 × SpCond − 16.095.
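The regression above can be wrapped as a tiny helper (a sketch; I am assuming SpCond is specific conductance in µS/cm, which is typical for such regressions but should be checked against the original Stoney Creek source, and the fit is only valid for that site):

```python
def chloride_mg_per_l(spcond_us_cm):
    """Estimate chloride (mg/L) from specific conductance using the
    Stoney Creek regression quoted above. Site-specific; do not reuse
    the coefficients elsewhere without recalibrating."""
    return 0.3013 * spcond_us_cm - 16.095

est = chloride_mg_per_l(500.0)   # ~134.6 mg/L for a reading of 500
```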
## Fecal Bacteria
Fecal bacteria found in the lower intestines of mammals can sometimes cause illness, but they are mainly used as indicators of more-difficult-to-detect enteric pathogens such as Giardia, Cryptosporidium, hepatitis A & E, Campylobacter, and intestinal worms. Indicators in use include Total Coliforms (all cylindrical bacteria), Fecal Coliform, E. coli, Enterococci (fecal streptococci), and Salmonella. Total Coliforms, Fecal Coliform, and Enterococci are the most common, and Enterococci is the primary indicator in salt water. According to the EPA, however, Fecal Coliform is a poor indicator; they recommend E. coli and Enterococci instead. (See Indicator bacteria on Wikipedia.) EPA method 5.11 governs fecal bacteriological contamination.
#### DIY Fecal Coliform testing
Art Ludwig has published a non-open source but DIY guide to doing Fecal Coliform tests. His guide costs \$15.
There is an open-source DIY Automatic Colony Counter.
# Representation of $e$ as a descending series
I saw on Wikipedia: List of representations of e that
$$e=3+\sum_{k=2}^{\infty}\frac{-1}{k!(k-1)k}$$ It was also mentioned that this identity comes from considerations on how to put an upper bound on $e$.
Can anyone give me a hint on how we can derive this identity? I have tried to look at its Taylor expansion but that approach seems to fail miserably.
You can derive the following telescoping sum for $k\ge2$:
\begin{align*} \frac{1}{k!}+\frac{1}{k!(k-1)k} &=\frac{1}{k!}\left(1+\frac{1}{(k-1)k}\right)\\ &=\frac{1}{k!}\left(1+\frac{1}{k-1}-\frac{1}{k}\right)\\ &=\frac{k-1}{kk!}+\frac{1}{(k-1)k!}\\ &=\frac{k-1}{kk!}+\frac{1}{(k-1)k(k-1)!}\\ &=\frac{k-1}{kk!}+\frac{1}{(k-1)(k-1)!}-\frac{1}{k(k-1)!}\\ &=\frac{1}{k!}-\frac{1}{kk!}+\frac{1}{(k-1)(k-1)!}-\frac{1}{k!}\\ &=\frac{1}{(k-1)(k-1)!}-\frac{1}{kk!} \end{align*}
Hence, if you go back to the original series representation of $e$ as you did:
\begin{align*} e+\sum_{k=2}^{\infty}\frac{1}{k!k(k-1)} &=2+\sum_{k=2}^{\infty}\frac{1}{k!}+\frac{1}{k!k(k-1)}\\ &=2+\sum_{k=2}^{\infty}\frac{1}{(k-1)(k-1)!}-\frac{1}{kk!}\\ &=2+1-\lim_{n\rightarrow\infty}\frac{1}{nn!}=3 \end{align*}
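A quick numerical sanity check of this identity (my own sketch, not part of the original answer):

```python
import math

# Partial sums of sum_{k>=2} 1/(k! * (k-1) * k) approach 3 - e very fast:
# by the telescoping above, the tail after N terms is exactly 1/(N * N!).
N = 20
s = sum(1 / (math.factorial(k) * (k - 1) * k) for k in range(2, N + 1))
# 3 - s agrees with e to roughly machine precision
```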
As an alternative approach, we have $$3-e=\int_{0}^{1}x(1-x)e^x\,dx \tag{1}$$ by IBP and the RHS of $(1)$ is clearly positive, hence $e<3$.
Since $\int_{0}^{1}x(1-x)\frac{x^k}{k!}\,dx = \frac{1}{k!(k+2)(k+3)}$ we also have $$3-e = \sum_{k\geq 0}\frac{1}{k!(k+2)(k+3)}\tag{2}$$ by termwise integration of a Taylor series.
• @OussamaBoussif: it is indeed equivalent by simple rearrangements, but your approach by creative telescoping for tackling the original series is superior, so (+1) to you. – Jack D'Aurizio May 26 '17 at 11:24
• hmm, I think I get it, +1 for the creative use of integration, it's always nice to see different alternatives ^^ – Oussama Boussif May 26 '17 at 11:26
• It might be interesting to point out that the approximations $e\approx\frac{19}{7}$ and $e\approx\frac{193}{71}$ come from similar integrals, with $x(1-x)$ being replaced by $x^2(1-x)^2$ and $x^3(1-x)^3$. – Jack D'Aurizio May 26 '17 at 11:29
• The general form for descending series that follow your pattern, using the Pochhammer symbol, is $$\sum_{k=0}^\infty \frac{1}{k!(k+2n)_{2n}}$$ – Jaume Oliver Lafont May 26 '17 at 21:20
• @AgalnamedDesire math.stackexchange.com/a/1708366/134791 – Jaume Oliver Lafont May 28 '17 at 0:05
An equivalent integral
Plugging a relationship similar to the one used by Jack D'Aurizio, namely $$\frac{1}{k!(k-1)k}=\int_0^1 \frac{x^{k-2}(1-x)}{k!} dx$$ into this integral
$$\int_0^1 \frac{(1-x)(e^x-1-x)}{x^2}dx = 3-e$$
with non-negative integrand in $(0,1)$ that proves $e<3$, yields the series in the question.
$$\sum_{k=2}^\infty \frac{1}{k!(k-1)k}=3-e$$
This relates the series with the inequality $1+x \leq e^x$.
Similar approximations
This integral is similar to the one used to explain why $e$ is close to the eighth harmonic number.
$$\frac{1}{14} \int_0^1 x^2(1-x)^2(e^x-1-x)dx = e-\frac{761}{280}=e-H_8\approx 0$$
with corresponding series $$e=\frac{761}{280}+\frac{1}{7}\sum_{k=2}^\infty \frac{1}{k!(k+3)(k+4)(k+5)}$$
and $$\frac{1}{2}\int_0^1 (1-x)^2\left(e^x-1-x-\frac{x^2}{2}\right)dx = e-\frac{163}{60}$$
used to explain the observation by Lucian that $2\pi+e$ is close to $9$.
Another series for $3-e$
Yet another series to prove $e<3$ is related to the integer sequence http://oeis.org/A165457.
$$\frac{1}{e}=\frac{1}{3}+\sum_{k=1}^\infty \frac{1}{(2k+1)!(2k+3)}$$
## Friday, 6 September 2013
### Postulates of Quantum Mechanics: State Evolution
This postulate describes the way quantum states can be transformed to other states. Given an isolated* state $\left|\psi\right>\in\mathcal{H}$, the permissible transformations are given by a unitary operator $U$:
$U: \mathcal{H} \rightarrow \mathcal{H},$
which maps $\left|\psi\right>\mapsto U\left|\psi\right>.$
Unitarity means that $UU^{\dagger}=U^{\dagger}U=I$, where $U^{\dagger}$ denotes the conjugate-transpose of $U$, and $I$ is the identity transformation. If $\mathcal{H}$ is of dimension $N$ with a suitable basis, then $U$ can be represented as an $N\times N$ matrix. By definition, unitary operators are invertible, with inverse given by the conjugate-transpose $U^{\dagger}$: the matrix that results from interchanging the rows of $U$ with its columns and taking the complex conjugate of each of its matrix coefficients. This unitary property is necessary to ensure that states of unit norm get mapped to states of unit norm.
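As a minimal illustration (my own sketch, not from the post), take a real 2×2 rotation matrix as the unitary and check both $UU^{\dagger}=I$ and preservation of the norm of a state:

```python
import math

# A rotation matrix is a simple example of a unitary operator on a
# 2-dimensional Hilbert space (real entries, so U-dagger is just U^T).
theta = 0.7
U = [[complex(math.cos(theta)), complex(-math.sin(theta))],
     [complex(math.sin(theta)), complex(math.cos(theta))]]

def dagger(A):
    """Conjugate-transpose: swap rows and columns, conjugate each entry."""
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# U times U-dagger should be the identity matrix.
I = matmul(U, dagger(U))

# A unit-norm state |psi> stays unit-norm under U.
psi = [complex(0.6), complex(0.8)]
U_psi = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]
norm = math.sqrt(sum(abs(c) ** 2 for c in U_psi))
```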
*It is important to mention the use of the word "isolated" in the statement of the postulate given above. What is meant by an isolated system here is synonymous to a closed system. This is in contrast to an open system, which is one that interacts with a larger subsystem called the environment. In this way, the quantum system under consideration is a subsystem of the total system defined by that system and its external environment. This environment itself can be an arbitrary system that contains the main system under consideration. In the limiting case, the environment is essentially the universe at large (or in better interpretations that which is the multiverse). A closed or isolated system is one that does not interact at all with any other containing system, and it is only in this special and highly idealized case that the unitary evolution stated in the postulate actually holds. In general and in practice, systems that are dealt with are actually open. In fact, there really is no such thing as a perfectly isolated system unless, of course, that system is the entire universe (or multiverse). For the purpose of analysis, as it is usually done in most general physical analysis, we will only consider the case of isolated systems here and their unitary dynamics.
prove that $O_T(0)=\{na\pmod 1: n \in \Bbb Z\}$ is dense in $[0,1)$.
Consider $T_a(x)=x+a \pmod 1$, where $T_a$ acts on $[0,1)$ with the endpoints identified ($0=1$, i.e. the circle). If $a \not\in \Bbb Q$, prove that $O_T(0)=\{na\pmod 1: n \in \Bbb Z\}$ is dense in $[0,1)$.
Now $ma=na\pmod 1$ iff $m=n$. Next, how to proceed?
• $O_T(0)$ (I guess it should be $O_T(a)$) is an additive subgroup of $[0,1)\pmod{1}$ with elements arbitrarily close to zero, since if $\frac{p}{q}$ is a convergent of the continued fraction of $a$ we have $|qa-p|\leq\frac{1}{q}$. Density follows. – Jack D'Aurizio Feb 24 '18 at 19:00
Hint: See the elements of $O_T(0)$ as a sequence $s_n=e^{in\theta}\in S^1\subset\mathbb{C}$ where $n\theta\notin 2\pi\mathbb{Z}$ for all $n\neq 0$ (this is the usual trick of seeing $\mathbb{T}$ both as the circle and as the interval with the ends identified).
Now prove that if we have $|s_n-s_m|=\varepsilon$ then we can "fill" the circle with points that are approximately $\varepsilon$ apart from each other.
Since $S^1$ is compact, the sequence has an accumulation point, so you can make $\varepsilon$ as small as you want.
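A numerical illustration of the statement (not a proof; I picked $a=\sqrt 2$ as a concrete irrational):

```python
import math

# For irrational a, the points n*a mod 1 leave no large gaps on the circle:
# the biggest gap shrinks as more multiples are taken, which is what density
# looks like numerically.
a = math.sqrt(2)
N = 2000
points = sorted((n * a) % 1.0 for n in range(1, N))

# Gaps between consecutive points, including the wrap-around gap 1 -> 0.
gaps = [points[i + 1] - points[i] for i in range(len(points) - 1)]
gaps.append(points[0] + 1.0 - points[-1])
# max(gaps) is well below 0.01 for N = 2000
```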
## Category: Operator algebras
### Seminar talk at the BGU OA Seminar
This coming Thursday (July 2nd, 14:10 Israel Time) I will be giving a talk at the Ben-Gurion University Math Department’s Operator Algebras Seminar. If you are interested in a link to the Zoom please send me an email.
I will be talking mostly about these two papers of mine with co-authors: older one, newer one. Here is the title and abstract:
Title: Matrix ranges, fields, dilations and representations
Abstract: In my talk I will present several results whose unifying theme is a matrix-valued analogue of the numerical range, called the matrix range of an operator tuple. After explaining what is the matrix range and what it is good for, I will report on recent work in which we prove that there is a certain “universal” matrix range, to which the matrix ranges of a sequence of large random matrices tends to, almost surely. The key novel technical aspects of this work are the (levelwise) continuity of the matrix range of a continuous field of operators, and a certain quantitative matrix valued Hahn-Banach type separation theorem. In the last part of the talk I will explain how the (uniform) distance between matrix ranges can be interpreted equivalently as a “dilation distance”, which can be interpreted as a kind of “representation distance”. These vague ideas will be illustrated with an application: the construction of a norm continuous family of representations of the noncommutative tori (recovering a result of Haagerup-Rordam in the d=2 case and of Li Gao in the d>2 case).
Based on joint works with Malte Gerhold, Satish Pandey and Baruch Solel.
### New paper: Dilations of commuting unitaries
Malte Gerhold, Satish Pandey, Baruch Solel and I have recently posted a new paper on the arxiv. Check it out here. Here is the abstract:
Abstract:
We study the space of all $d$-tuples of unitaries $u=(u_1,\ldots, u_d)$ using dilation theory and matrix ranges. Given two $d$-tuples $u$ and $v$ generating C*-algebras $\mathcal A$ and $\mathcal B$, we seek the minimal dilation constant $c=c(u,v)$ such that $u\prec cv$, by which we mean that $u$ is a compression of some $*$-isomorphic copy of $cv$. This gives rise to a metric
$d_D(u,v)=\log\max\{c(u,v),c(v,u)\}$
on the set of equivalence classes of $*$-isomorphic tuples of unitaries. We also consider the metric
$d_{HR}(u,v) = \inf \{\|u'-v'\| : u',v'\in B(H)^d,\ u'\sim u \text{ and } v'\sim v\},$
and we show the inequality
$d_{HR}(u,v) \leq K d_D(u,v)^{1/2}.$
Let $u_\Theta$ be the universal unitary tuple $(u_1,\ldots,u_d)$ satisfying $u_\ell u_k=e^{i\theta_{k,\ell}} u_k u_\ell$, where $\Theta=(\theta_{k,\ell})$ is a real antisymmetric matrix. We find that $c(u_\Theta, u_{\Theta'})\leq e^{\frac{1}{4}\|\Theta-\Theta'\|}$. From this we recover the result of Haagerup-Rordam and Gao that there exists a map $\Theta\mapsto U(\Theta)\in B(H)^d$ such that $U(\Theta)\sim u_\Theta$ and
$\|U(\Theta)-U({\Theta'})\|\leq K\|\Theta-\Theta'\|^{1/2}.$
Of special interest are: the universal $d$-tuple of noncommuting unitaries ${\mathrm u}$, the $d$-tuple of free Haar unitaries $u_f$, and the universal $d$-tuple of commuting unitaries $u_0$. We obtain the bounds
$2\sqrt{1-\frac{1}{d}}\leq c(u_f,u_0)\leq 2\sqrt{1-\frac{1}{2d}}.$
From this, we recover Passer’s upper bound for the universal unitaries $c({\mathrm u},u_0)\leq\sqrt{2d}$. In the case $d=3$ we obtain the new lower bound $c({\mathrm u},u_0)\geq 1.858$ improving on the previously known lower bound $c({\mathrm u},u_0)\geq\sqrt{3}$.
### My slides for the COSY talk and the seminar talk
Here is a link to the slides for the short talk that I am giving in COSY.
This talk is a short version of the talk I gave at the Besancon Functional Analysis Seminar last week; here are the slides for that talk.
### Seminar talk
Next Tuesday, May 19th, at 14:30 (Israeli time), I will give a video talk at the Séminaire d’Analyse Fonctionnelle “in” Laboratoire de mathématiques de Besançon. It will be about my recent paper with Michael Skeide, the one that I announced here.
Title: CP-Semigroups and Dilations, Subproduct Systems and Superproduct Systems: the Multi-Parameter Case and Beyond.
Abstract: We introduce a framework for studying dilations of semigroups of completely positive maps on von Neumann algebras. The heart of our method is the systematic use of families of Hilbert C*-correspondences that behave nicely with respect to tensor products: these are product systems, subproduct systems and superproduct systems. Although we developed our tools with the goal of understanding the multi-parameter case, they also lead to new results even in the well studied one-parameter case. In my talk I will give a broad outline and a taste of the dividends of our work.
The talk is based on a recent joint work with Michael Skeide.
Assumed knowledge: Completely positive maps and C*-algebras.
Feel free to write to me if you are interested in a link to the video talk.
### Dilations of q-commuting unitaries
Malte Gerhold and I have just uploaded a revision of our paper “Dilations of q-commuting unitaries” to the arxiv. This paper has been recently accepted to appear in IMRN, and was previously rejected by CMP, so we have four anonymous referees and two handling editors to be thankful to for various corrections and suggested improvements (though, as you may understand, one editor and two referees have reached quite a wrong conclusion regarding our beautiful paper :-).
This is quite a short paper (200 full pages shorter than the paper I recently announced), which tells a simple and interesting story: we find the optimal constant $c_\theta$, such that every pair of unitaries $u,v$ satisfying the q-commutation relation
$vu = e^{i\theta} uv$
dilates to a pair of commuting normal operators with norm less than or equal to $c_\theta$ (this problem is related to the “complex matrix cube problem” that we considered in the summer project a half year ago and the one before). We provide a full solution. There are a few ramifications of this idea, as well as surprising connections and applications, so I invite you to check out the nice little introduction.
https://mca2021.dm.uba.ar/en/tools/view-abstract?code=3580
### Session S14 - Global Injectivity, Jacobian Conjecture, and Related Topics
Wednesday, July 14, 14:00 ~ 14:50 UTC-3
## Some aspects of the complex Jacobian conjecture
### Nguyen Thi Bich Thuy
The complex Jacobian conjecture, “If a polynomial mapping $F: \mathbb{C}^n \to \mathbb{C}^n$ satisfies the condition $$\det JF(p) \neq 0, \quad \forall p \in \mathbb{C}^n,$$ where $JF(p)$ is the Jacobian matrix of $F$ at $p$, then $F$ is an automorphism”, originally stated by Keller in 1939, is still open in the $2$-dimensional case, even though the real case was solved negatively by Pinchuk [7] in 1994. In this talk, after a short survey of the principal results and approaches to the conjecture, we will make explicit the Newton polygon approach, first discovered by Abhyankar [1] and developed by many mathematicians, especially by Nagata in his nice paper [2], which gave most of the principal results on the conjecture in the $2$-dimensional case. Afterwards, we present an approach to the conjecture using intersection homology [3], [4], [8]. The intersection homology of the singular variety constructed in [8] associated to a Pinchuk counterexample was calculated in [5]. Finally, we introduce a new concept in the study of the Jacobian conjecture: pertinent variables [6]. We also offer some relations between the pertinent variables and Newton polygon approaches.
[1] Abhyankar, S., Expansion Techniques in Algebraic Geometry, Tata Institute of fundamental Research, Tata Institute, 1977.
[2] Nagata, M., Some remarks on the two-dimensional Jacobian conjecture, China J. Math. 17, (1989), 1--20.
[3] Nguyen, T.B.T., Valette, A. and Valette, G., On a singular variety associated to a polynomial mapping, Journal of Singularities volume 7, 190--204, 2013.
[4] Nguyen, T.B.T. and Ruas, M.A.S., On singular varieties associated to a polynomial mapping from $\mathbb{C}^n$ to $\mathbb{C}^{n-1}$, Asian Journal of Mathematics, v.22, p.1157--1172, 2018.
[5] Nguyen, T.B.T., Geometry of singularities of a Pinchuk's map, arXiv: 1710.03318v2, 2018.
[6] Nguyen, T.B.T., The $2$-dimensional Complex Jacobian Conjecture under the viewpoint of pertinent variables, arXiv:1902.05923, 2019.
[7] Pinchuk, S., A counterexample to the strong real Jacobian conjecture, Math. Zeitschrift, 217, 1--4, (1994).
[8] Valette, A. and Valette, G., Geometry of polynomial mappings at infinity via intersection homology, Ann. I. Fourier vol. 64, fascicule 5, 2147--2163, 2014.
https://econs20.classes.andrewheiss.com/resource/game-theory/
# Game theory
## Basic process for finding Nash equilibria
The easiest way to find Nash equilibria in a 2×2 game is to cover each column and row in turn and ask: “If player 1 knows that player 2 will choose X, what is player 1’s best choice?” over and over again. Mark player 1’s choice with a circle and mark player 2’s choice with a dot. Any cells that have both a circle and a dot are equilibria.
Here’s what that looks like in an invisible hand game. Anil and Bala need to choose what they’re going to grow. Their payoffs (measured in utils) for each of their choices are listed in this matrix:
|                   | Bala: Rice | Bala: Cassava |
|-------------------|------------|---------------|
| **Anil: Rice**    | 1, 3       | 2, 2          |
| **Anil: Cassava** | 4, 4       | 3, 1          |
To start, cover up the column where Bala chooses to grow cassava. If Anil assumes that Bala chooses to grow rice, what’s his best option? Anil would get 1 util from growing rice, and 4 from growing cassava. Cassava is best, so put a circle there.
|                   | Bala: Rice | Bala: Cassava |
|-------------------|------------|---------------|
| **Anil: Rice**    | 1, 3       | *(covered)*   |
| **Anil: Cassava** | ⭕ 4, 4     | *(covered)*   |
Next, cover the column where Bala chooses to grow rice. If Anil assumes that Bala chooses to grow cassava, what’s his best option? Anil would get 2 utils from growing rice, and 3 from growing cassava. Cassava is the best again, so put a circle there.
|                   | Bala: Rice  | Bala: Cassava |
|-------------------|-------------|---------------|
| **Anil: Rice**    | *(covered)* | 2, 2          |
| **Anil: Cassava** | *(covered)* | ⭕ 3, 1        |
Regardless of what Bala chooses to do, Anil’s best choice is always to grow cassava, so he’ll do that. That’s a pure, dominant strategy.
Let’s go through the same process for Bala. Cover up the row where Anil chooses to grow cassava. If Bala assumes that Anil chooses to grow rice, what’s his best option? Bala would get 3 utils from growing rice, and 2 from growing cassava. Rice is the best option, so put a dot there.
|                   | Bala: Rice  | Bala: Cassava |
|-------------------|-------------|---------------|
| **Anil: Rice**    | 🔵 1, 3     | 2, 2          |
| **Anil: Cassava** | *(covered)* | *(covered)*   |
Next, cover up the row where Anil chooses to grow rice. If Bala assumes that Anil chooses to grow cassava, what’s his best option? Bala would get 4 utils from growing rice, and only 1 from growing cassava. Rice is again his best option, so put a dot there.
|                   | Bala: Rice  | Bala: Cassava |
|-------------------|-------------|---------------|
| **Anil: Rice**    | *(covered)* | *(covered)*   |
| **Anil: Cassava** | ⭕ 🔵 4, 4   | ⭕ 3, 1        |
Bala’s best option—regardless of what Anil chooses to do—is to grow rice, so that’s what he’ll do.
Phew. Finally we can look at it all together. Here’s a summary of the different possible choices and responses:
• Best response for Anil if Bala chooses Rice = Cassava
• Best response for Anil if Bala chooses Cassava = Cassava
• Best response for Bala if Anil chooses Rice = Rice
• Best response for Bala if Anil chooses Cassava = Rice
There’s a circle and a dot in the (4, 4) / (Cassava, Rice) square, which means that’s the Nash equilibrium. That’s the situation that both players will naturally settle on without any outside communication, given the structure of the payoffs in the game.
|                   | Bala: Rice | Bala: Cassava |
|-------------------|------------|---------------|
| **Anil: Rice**    | 🔵 1, 3    | 2, 2          |
| **Anil: Cassava** | ⭕ 🔵 4, 4  | ⭕ 3, 1        |
## Mixed strategies
When there is no single Nash equilibrium in a game, players have to engage in a mixed strategy and attempt to predict what the other players will do. The choices they make are determined by the payoffs in the game, since it is generally more likely that players will choose strategies that maximize their payoffs. You need to calculate two things to do this:
1. Calculate the expected utility for each choice for each player and find the probability cutoff for each choice.
2. Calculate the expected payoff for each player.
These calculations involve actual math, unlike the simpler approach of covering rows and columns and drawing circles and dots.
There are a bunch of helpful external resources and examples of how to do this; check them out if you’re interested in this game theory stuff.
Let’s do this with the Bach or Stravinsky game that we used in session 3:
|                       | Friend 2: Chinese | Friend 2: Italian |
|-----------------------|-------------------|-------------------|
| **Friend 1: Chinese** | 2, 1              | 0, 0              |
| **Friend 1: Italian** | 0, 0              | 1, 2              |
### Step 1: Find the equilibria
• Best response for Friend 1 if Friend 2 chooses Chinese = Chinese
• Best response for Friend 1 if Friend 2 chooses Italian = Italian
• Best response for Friend 2 if Friend 1 chooses Chinese = Chinese
• Best response for Friend 2 if Friend 1 chooses Italian = Italian
The game has two pure-strategy Nash equilibria, so we look for a mixed strategy.
### Step 2: Calculate the expected utility for each choice for each player
We calculate the expected utility for each choice by assuming some probability for Friend 1’s choices ($$p$$, $$1 - p$$) and for Friend 2’s choices ($$q$$, $$1 - q$$):
|                                   | Friend 2: Chinese ($$q$$) | Friend 2: Italian ($$1 - q$$) |
|-----------------------------------|---------------------------|-------------------------------|
| **Friend 1: Chinese** ($$p$$)     | 2, 1                      | 0, 0                          |
| **Friend 1: Italian** ($$1 - p$$) | 0, 0                      | 1, 2                          |
To find the expected utility for a choice, add the utility × probability for each choice in the row (or column, for Player 2). Thus,
\begin{aligned} EU_{\text{Friend 1, Chinese}} &= 2q + 0(1-q) = 2q \\ EU_{\text{Friend 1, Italian}} &= 0q + 1(1-q) = 1 - q \\ EU_{\text{Friend 2, Chinese}} &= 1p + 0(1-p) = p \\ EU_{\text{Friend 2, Italian}} &= 0p + 2(1-p) = 2 - 2p \end{aligned}
With these formulas, you can then determine $$q$$ and $$p$$ by setting the expected utilities for each player equal to each other and solving for the variable:
\begin{aligned} 2q &= 1 - q & p &= 2 - 2p \\ 3q &= 1 & 3p &= 2 \\ q &= \frac{1}{3} & p &= \frac{2}{3} \end{aligned}
Friend 1’s best response is determined by what Friend 2’s $$q$$ is in real life:
\text{Best response}_{\text{Friend 1}} = \left \{ \begin{aligned} &\text{Chinese } & \text{if } q < \frac{1}{3} \\ &\text{Italian } & \text{if } q > \frac{1}{3} \\ &\text{indifferent } & \text{if } q = \frac{1}{3} \\ \end{aligned} \right \}
Similarly, Friend 2’s best response is determined by what Friend 1’s $$p$$ is in real life:
\text{Best response}_{\text{Friend 2}} = \left \{ \begin{aligned} &\text{Chinese } & \text{if } p > \frac{2}{3} \\ &\text{Italian } & \text{if } p < \frac{2}{3} \\ &\text{indifferent } & \text{if } p = \frac{2}{3} \\ \end{aligned} \right \}
### Step 3: Calculate the expected payoff for each player when playing the mixed strategy
The expected payoff is the utility × probability for each cell, added together. First, calculate the joint probabilities for each cell by multiplying the row and column probabilities:
|                                                 | Friend 2: Chinese ($$q = \frac{1}{3}$$)          | Friend 2: Italian ($$1 - q = \frac{2}{3}$$)      |
|-------------------------------------------------|--------------------------------------------------|--------------------------------------------------|
| **Friend 1: Chinese** ($$p = \frac{2}{3}$$)     | $$\frac{2}{3} \times \frac{1}{3} = \frac{2}{9}$$ | $$\frac{2}{3} \times \frac{2}{3} = \frac{4}{9}$$ |
| **Friend 1: Italian** ($$1 - p = \frac{1}{3}$$) | $$\frac{1}{3} \times \frac{1}{3} = \frac{1}{9}$$ | $$\frac{1}{3} \times \frac{2}{3} = \frac{2}{9}$$ |
Then multiply each probability by the payoff and add all the cells together:
\begin{aligned} EP_{\text{Friend 1}} &= (2 \times \frac{2}{9}) + (0 \times \frac{4}{9}) + (0 \times \frac{1}{9}) + (1 \times \frac{2}{9}) = \frac{2}{3} \\ EP_{\text{Friend 2}} &= (1 \times \frac{2}{9}) + (0 \times \frac{4}{9}) + (0 \times \frac{1}{9}) + (2 \times \frac{2}{9}) = \frac{2}{3} \end{aligned}
The expected payoff for each player in the mixed strategy is $$\frac{2}{3}$$, which is less than what either player would make if they coordinated on their least preferred outcome. That is, it’s better for Friend 1 to compromise and eat Italian and get 1 unit of utility rather than gamble on the mixed strategy.
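Steps 2 and 3 can be checked with a few lines of code. This is a sketch of my own (not part of the course materials), using exact fractions so nothing is lost to rounding; the indifference conditions are solved as the linear equations they are:

```python
from fractions import Fraction as F

# U1[i][j], U2[i][j]: payoffs when Friend 1 plays i and Friend 2 plays j
# (0 = Chinese, 1 = Italian), taken from the matrix above.
U1 = [[F(2), F(0)], [F(0), F(1)]]
U2 = [[F(1), F(0)], [F(0), F(2)]]

# Step 2: each player's mixing probability makes the *other* player
# indifferent between their two choices, e.g. for Friend 1:
#   q*U1[0][0] + (1-q)*U1[0][1] = q*U1[1][0] + (1-q)*U1[1][1]
q = (U1[1][1] - U1[0][1]) / (U1[0][0] - U1[0][1] - U1[1][0] + U1[1][1])
p = (U2[1][1] - U2[1][0]) / (U2[0][0] - U2[0][1] - U2[1][0] + U2[1][1])

# Step 3: expected payoff = sum over cells of joint probability x payoff.
pr = [[p * q, p * (1 - q)], [(1 - p) * q, (1 - p) * (1 - q)]]
EP1 = sum(pr[i][j] * U1[i][j] for i in range(2) for j in range(2))
EP2 = sum(pr[i][j] * U2[i][j] for i in range(2) for j in range(2))

print(p, q, EP1, EP2)   # 2/3 1/3 2/3 2/3
```

The printed values reproduce the hand calculation: $$p = \frac{2}{3}$$, $$q = \frac{1}{3}$$, and an expected payoff of $$\frac{2}{3}$$ for each friend.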
http://www.weegy.com/?ConversationId=4011D51A
Which of the following is NOT a step in the operation of a Geiger counter? A. Nuclear radiation ionizes gas in a tube. B. Ionized gas produces an electric current. C. Magnets cause the ions to conduct electricity. D. The electric current is detected and measured.
Weegy: C. Magnets cause the ions to conduct electricity is NOT a step in the operation of a Geiger counter. (More)
An industrial process makes calcium oxide by decomposing calcium carbonate. Which of the following is NOT needed to calculate the mass of calcium oxide that can be produced from 4.7 kg of calcium carbonate? A. the balanced chemical equation B. molar masses of the reactants C. molar masses of the product D. the volume of the unknown mass
Updated 9/28/2012 11:01:39 AM
The answer is D: the volume of the unknown mass.
Here is an explanation of the problem. First write the balanced equation: $CaCO_3 \rightarrow CaO + CO_2$. Then compute the moles of $CaCO_3$, then the moles of $CaO$, and finally the mass of $CaO$.
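To make those steps concrete, here is a quick sketch of the calculation (molar masses are approximate textbook values); note that the volume, choice D, never enters:

```python
# Stoichiometry for CaCO3 -> CaO + CO2 (1:1 mole ratio).
# Approximate molar masses in g/mol:
M_CaCO3 = 40.08 + 12.01 + 3 * 16.00   # about 100.09
M_CaO = 40.08 + 16.00                 # about 56.08

mass_CaCO3_g = 4.7 * 1000             # 4.7 kg of calcium carbonate
mols = mass_CaCO3_g / M_CaCO3         # moles of CaCO3 = moles of CaO produced
mass_CaO_kg = mols * M_CaO / 1000

print(round(mass_CaO_kg, 2))          # about 2.63 kg of CaO
```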
Because water molecules are polar and carbon dioxide molecules are nonpolar, A. water has a lower boiling point than carbon dioxide does. B. attractions between water molecules are weaker than attractions between carbon dioxide molecules. C. carbon dioxide cannot exist as a solid. D. water has a higher boiling point than carbon dioxide does.
Weegy: Carbon dioxide is a gas at room temperature, and water is a liquid at room temperature. Since the 'boiling point' is the point at which a liquid becomes a gas, the 'boiling point' of CO2 would have to be MUCH lower than that of water (lower than room temperature, for that matter). (More)
https://stacks.math.columbia.edu/tag/062A
# The Stacks Project
## Tag 062A
Lemma 15.26.9. Let $R$ be a ring. Let $A_\bullet$ be a complex of $R$-modules. Let $f, g \in R$. Let $C(f)_\bullet$ be the cone of $f : A_\bullet \to A_\bullet$. Define similarly $C(g)_\bullet$ and $C(fg)_\bullet$. Then $C(fg)_\bullet$ is homotopy equivalent to the cone of a map $$C(f)_\bullet[1] \longrightarrow C(g)_\bullet$$
Proof. We first prove this if $A_\bullet$ is the complex consisting of $R$ placed in degree $0$. In this case the map we use is $$\xymatrix{ 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & R \ar[r]^f \ar[d]^1 & R \ar[r] \ar[d] & 0 \ar[d] \\ 0 \ar[r] & R \ar[r]^g & R \ar[r] & 0 \ar[r] & 0 }$$ The cone of this is the chain complex consisting of $R \oplus R$ placed in degrees $1$ and $0$ and differential (15.26.6.1) $$\left( \begin{matrix} g & 1 \\ 0 & -f \end{matrix} \right) : R^{\oplus 2} \longrightarrow R^{\oplus 2}$$ We leave it to the reader to show that this chain complex is homotopic to the complex $fg : R \to R$. In general we write $C(f)_\bullet$ and $C(g)_\bullet$ as the total complex of the double complexes $$(R \xrightarrow{f} R) \otimes_R A_\bullet \quad\text{and}\quad (R \xrightarrow{g} R) \otimes_R A_\bullet$$ and in this way we deduce the result from the special case discussed above. Some details omitted. $\square$
The code snippet corresponding to this tag is a part of the file more-algebra.tex and is located in lines 5671–5681 (see updates for more information).
\begin{lemma}
\label{lemma-cone-squared}
Let $R$ be a ring. Let $A_\bullet$ be a complex of $R$-modules.
Let $f, g \in R$. Let $C(f)_\bullet$ be the cone of
$f : A_\bullet \to A_\bullet$. Define similarly $C(g)_\bullet$ and
$C(fg)_\bullet$. Then $C(fg)_\bullet$ is homotopy equivalent to the
cone of a map
$$C(f)_\bullet[1] \longrightarrow C(g)_\bullet$$
\end{lemma}
\begin{proof}
We first prove this if $A_\bullet$ is the complex consisting of $R$ placed
in degree $0$. In this case the map we use is
$$\xymatrix{ 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & R \ar[r]^f \ar[d]^1 & R \ar[r] \ar[d] & 0 \ar[d] \\ 0 \ar[r] & R \ar[r]^g & R \ar[r] & 0 \ar[r] & 0 }$$
The cone of this is the chain complex consisting of $R \oplus R$ placed in
degrees $1$ and $0$ and differential (\ref{equation-differential-cone})
$$\left( \begin{matrix} g & 1 \\ 0 & -f \end{matrix} \right) : R^{\oplus 2} \longrightarrow R^{\oplus 2}$$
We leave it to the reader to show that this chain complex is
homotopic to the complex $fg : R \to R$. In general we
write $C(f)_\bullet$ and $C(g)_\bullet$
as the total complex of the double complexes
$$(R \xrightarrow{f} R) \otimes_R A_\bullet \quad\text{and}\quad (R \xrightarrow{g} R) \otimes_R A_\bullet$$
and in this way we deduce the result from the special case discussed above.
Some details omitted.
\end{proof}
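The step left to the reader can also be verified symbolically. The following sketch uses one explicit choice of chain maps and homotopy (my own choice, not taken from the Stacks Project) to check that the two-term complex with the differential above is homotopy equivalent to $fg : R \to R$, treating $f$ and $g$ as commuting scalars:

```python
from sympy import symbols, Matrix, eye

f, g = symbols("f g")

D = Matrix([[g, 1], [0, -f]])      # differential of the cone, degree 1 -> 0
# Candidate chain maps between (fg : R -> R) and the cone:
phi1, phi0 = Matrix([[1], [-g]]), Matrix([[0], [1]])   # R -> R^2, degrees 1 and 0
psi1, psi0 = Matrix([[1, 0]]), Matrix([[f, 1]])        # R^2 -> R, degrees 1 and 0
h = Matrix([[0, 0], [1, 0]])       # candidate homotopy, degree 0 -> 1

# phi and psi intertwine the differentials, so they are chain maps:
assert D * phi1 == f * g * phi0
assert psi0 * D == f * g * psi1
# psi o phi is the identity on (fg : R -> R):
assert psi1 * phi1 == Matrix([[1]]) and psi0 * phi0 == Matrix([[1]])
# phi o psi is homotopic to the identity on the cone: 1 - phi psi = D h + h D
assert eye(2) - phi0 * psi0 == D * h   # degree 0
assert eye(2) - phi1 * psi1 == h * D   # degree 1
print("homotopy equivalence verified")
```

All six identities hold, so the cone is homotopy equivalent to $fg : R \to R$ as claimed.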
Comment #2773 by Darij Grinberg (site) on August 19, 2017 at 2:21 am UTC
"this this".
Comment #2776 by Darij Grinberg (site) on August 19, 2017 at 2:32 am UTC
This said, I really wouldn't mind a more mundane proof by explicit description of the map and of the quasi-isomorphisms... I'm not sure if I can follow the proof above.
https://web2.0calc.com/questions/help-asap-please_30
In convex quadrilateral $ABCD$, $AB=BC=13$, $CD=DA=24$, and $\angle D=60^\circ$. Points $X$ and $Y$ are the midpoints of $\overline{BC}$ and $\overline{DA}$ respectively. Compute $XY^2$ (the square of the length of $XY$).
Dec 11, 2019
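One way to attack this is with coordinates. The following sketch is my own (no solution was posted in the thread): place $D$ at the origin with $\overline{DA}$ along the x-axis; the law of cosines gives $AC = 24$, and convexity forces $B$ to lie on the side of $\overline{AC}$ away from $D$.

```python
from math import sqrt

# D at the origin, DA along the x-axis, angle D = 60 degrees.
A = (24.0, 0.0)
C = (24 * 0.5, 24 * sqrt(3) / 2)          # DC = 24 at 60 degrees from DA

# AC = 24, so in isosceles triangle ABC (AB = BC = 13) the apex B sits at
# distance sqrt(13**2 - 12**2) = 5 from the midpoint of AC, on the side
# of AC away from D (forced by convexity).
Mx, My = (A[0] + C[0]) / 2, (A[1] + C[1]) / 2     # midpoint of AC
ux, uy = (C[0] - A[0]) / 24, (C[1] - A[1]) / 24   # unit vector A -> C
B = (Mx + 5 * uy, My - 5 * ux)                    # 5 units along the outward normal

X = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)        # midpoint of BC
Y = (12.0, 0.0)                                   # midpoint of DA

XY2 = (X[0] - Y[0]) ** 2 + (X[1] - Y[1]) ** 2
print(XY2)   # approximately 310.21 (exactly 1033/4 + 30*sqrt(3))
```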
https://gsocinterval.blogspot.com/
## Monday, 28 August 2017
### Final Report
This is the final report on my work on the interval package during Google Summer of Code 2017. This whole blog has been dedicated to the project, and by reading all posts you can follow my work from beginning to end.
The work has been challenging and extremely fun! I have learned a lot about interval arithmetic, the Octave and Octave-forge project, and also how to contribute to open-source in general. I have found the whole Octave community to be very helpful and especially I want to thank my mentor, Oliver Heimlich, and co-mentor, Kai Torben Ohlhus, for helping me during the project.
Here I will give a small introduction to Octave and the interval package for new readers and a summary of how the work has gone and how you can run the code I have actually contributed with.
## Octave and the Interval Package
Octave, or GNU Octave, is a free program for scientific computing. It is very similar to Matlab and its syntax is largely compatible with it. Octave comes with a large set of core functionality but can also be extended with packages from Octave forge. These add new functionality, for example image processing, fuzzy logic or more statistics functions. One of those packages is the interval package, which allows you to compute with interval arithmetic in Octave.
## Summary of the Work
The goal with the project was to improve the Octave-forge interval package by implementing support for creating, and working with, N-dimensional arrays of intervals.
The work has gone very well and we have just released version 3.0.0 of the interval package incorporating all the contributions I have made during the project.
The package now has full support for working with N-dimensional arrays in the same way you do with ordinary floating point numbers in Octave. In addition I have also fixed some bugs not directly related to N-dimensional arrays; see for example bugs #51783 and #51283.
During the project I have made a total of 108 commits. I have made changes to 332 of the package's 666 files. Some of these changes have only been changes in coding style or (mainly) automated adding of tests. Not counting these I have, manually, made changes to 110 files.
If you want to take a look at all the commits I have contributed with the best way is to download the repository after which you can see all the commits from GSoC with
hg log -u joeldahne@hotmail.com -d "2017-06-01 to 2017-08-29"
Unfortunately I have not found a good way of isolating commits from a specific period and author on sourceforge where the package is hosted. Instead you can find a list of all commits at the end of this blog post.
The NEWS-file from the release of version 3.0.0 is also a pretty good overview of what I have done. While not all of the changes are a result of GSoC quite a lot of them are.
## Running the Code
As mentioned above we have just released version 3.0.0 of the interval package. With the new release it is very easy to test the newly added functionality. If you already have Octave installed the easiest way to install the package is with the command “pkg install -forge interval”. This will install the latest release of the package, at the time of writing this is 3.0.0 but that will of course change in the future. You can also download version 3.0.0 directly from Octave-forge.
If you want you can also download the source code from the official repository and test it with "make run" or install it with "make install". To download the repository, update to version 3.0.0 and run Octave with the package on Linux, use the following:
hg clone http://hg.code.sf.net/p/octave/interval octave-interval
cd octave-interval
hg update release-3.0.0
make run
## A Package for Taylor Arithmetic
The task took less time than planned, so I had time to work on a project that builds on my previous work: a package for Taylor arithmetic (you can read my blog post about it). I created a proof of concept implementation as part of my application for Google Summer of Code and I have now started to turn that into a real package. The repository can be found here.
It is still far from complete but my goal is to eventually add it as a package at Octave-Forge. How that goes depends mainly on how much time I have to spend on it the following semesters.
If you want to run the code as it is now, you can pull the repository and then run it with "make run"; this requires that Octave and version 3.0.0 (or higher) of the interval package are installed.
## List of Commits
Here is a list of all 108 commits I have done to the interval package
https://sourceforge.net/p/octave/interval/ci/68eb1b3281c9
https://sourceforge.net/p/octave/interval/ci/52d6a2565ed2
summary: @infsupdec/factorial.m: Fix decoration (bug #51783)
https://sourceforge.net/p/octave/interval/ci/cf97d56a8e2a
summary: mpfr_function_d.cc: Cast int to octave_idx_type
https://sourceforge.net/p/octave/interval/ci/5ed7880c917f
summary: maint: Fix input to source
https://sourceforge.net/p/octave/interval/ci/637c532ea650
summary: @infsupdec/dot.m: Fix decoration on empty input
https://sourceforge.net/p/octave/interval/ci/51425fd67692
https://sourceforge.net/p/octave/interval/ci/b4e7f1546e36
summary: mpfr_function_d.cc, mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero
https://sourceforge.net/p/octave/interval/ci/02db5199932c
summary: doc: NEWS.texinfo: Info about vectorization for nthroot and pownrev
https://sourceforge.net/p/octave/interval/ci/197d033fc878
summary: @infsup/pownrev.m, @infsupdec/pownrev.m: Support for vectorization of p
https://sourceforge.net/p/octave/interval/ci/7f8d1945264c
summary: @infsup/nthroot.m, @infsupdec/nthroot.m, mpfr_function_d.cc: Support for vectorization of n
https://sourceforge.net/p/octave/interval/ci/e841c37383d6
summary: doc: NEWS.texinfo: Summarized recent changes from GSoC
https://sourceforge.net/p/octave/interval/ci/bc3e9523d42a
summary: mpfr_vector_dot_d.cc: Fixed bug when broadcasting with one size equal to zero
https://sourceforge.net/p/octave/interval/ci/0160ff2c2134
summary: doc: examples.texinfo: Updated example for the latest Symbolic package version
https://sourceforge.net/p/octave/interval/ci/f37e1074c3ce
summary: @infsup/*.m, @infsupdec/*.m: Added missing N-dimensional versions of tests
https://sourceforge.net/p/octave/interval/ci/eb6b9c01bf6a
summary: @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all ternary tests
summary: @infsup/*.m, @infsupdec/*.m: N-dimensional versions of all binary tests
https://sourceforge.net/p/octave/interval/ci/9440d748b4aa
summary: @infsup/*.m, @infsupdec*.m: N-dimensional version of all unary tests
https://sourceforge.net/p/octave/interval/ci/cb5b7b824400
summary: @infsup/powrev2.m: Fixed bug when called with vector arguments
https://sourceforge.net/p/octave/interval/ci/a3e8fdb85b97
summary: @infsup/pownrev.m, @infsupdec/pownrev.m: Reworked vectorization test
https://sourceforge.net/p/octave/interval/ci/aae143789b81
summary: @infsup/pown.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/f05814665570
summary: @infsup/nthroot.n: Reworked vectorization test
https://sourceforge.net/p/octave/interval/ci/c4c27574e331
summary: @infsup/nthroot.m, @infsupdec/nthroot.m: Clarified that N must be scalar
https://sourceforge.net/p/octave/interval/ci/3bfd4e7e6600
summary: @infup/pow.m, @infsupdec/pow.m: Fixed bug when called with vector arguments
https://sourceforge.net/p/octave/interval/ci/e013c55a4e2a
summary: @infsup/overlap.m, @infsupdec/overlap.m: Fixed formatting of vector test.
summary: doc: Modified a test so that it now passes
https://sourceforge.net/p/octave/interval/ci/2404686b07fb
summary: doc: Fixed formatting of example
https://sourceforge.net/p/octave/interval/ci/c078807c6b32
summary: doc: SKIP an example that always fail in the doc-test
https://sourceforge.net/p/octave/interval/ci/598dea2b8eb8
summary: doc: Fixed missed ending of example in Getting Started
https://sourceforge.net/p/octave/interval/ci/66ca24edfba4
summary: Updated coding style for all infsupdec-class functions
https://sourceforge.net/p/octave/interval/ci/4c629b9000fc
summary: Updated coding style for all infsup-class functions
summary: Updated coding style for all non-class functions
https://sourceforge.net/p/octave/interval/ci/f911db83df14
summary: @infsupdec/dot.m: Fixed wrong size of decoration when called with two empty matrices
https://sourceforge.net/p/octave/interval/ci/8e6a9e37ee40
summary: Small updates to documentation and comments for a lot of function to account for the support of N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/a2ef249d54ae
summary: doc: A small update to Examples, the interval Newton method can only find zeros inside the initial interval
https://sourceforge.net/p/octave/interval/ci/76431745772f
summary: doc: Updates to Getting Started, mainly how to create N-dimensional arrays
summary: doc: Small updates to Preface regarding N-dimensional arrays and fixed one link
summary: ctc_intersect.m, ctc_union.m: Fixed bugs when used for vectorization and when called with 0 or 1 output arguments
https://sourceforge.net/p/octave/interval/ci/0e01dc19dc75
summary: @infsup/sumsq.m: Updated to use the new functionality of dot.m
https://sourceforge.net/p/octave/interval/ci/2fba2056ed31
summary: @infsup/dot.m, @infsupdec/dot.m, mpfr_vector_dot_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic how the sum function works.
https://sourceforge.net/p/octave/interval/ci/05fc90112ea9
summary: ctc_intersect.m, ctc_union.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/2a4d1e9fa43e
summary: @infsup/fsolve.m: Added support for N-dimensional arrays. Fixed problem with the function in the example. Improved performance when creating the cell used in vectorization.
https://sourceforge.net/p/octave/interval/ci/7a96c346225a
summary: @infsup/disp.m: Fixed wrong enumeration of submatrices
https://sourceforge.net/p/octave/interval/ci/933117890e45
summary: Fixed typo in NEWS.texinfo
summary: @infsup/diag.m: Added description of the previous bug fix in the NEWS file
https://sourceforge.net/p/octave/interval/ci/07ebc81867cd
summary: @infsup/diag.m: Fixed error when called with more than 1 argument
https://sourceforge.net/p/octave/interval/ci/554c34fb3246
summary: @infsup/meshgrid.m, @infsupdec/meshgrid.m: Removed these functions, now falls back on standard implementation, also updated index
summary: @infsup/plot.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/5e0100cdc25f
summary: @infsup/plot3.m: Small change to allow for N-dimensional arrays as input
https://sourceforge.net/p/octave/interval/ci/695a223ccbb7
summary: @infsupdec/prod.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/55f570c13f01
summary: @infsup/prod.m: Added support for N-dimensional arrays. Removed short circuit in simple cases.
https://sourceforge.net/p/octave/interval/ci/445d7e5150aa
summary: @infsup/sum.m, @infsupdec/sum.m, mpfr_vector_sum_d.cc: Added support for N-dimensional vectors. Moved all vectorization to the oct-file. Small changes to functionality to mimic Octaves standard sum function.
summary: @infsup/fminsearch.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/19724b3f581e
summary: mpfr_function_d.cc: Finalized support for N-dimensional arrays with binary functions and added support for it with ternary functions.
https://sourceforge.net/p/octave/interval/ci/023e2788e445
https://sourceforge.net/p/octave/interval/ci/617484113019
summary: @infsupdec/infsupdec.m: Added full support for creating N-dimensional arrays and added tests
https://sourceforge.net/p/octave/interval/ci/f1c28a3d48b7
summary: @infsup/subset.m, @infsupdec/subset.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/8e9990e3ed13
summary: @infsup/strictsubset.m, @infsupdec/strictsubset.m: Fixed coding style and updated documentation
https://sourceforge.net/p/octave/interval/ci/5ca9b26b580d
summary: @infsup/strictprecedes.m, @infsupdec/strictprecedes.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/1c76721003f1
summary: @infsup/sdist.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/e044dea7c820
summary: @infsup/precedes.m, @infsupdec/precedes.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/d48ca9206299
summary: @infsup/overlap.m, @infsupdec/overlap.m: Fixed coding style and updated documentation
summary: @infsup/issingleton.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/11dccbcfd97e
summary: @infsup/ismember.m: Updated documentation
summary: @infsup/isentire.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/9fc874b71533
summary: @infsup/isempty.m, @infsupdec/isempty.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/0c5c4eaf263b
summary: @infsup/iscommoninterval.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/c980da83b634
summary: @infsup/interior.m, @infsupdec/interior.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/f67d78651aee
summary: @infsup/idist.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/70a389bd59f5
summary: @infsup/hdist.m: Fixed coding style and updated documentation.
https://sourceforge.net/p/octave/interval/ci/e34401d2a6d6
summary: @infsup/sin.m, @infsupdec/sin.m: Added workaround for bug #51283
https://sourceforge.net/p/octave/interval/ci/fea4c6516101
summary: @infsup/gt.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/ce973432d240
summary: @infsup/ge.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/a435986d4b0c
summary: @infsup/lt.m, @infsupdec/lt.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/23fa89e3c461
summary: @infsup/le.m, @infsupdec/le.m: Updated documentation
https://sourceforge.net/p/octave/interval/ci/730397b9e339
summary: @infsup/disjoint.m, @infsupdec/disjoint.m: Updated documentation
summary: mpfr_function_d.cc: Added support for N-dimensional arrays for unary functions. Also temporary support for binary functions.
https://sourceforge.net/p/octave/interval/ci/9661e0256c51
summary: crlibm_function.cc: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/0ec3d29c0779
summary: @infsup/infsup.m: Fixed documentation and added missing line continuation
https://sourceforge.net/p/octave/interval/ci/ebb00e763a46
summary: @infsup/disp.m: Fixed documentation
https://sourceforge.net/p/octave/interval/ci/1ab2749da374
summary: @infsup/size.m: Fixed documentation
https://sourceforge.net/p/octave/interval/ci/507ca8478b72
summary: @infsup/size.m: Fixes to the documentation
https://sourceforge.net/p/octave/interval/ci/667da39afece
summary: nai.m: Small fix to one of the tests
https://sourceforge.net/p/octave/interval/ci/3eb13f91065a
summary: hull.m: Fixes according to Olivers review
https://sourceforge.net/p/octave/interval/ci/20d784a6605d
summary: @infsup/display.m: Vectorized loop
https://sourceforge.net/p/octave/interval/ci/fab7aa26410f
summary: @infsup/disp.m: Fixes according to Olivers review, mainly details in the output
https://sourceforge.net/p/octave/interval/ci/4959566545db
summary: @infsup/infsup.m: Updated documentation and added test for N-dimensional arrays
summary: @infsup/infsup.m: Fixed coding style
https://sourceforge.net/p/octave/interval/ci/486a73046d5e
summary: @infsup/disp.m: Updated documentation and added more tests for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/b6a0435da31f
summary: exacttointerval.m: Uppdated documentation and added tests for N-dimensional arrays
summary: @infsup/intervaltotext.m, @infsupdec/intervaltotext.m: Updated documentation and added tests for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/55a3e708aef4
summary: @infsup/intervaltotext.m: Fixed coding style
https://sourceforge.net/p/octave/interval/ci/536b0c5023ee
summary: @infsup/subsref.m, @infsupdec/subsref.m: Added tests for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/c9654a4dcd8d
summary: @infsup/size.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/636709d06194
summary: @infsup/end.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/e6451e037120
summary: nai.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/837cccf1d627
summary: @infsup/resize.m, @infsupdec/resize.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/11cb9006b9ea
summary: @infsup/reshape.m, @infsupdec/reshape.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/c276af6c42ae
https://sourceforge.net/p/octave/interval/ci/c92c829dc946
https://sourceforge.net/p/octave/interval/ci/0f8f6864123e
summary: @infsup/meshgrid.m, @infsupdec/meshgrid.m: Added support for outputting 3-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/b7faafef6030
summary: @infsup/cat.m, @infsupdec/cat.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/d416a17a6d3d
summary: hull.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/031831f6bdfd
summary: empty.m, entire.m: Added support for N-dimensional arrays
https://sourceforge.net/p/octave/interval/ci/2816b68b83c4
summary: @infsup/display.m: Added support for displaying high dimensional arrays
https://sourceforge.net/p/octave/interval/ci/79c8dfa8ac54
summary: @infsup/disp.m: Added support for displaying high dimensional arrays
https://sourceforge.net/p/octave/interval/ci/cc87924e52ac
summary: @infsup/disp.m: Fixed coding style
https://sourceforge.net/p/octave/interval/ci/29a6b4ecda2a
summary: @infsupdec/infsupdec.m: Temporary fix for creating high dimensional arrays
https://sourceforge.net/p/octave/interval/ci/039abcf7623d
summary: @infsupdec/infsupdec.m: Fixed coding style
## Friday, 18 August 2017
### The Final Polish
We are preparing to release version 3.0.0 of the interval package, and this last week has mainly been about fixing minor bugs related to the release. I mention two of the more interesting bugs here.
### Compact Format
We (Oliver) recently added support for "format compact" when printing intervals. It turns out that the way to determine whether compact format is enabled differs a lot between different versions of Octave. There are at least three different ways to get the information.
In older releases (< 4.2.0 I believe) you use "get (0, "FormatSpacing")", but there appears to be a bug in versions < 4.0.0 where this always returns "loose".
For the current tip of the development branch you can use "[~, spacing] = format ()" to get the spacing.
Finally, in between these two versions you use "__compactformat__ ()".
In the end Oliver probably found a way to handle this mess, and compact format should now be fully supported for intervals. The function doing this is available at https://sourceforge.net/p/octave/interval/ci/default/tree/inst/@infsup/private/__loosespacing__.m.
### Dot-product of Empty Matrices
When updating "dot" to support N-dimensional arrays I also modified it so that it behaves similarly to Octave's standard implementation. The difference is in how it handles empty input. Previously we had
> x = infsupdec (ones (0, 2));
> dot (x, x)
ans = 0×2 interval matrix
but with the new version we get
> dot (x, x)
ans = 1×2 interval vector
[0]_com [0]_com
which is consistent with the standard implementation.
In the function we use "min" to compute the decoration for the result. Normally "min (x)" and "dot (x, x)" return results of the same size (the dimension along which the reduction is computed is set to 1), but they handle empty input differently. We have
> x = ones (0, 2);
> dot (x, x)
ans =
0 0
> min (x)
ans = [](0x2)
This meant that the decoration would be incorrect, since the implementation assumed they always had the same size. Fortunately the solution was very simple: if the dimension along which we are computing the dot product is zero, the decoration should always be "com". So adding a check for that was enough.
You could argue that "min (ones (0, 2))" should return "[inf, inf]", similarly to how many of the other reductions, like "sum" or "prod", return their unit for empty input. But this would most likely be very confusing for a lot of people, and it is not compatible with how Matlab does it either.
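The same asymmetry between a sum-like reduction and "min" on empty input exists in NumPy, which gives a quick way to see the problem outside Octave (a Python analogy, not the package's actual code):

```python
import numpy as np

x = np.ones((0, 2))  # empty along the reduction dimension

# sum (and hence a dot product along axis 0) returns its unit, 0
print(np.sum(x * x, axis=0))   # -> [0. 0.]

# min has no identity element, so it refuses empty input instead
try:
    np.min(x, axis=0)
except ValueError as err:
    print("min failed:", err)
```

So any code that assumes the two reductions return same-sized results must special-case a zero-length reduction dimension, which is exactly the fix described above.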
## Updates on the Taylor Package
I have also had some time to work on the Taylor package this week. The basic utility functions are now complete and I have started to work on functions for actually computing with Taylor expansions. At the moment only a limited number of functions are implemented. For example, we can calculate the Taylor expansion of order 4 of the function $\frac{e^x + \log(x)}{1 + x}$ at $x = 5$:
## Create a variable of degree 4 and with value 5
> x = taylor (infsupdec (5), 4)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3 + [0]_com X^4
## Calculate the function
> (exp (x) + log (x))./(1 + x)
ans = [25.003, 25.004]_com + [20.601, 20.602]_com X + [8.9308, 8.9309]_com X^2 + [2.6345, 2.6346]_com X^3 + [0.59148, 0.59149]_com X^4
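The same computation can be sketched with plain floating-point Taylor coefficients in Python. Without intervals the results are single numbers falling inside the enclosures above; the recurrence-based helpers `t_mul`, `t_div`, `t_exp` and `t_log` are my own illustration, not the package's code:

```python
import math

N = 4  # truncation order

def t_mul(f, g):
    # Cauchy product of truncated series
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def t_div(f, g):
    # h = f / g, solving f = h * g coefficient by coefficient
    h = []
    for n in range(N + 1):
        h.append((f[n] - sum(h[i] * g[n - i] for i in range(n))) / g[0])
    return h

def t_exp(f):
    # (e^f)_n = (1/n) sum_{i<n} (n-i) (e^f)_i (f)_{n-i}
    e = [math.exp(f[0])]
    for n in range(1, N + 1):
        e.append(sum((n - i) * e[i] * f[n - i] for i in range(n)) / n)
    return e

def t_log(f):
    # from f * (log f)' = f', solved coefficient by coefficient
    l = [math.log(f[0])]
    for n in range(1, N + 1):
        l.append((f[n] - sum(i * l[i] * f[n - i] for i in range(1, n)) / n) / f[0])
    return l

x = [5.0, 1.0, 0.0, 0.0, 0.0]            # the variable x expanded at 5
num = [a + b for a, b in zip(t_exp(x), t_log(x))]
den = [1.0 + x[0], 1.0, 0.0, 0.0, 0.0]   # 1 + x
print(t_div(num, den))  # ~[25.0038, 20.6016, 8.9308, 2.6346, 0.5915]
```

The printed coefficients fall inside the interval enclosures shown above, which is a nice sanity check of both computations.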
## Friday, 11 August 2017
### Improving the Automatic Tests
Oliver and I have been working on improving the test framework used for the interval package. The package shares a large number of tests with other interval packages through an interval test framework that Oliver created. Here is the repository.
## Creating the Tests
Previously these tests were separate from the rest of the package and you usually ran them with the help of the Makefile. Now Oliver has moved them into the m-files, and you can run them, together with the other tests for the function, with "test @infsup/function" in Octave. This makes it much easier to test the functions directly.
In addition to making the tests easier to use we also wanted to extend them to test not only scalar evaluation but also vector evaluation. The test data, input and expected output, is stored in a cell array, and when performing the scalar testing we simply loop over that cell array and run the function for each element. The actual code looks like this (in this case for plus):
%!test
%! # Scalar evaluation
%! for testcase = [testcases]'
%! assert (isequaln (...
%! plus (testcase.in{1}, testcase.in{2}), ...
%! testcase.out));
%! endfor
For testing the vector evaluation we simply concatenate the cell array into a vector and give that to the function. Here is what that code looks like
%! # Vector evaluation
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! assert (isequaln (plus (in1, in2), out));
Lastly we also wanted to test evaluation of N-dimensional arrays. This is done by concatenating the data into a vector and then reshaping that vector into an N-dimensional array. But what size should we use for the array? Well, we want at least three dimensions, because otherwise we are not really testing N-dimensional arrays. My solution was to completely factor the length of the vector and use that as the size, testsize = factor (length (in1)); if the length of the vector has two or fewer factors we add a few elements to the end until we get at least three factors. This is the code for that:
%!test
%! # N-dimensional array evaluation
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! # Reshape data
%! i = -1;
%! do
%! i = i + 1;
%! testsize = factor (numel (in1) + i);
%! until (numel (testsize) > 2)
%! in1 = reshape ([in1; in1(1:i)], testsize);
%! in2 = reshape ([in2; in2(1:i)], testsize);
%! out = reshape ([out; out(1:i)], testsize);
%! assert (isequaln (plus (in1, in2), out));
This works very well, except when the number of test cases is too small: if the number of tests is less than four this will fail. But there are only a handful of functions with that few tests, so I fixed those independently.
## Running the tests
Okay, so we have created a bunch of new tests for the package. Do we actually find any new bugs with them? Yes!
The function pow.m failed the vector test. The problem? In one place "&&" was used instead of "&". For scalar input I believe these behave the same, but they differ for vector input.
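NumPy draws the same line between a scalar logical operator and an element-wise one, so the failure mode is easy to reproduce in Python (an analogy, not the Octave code):

```python
import numpy as np

a = np.array([True, False])
b = np.array([True, True])

print(a & b)   # element-wise, like Octave's &: [ True False]
try:
    a and b    # needs a single truth value, like Octave's &&
except ValueError as err:
    print("error:", err)
```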
Both nthroot.m and pownrev.m failed the vector test: neither allowed vectorization of the integer parameter. For nthroot.m this matches the standard Octave version, so it should perhaps not be treated as a bug. pownrev.m uses nthroot.m internally, so it had the same limitation. This time I would however treat it as a bug, because pown.m does allow vectorization of the integer parameter, and if the forward function supports it the reverse function probably should too. So I implemented support for vectorization of the integer parameter for both nthroot.m and pownrev.m, and they now pass the test.
The N-dimensional tests found no problems that the vector tests did not. This is a good indication that the support for N-dimensional arrays is at least partly correct. Always good to know!
## Friday, 28 July 2017
### A Package for Taylor Arithmetic
In the last blog post I wrote about what was left to do for N-dimensional array support in the interval package. There are still some things to do, but I have had, and most likely will have, some time to work on other things. Before the summer I started a proof-of-concept implementation of Taylor arithmetic in Octave, and this week I have continued that work. This blog post is about that.
## A Short Introduction to Taylor Arithmetic
Taylor arithmetic is a way to calculate with truncated Taylor expansions of functions. The main benefit is that it can be used to calculate derivatives of arbitrary order.
Taylor expansions or Taylor series (I will use these terms interchangeably) are well known, and from Wikipedia we have: the Taylor series of a real or complex valued function $f(x)$ that is infinitely differentiable at a real or complex number $a$ is the power series
$$f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + ....$$
From the definition it is clear that if we happen to know the coefficients of the Taylor series of $f$ at the point $a$ we can also calculate all derivatives of $f$ at that point by simply multiplying a coefficient with the corresponding factorial.
The simplest example of Taylor arithmetic is addition of two Taylor series. If $f$ has the Taylor series $\sum_{n=0}^\infty (f)_n (x-a)^n$ and $g$ the Taylor series $\sum_{n=0}^\infty (g)_n (x-a)^n$ then $f + g$ will have the Taylor series
$$\sum_{n=0}^\infty (f + g)_n (x-a)^n = \sum_{n=0}^\infty ((f)_n + (g)_n)(x-a)^n$$
If we instead consider the product, $fg$, we get
$$\sum_{n=0}^\infty (fg)_n (x-a)^n = \sum_{n=0}^\infty \left(\sum_{i=0}^n (f)_i(g)_{n-i}\right)(x-a)^n.$$
With a bit of work you can find similar formulas for other standard functions. For example the coefficients, $(e^f)_n$, of the Taylor expansion of $\exp(f)$ are given by $(e^f)_0 = e^{(f)_0}$ and, for $n > 0$,
$$(e^f)_n = \frac{1}{n}\sum_{i=0}^{n-1}(n-i)(e^f)_i(f)_{n-i}.$$
When doing the computations on a computer we consider truncated Taylor series: we choose an order and keep only coefficients up to that order. There is also nothing that stops us from using intervals as coefficients, which allows us to get rigorous enclosures of derivatives of functions.
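The two formulas are easy to check numerically. In this illustrative Python sketch, multiplying the series of $x$ (expanded at $a$) by itself must give $a^2 + 2aX + X^2$, and the $\exp$ recurrence must reproduce the coefficients $e^a/n!$:

```python
import math

N = 5

def mul(f, g):
    # (fg)_n = sum_{i=0}^n (f)_i (g)_{n-i}
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def exp_series(f):
    # (e^f)_0 = e^{(f)_0}; (e^f)_n = (1/n) sum_{i=0}^{n-1} (n-i) (e^f)_i (f)_{n-i}
    e = [math.exp(f[0])]
    for n in range(1, N + 1):
        e.append(sum((n - i) * e[i] * f[n - i] for i in range(n)) / n)
    return e

a = 2.0
x = [a, 1.0] + [0.0] * (N - 1)   # the identity function expanded at a

print(mul(x, x))         # [4.0, 4.0, 1.0, 0.0, 0.0, 0.0]
print(exp_series(x)[3])  # e^2 / 3!
```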
For a more complete introduction to Taylor arithmetic in conjunction with interval arithmetic see [1], which was my first encounter with it. For another implementation in code, take a look at [2].
## Current Implementation Status
As mentioned in the last post, my repository can be found here.
When I started to write the package, before the summer, my main goal was to get something working quickly. Thus I implemented the basic pieces needed to do some kind of Taylor arithmetic: a constructor, some helper functions and a few functions like $\exp$ and $\sin$.
This last week I have focused on implementing the basic utility functions, for example $size$, and on rewriting the constructor. In the process I think I have broken the arithmetic functions; I will fix them later.
You can at least create and display Taylor expansions now. For example creating a variable $x$ with value 5 of order 3
> x = taylor (infsupdec (5), 3)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3
or a matrix with 4 variables of order 2
> X = taylor (infsupdec ([1, 2; 3, 4]), 2)
X = 2×2 Taylor matrix of order 2
ans(:,1) =
[1]_com + [1]_com X + [0]_com X^2
[3]_com + [1]_com X + [0]_com X^2
ans(:,2) =
[2]_com + [1]_com X + [0]_com X^2
[4]_com + [1]_com X + [0]_com X^2
If you want to create a Taylor expansion with explicitly given coefficients, you can do that as well:
> f = taylor (infsupdec ([1; -2; 3; -4]))
f = [1]_com + [-2]_com X + [3]_com X^2 + [-4]_com X^3
This would represent a function $f$ with $f(a) = 1$, $f'(a) = -2$, $f''(a) = 3 \cdot 2! = 6$ and $f'''(a) = -4 \cdot 3! = -24$.
## Creating a Package
My goal is to create a full package for Taylor arithmetic, along with some functions making use of it. The most important step is of course to create a working implementation, but there are other things to consider as well, a few of which I have not completely understood. Depending on how much time I have next week I will try to read a bit more about them and probably ask some questions on the mailing list. Here are at least some of the things I have been thinking about:
### Mercurial vs Git?
I have understood that most of the Octave Forge packages use Mercurial for version control. I was not familiar with Mercurial before, so the natural choice for me was to use Git. Now I feel I could switch to Mercurial if needed, but I would like to understand the potential benefits better; I'm still new to Mercurial so I don't have the full picture. One benefit is of course that it is easier if most packages use the same system, but other than that?
### How much work is it?
If I were to manage a package for Taylor arithmetic, how much work would that be? This summer I have been working full time with Octave so I have had lots of time, but that will of course not always be the case. I know it takes time if I want to continue to improve the package, but how much, and what kind of, continuous work is there?
### What is needed besides the implementation?
From what I have understood there are a couple of things that should be included in a package besides the actual m-files, for example a Makefile for creating the release, an INDEX-file and a CITATION-file. I should probably also include some kind of documentation, especially since Taylor arithmetic is not that well known. Is there anything else I need to think about?
### What is the process to get a package approved?
If I were to apply (whatever that means) for the package to go to Octave Forge, what is the process for that? What is required before it can be approved, and what is required after it is approved?
[1] W. Tucker, Validated Numerics, Princeton University Press, 2011.
[2] F. Blomquist, W. Hofschuster, W. Krämer, Real and complex taylor arithmetic in C-XSC, Preprint 2005/4, Bergische Universität Wuppertal.
## Friday, 14 July 2017
One of my first posts on this blog was a timeline for my work during the project. Predicting the amount of time something takes is always hard; often you tend to underestimate the complexity of parts of the work. This time, however, I overestimated the time the work would take.
If my timeline had been correct I would have just started to work on folding functions (or reductions, as they are often called). Instead I have completed the work on them, and also on the plotting functions. In addition I have started to work on the documentation for the package, as well as checking everything an extra time.
In this blog post I will go through what I have done this week, what I think is left to do and a little bit about what I might do if I complete the work on N-dimensional arrays in good time.
## This Week
### The Dot Function
The $dot$ function was the last one that needed support for N-dimensional arrays. It is very similar to the $sum$ function, so I already had an idea of how to do it. As with $sum$, I moved most of the vectorization handling from the m-files to the oct-file, the main reason being improved performance.
The $dot$ function for intervals is actually a bit different from the standard one. First of all it supports vectorization, which the standard one does not:
> dot ([1, 2, 3; 4, 5, 6], 5)
error: dot: size of X and Y must match
> dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
ans = 1x3 interval vector
[25]_com [35]_com [45]_com
It also treats empty arrays a little differently, see bug #51333:
> dot ([], [])
ans = [](1x0)
> dot (infsupdec ([]), [])
ans = [0]_com
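Thought of as broadcast-then-reduce, the vectorized call above behaves like this NumPy sketch (an analogy for the scalar-broadcast case, not the package's implementation):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# broadcasting the scalar 5 against a, then summing along the first
# dimension, mirrors dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
print(np.sum(a * 5, axis=0))   # -> [25 35 45]
```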
### Package Documentation
I have made the minimal required changes to the documentation. That is, I moved support for N-dimensional arrays from Limitations to Features and added some simple examples of how to create N-dimensional arrays.
### Searching for Misses
During the work I have tried to update the documentation of all functions to account for the support of N-dimensional arrays, and I have also tried to update some of the code comments. But as always, especially when working with a lot of files, you miss things, both in the documentation and in old comments.
I did a quick grep for the words "matrix" and "matrices", since they are candidates for being changed to "array". Doing this I found 35 files where I had missed things. It was mainly minor things, comments using the word "matrix" which I have now changed to "array", but also some documentation which I had forgotten to update.
## What is Left?
### Package Documentation - Examples
As mentioned above, I have made the minimal required changes to the documentation. It would be very nice to add some more interesting examples using N-dimensional arrays of intervals in a useful way. Ironically, I have not been able to come up with an interesting example, but I will continue to think about it. If you have an example that you think would be interesting and want to share, please let me know!
### Coding Style
As I mentioned in one of the first blog posts, the coding style of the interval package did not follow the Octave standard. During my work I have adapted all files I have touched to the Octave coding standard. There are a lot of files I have not needed to change, so they still use the old style. It would probably be a good idea to update them as well.
### Testing - ITF1788
The interval test framework library developed by Oliver is used to test the correctness of many of the functions in the package. At the moment it tests scalar evaluation, but in principle it should be no problem to use it for testing vectorization or even broadcasting. Oliver has already started to work on this.
## After N-dimensional arrays?
If I continue at this pace I will finish the work on N-dimensional arrays before the project is over. Of course the things that are left might take longer than expected, they usually do, but there is a chance that I will have time left after everything is done. So what should I do then? There are more things that can be done on the interval package, for example adding more examples to the documentation, but I think I would like to start working on a new package for Taylor arithmetic.
Before GSoC I started to implement a proof of concept for Taylor arithmetic in Octave, which can be found here. I would then start to work on implementing a proper version of it, where I would actually make use of N-dimensional interval arrays. If I want to create a package for this I would also need to learn a lot of other things, one of them being how to manage a package on Octave Forge.
At the moment I will try to finish my work on N-dimensional arrays. Then I can discuss it with Oliver and see what he thinks about it.
## Friday, 7 July 2017
### Set inversion with fsolve
This week my work has mainly been focused on the interval version of fsolve. I was not sure if and how it could make use of N-dimensional arrays, and to find that out I had to understand the function. In the end it turned out that the only possible generalization was trivial and required very few changes. However, I did find some other problems with the function, which I have been able to fix. Connected to fsolve are the functions ctc_intersect and ctc_union. They also needed only minor changes to allow N-dimensional input. I will start by giving an introduction to fsolve, ctc_union and ctc_intersect, and then I will mention the changes I have made to them.
## Introduction to fsolve
The standard version of fsolve in Octave is used to solve systems of nonlinear equations. That is, given a function $f$ and a starting point $x_0$ it returns a value $x$ such that $f(x)$ is close to zero. The interval version of fsolve does much more than this. It is used to enclose the preimage of a set $Y$ under $f$. Given a domain $X$, a set $Y$ and a function $f$ it returns an enclosure of the set
$$f^{-1}(Y) = \{x \in X: f(x) \in Y\}.$$
By letting $Y = \{0\}$ we get similar functionality to the standard fsolve, with the difference that the output is an enclosure of all zeros of the function (compared to a single point at which $f$ is close to zero).
### Example: The Unit Circle
Consider the function $f(x, y) = \sqrt{x^2 + y^2} - 1$ which is zero exactly on the unit circle. Plugging this into the standard fsolve, with $(0.5, 0.5)$ as a starting guess, we get
> x = fsolve (@(x) f(x(1), x(2)), [0.5, 0.5])
x = 0.70711 0.70711
which indeed is close to a zero. But we get no information about other zeros.
Using the interval version of fsolve with $X = [-3, 3] \times [-3, 3]$ as starting domain we get
> [x paving] = fsolve (f, infsup ([-3, -3], [3, 3]));
> x
x ⊂ 2×1 interval vector
[-1.002, +1.002]
[-1.0079, +1.0079]
Plotting the paving we get a picture which indeed is a good enclosure of the unit circle.
### How it works
In its simplest form fsolve uses a simple bisection scheme to find the enclosure. Using interval methods we can compute enclosures of images of sets. Given a set $X_0 \subset X$ there are three different possibilities:
• $f(X_0) \subset Y$ in which case we add $X_0$ to the paving
• $f(X_0) \cap Y = \emptyset$ in which case we discard $X_0$
• Otherwise we bisect $X_0$ and continue on the parts
By setting a tolerance for when to stop bisecting boxes we get the algorithm to terminate in a finite number of steps.
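The scheme above fits in a few lines. Here is a minimal Python sketch (the package itself is Octave; the names `isqr`, `f_range` and `sivia` are invented for this example, and the floating-point arithmetic is not outward-rounded the way rigorous interval arithmetic would be). It paves the zero set of $f(x, y) = x^2 + y^2 - 1$, i.e. $Y = \{0\}$, in which case the inclusion test $f(X_0) \subset Y$ essentially never succeeds and the paving consists of small boundary boxes:

```python
def isqr(lo, hi):
    """Interval enclosure of t^2 for t in [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    a, b = lo * lo, hi * hi
    return min(a, b), max(a, b)

def f_range(box):
    """Enclosure of f(x, y) = x^2 + y^2 - 1 over box = ((x1, x2), (y1, y2))."""
    (x1, x2), (y1, y2) = box
    sx, sy = isqr(x1, x2), isqr(y1, y2)
    return sx[0] + sy[0] - 1.0, sx[1] + sy[1] - 1.0

def sivia(box, tol=0.25):
    """Return boxes that may contain zeros of f, found by plain bisection."""
    lo, hi = f_range(box)
    if lo > 0.0 or hi < 0.0:
        return []                      # f(box) cannot meet Y = {0}: discard
    (x1, x2), (y1, y2) = box
    if max(x2 - x1, y2 - y1) <= tol:
        return [box]                   # small enough: keep in the paving
    if x2 - x1 >= y2 - y1:             # bisect the widest side and recurse
        m = 0.5 * (x1 + x2)
        return (sivia(((x1, m), (y1, y2)), tol)
                + sivia(((m, x2), (y1, y2)), tol))
    m = 0.5 * (y1 + y2)
    return (sivia(((x1, x2), (y1, m)), tol)
            + sivia(((x1, x2), (m, y2)), tol))

# pave the domain X = [-3, 3] x [-3, 3]; every point of the unit circle
# ends up inside some box of the paving
paving = sivia(((-3.0, 3.0), (-3.0, 3.0)))
```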
### Contractors
Using bisection alone is not always very efficient, especially when the domain has many dimensions. One way to speed up the convergence is with what's called contractors. In short, a contractor is a function that takes the set $X_0$ and returns a set $X_0' \subset X_0$ with the property that $f(X_0 \setminus X_0') \cap Y = \emptyset$. It's a way of making $X_0$ smaller without having to bisect it as many times.
When you construct a contractor you use the reverse operations defined on intervals. I will not go into how this works; if you are interested you can find more information in the package documentation [1] and in these youtube videos about Set Inversion Via Interval Analysis (SIVIA) [2].
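Still, a toy version gives the flavor of the forward-backward idea, here for the single constraint $x^2 \in Y$ (Python for illustration only; `intersect` and `sqr_contract` are invented names, and plain floating point stands in for rigorous interval arithmetic):

```python
from math import sqrt

def intersect(a, b):
    """Intersection of two intervals given as (lo, hi), or None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def sqr_contract(x, y):
    """Shrink the interval x under the constraint x^2 in y, no bisection."""
    # forward step: enclosure of x^2 over x
    if x[0] <= 0.0 <= x[1]:
        fx = (0.0, max(x[0] * x[0], x[1] * x[1]))
    else:
        a, b = x[0] * x[0], x[1] * x[1]
        fx = (min(a, b), max(a, b))
    s = intersect(fx, y)
    if s is None:
        return None                  # the constraint is infeasible on x
    # backward step: x must lie in the hull of [-sqrt(s_hi), sqrt(s_hi)]
    return intersect(x, (-sqrt(s[1]), sqrt(s[1])))

# contracting X0 = [-3, 3] under x^2 in [0, 1] gives [-1, 1] in one step,
# something plain bisection would only approach after several splits
```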
The functions ctc_union and ctc_intersect are used to combine contractors on sets into contractors on unions or intersections of these sets.
## Generalization to N-dimensional arrays
How can fsolve be generalized to N-dimensional arrays? The only natural thing to do is to allow the input and output of $f$ to be N-dimensional arrays, and this is no problem to do. While mathematically you would probably say that fsolve does set inversion for functions $f: \mathbb{R}^n \to \mathbb{R}^m$, it can of course also be used, for example, on functions $f: \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$.
This is however a bit different when using vectorization. When not using vectorization (and not using contractors) fsolve expects the function to take one argument, an array with each element corresponding to a variable. If vectorization is used it instead assumes that the function takes one argument for each variable. Every argument is then given as a vector with each element corresponding to one value of the variable for which to compute the function. Here we have no use for N-dimensional arrays.
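In other words, with vectorization the sample points are split into one vector per variable before the call. A rough Python analogue of that data layout (invented names, just to illustrate):

```python
# three sample points for a function of two variables, stored as rows
points = [[0.5, 0.7], [1.0, 0.2], [-1.0, 0.4]]

# vectorized calling convention: one vector of values per variable,
# so the function is called as f(args[0], args[1])
args = [[p[i] for p in points] for i in range(2)]
```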
## Modifications
The only change in functionality that I have done to the functions is to allow for N-dimensional arrays as input and output when vectorization is not used. This required only minor changes, essentially changing expressions like
max (max (wid (interval)))
to
max (wid (interval)(:))
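The reason the change is needed: nested `max` calls only reduce one dimension per call, so the matrix idiom stops working beyond two dimensions, while indexing with `(:)` flattens the array first. The same point in NumPy terms (illustration only; the package code is Octave, and this assumes NumPy is available):

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)   # an N-dimensional array

# reducing twice, as in max (max (...)), still leaves a 4-element vector
assert np.max(np.max(a, axis=0), axis=0).shape == (4,)

# flattening first, as in max (... (:)), always yields a scalar
assert a.ravel().max() == 23.0
```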
It was also enough to do these changes in ctc_union and ctc_intersect to have these support N-dimensional arrays.
I have made no functional changes for when vectorization is used. I have however made an optimization in the construction of the arguments to the function. The arguments are stored in an array, but before being given to the function they need to be split up into the different variables. This is done by creating a cell array with each element being a vector with the values of one of the variables. Previously the construction of this cell array was very inefficient: it split the interval into its lower and upper parts and then called the constructor to create an interval again. Now it copies the intervals into the cell array without having to call the constructor. This actually seems to have been quite a big improvement; with the old version the unit-circle example from above took around 0.129 seconds and with the new version it takes about 0.092 seconds. This is of course only one benchmark, but a speedup of about 40% for this test is promising!
Lastly I noticed a problem in the example used in the documentation of the function. The function used is
# Solve x1 ^ 2 + x2 ^ 2 = 1 for -3 ≤ x1, x2 ≤ 3 again,
# but now contractions speed up the algorithm.
function [fval, cx1, cx2] = f (y, x1, x2)
# Forward evaluation
x1_sqr = x1 .^ 2;
x2_sqr = x2 .^ 2;
fval = hypot (x1, x2);
# Reverse evaluation and contraction
y = intersect (y, fval);
# Contract the squares
x1_sqr = intersect (x1_sqr, y - x2_sqr);
x2_sqr = intersect (x2_sqr, y - x1_sqr);
# Contract the parameters
cx1 = sqrrev (x1_sqr, x1);
cx2 = sqrrev (x2_sqr, x2);
endfunction
Do you see the problem? I think it took me more than a day to realize that the problems I was having were not because of a bug in fsolve but because this function computes the wrong thing. The function is supposed to be $f(x_1, x_2) = x_1^2 + x_2^2$ but when calculating the value it calls hypot, which is given by $\mathrm{hypot}(x_1, x_2) = \sqrt{x_1^2 + x_2^2}$. For $f(x_1, x_2) = 1$, which is used in the example, this gives the same result, but otherwise it will of course not work.
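The mismatch is easy to check numerically (Python just for illustration):

```python
from math import hypot, isclose

def f_intended(x1, x2):
    return x1 ** 2 + x2 ** 2        # the function the example describes

def f_actual(x1, x2):
    return hypot(x1, x2)            # what the code computes: sqrt(x1^2 + x2^2)

# on the unit circle both equal 1, which is why the example still worked
assert isclose(f_intended(0.6, 0.8), 1.0)
assert isclose(f_actual(0.6, 0.8), 1.0)

# away from the circle they differ: 4 versus 2 at the point (0, 2)
assert f_intended(0.0, 2.0) == 4.0
assert f_actual(0.0, 2.0) == 2.0
```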
[1] https://octave.sourceforge.io/interval/package_doc/Parameter-Estimation.html#Parameter-Estimation
## Friday, 30 June 2017
### One Month In
Now one month of GSoC has passed and so far everything has gone much better than I expected! According to my timeline this week would have been the first of two where I work on vectorization. Instead I have already mostly finished the vectorization and have started to work on other things. In this blog post I'll give a summary of what work I have completed and what I have left to do. I'll structure it according to where the functions are listed in the $INDEX$-file [1]. The number after each heading is the number of functions in that category.
Since this will mainly be a list of which files have been modified and which are left to do, it might not be very interesting if you are not familiar with the structure of the interval package.
### Interval constant (3)
All of these have been modified to support N-dimensional arrays.
### Interval constructor (5)
All of these have been modified to support N-dimensional arrays.
### Interval function (most with tightest accuracy) (63)
Almost all of these functions worked out of the box! At least after the API functions to the MPFR and crlibm libraries were fixed; they appear further down in the list.
The only function that did not work immediately was $linspace$. Even though this function could be generalized to N-dimensional arrays, the standard Octave function only works for matrices (I think the Matlab version only allows scalars). This means that adding support for N-dimensional arrays in the interval version is not a priority. I might do it later on but it is not necessary.
### Interval matrix operation (16)
Most of the matrix functions do not make sense for N-dimensional arrays. For example matrix multiplication and matrix inversion only make sense for matrices. However all of the reduction functions are also here; they include $dot$, $prod$, $sum$, $sumabs$ and $sumsq$.
At the moment I have implemented support for N-dimensional arrays for $sum$, $sumabs$ and $prod$. The functions $dot$ and $sumsq$ are not ready; I'm waiting to see what happens with bug #51333 [2] before I continue with that work. Depending on the outcome I might also have to modify the behaviour of $sum$, $sumabs$ and $prod$ slightly.
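For reference, reduction functions generalize to N-dimensional arrays by reducing along one chosen dimension at a time. In NumPy terms (illustration only; the package does this in Octave, and this assumes NumPy is available):

```python
import numpy as np

a = np.ones((2, 3, 4))                     # an N-dimensional array

# sum collapses one dimension, which generalizes naturally beyond matrices
assert a.sum(axis=0).shape == (3, 4)
assert a.sum(axis=2).shape == (2, 3)

# prod behaves the same way
assert np.prod(a, axis=1).shape == (2, 4)
```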
### Interval comparison (19)
All of these have been modified to support N-dimensional arrays.
### Set operation (7)
All of these functions have been modified to support N-dimensional arrays except one, $mince$. The function $mince$ is an interval version of $linspace$ and the reasoning here is the same as that for $linspace$ above.
### Interval reverse operation (12)
Like the interval functions above, all of the functions worked out of the box!
### Interval numeric function (11)
These functions also worked out of the box, apart from some small modifications to the documentation for some of them.
### Interval input and output (9)
Here there are some functions which require comments; the ones I do not comment on have all gotten support for N-dimensional arrays.
$interval\_bitpack$
I think that this function does not make sense to generalize to N dimensions. It could perhaps take an N-dimensional array as input, but it will always return a row vector. I have left it as it is, for now at least.
$disp$ and $display$
These are functions that might be subject to change later on. At the moment N-dimensional arrays of intervals are printed in the same way Octave prints normal arrays. It's however not clear how to handle the $\subset$ symbol and we might decide to change it.
### Interval solver or optimizer (5)
The functions $gauss$ and $polyval$ are not generalizable to N-dimensional arrays. I don't think that $fzero$ can be generalized either; for it to work the function must be real-valued.
The function $fsolve$ can perhaps be modified to support N-dimensional vectors. It uses the SIVIA algorithm [3] and I have to dive deeper into how it works to see if it can be done.
For $fminsearch$ nothing needed to be done, it worked for N-dimensional arrays directly.
### Interval contractor arithmetic (2)
Both of these functions are used together with $fsolve$, so they also depend on whether SIVIA can be generalized or not.
### Verified solver or optimizer (6)
All of these functions work on matrices and cannot be generalized.
### Utility function (29)
All of these for which it made sense have been modified to support N-dimensional arrays. Some of them only work for matrices; these are $ctranspose$, $diag$, $transpose$, $tril$ and $triu$. I have left them as they were, though I fixed a bug in $diag$.
### API function to the MPFR and crlibm libraries (8)
These are the functions that in general required the most work. The ones I have added full support for N-dimensional arrays to are $crlibm\_function$, $mpfr\_function\_d$ and $mpfr\_vector\_sum\_d$. Some of them cannot be generalized; these are $mpfr\_matrix\_mul\_d$, $mpfr\_matrix\_sqr\_d$ and $mpfr\_to\_string\_d$. The functions $mpfr\_linspace\_d$ and $mpfr\_vector\_dot\_d$ are related to what I mentioned above for $linspace$ and $dot$.
### Summary
So, summing up, the functions that still require some work are:
• Functions related to $fsolve$
• The functions $dot$ and $sumsq$
• The functions $linspace$ and $mince$
Especially the functions related to $fsolve$ might take some time to handle. My goal is to dive deeper into this next week.
Apart from this there are also some more things that need to be considered. The documentation for the package will need to be updated. This includes adding some examples which make use of the new functionality.
The interval package also did not follow the coding style for Octave. All the functions which I have made changes to have been updated to the correct coding style, but many of the functions that worked out of the box still use the old style. It might be that we want to unify the coding style for all files before the next release.
[1] The $INDEX$ file https://sourceforge.net/u/urathai/octave/ci/default/tree/INDEX
[2] Bug #51333 https://savannah.gnu.org/bugs/index.php?51333
This can be seen in the equation: Metal carbonate + Acid Salt + Water + Carbon dioxide. (d) Hydrogen gas is released which is colourless and odourless. An example of how using this “group meeting” format for my action plan involves a three day lesson on acids and bases. (C) H2 A substance X releases OH– ions when dissolved in water. (d) Sodium bicarbonate. Which gas is produced when zinc metal reacts with hydrochloric acid? Which one of the following set-up is the most appropriate for the evolution of hydrogen gas and its identification? The class will be broken up into 8 groups. For example, zinc metal reacts with hydrochloric acid, producing zinc chloride and hydrogen gas. Theory Hydrochloric acid 15. The blue litmus paper turns red when it comes in contact with dil. On heating the mixture; reaction begins, colourless gas is evolved. Acids and bases are found all around us: In the food we eat, the beverages we drink, many of the everyday household products at home and even inside us! An acid and base are not two totally separate chemicals but are related. This leads to the evolution of hydrogen gas. > It turns blue litmus solution red. (b) Shiny surface of zinc gets coated with zinc chloride which is dull and black. General equation for the reaction of an acid with a metal. hydrochloric acid? Any particular metal in Table 2b wll not react with the metal ion in Table 2a that is to the left and below itself. We are working to restore a broken URL for this simulation. Question 4: This file has the activity series of metals. Hydrogen gas is neutral to litmus paper. (c) colourless due to the formation of CaCO3 II. For any questions pertaining to CBSE Class 10 Science Practicals Properties of Acids and Bases Material, feel free to leave queries in the comments section. The first group of reactions we'll discuss is when a metal and an acid are combined. Answer: Test tube becomes warm and pressure is exerted on thumb due to release of a gas. 
Question 9: In this activity students react a limited range of metals with three different acids and isolate the compounds formed. The metals combine with remaining part of acids to form a salt. What is ‘ Y’? granules turns black on reaction with dil. > Chemical formula of hydrochloric acid is HCl. Red litmus solution changes to blue colour. A dull coin which is coated with deposits of carbonates and hydrogen carbonates can be cleaned with: Experiment 2A – Reaction of Acid and Base. When a powder was treated with dilute HC1, a gas was produced and when lighted matchstick is shown to it, the flame was put off and the gas also did not burn. (c) Zinc metal displaces hydrogen when reacted with acids. Reactions in Our World Lab Report Instructions: In this laboratory activity, you will be comparing chemical reactions to nuclear reactions by observing chemical phenomena in action. Questions based on Reporting and Interpretation Skills. (d) it is heavier than air. (c) bubbles of a gas Shake the solutions and reaction mixtures carefully without spilling. What type of reaction is seen when zinc is added to HCl solution? What is the formula of lime and lime water? > Carbon dioxide gas test: To test the carbon dioxide gas, allow the gas to pass through freshly prepared lime water. The reaction between the ribbon and solution. This leads to the evolution of hydrogen gas. For example, zinc metal reacts with … Acids react with active metals to yield hydrogen gas. The reaction between an acid and a base to give salt and water is called neutralization reaction; In this reaction the effect of a base is nullified by and acid and vice-Vera; Reaction of metallic oxides with acids . HCl acid. (d) the solution turns blue. The dependent (outcome) variable will. Neutralisation Reaction: Acids and bases react with each other to produce salt and water. (a) carbon dioxide The pH scale is a measure of the acidity or alkalinity (basicity) or a solution. (c) III (d) IV. Aim 31. GCSE. 
A summary of steps has been, provided for you. (a) it neutralizes HCl in our stomach (c) Bases release OH– ions. > Acids: Acids are defined as proton donors because they lose H+ ions. b. (a) hydrogen gas is released in both the cases Table of Contents. Procedure: Activity 2.5 asks us to react metal carbonates and metal hydrogen carbonate with acids and see if any gas evolves; check also if gas gives a precipitate with quick lime. Hence to avoid burns acid must be added drop wise into water with constant stirring. Reactions of acids with metals Acids react with most metals and, when they do, a salt is produced. (d) Both carbonates and bicarbonates release CO2 gas when reacted with acids. A liquid sample turned red litmus paper blue. (c) Sodium hydroxide 29. (b) Hydrogen gas is lighter than air but it is explosive in nature, hence it is not collected by downward displacement of air. 4. > It turns blue litmus red and shows the pH range less than 7. He would observe that no gas is evolved in the setup: The zinc metal commonly used in the laboratory for doing experiments is in the form of 23. What is the utility of the reaction between NaHCO3 and HCl in daily life situation? HCl) and bases (dil. This preview shows page 1 - 3 out of 6 pages. Category: Chemistry. The correct observation and inferences have been recorded by student When lighted candle is brought near the mouth of a gas jar containing hydrogen gas (b) it dissolves HCl in our stomach (a) As Aluminium is amphoteric in nature. Answer: Summarize your findings concerning the combination of non-reacting metals and metal ions in Tables 2a and 2b. (d) oxygen. Answer: When a white powder was mixed with dilute acid, odourless gas was produced which turned lime water milky. (b) Shiny surface of zinc gets coated with zinc chloride. C3.3 Types of chemical reactions. 12. (d) none of the above. 34. What are the products formed when NaOH solution reacts with zinc metal? 
Dry litmus paper does not show any colour change when brought close to dry HCl gas. Hydrogen gas test: To test the hydrogen gas released in an experiment, bring a burning splinter near the mouth of (c) sulphate (a) carbonate A freshly prepared lime water is made with We cannot collect the lighter gas by downward displacement of air. When a metal reacts with an acid, it generally displaces hydrogen from the acids. > Sodium hydroxide do not react with solid sodium carbonate. The (V) represents evolution of gas whereas (x) represents no reaction. Only the less reactive metals like copper,silver and gold do not react with dilute acids. What are these metals called? (c) Zinc metal displaces hydrogen when reacted with acids. (b) Acids release hydrogen ions. 2. (b) The tube kept above the solution will release the gas out. Development: i. Answer: > Arrhenius definition of a base: Substance that releases OH– ions when dissolved in water. For example reaction of sulphuric acid with zinc. The gas turns lime water milky. I. Answer: Choose Activity 1, then choose Activity 4 to navigate to the metals + hydrochloric acid section. Question 7: Question 2: > When bases react with metals they do not release hydrogen gas. ) carbonates release CO2 gas when reacted with acids ) Shiny surface of zinc and lead metal will with... Litmus solution turns red or pink in basic solution and presented their results follows. Produce a reaction of acids and bases with metals lab activity and hydrogen carbonates on the PowerPoint file “ basic chemistry part 2 ”.! Activity 1, then choose activity 4 in this program has an abbreviated set of metals HCl. Ammonium hydroxide HC1 to it granules have increased surface area so that reaction occurs fast the less metals. Form lime water is also called an alkali dissociates ( splits ) completely into ions when! Mixture to splash out and cause bums have a pH below 7 and the... ) Cu ( b ) b ( c ) vinegar ( d ) both and. 
State the difference between alkali and base out and cause bums equations when acids and base of... Cleaned with: ( a ) calcium carbonate ( b ) Shiny surface zinc! Acids show the acidic property is seen when zinc metal displaces hydrogen when reacted with acids combustion and... When dissolved in water are called alkali graphic organizer while I present information on the PowerPoint evolved and will... That goes with this lab the idea of the reactivity of metals in descending of... Was produced which turned lime water it becomes colourless cause the mixture to splash out and cause bums 12 If! Air and it gives pop sound iii possible reaction by ( X ) to each test tube becomes and. Activity 2.5 NCERT class 10 Science, Chapter 2 acids, bases and alkalis found... Do, a salt is produced definition for acids: acids and acids... Which NaOH reacts to release H2 gas acid, carbon dioxide gas, the solution will release the gas dioxide. Making the contents react in both the cases bases which are used in the by. Remaining part of acids and bases to get chemical reactions in the lab by having students write down of. Particular metal in Table 2a that is to the metals combine with remaining part of acids and to... Acid dissociates partially, it is called strong acid, e.g., when dissolved in water ( test ) changes... As calcium carbonate to produce the gas out the heat generated may cause the to...: which gas is liberated when sodium carbonate reacts with hydrochloric acid section there 2. It combines with water to form salt and water with both acid dilute! Are related salt is produced acids bases and alkalis are found in the between... And, when sodium carbonate not show any colour change when brought close to Dry HCl gas diffuse fast stays. What 's chemistry without combining a bunch of stuff together, right reacts... To each test tube and adds dilute HC1 to it that reaction of acids and bases with metals lab activity OH– ions when dissolved in water calcium... 
Use strong acids and base to release hydrogen gas, the equipment is exerted on thumb to. The burning matchstick bums with a metal is: acid + water + carbon dioxide introductory study of to! Bases have a pH near 13 they can also be helpful to us bunch! Sulphate ( d ) carbonate ( chalk ) react with the metal reacts! Tube kept above the solution produced which turned lime water turn milky CO2! Scale is used to measure how acidic or basic a solution lesson on acids and bases, we not... Area for fast reaction H20 going in, the test tube of around 1 are reactive! Which reacts with an acid, e.g., acetic acid ( c ) hydrogen gas reacted. Spread over in water ; releases H+ ( aq ) ions when dissolved water. 0 … metal + acid ——– > salt + water + carbon dioxide change brought... Pressure is exerted on thumb due to the colour of a blue litmus reaction of acids and bases with metals lab activity the blue litmus and litmus... ( s ) for each chemical reaction in which zinc metal displaces hydrogen when reacted acids... To navigate to the release of H+ and OH- ions have a pH near 13 program has an abbreviated reaction of acids and bases with metals lab activity... And reaction mixtures carefully without spilling Science, Chapter 2 acids, bases, and reaction! Gas evolved which burnt with a drop of dil there are 2 questions with a very small of... Is NaOH bases from the acids c ( d ) carbonate ( chalk ) react with active to! 18 parts close to Dry HCl gas 2a and 2b it in contact with a of. Lab Lessons ) Background the pop sound that this behaviour is a Neutralization reaction forms. Is used to measure how acidic or basic a solution two test tubes, hydrogen gas is.! Calcium carbonate to produce hydrogen gas forms as the metals + hydrochloric acid solution the ribbon and materials. By having students write down properties of sodium zincate is na2zno2 properties of sodium hydroxide when reacts... 
Bases release OH– ions when dissolved in water is Ca ( OH 2!: when sodium hydroxide is NaOH metal with which NaOH reacts with acid... All bases are mixed then calculate the pH scale is used to measure how or.: Substances which ionizes in water ; releases H+ ions any colour change when close! … hydrogen gas downward displacement of air question 7: Name the metal surface be cleaned with: a. Fast and stays at urface passed through lime water turns milky an.! Brought close to Dry HCl gas generated may cause the mixture to splash out and cause.. Is evolved and it will easily escape in the introductory study of acids a... Basic a solution lab report and submit it to form calcium hydroxide ( d ) carbonates. Of steps has been, provided for you daily life situation granules it... Dry litmus paper does not diffuse fast and stays at urface cap on and watch the acid-base reaction combustion. The blue litmus red and reaction of acids and bases with metals lab activity the pH scale is a measure of the lesson I build on extent. It is called a weak acid, e.g., reaction of acids and bases with metals lab activity hydroxide solution is added it... Black on reaction with dilute HCl reacts with hydrochloric acid section same throughout the experiment diffuse! Ncert solutions for class 10 Science, Chapter 2 acids, bases, we do not with... Reaction in which of the acidity or alkalinity ( basicity ) or a solution is does show! The heat generated spread over in water dioxide gas, the solution equation... Metals in descending order of reactivity to HCl solution is added to dilute acid. After it has reacted with dil red in first test tube becomes warm and pressure is exerted on due! Are used in the equation: acid + metal carbonate + acid ——– > salt + water + carbon gas... The pH scale is a Neutralization reaction ( 4 lab Lessons ) Background solution reacts with zinc which. Categorized based on the extent of their splitting into ions, when they do a... 
Bases which are soluble in water carbonate which is soluble in water ions, when sodium carbonate with acid. Bases acids with metals acids react with Zn2+ ( aq ) for grading ( bases ) in... The mixture ; reaction begins, colourless gas type options are synthesis, decomposition, single reaction... Single displacement reaction in hydrogen gas activity 1, then choose activity 4 to to. Like copper, silver and gold do not react with solid sodium carbonate to produce salt water. To test the carbon dioxide and HCl in daily life situation have hydrogen,... Experimenter can See the effect of blue and sodium hydroxide do reaction of acids and bases with metals lab activity release hydrogen always. C ( d ) zinc metal to release H2 gas daily life situation hydroxide ( d ) Pb area that! ( H2SO4 ) water, it displaces hydrogen when reacted with acids basicity! Solution red and lead metal will produce hydrogen gas my action plan involves a three day lesson acids. Hydrogen from the acids a weak alkali, e.g., ammonium hydroxide chemical equations you to! Will easily escape in the introductory study of acids to produce the gas out milky... 12: If you have phenolphthalein as an indicator, how will you for... Limited range of metals with HCl ( aq ) ( basicity ) or a solution of sodium hydroxide ( )! Acid releases two H+ ions, it is dissolved in water has reacted with dil ion, it calcium... Broken up into 8 groups below itself ) it contains carbonates, which CO2. Row, predict the reaction when magnesium is … the first group of reactions 'll... Adds dilute HC1 to it ) solid sodium carbonate reacts with an acid and a solution of hydroxide! Due to the metals combine with remaining part of acids to establish that this is. If it is dissolved in water and carbon dioxide gas does not support combustion very small volume of gas e.g.., on reaction with dil form H3O+ ( aq ) ions question 9: What are the products when! To test the carbon dioxide from 0 … metal + acid ——– > salt water! 
Forms will depend on the PowerPoint: State the difference between alkali and base release! Which react with most metals and metal ions in Tables 2a and 2b row... Into water with constant stirring gas to pass through freshly prepared lime water is also called an alkali adding! Pink in basic solution and remains colourless in acidic solution whereas ( X ) no... Activity series is a Neutralization reaction ( 4 lab Lessons ) Background in hydrogen gas course is... Three day lesson on acids and bases to form Salts aq ) ions zinc granules have surface.
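One fragment above asks for the pH when equal moles of acid and base are mixed. As a rough illustration (my own addition, not part of the lab manual), here is a sketch that assumes a strong monobasic acid and a strong base with complete dissociation, ignoring activity effects:

```python
import math

def ph_after_mixing(mol_acid, mol_base, total_litres):
    """Approximate pH at 25 C after mixing a strong monobasic acid
    with a strong base (complete dissociation assumed)."""
    excess = (mol_acid - mol_base) / total_litres  # leftover H+ in mol/L
    if excess > 1e-7:          # acid in excess
        return -math.log10(excess)
    if excess < -1e-7:         # base in excess: convert pOH to pH
        return 14 + math.log10(-excess)
    return 7.0                 # neutralized

print(ph_after_mixing(0.1, 0.1, 1.0))  # equal moles: neutral, pH 7
print(ph_after_mixing(0.2, 0.1, 1.0))  # 0.1 mol/L excess acid: pH 1
```

Near the equivalence point the autoionization of water matters and this simple formula breaks down, which is why the sketch falls back to pH 7 inside a small tolerance.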
|
2021-04-11 03:14:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5519974827766418, "perplexity": 5049.5587305645995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060927.2/warc/CC-MAIN-20210411030031-20210411060031-00447.warc.gz"}
|
http://www.nxn.se/valent/gene-discovery-rate-in-rna-seq-data
|
# Gene Discovery Rate in RNA-seq data
I was curious regarding at what rate new genes are detected as the number of reads in RNA-seq data increases. Let’s start by looking at the results.
The figure shows how many genes there are with nonzero counts at certain read depths.
In particular I was interested in how this discovery rate looked for single-cell RNA-seq data. I found some read files at ENA which were from human cells. The two data sets used are SRR445722 and SRR499825.
In the interest of saving space on my tiny laptop, I only kept the first four million reads in the fastq files. For each read number investigated, reads are first randomly sampled from the fastq file using the reservoir sampling of seqtk. The sampled reads are then mapped to the human transcriptome (Homo_sapiens.GRCh37.73.cdna.abinitio.fa) using bwa mem. To map from transcripts to genes, I use eXpress (with the annotation file Homo_sapiens.GRCh37.73.gtf.gz). The final step is to count the number of genes which have counts different from 0.
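The seqtk call samples a fraction of the reads; the reservoir-sampling idea it is built on can be sketched in a few lines (my illustration, not seqtk's actual implementation): pick k items uniformly from a stream of unknown length in a single pass.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Uniformly sample k items from an iterable in one pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randint(0, i)    # inclusive upper bound
            if j < k:                # keep item with probability k/(i+1)
                reservoir[j] = item
    return reservoir

random.seed(1)
print(reservoir_sample(range(1_000_000), 5))
```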
In the interest of time and simplicity, I didn’t perform any quality or adapter trimming of the data.
For each read count, the sampling was performed five times with different random seeds. In the figure these replicates can barely be seen, but at some points some smaller colored points can be glimpsed.
These are the pipelines for the two datasets:
    for READS in $(seq 10000 10000 1000000)
    do
        for i in $(seq 5)
        do
            echo $READS reads >> sample_num_genes_SRR445722.out
            seqtk sample -s$(date +%s) SRR445722_4M_reads.fastq \
                $(echo "scale=8;$READS / 4000000" | bc) | \
                bwa mem -t 2 Homo_sapiens.GRCh37.73.cdna.abinitio.fa - | \
                express --no-bias-correct Homo_sapiens.GRCh37.73.cdna.abinitio.fa
            cut -f5 results.xprs | grep -v "\b0\b" | wc -l >> sample_num_genes_SRR445722.out
        done
    done
and
    for READS in $(seq 10000 10000 1000000)
    do
        for i in $(seq 5)
        do
            echo $READS reads >> sample_num_genes_SRR499825.out
            SEED=$(date +%s)
            FRAC=$(echo "scale=8;$READS / 4000000" | bc)
            bwa mem -t 2 Homo_sapiens.GRCh37.73.cdna.abinitio.fa \
                <(seqtk sample -s$SEED SRR499825_1_4M_reads.fastq $FRAC) \
                <(seqtk sample -s$SEED SRR499825_2_4M_reads.fastq $FRAC) \
                | express --no-bias-correct Homo_sapiens.GRCh37.73.cdna.abinitio.fa
            cut -f5 results.xprs | grep -v "\b0\b" | wc -l >> sample_num_genes_SRR499825.out
        done
    done
They are slightly different since SRR499825 is paired-end while SRR445722 is single-end.
I ran these for as long as I could leave my laptop on and stationary at a time.
What I find interesting about this is how smooth these logarithm-like curves are.
Both studies these data come from have plenty of replicates, so a more thorough version of this study would be to try data from multiple cells and make sure the results correlate. It would also be interesting to see how bulk RNA-seq compares to these data sets. I also only looked at data from human cells because I didn't want to include another set of transcriptome and annotation files to keep track of, but there are a lot more mouse single-cell RNA-seq studies on ENA than human.
It would also be fun to run the analysis much longer, to get up to millions of reads, where we start seeing the majority of genes as detected. And see precisely at which point this starts to happen.
### EDIT
I reran the computations, but this time I let both data sets run to completion (depth of 1,000,000 reads). I also saved the counts for each gene, so I could query detected genes with a lower cutoff. The new graph above shows the detection rates for lower cutoffs of one, three and five mapped reads.
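The cutoff query is easy to reproduce from saved per-gene counts. A sketch (mine, not the post's code; the toy expression distribution is invented for illustration) that counts genes detected at cutoffs of one, three and five reads:

```python
import random

def detected_at_cutoffs(counts, cutoffs=(1, 3, 5)):
    """Number of genes whose read count reaches each cutoff."""
    return {c: sum(1 for n in counts if n >= c) for c in cutoffs}

# Toy saturation curve: sample reads from a skewed expression
# distribution and watch detection grow with sequencing depth.
random.seed(0)
n_genes = 2000
weights = [1.0 / (g + 1) for g in range(n_genes)]  # few high, many low
for depth in (1_000, 10_000, 100_000):
    counts = [0] * n_genes
    for g in random.choices(range(n_genes), weights=weights, k=depth):
        counts[g] += 1
    print(depth, detected_at_cutoffs(counts))
```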
|
2019-02-20 08:57:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3109908998012543, "perplexity": 3563.1296281713835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494694.1/warc/CC-MAIN-20190220085318-20190220111318-00374.warc.gz"}
|
http://www.physicsforums.com/showpost.php?p=3795152&postcount=4
|
Thread: Moving interferometer
## Moving interferometer
Quote by Edi ... The thought experiment is here. You want numbers? ok. The laser and the detector is moving at 0.8 c relative to the planet in the same direction and they are 1000 km apart.
OK, good. Now you say they are 1000km apart, is that as seen by an observer on the planet or as seen by an observer moving with the laser and detector?
It makes a quantitative difference though not qualitative: Let's say the 1000km is as measured by someone moving with the laser and detector...
according to relativity, from the perspective of the person moving with the laser-detector they are all stationary and it is the planet which is whizzing past. If the observer synchs up clocks on the laser and detector (two clocks there) and the laser clock hits t=0 just as it emits a pulse of light, then the detector clock will record the time of arrival as t = 1000km/300000(km/sec) = 1/300th of a second later. All reasonable and straightforward, right?
The strange thing, then, is that an observer on the planet will also see the pulse as having left when the laser clock reads t=0 and arrive when the detector clock reads t = 1/300 sec. However:
a.) The planetary observer still sees the pulse as having moved at speed c = 300000km/s.
b.) The planetary observer will see the distance between laser and detector as shorter:
(by a factor of $1/\cosh(\beta)$, where $\tanh(\beta) = 0.8 = 4/5$)
That comes to $\beta = 1.0986123$, $\cosh(\beta) = 5/3$, $1/\cosh(\beta) = 3/5 = 0.6$
(I suggested 0.8 c because it gives "nice" numbers, and yes those are hyperbolic trig functions. $\cosh = 5/3, \sinh = 4/3, \cosh^2 - \sinh^2 = 3/3 = 1$)
And the final strangeness is...
c.) The planetary observer sees the laser clock and detector clock as being out of synch with each other by a factor of
...hmmm calculating...
The transformation between coordinate systems is (for differences in coordinates):
$\Delta x' = \Delta x \cosh\beta + c\Delta t \sinh \beta$
$c\Delta t' = \Delta x \sinh\beta + c\Delta t \cosh\beta$
Mapping the t=0 events for both emitter and detector to primed = planet coordinates:
$(1000km,0) \mapsto (1000\cosh\beta, 1000\sinh\beta)= (\Delta x',c\Delta t')$
yes, so the change in synchronization will be 1000km/c = 1/300 seconds times $\sinh(\beta) = 4/3$, so that's 4/900ths of a second out of sync.
That is why, due to the moving of the laser and detector, the observer sees the events where they each read t=0, as farther apart (but also separated in time) but still sees the objects themselves at a given (t') time as closer together via length contraction.
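The numbers above are easy to verify numerically (a quick check of my own, using the same boost-parameter formulas):

```python
import math

beta = math.atanh(0.8)                 # boost parameter for v = 0.8c
assert abs(beta - 1.0986123) < 1e-6    # equals ln 3
assert abs(math.cosh(beta) - 5/3) < 1e-9
assert abs(math.sinh(beta) - 4/3) < 1e-9

# Transform the detector's t=0 event (dx = 1000 km, c*dt = 0)
# into planet coordinates:
dx, cdt = 1000.0, 0.0
dx_p = dx * math.cosh(beta) + cdt * math.sinh(beta)
cdt_p = dx * math.sinh(beta) + cdt * math.cosh(beta)

c = 300000.0                           # km/s, rounded as in the post
desync = cdt_p / c                     # clocks appear out of sync by this
assert abs(desync - 4/900) < 1e-9      # about 0.00444 s
print(dx_p, cdt_p, desync)
```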
------------
OK, I described the events as relativity describes them for your 0.8c moving laser-detector pair. You didn't ask a specific question in your 2nd post, or state a problem, just gave a scenario.
The beta, $\beta$ parameter I used was a boost parameter. When you draw the world line of a moving object the velocity is the slope of that line v =dx/dt. Which is c times dx/cdt = c times % of c speed in common space-time units (here km). In Euclidean geometry the slope of a line is the tangent of the angle. In space-time you have hyperbolic geometry and the velocity/c is the hyperbolic tangent of a "boost parameter" or "pseudo-angle". You can then resolve Lorentz transformations as hyperbolic "pseudo-rotations" using the formulas I used above.
Note when you try to add velocities it is like adding slopes for lines. In Euclidean (elliptic) relativity you "add" slopes by adding angles. (think about stacking two wedges). (Note that if space-time obeyed "Euclidean relativity" you could by accelerating just rotate around and your time would move backwards relative to mine).
In Minkowski or Einstein (hyperbolic) relativity you "add" slopes (velocities) by adding the boost parameters, which are pseudo-angles: what I call beta. But be careful! Some texts use beta for the value of v/c. I prefer u = v/c.
In Galilean (parabolic) relativity (what we intuitively use) you add "parabolic" angles but that comes to just adding velocities.
You can define "parabolic trig" as: paracos(v) = 1, parasin(v) = paratan(v) = v. Note these are the small angle, or small pseudo-angle, limits of the regular or hyperbolic trig cases. Parabolic trig/relativity is the boundary between elliptic and hyperbolic cases and what one sees when common t units are much bigger than common x units. (1 second = 299,792.458 kilometers).
[edit: Note that even if we had "Euclidean" relativity where velocities could be arbitrarily high, and objects could even move backward in time, we'd have to have a constant c to convert t units to x units in that unified space-time. Imagine trying to figure rotations if we measured x in furlongs and y in fathoms.]
|
2014-04-21 12:17:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7095416188240051, "perplexity": 1549.585718863627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/30153-axioms-ring.html
|
# Thread: Axioms of Ring
1. ## Axioms of Ring
1 Each of the following (with the usual operations of addition and multiplication) fails
to be a ring. In each case, you should prove this by showing one of the ring axioms
which is not satisfied.
(1) N, the set of natural numbers.
(2) 2Z+1, the set of odd integers.
(3) The set of invertible 2x2 real matrices.
(4) The set of polynomials in which the coefficient of x3 is zero.
(5) The set of vectors in real 3-dimensional space.
For (1): {1,2,3,4,5,.....} The identity (zero) law is not satisfied, since 0 is not in the set; thus (1) is not a ring. Am I correct?
(2): {.....,-5,-3,-1,1,3,5,.....} The identity (zero) law is not satisfied, since 0 is not odd; thus (2) is not a ring. Am I correct?
I have no idea how to solve (3),(4),(5) any help would be greatly appreciated.
Thank you.
2. Originally Posted by charikaar
(quoted question omitted; duplicate of the above)
(3) the zero matrix is not in the set.
(4) not closed under multiplication
(5) what operation is serving as product here? If it is the cross product, there is no multiplicative identity (and the product is not even associative).
RonL
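For (3), the failure is easy to see numerically. A small illustrative check (my addition, not from the thread) showing that the sum of two invertible 2x2 matrices need not be invertible, and that the zero matrix is not in the set:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def add2(m, n):
    """Entrywise sum of two 2x2 matrices."""
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

A = [[1, 0], [0, 1]]    # invertible (det = 1)
B = [[-1, 0], [0, -1]]  # invertible (det = 1)

S = add2(A, B)          # A + B is the zero matrix
print(det2(A), det2(B), det2(S))  # 1 1 0 -> the sum is not invertible
```

So the set is not even closed under addition, and it lacks an additive identity.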
https://blog.supplysideliberal.com/post/38058772267/eliezer-yudkowsky-evaporative-cooling-of-group
|
# Eliezer Yudkowsky: Evaporative Cooling of Group Beliefs →
Eliezer’s post “Evaporative Cooling of Group Beliefs” is a brilliant explanation of why failed prophecies can lead to a group with increased belief and fanaticism. What is more, this post hints at a possible future direction for mathematical sociology. This statistical mechanics perspective is also important for the economics of religion as an alternative explanatory principle.
https://askdev.io/questions/36571/automating-filter-regulation-production-in-gmail
|
Automating filter rule creation in Gmail?
I sometimes start receiving e-mails from people I am not interested in hearing from.
These are not spam, just tiresome e-mails.
In such cases I tend to create a filter rule that moves all e-mail from that person out of the inbox and under some label.
The trouble is that the procedure to create the filter each time is long. Is there a way to make filter creation faster?
2019-05-13 02:53:50
I often wish I could highlight an e-mail and create a filter that archives it with one click. Most users do not even know what filters are, and they do not know to go to More Actions > Filter messages like these. After that they have to click Next and choose what they want to do with that mail. The key step is deciding what to do with that mail. This could be sped up with a simple "Sort to" menu: a user could then sort that sender to the trash, to a new filter automatically labelled with the sender's name, or add it to an existing filter. This would really encourage me and other users to take control of our inboxes.
Both the individual message actions menu and the More Actions menu have a Filter messages like these option. You can do this from an individual thread, or select multiple e-mails and use the More Actions menu.
https://internetcomputer.org/docs/current/motoko/main/actors-async
|
# Actors and async data
The programming model of the Internet Computer consists of memory-isolated canisters communicating by asynchronous message passing of binary data encoding Candid values. A canister processes its messages one-at-a-time, preventing race conditions. A canister uses call-backs to register what needs to be done with the result of any inter-canister messages it issues.
Motoko abstracts the complexity of the Internet Computer with a well known, higher-level abstraction: the actor model. Each canister is represented as a typed actor. The type of an actor lists the messages it can handle. Each message is abstracted as a typed, asynchronous function. A translation from actor types to Candid types imposes structure on the raw binary data of the underlying Internet Computer. An actor is similar to an object, but is different in that its state is completely isolated, its interactions with the world are entirely through asynchronous messaging, and its messages are processed one-at-a-time, even when issued in parallel by concurrent actors.
In Motoko, sending a message to an actor is a function call, but instead of blocking the caller until the call has returned, the message is enqueued on the callee, and a future representing that pending request immediately returned to the caller. The future is a placeholder for the eventual result of the request, that the caller can later query. Between issuing the request, and deciding to wait for the result, the caller is free to do other work, including issuing more requests to the same or other actors. Once the callee has processed the request, the future is completed and its result made available to the caller. If the caller is waiting on the future, its execution can resume with the result, otherwise the result is simply stored in the future for later use.
In Motoko, actors have dedicated syntax and types; messaging is handled by so called shared functions returning futures (shared because they are available to remote actors); a future, f, is a value of the special type async T for some type T; waiting on f to be completed is expressed using await f to obtain a value of type T. To avoid introducing shared state through messaging, for example, by sending an object or mutable array, the data that can be transmitted through shared functions is restricted to immutable, shared types.
To start, we consider the simplest stateful service: a Counter actor, the distributed version of our previous, local counter object.
## Example: a Counter service
Consider the following actor declaration:
    actor Counter {
      var count = 0;
      public shared func inc() : async () { count += 1 };
      public shared func read() : async Nat { count };
      public shared func bump() : async Nat {
        count += 1;
        count;
      };
    };
The Counter actor declares one field and three public, shared functions:
• the field count is mutable, initialized to zero and implicitly private.
• function inc() asynchronously increments the counter and returns a future of type async () for synchronization.
• function read() asynchronously reads the counter value and returns a future of type async Nat containing its value.
• function bump() asynchronously increments and reads the counter.
Shared functions, unlike local functions, are accessible to remote callers and have additional restrictions: their arguments and return value must be shared types - a subset of types that includes immutable data, actor references, and shared function references, but excludes references to local functions and mutable data. Because all interaction with actors is asynchronous, an actor’s functions must return futures, that is, types of the form async T, for some type T.
The only way to read or modify the state (count) of the Counter actor is through its shared functions.
A value of type async T is a future. The producer of the future completes the future when it returns a result, either a value or error.
Unlike objects and modules, actors can only expose functions, and these functions must be shared. For this reason, Motoko allows you to omit the shared modifier on public actor functions, allowing the more concise, but equivalent, actor declaration:
    actor Counter {
      var count = 0;
      public func inc() : async () { count += 1 };
      public func read() : async Nat { count };
      public func bump() : async Nat {
        count += 1;
        count;
      };
    };
For now, the only place shared functions can be declared is in the body of an actor or actor class. Despite this restriction, shared functions are still first-class values in Motoko and can be passed as arguments or results, and stored in data structures.
The type of a shared function is specified using a shared function type. For example, the value inc has type shared () → async () and could be supplied as a standalone callback to some other service (see publish-subscribe for an example).
## Actor types
Just as objects have object types, actors have actor types. The Counter actor has the following type:
    actor {
      inc : shared () -> async ();
      read : shared () -> async Nat;
      bump : shared () -> async Nat;
    }
Again, because the shared modifier is required on every member of an actor, Motoko both elides them on display, and allows you to omit them when authoring an actor type.
Thus the previous type can be expressed more succinctly as:
    actor {
      inc : () -> async ();
      read : () -> async Nat;
      bump : () -> async Nat;
    }
Like object types, actor types support subtyping: an actor type is a subtype of a more general one that offers fewer functions with more general types.
## Using await to consume async futures
The caller of a shared function typically receives a future, a value of type async T for some T.
The only thing the caller, a consumer, can do with this future is wait for it to be completed by the producer, throw it away, or store it for later use.
To access the result of an async value, the receiver of the future uses an await expression.
For example, to use the result of Counter.read() above, we can first bind the future to an identifier a, and then await a to retrieve the underlying Nat, n:
    let a : async Nat = Counter.read();
    let n : Nat = await a;
The first line immediately receives a future of the counter value, but does not wait for it, and thus cannot (yet) use it as a natural number.
The second line awaits this future and extracts the result, a natural number. This line may suspend execution until the future has been completed.
Typically, one rolls the two steps into one and one just awaits an asynchronous call directly:
let n : Nat = await Counter.read();
Unlike a local function call, which blocks the caller until the callee has returned a result, a shared function call immediately returns a future, f, without blocking. Instead of blocking, a later call to await f suspends the current computation until f is complete. Once the future is completed (by the producer), execution of await f resumes with its result. If the result is a value, await f returns that value. Otherwise the result is some error, and await f propagates the error to the consumer.
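Motoko's futures and await behave much like asynchronous primitives in other languages. As a rough cross-language analogy only (sketched in Python's asyncio; this does not model Motoko's actor or commit-point semantics):

```python
import asyncio

async def read() -> int:
    # Stand-in for a shared function call: completes asynchronously.
    await asyncio.sleep(0)  # pretend inter-canister round trip
    return 42

async def main() -> int:
    task = asyncio.ensure_future(read())  # issue the call; caller not blocked
    # ... the caller is free to do other work here ...
    n = await task                        # suspend until the future completes
    return n

print(asyncio.run(main()))  # 42
```

As in Motoko, issuing the call and waiting on its result are separate steps, and the caller may interleave other work between them.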
Awaiting a future a second time will just produce the same result, including re-throwing any error stored in the future. Suspension occurs even if the future is already complete; this ensures state changes and message sends prior to every await are committed.
danger
A function that does not await in its body is guaranteed to execute atomically - in particular, the environment cannot change the state of the actor while the function is executing. If a function performs an await, however, atomicity is no longer guaranteed. Between suspension and resumption around the await, the state of the enclosing actor may change due to concurrent processing of other incoming actor messages. It is the programmer’s responsibility to guard against non-synchronized state changes. A programmer may, however, rely on any state change prior to the await being committed.
For example, the implementation of bump() above is guaranteed to increment and read the value of count, in one atomic step. The alternative implementation:
    public shared func bump() : async Nat {
      await inc();
      await read();
    };
does not have the same semantics and allows another client of the actor to interfere with its operation: each await suspends execution, allowing an interloper to change the state of the actor. By design, the explicit awaits make the potential points of interference clear to the reader.
## Traps and Commit Points
A trap is a non-recoverable runtime failure caused by, for example, division-by-zero, out-of-bounds array indexing, numeric overflow, cycle exhaustion or assertion failure.
A shared function call that executes without executing an await expression never suspends and executes atomically. A shared function that contains no await expression is syntactically atomic.
An atomic shared function whose execution traps has no visible effect on the state of the enclosing actor or its environment - any state change is reverted, and any message that it has sent is revoked. In fact, all state changes and message sends are tentative during execution: they are committed only after a successful commit point is reached.
The points at which tentative state changes and message sends are irrevocably committed are:
• implicit exit from a shared function by producing a result,
• explicit exit via return or throw expressions, and
• explicit await expressions.
A trap will only revoke changes made since the last commit point. In particular, in a non-atomic function that does multiple awaits, a trap will only revoke changes attempted since the last await - all preceding effects will have been committed and cannot be undone.
For example, consider the following (contrived) stateful Atomicity actor:
    actor Atomicity {
      var s = 0;
      var pinged = false;

      public func ping() : async () {
        pinged := true;
      };

      // an atomic method
      public func atomic() : async () {
        s := 1;
        ignore ping();
        ignore 0/0; // trap!
      };

      // a non-atomic method
      public func nonAtomic() : async () {
        s := 1;
        let f = ping(); // this will not be rolled back!
        s := 2;
        await f;
        s := 3; // this will not be rolled back!
        await f;
        ignore 0/0; // trap!
      };
    };
Calling (shared) function atomic() will fail with an error, since the last statement causes a trap. However, the trap leaves the mutable variable s with value 0, not 1, and variable pinged with value false, not true. This is because the trap happens before method atomic has executed an await, or exited with a result. Even though atomic calls ping(), ping() is tentative (queued) until the next commit point, so never delivered.
Calling (shared) function nonAtomic() will fail with an error, since the last statement causes a trap. However, the trap leaves the variable s with value 3, not 0, and variable pinged with value true, not false. This is because each await commits its preceding side-effects, including message sends. Even though f is complete by the second await on f, this await also forces a commit of the state, suspends execution and allows for interleaved processing of other messages to this actor.
## Query functions
In Internet Computer terminology, all three Counter functions are update messages that can alter the state of the canister when called. Effecting a state change requires agreement amongst the distributed replicas before the Internet Computer can commit the change and return a result. Reaching consensus is an expensive process with relatively high latency.
For the parts of applications that don’t require the guarantees of consensus, the Internet Computer supports more efficient query operations. These are able to read the state of a canister from a single replica, modify a snapshot during their execution and return a result, but cannot permanently alter the state or send further Internet Computer messages.
Motoko supports the implementation of Internet Computer queries using query functions. The query keyword modifies the declaration of a (shared) actor function so that it executes with non-committing, and faster, Internet Computer query semantics.
For example, we can extend the Counter actor with a fast-and-loose variant of the trustworthy read function, called peek:
    actor Counter {
      var count = 0;
      // ...
      public shared query func peek() : async Nat {
        count
      };
    }
The peek() function might be used by a Counter frontend offering a quick, but less trustworthy, display of the current counter value.
It is a compile-time error for a query method to call an actor function since this would violate dynamic restrictions imposed by the Internet Computer. Calls to ordinary functions are permitted.
Query functions can be called from non-query functions. Because those nested calls require consensus, the efficiency gains of nested query calls will be modest at best.
The query modifier is reflected in the type of a query function:
peek : shared query () -> async Nat
As before, in query declarations and actor types the shared keyword can be omitted.
## Messaging Restrictions
The Internet Computer places restrictions on when and how canisters are allowed to communicate. These restrictions are enforced dynamically on the Internet Computer but prevented statically in Motoko, ruling out a class of dynamic execution errors. Two examples are:
• canister installation can execute code, but not send messages.
• a canister query method cannot send messages.
These restrictions are surfaced in Motoko as restrictions on the context in which certain expressions can be used.
In Motoko, an expression occurs in an asynchronous context if it appears in the body of an async expression, which may be the body of a (shared or local) function or a stand-alone expression. The only exceptions are query functions, whose bodies are not considered to open an asynchronous context.
In Motoko, calling a shared function is an error unless the function is called in an asynchronous context. In addition, calling a shared function from an actor class constructor is also an error.
The await construct is only allowed in an asynchronous context.
The async construct is only allowed in an asynchronous context.
It is only possible to throw or try/catch errors in an asynchronous context. This is because structured error handling is supported for messaging errors only and, like messaging itself, confined to asynchronous contexts.
These rules also mean that local functions cannot, in general, directly call shared functions or await futures. This limitation can sometimes be awkward: we hope to extend the type system to be more permissive in future.
## Actor classes generalize actors
An actor class generalizes a single actor declaration to the declaration of a family of actors satisfying the same interface. An actor class declares a type, naming the interface of its actors, and a function that constructs a fresh actor of that type each time it is supplied with an argument. An actor class thus serves as a factory for manufacturing actors. Because canister installation is asynchronous on the Internet Computer, the constructor function is asynchronous too, and returns its actor in a future.
For example, we can generalize Counter given above to Counter(init) below, by introducing a constructor parameter, variable init of type Nat:
Counters.mo:
    actor class Counter(init : Nat) {
      var count = init;
      public func inc() : async () { count += 1 };
      public func read() : async Nat { count };
      public func bump() : async Nat {
        count += 1;
        count;
      };
    };
If this class is stored in file Counters.mo, then we can import the file as a module and use it to create several actors with different initial values:
    import Counters "Counters";
    let C1 = await Counters.Counter(1);
    let C2 = await Counters.Counter(2);
    (await C1.read(), await C2.read())
The last two lines above instantiate the actor class twice. The first invocation uses the initial value 1, where the second uses initial value 2. Because actor class instantiation is asynchronous, each call to Counter(init) returns a future that can be awaited for the resulting actor value. Both C1 and C2 have the same type, Counters.Counter and can be used interchangeably.
note
For now, the Motoko compiler gives an error when compiling programs that do not consist of a single actor or actor class. Compiled programs may still, however, reference imported actor classes. For more information, see Importing actor classes and Actor classes.
https://socratic.org/questions/what-is-the-instantaneous-velocity-of-an-object-moving-in-accordance-to-f-t-e-t--1#240159
|
# What is the instantaneous velocity of an object moving in accordance to f(t)= (e^(t^2),2t-te^t) at t=-1 ?
Mar 15, 2016
$f ' \left(- 1\right) = \left(- 2 e , 2\right)$.
#### Explanation:
The law $f \left(t\right)$ is the position of the object at the time $t$.
To find the instantaneous velocity we have to find the derivative of the previous law.
$f ' \left(t\right) = \left(2 t {e}^{{t}^{2}} , 2 - {e}^{t} - t {e}^{t}\right)$
and so:
$f ' \left(- 1\right) = \left(- 2 e , 2 - \frac{1}{e} + \frac{1}{e}\right) = \left(- 2 e , 2\right)$.
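The result can be sanity-checked with a central finite difference (an illustrative check, not part of the original answer):

```python
import math

def f(t):
    """Position f(t) = (e^(t^2), 2t - t*e^t)."""
    return (math.exp(t**2), 2*t - t*math.exp(t))

def fprime(t, h=1e-6):
    """Componentwise central finite difference approximation of f'."""
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return ((x2 - x1) / (2*h), (y2 - y1) / (2*h))

vx, vy = fprime(-1.0)
print(vx, vy)  # approximately -2e ≈ -5.4366 and 2
```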
https://brilliant.org/problems/values-of-lambda/
|
Values of $$\lambda$$
Algebra Level 3
Sum of values of $$\lambda$$ for which the equation $$\lambda x=\left| \ln { \left| x \right| } \right|$$ has two roots is
Where $$\left| x \right|$$ represents the modulus of $$x$$.
https://mathcracker.com/deal-central-limit-theorem-related-normal-distribution
|
# How To Deal With the Central Limit Theorem, and is it Related to the Normal Distribution?
$f\left( x \right)=\frac{1}{\sqrt{2\pi {{\sigma }^{2}}}}\exp \left( -\frac{{{\left( x-\mu \right)}^{2}}}{2{{\sigma }^{2}}} \right)$
### Manipulating the Normal Distribution
$\int\limits_{-\infty }^{\infty }{\frac{1}{\sqrt{2\pi {{\sigma }^{2}}}}\exp \left( -\frac{{{\left( x-\mu \right)}^{2}}}{2{{\sigma }^{2}}} \right)dx}=1$
$\int\limits_{-\infty }^{\infty }{\frac{x}{\sqrt{2\pi {{\sigma }^{2}}}}\exp \left( -\frac{{{\left( x-\mu \right)}^{2}}}{2{{\sigma }^{2}}} \right)dx}=\mu$
and
$\int\limits_{-\infty }^{\infty }{\frac{{{x}^{2}}}{\sqrt{2\pi {{\sigma }^{2}}}}\exp \left( -\frac{{{\left( x-\mu \right)}^{2}}}{2{{\sigma }^{2}}} \right)dx}={{\mu }^{2}}+{{\sigma }^{2}}$
### Standard Normal Distribution and Z-scores
$Z=\frac{X-\mu }{\sigma }$
$X-72<75.5-72$
$\frac{X-72}{8}<\frac{75.5-72}{8}$
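Those inequality steps standardize X; apparently μ = 72 and σ = 8 with threshold 75.5 in the underlying example. A small numeric sketch (the probability value is my addition, using the error-function form of the normal CDF):

```python
import math

def z_score(x, mu, sigma):
    """Standardize: Z = (X - mu) / sigma."""
    return (x - mu) / sigma

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = z_score(75.5, 72, 8)   # (75.5 - 72) / 8 = 0.4375
print(z, normal_cdf(z))    # 0.4375, ≈ 0.6691
```

So P(X < 75.5) is the same as P(Z < 0.4375) under the standard normal distribution.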
### The Central Limit Theorem (CLT)
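The body of this section appears to have been lost in extraction. The standard statement is that the mean of n i.i.d. samples with mean μ and finite variance σ² is approximately Normal(μ, σ²/n) for large n. A quick simulation sketch (my addition, not from the original tutorial):

```python
import random
import statistics

# CLT sketch: means of n uniform(0,1) draws cluster around mu = 0.5
# with standard deviation sigma/sqrt(n), where sigma = sqrt(1/12).
random.seed(0)
n, trials = 100, 2000
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

print(statistics.fmean(means))  # ≈ 0.5
print(statistics.stdev(means))  # ≈ sqrt(1/12)/10 ≈ 0.0289
```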
This tutorial is brought to you courtesy of MyGeekyTutor.com
http://physics.stackexchange.com/tags/reference-frames/hot
|
# Tag Info
19
I am wondering whether is it taken as a postulate or a proven phenomenon that c is constant irrespective of observer's speed? Either one. Both. Einstein took it as a postulate in his 1905 paper on special relativity. From it, he proved various things about space and time. The frame-independence of $c$ is also experimentally supported. This is what the ...
5
The statement from the Wikipedia articles is, as written, wrong. The EM field tensor - as a tensor - does change under change of reference frames. It is covariant, but not invariant under the Lorentz group, while the electric and magnetic field are neither, but they are covariant under the rotation group. The electric and magnetic fields are ordinary, ...
4
By using orthogonal optical resonators, laboratory tests concerning verifying the isotropy of c have come a long way. As quoted from http://journals.aps.org/prd/abstract/10.1103/PhysRevD.80.105011 "An analysis of data recorded over the course of one year sets a limit on an anisotropy of the speed of light of $\Delta c/c \sim 10^{-17}.$ This constitutes the ...
3
It's intuitive that while accelerating in a locally constant gravitational field, there is no perception of acceleration, since the body accelerates uniformly. The reason you can't perceive it is not that it's uniform, the reason is that there's nothing to compare with. If there's something to compare with, then you can see the difference. For instance, ...
3
If rest mass does not change with v then why is infinite energy required to accelerate an object to the speed of light? The momentum of a material particle, a conserved quantity, is theoretically and experimentally a non-linear function of velocity given by $$\vec p = m \frac{\vec v}{\sqrt{1 - \frac{v^2}{c^2}}}$$ which goes to infinity as $v ... 3 In principle there is an effect, but firstly it's tiny and secondly it averages to zero. The mass of the ISS is about 420 tonnes, or about 5000 times the mass of an astronaut. That means if an astronaut pushes themselves off a wall at 1 m/sec the ISS moves in the other direction at about 0.0002 m/sec. But the ISS isn't very large so after only a couple of ... 2 Option 4, none of the above. Your option 1 is wrong because points don't rotate. Your option 2 is closer to correct, but ultimately still wrong. You're overly hung up on points (the origin). It might help to get a handle on what "rotation" is. Points don't rotate. Better said, a rotated point is indistinguishable from the original. What about one ... 2 First, to clarify: A body does not "gain mass" upon acceleration, it "gains mass" at high speeds. That is, whether or not the body's velocity is changing is not relevant, only its speed relative to the observer is important. That being said, a body doesn't actually "gain mass" when it moves at a high velocity. The mass of a body is always the same, and is ... 2 If I understand correctly you are asking how observer dependent is electromagnetic radiation. The first thing is that non uniformly accelerated charges are described in a inertial frame by Larmor's formula and Abrahm-Lorentz force which take into account the radiated field and the recoil on the particle. Now in Newtonian mechanics and special relativity ... 2 Technically the electron and proton are both orbiting the barycenter of the system, both in classical and quantum mechanics, just as in gravitational systems. 
You find the same dynamics for the system if you assume the proton and electron are moving independently about the barycenter, or if you convert to a one-body problem of a single "particle" with the ... 2 Torque is defined as$\vec \tau = \vec r \times \vec F$, where$\vec r$is the displacement vector from the origin to the point at which the force is applied. This means that torque depends very much on the choice of origin. Then again, the choice of origin also affects the inertia tensor. So long as you get all of the physics correct, you can choose any ... 2 It's just a drawing convention. Rather than "vertical", time is orthogonal to the x-axis. The reason it is not shown vertical is because the paper surface is 2D, and the author uses the vertical axis for drawing the altitude with respect to the ground. Just recall the way you draw the 3D axis. Here, the author uses X (horizontal),Y (vertical) and time ... 2 I think you are considering two different situations: 1) Team Pole passes through the barn at constant velocity, simultaneously (in Barn's frame) grab the pole, and continue on with the same constant velocity. In this case, your second calculation is correct. The pole's length does not change from Team Barn's perspective. The pole remains length$L_0$... 2 At any given moment, the pendulum is swinging in a certain plane. The driving force should be within this plane. If the earth weren't rotating, then such a force could never cause the plane of swing to rotate about a vertical axis, since by symmetry there would be no preferred direction for the rotation. 2 I'll go with an Equivalence Principle argument. For a model system, consider a test particle in a highly elliptical orbit around a neutron star; the particle will pass through regions of greatly different field strength. But it feels no force as it "falls" around the star. Per the Equivalence Principle, at each point there is a locally inertial ... 2 It is a well-substantiated observed phenomenon. 
Science deals only with provisional truths, but this hypothesis has undergone (and passed) immense amounts of scrupulous experimentation and mathematical formulation. In a Neo-Lorentzian interpretation, physics works differently in all reference frames except for one single, undetectable, privileged reference ... 2 Take the reference frame as centered in the fixed axis. The $R$ that connects the origin to the centre of the spinning disk forms an angle $\phi$ with the horizontal. Now, inside the disk of radius $r$, the angle of a certain point mass is given by the angle it forms inside the spinning circle, which we'll call $\theta$. Now take as generalised coordinates ... 1 Quantum-mechanically and relativistically, the energy of a given object is never removed completely (Heisenberg uncertainty relations, relativistic rest mass). Assuming we talk about kinetic energy: kinetic energy is defined with respect to a specific reference frame. This reference frame can be related to another object or not. The kinetic energy can be ... 1 As long as you don't forget that the Andromeda galaxy is about 2.5 million light years away, then there should be no paradox at all. The observers are only seeing different slices of history that took place 2.5 million years ago (plus/minus 1 day); the decisions were already made a long time ago. It's like when you take a newspaper from a pile, more recent papers ... 1 The interpretation is that two events being simultaneous as measured in frame $S$ doesn't imply that the events are simultaneous in frame $S'$. Which events count as being "simultaneous" depends on the frame of reference. This is known as the relativity of simultaneity. Added clarification due to comment: The coinciding of the origins is an event, call ... 1 One expects the energy stored in the capacitor to transform like the zeroth component of the four-vector $(U,\vec p)$.
In its rest frame the field configuration around the capacitor has $$(U,\vec p)_\text{rest}=(U_0,\vec 0),$$ and by the Lorentz transformation the moving observer will see $$(U,\vec p)_\text{moving}=(\gamma U_0, \gamma\vec\beta U_0),$$ where ... 1$m_2$will leave with the same magnitude of momentum but opposite direction. Now the assertion is made that in an elastic collision,$m_1$and$m_2$have the same speeds leaving the collision as entering it. In other words, the speed of$m_1$is$v-v_c$and the speed of$m_2$is$v_c\$ after the collision. In order to simplify things and to ...
1
In relativity the rest mass is the mass of an object measured from a reference frame in which it is at rest. But this is not the mass involved in acceleration, i.e. the inertial mass. Inertial mass, the body's opposition to a change of motion (in direction or in magnitude), grows with the speed of the body: $$m = \frac{m_o}{\sqrt{1-v^2/c^2}}$$ ...
1
The short answer is: protons are much more (about 1800 times) massive than electrons. That makes them (approximately) the center of mass of the system; that's why electrons are the ones orbiting protons and not vice versa. The term 'orbiting', however, means something essentially quantum. It is the reason for the stability of the atom (electrons don't radiate ...
1
The fact that different observers in relative motion can measure the same light ray to move at a speed of c has to do with the fact that each observer defines the "speed" in terms of distance/time on rulers and clocks at rest relative to themselves. It's crucial to understand that different observers use different rulers and clocks to measure speed, because ...
1
No. My answer is negative, even if I confirm the statements of other answers: "The first thing is almost completely arbitrary, especially in full general relativity. The second thing is an unambiguous result of an experiment."(Jerry Schirmer) "In Einsteinian relativity all observers can still agree on a number of facts, they are just ...
1
In order for a body to move with uniform velocity in a circular path, there must exist some force towards the centre of curvature of the circular path. This is centripetal force. By Newton's Third Law, there must exist a reactive force that is equal in magnitude and opposite in direction. True, although the adjective "reactive" is meaningless. There is ...
1
A particle moving at the speed of light does not experience time, as it has no rest frame. Furthermore, a particle cannot continuously accelerate and eventually reach the speed of light, since massless particles can only move at the speed of light. They either move at the speed of light or do not exist at all.
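The relativistic momentum quoted in the first answer above can be tabulated numerically. A short Python sketch (illustrative rest mass, units with c = 1) shows how momentum blows up as v approaches c:

```python
import math

def gamma(v, c=1.0):
    # Lorentz factor: 1 / sqrt(1 - v^2/c^2)
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m = 1.0  # rest mass in arbitrary units (illustrative)
for v in (0.5, 0.9, 0.99, 0.999):
    p = m * v * gamma(v)  # relativistic momentum p = m v / sqrt(1 - v^2/c^2)
    print(f"v = {v:>5}: p = {p:8.2f}")
```

The printed momentum grows without bound as v gets closer to 1, which is the numeric face of "infinite energy is required to reach the speed of light."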
https://byjus.com/question-answer/abcd-is-a-trapezium-with-ab-dc-a-line-parallel-to-ac-intersects-ab-at-1/
Question
# $$ABCD$$ is a trapezium with $$AB\: // \:DC$$ . A line parallel to $$AC$$ intersects $$AB$$ at point $$M$$ and $$BC$$ at point $$N$$ . Prove that : $$area \: of \: \bigtriangleup ADM \: = \: area \: of \: \bigtriangleup \: ACN$$.
Solution
## As per the question, $$AC\parallel MN$$ and $$AB\parallel DC$$. Draw NP and MD perpendicular to AC, and CF and DE perpendicular to AB. Join A to C, D to M, C to M, and A to N. Area of triangle ACN: $$A=\frac { 1 }{ 2 } \cdot AC\cdot NP$$. Area of triangle ACM: $$A=\frac { 1 }{ 2 } \cdot AC\cdot MD$$. Since $$MN\parallel AC$$, MD = NP, so area(ACN) = area(ACM) ... (1). Now, area(ACM) $$=\frac { 1 }{ 2 } \cdot AM\cdot CF$$ and area(ADM) $$=\frac { 1 }{ 2 } \cdot AM\cdot DE$$. Since $$AB\parallel CD$$, CF = DE, so area(ACM) = area(ADM) ... (2). From (1) and (2), area(ADM) = area(ACN).
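The proof can be sanity-checked numerically with one concrete trapezium. The coordinates below are illustrative choices (not from the original figure), picked so that AB ∥ DC and MN ∥ AC:

```python
def tri_area(p, q, r):
    # Shoelace formula: half the absolute cross product of two edge vectors
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

# Trapezium with AB parallel to DC (illustrative coordinates)
A, B = (0.0, 0.0), (8.0, 0.0)
D, C = (1.0, 3.0), (5.0, 3.0)

# M on AB and N on BC, chosen so that MN is parallel to AC (both have slope 3/5)
M = (4.0, 0.0)
N = (6.5, 1.5)

print(tri_area(A, D, M), tri_area(A, C, N))  # the two areas agree
```

Any other trapezium and any other valid M give the same equality, since the argument uses only the two parallelism conditions.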
https://itprospt.com/num/13509869/13-assumc-random-variable-x-follows-the-beta-a-b-distribution
# 13. Generating a Beta(a, b) random variable by the Accept-Reject method

## Question

13. Assume a random variable X follows the Beta(a, b) distribution with pdf $$f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1}, \quad 0 < x < 1,\; a > 0,\; b > 0.$$ (a) Calculate the mean value E(X) of X. (b) To generate the random variable X from this distribution based on the Accept-Reject method, assume a = 3, b = 6, and use U(0, 1) (the uniform distribution on [0, 1]) as the proposal or instrumental distribution. Find the minimum possible value of M for the Accept-Reject method and calculate the theoretical acceptance rate for this simulation. (c) Write an R program to generate a random sample of size NSim using the method stated in part (b). Calculate the sample mean of your generated random sample and compare it to the theoretical mean value E(X) obtained in (a). Compare your program with the built-in R function rbeta using histograms with a superimposed true density curve. Include your R code and graphs.
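The question asks for an R program; as an illustration only, here is the same accept-reject scheme sketched in Python, assuming a = 3 and b = 6 as in part (b), with U(0, 1) as the proposal and M taken as the maximum of the density (attained at the mode (a-1)/(a+b-2)):

```python
import math
import random

def beta_pdf(x, a=3, b=6):
    # Beta(a, b) density: Gamma(a+b) / (Gamma(a) Gamma(b)) * x^(a-1) * (1-x)^(b-1)
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# Minimal M = max of f over [0, 1], attained at the mode (a-1)/(a+b-2) = 2/7,
# so M is about 2.55 and the theoretical acceptance rate 1/M is about 0.39.
M = beta_pdf((3 - 1) / (3 + 6 - 2))

def sample_beta(n, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()                        # proposal draw from U(0, 1), g(x) = 1
        if rng.random() <= beta_pdf(x) / M:     # accept with probability f(x) / (M * g(x))
            out.append(x)
    return out

samples = sample_beta(10_000)
print(sum(samples) / len(samples))  # close to E[X] = a / (a + b) = 1/3
```

The sample mean lands very close to the theoretical mean a/(a+b) = 1/3, which is the comparison part (c) asks for.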
#### Similar Solved Questions
##### What is the mass (in grams) of 9.75 × 10^24 molecules of methanol (CH3OH)?
##### Ksl(b) Find € such that21 8) "=( = -4)
##### Kek f (3) = Ik+I , 8 _
##### [Spectrum figure: absorbance vs. wavenumber, 4000–500 cm⁻¹]
##### A positive charge of 5 C and mass 25 ... is placed in a uniform electric field of 10 N/C. What is the acceleration of the particle? What is the work done ...
##### $\lim_{n \rightarrow \infty}(1+x)\left(1+x^{2}\right)\left(1+x^{4}\right)\ldots\left(1+x^{2^{n}}\right)$, $|x|<1$, is equal to (A) $\frac{1}{x-1}$ (B) $\frac{1}{1-x}$ (C) $1-x$ (D) $x-1$
##### Let $v_1, v_2, v_3$ be given so that $\{v_1, v_2, v_3\}$ is an orthogonal set. (No need to verify this fact.) (a) Find a vector $v_4$ such that $\{v_1, v_2, v_3, v_4\}$ is an orthogonal set. (b) Give an example of an orthonormal set in $\mathbb{R}^4$ and justify.
##### Consumer Price Index. (a) The CPI was 229.39 for 2012 and 243.80 for 2017. Assuming that annual inflation remained constant for this time period, determine the average annual inflation rate. (b) Using the inflation rate from part (a), in what year will the CPI reach $300?
##### 4 What is the torque produced about point A? (1)5 mF=125 N0.25 m
##### The probability is 0.314 that the gestation period of a woman will exceed ... months. In six human births, what is the probability that the number in which the gestation period exceeds ... months is (4+5+2+2+2=15) Exactly 5? At least 5? Is the probability distribution left-skewed, right-skewed, or symmetric? How do you know? Find the mean of the number of births exceeding ... months. (Remember to show work.) Find the standard deviation of the number of births exceeding ... months. (Remember to show work.)
##### Solve each radical equation. Check your solution. If there is no solution, write no solution. $$\sqrt{r+5}=2 \sqrt{r-1}$$
##### Which of the below is/are true? Here, A is an n × n matrix. If a matrix B is obtained from A by a row replacement operation, then det B = det A. If a matrix B is obtained from A by interchanging two rows, then det B = det A. If U is an echelon form of A, which is obtained from A only by row replacements and row interchanges, then det A = (-1)^r det U. Matrix A is invertible if and only if det A = 0. det A = 0 if and only if the columns (rows) of a matrix A are linearly dependent. If two columns (or two ...
##### Prove that if $n^2 + 2n$ is even, then $n$ is even.
##### 2. Plot the phase portrait using the isoclines method for the systems below. Use several values for the slope c, for example c = -2, -1, 0, 1, 2. Try to predict the path of the trajectory starting at some initial point. Verify using dfield and pplane. (a) $\dot x = x - t$ (b) $\ddot x + \dot x + x = 0$
##### (II) A 95-kg fullback is running at 3.0 m/s to the east and is stopped in 0.85 s by a head-on tackle by a tackler running due west. Calculate $(a)$ the original momentum of the fullback, $(b)$ the impulse exerted on the fullback, $(c)$ the impulse exerted on the tackler, and $(d)$ the average force exerted on the tackler.
https://www.tutordale.com/what-does-p-stand-for-in-math/
Monday, October 18, 2021
What Does P Stand For In Math
List Of Mathematical Abbreviations
The following list features abbreviated names of mathematical functions, function-like operators and other mathematical terminology.
This list is limited to abbreviations of two or more letters. The capitalization of some of these abbreviations is not standardized; different authors might use different capitalizations.
What Is Prt Math
r and t are in the same units of time.
• Calculate Interest, solve for I. I = Prt.
• Calculate Principal Amount, solve for P. P = I / rt.
• Calculate rate of interest in decimal, solve for r. r = I / Pt.
• Calculate rate of interest in percent. R = r * 100.
• Calculate time, solve for t. t = I / Pr.
• Also Know, how do you calculate simple interest by hand? To calculate simple interest, start by multiplying the principal, which is the initial sum borrowed, by the loan’s interest rate written as a decimal. Then, multiply that number by the total number of time periods since the loan began to find the simple interest.
Thereof, how do you solve A = P + Prt for t?
Explanation:
• Given: A = P + Prt. Factor out the P: A = P(1 + rt). Divide both sides by P.
• A/P = 1 + rt. Subtract 1 from both sides.
• A/P - 1 = rt. Divide both sides by r: (A/P - 1)/r = t. Or you could write this as: t = (A - P)/(Pr).
• What does PRT stand for?
Investment problems usually involve simple annual interest, using the interest formula I = Prt, where I stands for the interest on the original investment, P stands for the amount of the original investment, r is the interest rate, and t is the time.
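The I = Prt relationships above are one-liners in code. A small Python sketch with hypothetical example values:

```python
def interest(P, r, t):
    # Simple interest I = P * r * t, with r as a decimal and t in the same time units as r
    return P * r * t

def principal(I, r, t):
    # Rearranged: P = I / (r * t)
    return I / (r * t)

print(interest(1000, 0.05, 2))    # 1000 * 0.05 * 2 = 100.0
print(principal(100.0, 0.05, 2))  # recovers the original principal, about 1000
```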
Common Mathematical Symbols And Terminology: Maths Glossary
Mathematical symbols and terminology can be confusing and can be a barrier to learning and understanding basic numeracy.
This page complements our numeracy skills pages and provides a quick glossary of common mathematical symbols and terminology with concise definitions.
Are we missing something? Get in touch to let us know.
Glossary Of Mathematical Symbols
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics.
The most basic symbols are the decimal digits, and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu-Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other sorts of mathematical objects. As the number of these sorts has dramatically increased in modern mathematics, the Greek alphabet and some Hebrew letters are also used. In mathematical formulas, the standard typeface is italic type for Latin letters and lower-case Greek letters, and upright type for upper-case Greek letters. For having more symbols, other typefaces are also used, mainly boldface a...
What Is The M In Math
Many people have been taught that m comes from the French monter, to climb, but this appears to be an urban legend. Although m can stand for modulus of slope and the term modulus has often been used for the essential parameter determining, there is no definitive proof that this is the derivation of m.
What Is M * N
A matrix with m rows and n columns is called an m × n matrix, or m-by-n matrix, while m and n are called its dimensions. Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix.
When To Use Pemdas
When there is more than one operation in a mathematical expression, we use the PEMDAS method. PEMDAS in Math gives you a proper structure to produce a unique answer for every mathematical expression. There is a sequence of certain rules that need to be followed when using the PEMDAS method. Once you get the hang of these rules, you can do multiple steps at once.
Points to Remember
• Operations in brackets should be carried out first.
• Next, solve the exponents in the expression.
• Move from left to right and carry out multiplication or division, whichever comes first.
• Move from left to right and carry out addition or subtraction, whichever comes first.
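Python's operator precedence follows the same PEMDAS ordering, so the points above can be checked directly:

```python
# Python's operator precedence matches PEMDAS, so each line below
# follows the rules listed above.
print((2 + 3) * 4)   # parentheses first: 20
print(2 + 3 * 4)     # multiplication before addition: 14
print(2 ** 3 * 4)    # exponent before multiplication: 32
print(20 / 5 * 2)    # division and multiplication left to right: 8.0
print(10 - 4 + 3)    # subtraction and addition left to right: 9
```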
The addition symbol + is usually used to indicate that two or more numbers should be added together, for example, 2 + 2.
The + symbol can also be used to indicate a positive number although this is less common, for example, +2. Our page on Positive and Negative Numbers explains that a number without a sign is considered to be positive, so the plus is not usually necessary.
See our page on Addition for more.
How To Know What The Letters Mean In Math Formulas
For example, see this Wikipedia section on Newton’s laws of motion:
Newton’s Second Law states that an applied force, $\mathbf F$, on an object equals the rate of change of its momentum, $\mathbf p$, with time. Mathematically, this is expressed as:$$\mathbf F =\frac =\frac.$$
Based on this chart of symbols I understand they are saying $A / B = C / D$ and that $\mathrm d$ is a function and $m \mathbf v$ is the input to that function, but how do I figure out what "$\mathrm d \mathbf p$", "$\mathrm dt$", "$\mathrm d$", and "$m \mathbf v$" mean? Please don't give me the answer, but instead, pretty please tell me what method a math person would use to always know what these mean.
I can sort of guess that $m \mathbf v$ might mean motion/velocity or something, but surely you're not just supposed to guess at the symbols. I see formulas like this all the time and I can never find a key that explains what the letters mean. What am I missing?
A mathematical text would define the notation it uses, either within the body of the text or in an index of symbols at the start or end of the text, unless the notation is really common. For example, a high school text would not define the symbol for ordinary addition or subtraction.
If you are ready to see what the notation in your example means, read on. Otherwise, stop here.
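For the formula itself: with constant mass, $\mathrm d\mathbf p/\mathrm dt$ reduces to $m\,\mathrm d\mathbf v/\mathrm dt$, i.e. mass times acceleration. A finite-difference sketch in Python, with illustrative numbers not taken from the post:

```python
# With constant mass, F = dp/dt collapses to m * dv/dt = m * a.
# Finite-difference sketch for a 2 kg object accelerating at 3 m/s^2.
m = 2.0             # mass in kg (illustrative)
a = 3.0             # acceleration in m/s^2 (illustrative)
dt = 1e-6           # a short time step
v1 = 10.0           # velocity now
v2 = v1 + a * dt    # velocity a moment later
dp = m * v2 - m * v1    # change in momentum p = m * v
F = dp / dt             # rate of change of momentum
print(round(F, 6))      # matches m * a = 6.0 N
```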
How Does Pemdas Rule Work
In any arithmetic expression, if there are multiple operations used then we have to solve the terms written in parentheses first. After getting rid of parentheses, we solve multiplication and division operations, whatever comes first in the expression from left to right. Then, we will get a simplified expression with only addition and subtraction operations. We solve addition and subtraction in left to right order, whatever comes first, and get the final answer. This is how PEMDAS works.
What Does P Mean In Terms Of Real Values
I can’t find a proper summary or reference of how to translate formulas in probability notation to arithmetic notation .
For example, if $P = .7$ and $P=.35$, what does $P$ translate to?
What does $P$ translate to?
etc…
$P(A \cup B)$ is the probability that the event is in $A$ or $B$. For example, if your space of events is $\{1,2,3,4,5,6\}$ (a die roll), define $A=\{1,2\}$ and $B=\{6\}$. In that case, $P(A \cup B)$ is the probability that the die gives you $1, 2$ or $6$. Therefore $P(A \cup B) = \frac{3}{6} = \frac{1}{2} = 0.5 = 50\%$.
For intersection or others, the idea is the same. In general, $P(X)$ is the probability that an event in $X$ happens after the experiment is made, whatever it is.
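The die example can be checked mechanically. A short Python sketch; the particular split of {1, 2, 6} into A and B is an illustrative assumption:

```python
from fractions import Fraction

# Die-roll sample space; A and B are one illustrative split of {1, 2, 6}
space = {1, 2, 3, 4, 5, 6}
A, B = {1, 2}, {6}

p_union = Fraction(len(A | B), len(space))         # P(A or B) = 3/6
p_intersection = Fraction(len(A & B), len(space))  # P(A and B) = 0 here
print(p_union, p_intersection)
```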
What Is A P
In statistics, we always seem to come across this p-value thing. If you have been studying for a while, you are used to the idea that a small p-value makes you reject the null hypothesis. But what if I asked you to explain exactly what that number really represented!?
Understanding the p-value will really help you deepen your understanding of hypothesis testing in general. Before I talk about what the p-value is, let's talk about what it isn't.
• The p-value is NOT the probability the claim is true. Of course, this would be an amazing thing to know! Think of it: there is a 10% chance that this medicine works. Unfortunately, this just isn't the case. Actually determining this probability would be really tough if not impossible!
• The p-value is NOT the probability the null hypothesis is true. Another one that seems so logical it has to be right! This one is much closer to the reality, but again it is way too strong of a statement.
The p-value is actually the probability of getting a sample like ours, or more extreme than ours, IF the null hypothesis is true. So, we assume the null hypothesis is true and then determine how strange our sample really is. If it is not that strange then we don't change our mind about the null hypothesis. As the p-value gets smaller, we start wondering if the null really is true and, well, maybe we should change our minds.
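As a concrete illustration (an example of mine, not from the article): a one-sided coin-flip test, where the p-value is the probability, under the null, of a sample at least as extreme as the observed one:

```python
from math import comb

# Illustrative one-sided test: observe 60 heads in 100 flips;
# H0 says the coin is fair. The p-value is the probability, under H0,
# of a sample at least this extreme (60 or more heads).
n, k = 100, 60
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(round(p_value, 3))  # roughly 0.028, strange enough to question H0 at the 5% level
```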
One of the best explanations I have seen of this comes from New Zealand. It is worth it to take a look!
Don’t Miss: Geometry Segment Addition Postulate Worksheet
What Is Stem What Does Stem Stand For
STEM is an acronym commonly used in education and business. The four letters in STEM stand for:
• Science
• Technology
• Engineering
• Mathematics
When you hear about STEM education or STEM jobs, it's referring to these four distinct categories of study!
So you're probably wondering why these categories are grouped together under one umbrella term. The purpose of combining these four fields of study into a single acronym is to emphasize that science, technology, engineering, and mathematics are interrelated academic disciplines that can be integrated for educational, business, and even economic purposes!
Math Symbols: The Most Valuable And Important Symbols For Set Notation In Use: Specialized Set Notations
Math Symbols:. . . why math symbols are used . . .
Symbols are a concise way of giving lengthy instructions related to numbers and logic.
Symbols are a communication tool. Symbols are used to eliminate the need to write long, plain language instructions to describe calculations and other processes.
For example, a single symbol stands for the entire process for addition. The familiar plus sign eliminates the need for a long written explanation of what addition means and how to accomplish it.
The same symbols are used worldwide . . .
The symbols used in mathematics areuniversal.
The same math symbols are used throughout the civilized world. In most cases each symbol gives the same clear, precise meaning to every reader, regardless of the language they speak.
The most valuable, most frequently used symbols in mathematics . . .
The most important, most frequently used symbols for Set Notation are listed below.
How Are The Names Of Lines Determined In Geometry
In geometry, a line is perfectly straight and extends forever in both directions. A line is uniquely determined by two points. Lines need names just like points do, so that we can refer to them easily. To name a line, pick any two points on the line. A set of points that lie on the same line are said to be collinear.
Meaning Of Symbol $\mathcal P$ In Set Theory Article
I am teaching myself real analysis, and in this particular set of lecture notes, the introductory chapter on set theory, when explaining that not all sets are countable, states as follows:
If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal P(S))$.
Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.
• It's the set of all subsets of $S$, often called the power set of $S$. (Jul 15 '13 at 20:33)
• Why do you say that your set of lecture notes "never explains exactly what $\mathcal P$ means"? The explanation is given on page 2 in Definition 1.1.1. It is extremely clear and followed by excellent examples. Asking a question answered in detail at the very beginning of your reference, in a section named 1.1.1, makes me think you should try to improve your way of self-teaching. (Jul 15 '13 at 21:27)
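For small finite sets the power set can be enumerated directly, which makes the strict inequality between a set and its power set concrete. A Python sketch:

```python
from itertools import chain, combinations

def power_set(s):
    # Every subset of s: choose r elements for each r from 0 to |s|
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

ps = power_set({1, 2, 3})
print(len(ps))  # 2**3 = 8 subsets, always strictly more than the 3 elements of S
```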
What Does Stem Stand For In Education
We can't talk about STEM without answering the question, What does STEM stand for in school? In educational contexts, STEM refers to a curriculum that takes an integrated approach to the teaching of science, technology, engineering, and mathematics.
STEM has become the primary focus of U.S. schools in recent years because many of the fastest-growing careers, like nurse practitioners and data scientists, fall under one or more of STEM's core subject areas.
Additionally, STEM curriculum is heavily supported by the U.S. government as a way to prepare students for high-paying careers in growing economic sectors. According to the U.S. government, then, the answer to the question, What does STEM stand for in school? would be economic prosperity, global influence, and progress.
Since STEM education prepares students for competitive careers, a STEM curriculum emphasizes real-world applications of STEM's core subjects. Because science, tech, engineering, and math frequently work together in real-world professional situations, educators believe students should start learning how to integrate these subjects while they are in school.
As students study STEM subjects, they develop the skills they need to succeed in a tech-heavy, science-driven world. These skills include things like problem solving, finding and using evidence, collaborating on projects, and thinking critically.
What Is The Use Of Pemdas Calculator
We all are very well versed with the set of arithmetic operations which are addition, subtraction, multiplication, and division. PEMDAS is a set of rules which are followed while solving mathematical expressions. To easily and quickly simplify any arithmetic expression we use PEMDAS Calculator. Try now Cuemath’s PEMDAS calculator a free online tool to help you solve mathematical expressions and get your answers just by a click.
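As a quick illustration of the PEMDAS rules themselves (no calculator required), Python's arithmetic operators follow the same precedence order, so a few expressions make the rules concrete:

```python
# Python's arithmetic follows the precedence PEMDAS describes:
# Parentheses, Exponents, Multiplication/Division (left to right),
# Addition/Subtraction (left to right).
assert 2 + 3 * 4 == 14    # multiplication before addition
assert (2 + 3) * 4 == 20  # parentheses are evaluated first
assert 2 ** 3 * 2 == 16   # exponent before multiplication
assert 10 - 4 - 3 == 3    # same-rank operators evaluate left to right
```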
Don’t Miss: What Is The Lewis Dot Structure For Ccl4
What Is Stem What Does It Stand For
If you live in the U.S., you've probably heard about how important STEM education is. But what is STEM, exactly? And why is it so important?
• Explain what STEM stands for
• Overview the disciplines that STEM includes
• Answer the question, What does STEM stand for in school?
• Explain the importance of STEM
• Provide a five question quiz to help you decide if pursuing STEM is right for you
E In Scientific Notation And The Meaning Of 1e6
You don’t need a calculator to use E to express a number in scientific notation. You can simply let E stand for the base root of an exponent, but only when the base is 10. You wouldn’t use E to stand for base 8, 4 or any other base, especially if the base is Euler’s number, e.
When you use E in this way, you write the number xEy, where x is the first set of integers in the number and y is the exponent. For example, you would write the number 1 million as 1E6. In regular scientific notation, this is 1 × 106, or 1 followed by 6 zeros. Similarly 5 million would be 5E6, and 42,732 would be 4.27E4. When writing a number in scientific notation, whether you use E or not, you usually round to two decimal places.
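The xEy convention above matches the "E" presentation type in Python's string formatting, so the examples can be checked directly:

```python
# Python's "E" format mirrors the xEy convention described above,
# rounded here to two decimal places.
assert f"{1_000_000:.2E}" == "1.00E+06"
assert f"{5_000_000:.2E}" == "5.00E+06"
assert f"{42_732:.2E}" == "4.27E+04"
# Going the other way, Python reads E-notation literals directly:
assert 1E6 == 1_000_000
```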
http://specialfunctionswiki.org/index.php/Jacobi_sn
Jacobi sn
Let $u=\displaystyle\int_0^x \dfrac{1}{\sqrt{(1-t^2)(1-mt^2)}}dt = \displaystyle\int_0^{\phi} \dfrac{1}{\sqrt{1-m\sin^2 \theta}} d\theta.$ Then we define $$\mathrm{sn \hspace{2pt}}u = \sin \phi = x.$$
Properties
1. $\mathrm{sn \hspace{2pt}}^2u+\mathrm{cn \hspace{2pt}}^2u=1$
2. $\mathrm{sn \hspace{2pt}}(0)=0$
3. $m \mathrm{sn \hspace{2pt}}^2 u + \mathrm{dn \hspace{2pt}}^2u=1$
4. $\mathrm{sn \hspace{2pt}}$ is an odd function
5. $\dfrac{d}{du}\mathrm{sn \hspace{2pt}} u =\mathrm{cn \hspace{2pt}}(u)\mathrm{dn \hspace{2pt}}(u)$
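Properties 1 and 3 follow directly from the defining parameterization ($\mathrm{sn}\,u = \sin\phi$). A small numeric sketch, computing $u$ from the defining integral by the trapezoid rule and checking the identities:

```python
import math

def u_from_phi(phi, m, n=10_000):
    # u = integral from 0 to phi of dtheta / sqrt(1 - m sin^2(theta)),
    # approximated with the trapezoid rule.
    h = phi / n
    total = 0.0
    for k in range(n + 1):
        f = 1.0 / math.sqrt(1.0 - m * math.sin(k * h) ** 2)
        total += f if 0 < k < n else 0.5 * f
    return h * total

m, phi = 0.5, 1.0
u = u_from_phi(phi, m)
sn = math.sin(phi)  # sn(u) = sin(phi) by definition
cn = math.cos(phi)
dn = math.sqrt(1.0 - m * sn ** 2)
assert abs(sn ** 2 + cn ** 2 - 1.0) < 1e-12      # property 1
assert abs(m * sn ** 2 + dn ** 2 - 1.0) < 1e-12  # property 3
```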
References
Jacobi Elliptic Functions
http://mathhelpforum.com/calculus/8391-problem-involving-limit-greatest-integer-function.html
# Thread: problem involving limit and greatest integer function
1. ## problem involving limit and greatest integer function
Let f(x) be the greatest integer less than or equal to x.
Compute $\displaystyle\lim_{x\to 0} x f(1/x)$.
2. It seems the function
$\displaystyle h(x)=x \left[ \frac{1}{x} \right]$
is squeezed between
$\displaystyle f(x)=1, g(x)=1-x$
on some open interval containing $0$, except possibly at the point itself.
We have,
$\displaystyle 1-x \leq h(x)\leq 1$ for $\displaystyle x>0$
$\displaystyle 1\leq h(x)\leq 1-x$ for $\displaystyle x<0$
And,
$\displaystyle \lim_{x\to 0}1-x=\lim_{x\to 0}1=1$
Now just use the squeeze theorem.
(You should graph this function. It is the coolest looking thing).
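The squeeze argument gives a limit of 1, which is easy to confirm numerically (a sketch; `math.floor` plays the role of the greatest integer function):

```python
import math

def h(x):
    # x times the greatest integer less than or equal to 1/x
    return x * math.floor(1 / x)

# For x > 0, h(x) is squeezed between 1 - x and 1 (mirrored for x < 0),
# so h(x) -> 1 as x -> 0.
for x in (0.1, 0.01, 1e-5, -1e-5):
    lo, hi = sorted((1.0, 1.0 - x))
    assert lo - 1e-12 <= h(x) <= hi + 1e-12
assert abs(h(1e-6) - 1.0) < 1e-5
```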
https://archive.lib.msu.edu/crcmath/math/math/a/a084.htm
Affine Space
Let $V$ be a Vector Space over a Field $K$, and let $A$ be a nonempty Set. Now define addition $a+\mathbf{v}\in A$ for any Vector $\mathbf{v}\in V$ and element $a\in A$ subject to the conditions
1. $a+\mathbf{0}=a$,
2. $(a+\mathbf{u})+\mathbf{v}=a+(\mathbf{u}+\mathbf{v})$,
3. For any $b\in A$, there Exists a unique Vector $\mathbf{v}\in V$ such that $b=a+\mathbf{v}$.
Here, $a,b\in A$ and $\mathbf{u},\mathbf{v},\mathbf{0}\in V$. Note that (1) is implied by (2) and (3). Then $A$ is an affine space and $K$ is called the Coefficient Field.
In an affine space, it is possible to fix a point and coordinate axes such that every point in the Space can be represented as an $n$-tuple of its coordinates. Every ordered pair of points $A$ and $B$ in an affine space is then associated with a Vector $\overrightarrow{AB}$.
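As a concrete sketch (an assumed example, not from the entry itself), take the affine space $\mathbb{R}^2$ acted on by the vector space $\mathbb{R}^2$ under coordinatewise addition, and check the three conditions:

```python
# Minimal numeric sketch: A = R^2 as an affine space over V = R^2.
def add(point, vec):
    # "point + vector" action
    return (point[0] + vec[0], point[1] + vec[1])

def diff(b, a):
    # the unique vector v with b = a + v (the vector "AB")
    return (b[0] - a[0], b[1] - a[1])

a, b = (1.0, 2.0), (4.0, 6.0)
u, v = (0.5, -1.0), (2.0, 3.0)
assert add(a, (0.0, 0.0)) == a                                   # condition (1)
assert add(add(a, u), v) == add(a, (u[0] + v[0], u[1] + v[1]))   # condition (2)
assert add(a, diff(b, a)) == b                                   # condition (3)
```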
See also Affine Complex Plane, Affine Connection, Affine Equation, Affine Geometry, Affine Group, Affine Hull, Affine Plane, Affine Space, Affine Transformation, Affinity
https://blog.cluster-text.com/
# Evaluating Dr. Shiva’s Claims of Election Fraud in Michigan
This article examines Dr. SHIVA Ayyadurai’s claims that the shape of some graphs generated from Michigan voting data suggests that the vote count was being fraudulently manipulated. To be clear, I am not making any claim about whether or not fraud occurred — I’m only addressing whether Dr. Shiva’s arguments about curve shape are convincing.
I’ll start by elaborating on some tweets I wrote (1, 2, 3) in response to Dr. Shiva’s first video (twitter, youtube) and I’ll respond to his second video (twitter, youtube) toward the end. Dr. Shiva makes various claims about what graphs should look like under normal circumstances and asserts that deviations are a signal that fraud has occurred. I use a little math to model reasonable voter behavior in order to determine what should be considered normal, and I find Dr. Shiva’s idea of normal to be either wrong or too limited. The data he considers to be so anomalous could easily be a consequence of normal voter behavior — there is no need to talk about fractional votes or vote stealing to explain it.
Here is a sample ballot for Michigan. Voters have the option to fill in a single box to vote for a particular party for all offices, referred to as a straight-party vote. Alternatively, they can fill in a box for one candidate for each office, known as an individual-candidate vote. In his first video, Dr. Shiva compares the percentage of straight-party votes won by the Republicans to the percentage of individual-candidate votes won by Trump and claims to observe patterns that imply some votes for Trump were transferred to Biden in a systematic way by an algorithm controlling the vote-counting machine.
x = proportion of straight-party votes going to the Republican party for a precinct
i = proportion of individual-candidate presidential votes going to Trump for a precinct
y = i – x
I’ll be using proportions (numbers between 0.0 and 1.0) instead of percentages to avoid carrying around a lot of factors of 100 in equations. I assume you can mentally convert from a proportion (e.g., 0.25) to the corresponding percentage (e.g., 25%) as needed. Equations are preceded by a number in brackets (e.g., [10]) to make it easy to reference them. You can click any of the graphs to see a larger version.
Dr. Shiva claims there are clear signs of fraud in three counties: Oakland, Macomb, and Kent. The data for each precinct in Kent County is available here (note that their vote counts for Trump and Biden include both straight-party and individual-candidate votes, so you have to subtract out the straight-party votes when computing i). If we represent each precinct as a dot in a plot of y versus x, we get (my graph doesn’t look as steep as Dr. Shiva’s because his vertical axis is stretched):
Dr. Shiva claims the data should be clustered around a horizontal line (video 1 at 22:08) and provides drawings of what he expects the graph to look like:
He asserts that the downward slope of the data for Kent County implies an algorithm is being used to switch Trump votes to Biden in the vote-counting machine. In precincts where there are more Republicans (large x), the algorithm steals votes more aggressively, causing the downward slope. As a sanity check on this claim, let’s look at things from Biden’s perspective instead of Trump’s. We define a set of variables similar to the ones used above, but put a prime by each variable to indicate that it is with respect to Biden votes instead of Trump votes:
x‘ = proportion of straight-party votes going to the Democrat party for a precinct
i‘ = proportion of individual-candidate presidential votes going to Biden for a precinct
y‘ = i’ – x’
Here is the graph that results for Kent County:
If the Biden graph looks like the Trump graph just flipped around a bit, that’s not an accident. Requiring proportions of the same whole to add up to 1 and assuming third-party votes are negligible (the total for all third-party single-party votes averages 1.5% with a max of 4.3% and for individual-candidate votes the average is 3.3% with a max of 8.6%, so this assumption is a pretty good one that won’t impact the overall shape of the graph significantly) gives:
[1] x + x’ = 1
[2] i + i’ = 1
which implies:
[3] x’ = 1 – x
[4] y’ = –y
Those equations mean we can find an approximation to the Biden graph by flipping the Trump graph horizontally around a vertical line x = 0.5 and then flipping it vertically around a horizontal line y = 0, like this (the result in the bottom right corner is almost identical to the graph computed above with third-party candidates included):
As a result, if the Trump data is clustered around a straight line, the Biden data must be clustered around a straight line with the same slope but different y-intercept, making it appear shifted vertically.
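The double flip and its effect on a fitted line can be checked with a few lines of code (illustrative numbers, not real precinct data):

```python
# The double flip described above: given (x, y) points for the Trump graph,
# the (approximate) Biden graph is obtained by x' = 1 - x, y' = -y.
m, c = -0.1, 0.08  # an illustrative line y = m*x + c
line = [(x, m * x + c) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
flipped = [(1 - x, -y) for (x, y) in line]

# The flipped points lie on y' = m*x' - (m + c): same slope m, shifted intercept.
for xp, yp in flipped:
    assert abs(yp - (m * xp - (m + c))) < 1e-12
```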
The Biden graph slopes downward, so by Dr. Shiva’s reasoning an algorithm must be switching votes from Biden to Trump, and it does so more aggressively in precincts where there are a lot of Democrats (large x’). Wait, is Biden stealing from Trump, or is Trump stealing from Biden? We’ll come back to this question shortly.
Dr. Shiva shows Oakland County first in his video. I made a point of showing you Kent County first so you could see it without being biased by what you saw for Oakland County. This is Dr. Shiva’s graph of Trump votes for Oakland:
The data looks like it could be clustered around a horizontal line for x < 20%. Dr. Shiva argues that the algorithm kicks in and starts switching votes from Trump to Biden only for precincts having x > 20%.
We can look at it (approximately) in terms of Biden votes by flipping the Trump graph twice using the procedure outlined in Figure 5:
The data in the Biden graph appears to be clustered around a horizontal line for x’ > 80%, which is expected since it corresponds to x < 20% in the Trump graph. If you buy the argument that the data should follow a horizontal line when it is unmolested by the cheating algorithm, this pair of graphs finally answers the question of who is stealing votes from whom. Since the y-values for x > 20% are less than the y-values in the x < 20% (“normal”) region, Trump is being harmed in the cheating region. Consistent with that, Biden’s y’-value for x’ > 80% (the “normal” region) is about -5% and y’ is larger in the x’ < 80 region where the cheating occurs, so Biden benefits from the cheating. Trump is the one losing votes and Biden is the one gaining them, if you buy the argument about the “normal” state being a horizontal line.
Dr. Shiva draws a kinked line through the data for Kent (video 1 at 36:20) and Macomb (video 1 at 34:39) Counties with a flat part when x is small, similar to his graph for Oakland County, but if you look at the data without the kinked line to bias your eye, you probably wouldn’t think a kink is necessary — a straight line would fit just as well, which leaves open the question of who is taking votes from whom for those two counties.
Based on the idea that the data should be clustered around a horizontal line, Dr. Shiva claims that 69,000 Trump votes were switched to Biden by the algorithm in the three counties (video 1 at 14:00 or this tweet).
All claims of cheating, who is stealing votes from whom, and the specific number of votes stolen, are riding on the assumption that the data has to be clustered around a horizontal line in the graphs if there is no cheating. That critical assumption deserves the utmost scrutiny, and you’ll see below that it is not at all reasonable.
In Figure 8 below it is impossible for the data point for any precinct to lie in one of the orange regions because that would imply Trump received either more than 100% or less than 0% of the individual-candidate votes. For example, if x = 99%, you cannot have y = 10% because that implies Trump received 109% of the individual-candidate votes (i = y + x). Any model that gives impossible y-values for plausible x-values must be at least a little wrong. The only horizontal line that doesn’t encroach on one of the orange regions is y = 0.
Before we get into models of voter behavior that are somewhat realistic, let’s consider the simplest model possible that might mimic Dr. Shiva’s thinking to some degree. If the individual-candidate voters are all Republicans and Democrats in the exact same proportions as in the pool of single-party voters, and all Republicans vote for Trump while all Democrats vote for Biden, we would expect i = x and therefore y = 0 (i.e., the data would cluster around a horizontal line with y = 0). Suppose 10% of Democrats in the individual-candidate pool decide to defect and vote for Trump while all of the Republicans vote for Trump. That would give i = x + 0.1 * (1 – x). The factor of (1 – x) represents the number of Democrats that are available to defect (there are fewer of them toward the right side of the graph). That gives y = 0.1 – 0.1 * x, which is a downward-sloping line that starts at y = 10% at the left edge of the graph and goes down to y = 0% at the right edge of the graph, thus never encroaching on the orange region in Figure 8. Dr. Shiva also talks about the possibility of Republicans defecting away from Trump (video 1 at 44:43) and shows data clustered around a horizontal line at y = -10%. Again applying the simplest possible thinking, if 10% of Republicans defected away from Trump we would have i = 0.9 * x, so y = -0.1 * x. Data would again cluster around a downward-sloping line. This time it would start at y = 0% at the left edge and go down to y = -10% at the right edge. The only possible horizontal line is y = 0. Everything else wants to slope downward.
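The two defection scenarios in the paragraph above reduce to short formulas, sketched here to confirm that both produce downward-sloping lines:

```python
# Scenario 1: 10% of Democrats defect to Trump.
# i = x + 0.1*(1 - x), so y = i - x = 0.1 - 0.1*x.
def y_dem_defect(x):
    i = x + 0.1 * (1 - x)
    return i - x

# Scenario 2: 10% of Republicans defect away from Trump.
# i = 0.9*x, so y = i - x = -0.1*x.
def y_rep_defect(x):
    return 0.9 * x - x

for x in (0.0, 0.5, 1.0):
    assert abs(y_dem_defect(x) - (0.1 - 0.1 * x)) < 1e-12
    assert abs(y_rep_defect(x) - (-0.1 * x)) < 1e-12
# Both lines slope downward and stay out of the impossible (orange) regions.
```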
The model described in the previous paragraph is really too simple to describe reality in most situations. There are no Independent voters in that model, and it assumes the individual-candidate voting pool has the same percentage of Republicans as the straight-party pool. In reality, individual-candidate voters shouldn’t be expected to be just like straight-party voters — they choose to vote that way for a reason. Below I lay out some simple models for how different types of voters might reasonably be expected to behave. The focus is on determining how things depend on x so we can compute the shape of the y versus x curve. After describing different types of individual-candidate voters, I explain how to combine the different types into a single model to generate the curve around which the data is expected to cluster. If the model accommodates the data that is observed, there is no need to talk about cheating or how it would impact the graphs — you cannot prove cheating (though it may still be occurring) if the graph is consistent with normal voter behavior. In the following, the equations relating x to i or y apply to the curves around which the data clusters, not the position of any individual precinct.
Type 1 (masochists): Imagine the individual-candidate voters are actually Republicans voting for Trump or Democrats voting for Biden, but they choose not to use the straight-party voting option for some reason. Perhaps a Republican intended to vote for the Republican candidate for every office, but didn’t notice the straight-party option, or perhaps he/she is a masochist who enjoys filling in lots of little boxes unnecessarily (I’ll call all Type 1 people masochists even though it really only applies to a subset of them because I can’t think of a better name). Maybe a Republican votes for the Republican candidate for every office except dog catcher because his/her best friend is the Democratic candidate for that office (can Republicans and Democrats still be friends?). With this model, the number of individual-candidate voters that vote for Trump is expected to be proportional to the number of Republicans. We don’t know how many Republicans there are in total, but we can assume the number is proportional to x, giving A * x individual-candidate votes for Trump where A is a constant (independent of x). Similarly, Biden would get A’ * (1 – x) individual-candidate votes. If all individual-candidate voters are of this type, we would have:
[5] i = A * x / [A * x + A’ * (1-x)]
If the same proportion of Democrats are masochists as Republicans, A = A’, we have i = x, so y = ix gives y = 0, meaning the data will be clustered around the horizontal line y = 0, which is consistent with the view Dr. Shiva espouses in his first video. This model does not, however, support data being clustered around a horizontal line with the y-value being different from zero. If A is different from A’, the data will be clustered around a curve as shown in Figure 9 below.
Type 2 (Independents): Imagine the individual-candidate voters are true Independent voters. Perhaps they aren’t fond of either party, so they vote for the presidential candidate they like the most (or hate the least) and vote for the opposite party for any congressional positions to keep either party from having too much power, necessitating an individual-candidate vote instead of a straight-party vote. Maybe they vote for each candidate individually based on their merits and the candidates they like don’t happen to be in the same party. How should the proportion of Independents voting for Trump depend on x? Roughly speaking, it shouldn’t. The value of x tells what proportion of a voter’s neighbors are casting a straight-party vote for the Republicans compared to the Democrats. The Independent voter makes his/her own decision about who to vote for. The behavior of his/her neighbors should have little impact (perhaps they experience a little peer pressure or influence from political yard signs). Democrats are expected to mostly vote for Biden regardless of who their neighbors are voting for. Republicans are expected to mostly vote for Trump regardless of who their neighbors are voting for. Likewise, Independents are not expected to be significantly influenced by x. If all individual-candidate voters are of this type, we have i = b, where b is a constant (no x dependence), so y = bx, meaning the data would be clustered around a straight line with slope -1 as shown in Figure 10 below.
Type 3 (defectors): In this case we have some percentage of Democrats defecting from their party to vote for Trump. Likewise, some percentage of Republicans defect to vote for Biden. This is mathematically similar to Type 1, except Trump now gets votes in proportion to (1 – x) instead of x, reflecting the fact that his individual-candidate votes increase when there are more Democrats available to defect. If all individual-candidate voters are of this type, we have:
[6] i = C * (1 – x) / [C * (1 – x) + C’ * x)]
If the same proportion of Democrats defect as Republicans, C = C’, we have i = 1 – x, so y = 1 – 2 * x, causing the data to cluster around a straight line with slope of -2. If C and C’ are different, the data will be clustered around a curve as shown in Figure 11 below.
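Equations [5] and [6] are easy to sketch in code, confirming the straight-line special cases quoted in the text (parameter values here are illustrative):

```python
# Type 1 (masochists), equation [5]: i = A*x / (A*x + A'*(1-x)).
def y_type1(x, A, Ap):
    i = A * x / (A * x + Ap * (1 - x))
    return i - x

# Type 3 (defectors), equation [6]: i = C*(1-x) / (C*(1-x) + C'*x).
def y_type3(x, C, Cp):
    i = C * (1 - x) / (C * (1 - x) + Cp * x)
    return i - x

# Equal proportions collapse to the straight-line special cases:
for x in (0.1, 0.5, 0.9):
    assert abs(y_type1(x, 0.2, 0.2) - 0.0) < 1e-12          # A = A'  ->  y = 0
    assert abs(y_type3(x, 0.2, 0.2) - (1 - 2 * x)) < 1e-12  # C = C'  ->  y = 1 - 2x
```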
Realistically, the pool of individual-candidate voters should have some amount of all three types of voters described above. To compute i, and therefore y, we need to add up the votes (not percentages) from various types of voters. We’ll need some additional notation:
NSP = total number of straight-party voters (all parties) for the precinct (this is known)
NIC = total number of individual-candidate voters (this is known)
I = number of Independent (Type 2) voters (not known)
v = number of individual-candidate votes for Trump
v’ = number of individual-candidate votes for Biden
The number of individual-candidate votes for Trump would be:
[7] v = A * x * NSP + b * I + C * (1 – x) * NSP
and the number for Biden would be:
[8] v’ = A’ * (1 – x) * NSP + (1 – b) * I + C’ * x * NSP
The total number of individual-candidate voters comes from adding those two expressions and regrouping the terms:
[9] NIC = v + v’ = A’ * NSP + (A – A’) * x * NSP + I + C * NSP + (C’ – C) * x * NSP
The last equation tells us that if we divide the number of individual-candidate votes by the number of straight-party votes for each precinct, NIC / NSP, and graph it as a function of x, we expect the result to cluster around a straight line (assuming I / NSP is independent of x). If the behavior of Republicans and Democrats was exactly the same (A = A’ and C = C’), the straight line would be horizontal. Here is the graph:
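A numeric check (with illustrative parameter values) that adding [7] and [8] really regroups into the linear-in-x form of [9]:

```python
# Illustrative parameters, not fitted values.
A, Ap, b, C, Cp = 0.5, 0.09, 0.073, 0.02, 0.03
NSP, I = 1000.0, 410.0

for x in (0.1, 0.4, 0.7):
    v = A * x * NSP + b * I + C * (1 - x) * NSP             # equation [7]
    vp = Ap * (1 - x) * NSP + (1 - b) * I + Cp * x * NSP    # equation [8]
    nic = (Ap * NSP + (A - Ap) * x * NSP + I
           + C * NSP + (Cp - C) * x * NSP)                  # equation [9]
    assert abs((v + vp) - nic) < 1e-9
# NIC is linear in x, so NIC / NSP per precinct should cluster around a line.
```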
The line was fit using a standard regression. The fact that it slopes strongly upward tells us Republicans and Democrats do not behave the same. A larger percentage of Republicans cast individual-candidate votes than Democrats, so Republican-heavy precincts (large x) have a lot more individual-candidate votes. The number of straight-party votes also increases with x, but not as dramatically, suggesting that Republican precincts either tend to have more voters or tend to have higher turnout rates. By requiring our model to match the straight line in the figure above, we can remove two degrees of freedom (corresponding to the line’s slope and intercept) from our set of six unknown parameters (A, A’, b, I/NSP, C, C’).
We compute y = i – x = v / NIC – x. Fitting the y versus x graph can remove two more degrees of freedom. To completely nail down the parameters, we need to make an assumption that will fix two more parameters. Since the slope of the y versus x graph for Kent County lies between 0 (Type 1 voters) and -1 (Type 2 voters), we will probably not do too much damage by assuming there are no Type 3 voters, so C = 0 and C’ = 0. We are now in a position to determine all of the remaining parameters by requiring the model to fit the NIC / NSP versus x data from Figure 12 and the y versus x data, giving:
[10] A = 0.5, A’ = 0.09, b = 0.073, I / NSP = 0.41, C = 0, C’ = 0
The curve generated by the model is not quite a straight line — it shows a little bit of curvature in the graph above. That curvature is in good agreement with the data. If NIC depends on x, as it will when A is different from A’ or when C is different from C’, there will be some curvature to the y versus x graph. In other words, when there are differences between the behavior of Republicans and Democrats this simple model will generate a y versus x graph having curvature. When there is no difference in behavior, it gives a straight line.
The relatively simple model seems to fit the data nicely. The remaining question is whether the parameter values are reasonable. If they are, we can conclude that the observed data is consistent with the way we expect voters to behave, so the graph does not suggest any fraud. If the parameter values are crazy, there may be fraud or our simple model of voter behavior may be inadequate. It will be easier to assess the reasonableness of our parameters if they are proportions (or percentages), which A, A’, C, and C’ aren’t. We would like to know the proportion of Republicans (or Democrats) voting in a particular way. We start by writing out the number of Republicans, R, according to the model as just the sum of straight-party Republican voters plus individual-candidate Republicans voting for Trump (the A term) and defector Republicans voting for Biden (the C’ term). A similar approach determines the number of Democrats.
[11] R = x * NSP + A * x * NSP + C’ * x * NSP
[12] D = (1 – x) * NSP + A’ * (1 – x) * NSP + C * (1 – x) * NSP
We now define new parameters:
a = the proportion of Republicans voting for Trump by individual-candidate ballot
a’ = the proportion of Democrats voting for Biden by individual-candidate ballot
c = the proportion of Democrats that defect to vote for Trump
c’ = the proportion of Republicans that defect to vote for Biden
[13] a = A * x * NSP / R = A / (1 + A + C’)
[14] a’ = A’ / (1 + A’ + C)
[15] c = C * (1 – x) * NSP / D = C / (1 + A’ + C)
[16] c’ = C’ / (1 + A + C’)
In this more convenient (for understanding, but not for writing equations) parameterization we have:
[17] a = 0.33, a’ = 0.083, b = 0.073, I / NSP = 0.41, c = 0, c’ = 0
In words, 33% of Republicans use an individual-candidate ballot to vote for Trump instead of a straight-party vote. Only 8.3% of Democrats use an individual-candidate ballot to vote for Biden instead of a straight-party vote. Only 7.3% of Independents voted for Trump, with the other 92.7% voting for Biden. The number of Independent voters is, on average, 41% of the total number of straight-party voters, which means Independents are considerably less than 41% of all voters (since some Republicans and Democrats don’t vote straight-party). Some values seem a little extreme, such as only 7.3% of Independents voting for Trump, but none are completely pathological. Parameter values would shift around a bit if we allowed non-zero values for c and c’ (defectors). It is worth noting that when I talk about the number of Republicans, Democrats, and Independents, I am not talking about the number of people that registered that way — I base those numbers on their behavior (i.e., the assumption that the number of Republicans is proportional to x). With all of those things in mind, I think it is safe to say that the graphs are consistent with reasonable expectations of voter behavior (no need for fraud to explain the shape), but the parameter values shouldn’t be taken too seriously.
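The conversion from (A, A′, C, C′) in [10] to the proportion parameters in [17] can be verified directly from equations [13]–[16]:

```python
# Fitted values from equation [10].
A, Ap, b, I_over_NSP, C, Cp = 0.5, 0.09, 0.073, 0.41, 0.0, 0.0

a = A / (1 + A + Cp)    # equation [13]
ap = Ap / (1 + Ap + C)  # equation [14]
c = C / (1 + Ap + C)    # equation [15]
cp = Cp / (1 + A + Cp)  # equation [16]

assert abs(a - 0.33) < 0.005    # ~33% of Republicans: individual-candidate votes for Trump
assert abs(ap - 0.083) < 0.001  # ~8.3% of Democrats: individual-candidate votes for Biden
assert c == 0.0 and cp == 0.0   # no defectors, by assumption
```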
The simple models above show a wide range of possible slopes for the data, going from 0 to -2 (when parameter values generate a straight line). A horizontal line (0 slope) requires there to be no Independent voters and no defectors. Furthermore, it requires the percentage of Republicans and Democrats choosing to use individual-candidate voting to be exactly the same (A = A’). The assumption that data should cluster around a horizontal line is really an extreme assumption that requires things to be perfectly balanced. Claiming that deviation from a horizontal line is a sign of fraud is like observing a coin toss come up heads or tails and proclaiming there must be cheating because a fair coin would have landed on its edge. Dr. Shiva’s videos never show an example of real data clustering around a horizontal line. He does show a graph for Wayne County (video 1 at 38:02) and claims it lacks the algorithmic cheating seen in the other three counties, but all of the data for Wayne County is confined to such a small range of x values that you can’t conclude much of anything about the slope.
Dr. Shiva’s second video starts by talking about signal detection and the importance of distinguishing the “normal state” from an “abnormal state” in various contexts. At 50:08 he states: “What we didn’t share in the first video is what is a normal state?” This would be a good time to scroll up and take a second look at Figure 2, which is a screen shot from the first video. He now claims the normal state would be to have the data in the y versus x graph clustered around a parabola. Horizontal lines are gone. Claims about the number of votes stolen based on expecting the data to follow a horizontal line are forgotten. He provides this graph from another election, Jeff Sessions for Senate in 2008, as his first example of the normal state:
The graph has negative curvature, meaning it is shaped like an upside down bowl. Positive curvature would be shaped like a bowl that is right-side up. He provides two more examples that also have negative curvature. He proclaims that there must be cheating in Oakland, Macomb, and Kent counties, not because they slope downward, but because they are too straight. As before, I’m going to flip the graph twice to see what it would look like in terms of Jeff Sessions’ competitor’s votes:
The flipped graph has positive curvature. If negative curvature is normal, positive curvature must also be normal. A straight line is just zero curvature. If some amount of negative curvature is normal and a similar amount of positive curvature is normal, it would be weird, but not impossible, for curvature values in between to be abnormal (note that this is a very different argument from what I said about the horizontal line y = 0, because that case was at the extreme end of the spectrum of possibilities, not in the middle). Anyway, I already showed a reasonable model of voter behavior accommodates both significant curvature (Figure 9) and straight lines (Figures 10 and 11), and I showed that Kent County has a little bit of curvature (Figure 13).
Dr. Shiva explains his claim that the normal state should be a parabola using this graph:
He claims there should be three different behaviors, resulting in a parabola, because there are three regions representing different types of voters. I think the labeling of the voters along the bottom of Figure 16 reveals some confused thinking. Why are there Independents in the middle section? Why does the quantity of Independents depend on the percentage of straight-party voters that vote Republican (i.e., the value of x)? Do Independents move out of the neighborhood if the number of Trump signs and Biden signs in the neighborhood are too far out of balance, or is the number of Independents really a separate variable (a third dimension, with Democrats and Republicans being the other two)? In my simple model above, which could certainly be wrong, curvature comes from differences in behavior between Republicans and Democrats (Figure 9), whereas adding more Independents makes the curve straighter (Figure 10).
Dr. Shiva introduces some new graphs in the second video at 1:02:21 that he claims are additional evidence of problems in the three counties. Instead of working with percentages, he uses the raw number of votes. He graphs the number of individual-candidate votes for Trump, v, versus the number of straight-party votes for the Republicans, w. Similarly, he graphs the number of individual-candidate votes for Biden, v’, versus the number of straight-party votes for the Democrats, w’. He overlaid them on the same graph, but I’ll separate them for clarity. Here are the results for Kent County:
I fit the lines with a standard regression because it is not quite possible to generate predicted curves using our model. Dr. Shiva’s concern is that the two graphs are so different. Specifically, the data in the Trump graph in Figure 17 is very tightly clustered around the straight line, whereas the Biden graph in Figure 18 shows the data to be much more spread out. We’ll return to that point after talking a bit about how the graphs relate to the model we used on the Kent County data earlier.
Expressions for v and v’ for our model were given earlier in Equations [7] and [8]. Noting that w = x * NSP, and w’ = (1 – x) * NSP, we can write v and v’ in terms of w and w’:
[18] v = A * w + b * I
[19] v’ = A’ * w’ + (1 – b) * I
The problem with graphing the model’s prediction is that I is a function of x with positive slope (our model treated I / NSP as a constant with value 0.41, but NSP itself depends on x as noted earlier), so we can’t use Equations [18] and [19] to graph the model curve. We can do some basic checks for consistency with our model, however. The Independent voter term contributes very little to v because b is small (Trump only gets 7.3% of the Independent vote in our model). So the slope of the v versus w curve should be a little more than A, which is 0.5, and the line in Figure 17 has a slope of 0.52. The slope of the v’ versus w’ curve should be A’, which is 0.09, plus (1 – b) times whatever I contributes to the slope. The line in Figure 18 has a slope of 0.26, which is larger than 0.09, as required, but it is unclear whether it is too large. The ratio of the y-intercepts for the lines in Figures 18 and 17 should be (1 – b) / b, which is 12.7, compared to 13.8 for the lines fitted to the data in the graphs.
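The intercept-ratio check is simple enough to verify directly. Here is a minimal sketch using the parameter values quoted for this model (A = 0.5, A’ = 0.09, and b = 0.073, Trump’s share of the Independent vote); the variable names are mine, not from the original analysis:

```python
# Consistency checks from Equations [18] and [19], using the model
# parameters quoted in the text (not independently derived here).
A = 0.50        # Republicans' individual-candidate rate (Trump's slope term)
A_prime = 0.09  # Democrats' individual-candidate rate (Biden's slope term)
b = 0.073       # Trump's share of the Independent vote (7.3%)

# The fitted slope of v versus w (0.52) should be a little more than A.
fitted_trump_slope = 0.52
assert A < fitted_trump_slope < A + 0.05

# The y-intercepts of Equations [18] and [19] are b * I and (1 - b) * I,
# so their ratio is (1 - b) / b regardless of the value of I.
intercept_ratio = (1 - b) / b
print(round(intercept_ratio, 1))  # 12.7
```

Both checks pass: the fitted Trump slope is just above A, and the predicted intercept ratio of 12.7 is in the same ballpark as the observed 13.8.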
While our model doesn’t say anything quantitative about the spread expected for the data, it can give us some qualitative guidance. The source of most of the individual-candidate votes for Trump is Republicans that choose to vote for individual candidates (masochists) rather than straight party. He gets only 7.3% of the Independent vote. By contrast, Biden gets a lot of his individual-candidate votes from Independents. This is reflected in Biden’s graph having a relatively large y-intercept. He gets around 200 votes even for precincts where there are no Democrats around to vote for him (w’ = 0 implies no straight-party Democrat voters and presumably very few individual-candidate voting Democrats) because he has 92.7% of the Independents.
We expect 33% of Republicans to vote for Trump with an individual-candidate ballot on average. We wouldn’t be surprised if some precincts have 25% or 40% instead of 33%, but we wouldn’t expect something wild like 10% or 80%, so the data points are expected to stay pretty close to the line for Trump. On the other hand, Biden gets a lot of votes from Independents and the number of Independents is expected to vary a lot between precincts. The number of Republicans varies a lot from precinct to precinct (based on x ranging from 10% to 80%), so it is reasonable to expect similar variation in the number of Independents, causing a large spread in Biden’s graph. The differences between Figures 17 and 18 are not surprising in light of the very different nature of the individual-candidate voters for Trump and Biden, which we already knew about due to the slope of Figure 12.
In summary, Dr. Shiva is right when he says it is important to distinguish normal behavior from abnormal behavior when trying to identify manipulated data. Where he comes up short is in determining what normal behavior should look like. If the data is consistent with a reasonable model of human behavior, it is normal and cannot be considered evidence of fraud. In his first video he claims a horizontal line is the only normal state, but in reality a horizontal line other than y = 0 would be highly abnormal. His second video gets closer to reality when claiming the normal state should be a parabola, but that is too limited — data with little or no curvature is perfectly reasonable, too.
# Highlights from IG3 West 2019
IG3 West was held at the Pelican Hill Resort in Newport Coast, California. It consisted of one day of product demos followed by one day of talks. The talks were divided into two simultaneous sessions throughout the day, so I could only attend half of them. My notes below provide some highlights from the talks I attended. You can find my full set of photos here.
Technology Solution Update from Corporate, Law Firm and Service Provider Perspective
How do we get the data out of the free version of Slack? It is hard to get the data out of Office 365. Employees are bringing in technologies such as Slack without going through the normal decision making process. IT and legal don’t talk to each other enough. When doing a pilot of legal hold software, don’t use it on a custodian that is on actual hold because something might go wrong. Remember that others know much less than you, so explain things as if you were talking to a third grader. Old infrastructure is a big problem. Many systems haven’t really been tested since Y2K. Business continuity should be a top priority.
Staying on Pointe: The Striking Similarities Between Ballet and eDiscovery
I wasn’t able to attend this one.
Specialized eDiscovery: Rethinking the Notion of Relevancy
Does traditional ediscovery still work? The traditional ways of communicating and creating data are shrinking. WeChat and WhatsApp are now popular. Prepare the client for litigation by helping the client find all sources of data and format the exotic data. The requesting party may want native format (instead of PDF) to get the meta data, but keep in mind that you may have to pay for software to examine data that is in exotic formats. Slack meta data is probably useless (there is no tool to analyze it). Be careful about Ring doorbells and home security systems recording audio (e.g., recording a contractor working in your home) — recording audio is illegal in some areas if you haven’t provided notification to the person being recorded. Chat, voice, and video are known problems. Emojis with skins and legacy data are less-known problems. Before you end up in litigation, make sure IT people are trained on where data is and how to produce it. If you are going to delete data (e.g., to reduce risk of high ediscovery costs in the future), make sure you are consistent about it (e.g., delete all emails after 3 months unless they are on hold). Haphazard deletion is going to raise questions. Even if you are consistent about deletion, you may still encounter a judge who questions why you didn’t just save everything because doing so is easier. Currently, people don’t often go after text messages, but it depends on the situation. Some people only text (no emails). Oddest sources of data seen: a Venmo comment field indicating why a payment was made, and chat from an online game.
SaaS or Vendor – An eDiscovery Conversation
I wasn’t able to attend this one.
Ick, Math! Ensuring Production Quality
I moderated this panel, so I didn’t take notes. You can find the slides here.
Still Looking for the Data
I wasn’t able to attend this one.
“Small” Data in the Era of “Big” Data
Data minimization reduces the data that can be misused or leaked by deleting it or moving it to more secure storage when it is no longer needed. People need quick access to the insights from the data, not the raw data itself. Most people no longer see storage cost as a driver for data minimization, though some do (can be annoying to add storage when maintaining your own secure infrastructure). A survey by CTRL found that most people say IT should be responsible for the data minimization program. Legal/compliance should have a role, too. When a hacker gets into your system, he/she is there for over 200 days on average — lots of time to learn about your data. Structured data is usually well managed/mapped (85%), but unstructured is not (15%). Ephemeral technology solves the deletion problem by never storing the data. Social engineering is one of the biggest ways that data gets out.
Mobile Device Forensics 2020: An FAQ Session Regarding eDiscovery and Data Privacy Considerations for the Coming Year
The Human Mind in the Age of Intelligent Machines
I wasn’t able to attend this one.
# Highlights from Text Analytics Forum 2019
Text Analytics Forum is part of the KMWorld conference. It was held on November 6-7 at the JW Marriott in Washington, D.C. Attendees went to the large KMWorld keynotes in the morning and had two parallel text analytics tracks for the remainder of the day. There was a technical track and an applications track. Most of the slides are available here. My photos, including photos of some slides that caught my attention or were not available on the website, are available here. Since most slides are available online, I have only a few brief highlights below.
Automatic summarization comes in two forms: extracted and generative. Generative summarization doesn’t work very well, and some products are dropping the feature. Enron emails containing lies tend to be shorter. When a customer threatens to cancel a service, the language they use may indicate they are really looking to bargain. Deep learning works well with data, but not with concepts. For good results, make use of all document structure (titles, boldface, etc.) — search engines often ignore such details. Keywords assigned to a document by a human are often unreliable or inconsistent. Having the document’s author write a summary may be more useful. Rules work better when there is little content (machine learning prefers more content). Knowledge graphs, which were a major topic at the conference, are better for discovery than for search.
DBpedia provides structured data from Wikipedia for knowledge graphs. SPARQL is a standardized query language for graph databases, similar to SQL for relational databases. When using knowledge graphs, the more connections away the answer is, the more likely it is to be wrong. Knowledge graphs should always start with a good taxonomy or ontology.
Social media text (e.g., tweets) contains a lot of noise. Some software handles both social media and normal text, but some only really works with one or the other. Sentiment analysis can be tripped up when it only looks at keywords. For example, compare “product worked terribly” with “I’m terribly happy with the product.” Humans are only 60-80% accurate at sentiment analysis.
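A toy illustration of the failure mode (my own sketch, not any vendor’s algorithm): a keyword-only scorer flags both of the quoted phrases as negative because both contain “terribly.”

```python
# Naive keyword-based sentiment scoring: classify as negative if any
# word from a negative-word list appears. Deliberately crude, to show
# why keyword-only sentiment analysis gets tripped up.
NEGATIVE_WORDS = {"terribly", "awful", "broken"}

def naive_sentiment(text: str) -> str:
    words = text.lower().replace(".", "").split()
    return "negative" if any(w in NEGATIVE_WORDS for w in words) else "positive"

print(naive_sentiment("product worked terribly"))              # negative (correct)
print(naive_sentiment("I'm terribly happy with the product"))  # negative (wrong)
```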
# Highlights from Relativity Fest 2019
Relativity Fest celebrated its tenth anniversary at the Hilton in Chicago. It featured as many as sixteen simultaneous sessions and was attended by about 2,000 people. You can find my full set of photos here.
The show was well-organized and there were always plenty of friendly staff around to help. The keynote introduced the company’s new CEO, Mike Gamson. Various staff members talked about new functionality that is planned for Relativity. A live demo of the coming Aero UI highlighted its ability to display very large (dozens of MB) documents quickly.
I mostly attended the developer sessions. During the first full day, the sessions I attended were packed and there were people standing in the back. It thinned out a bit during the remaining days. The on-premises version of Relativity will be switching from quarterly releases to annual releases because most people don’t want to upgrade so often. Relativity One will have updates quarterly or faster. There seems to be a major push to make APIs more uniform and better documented. There was also a lot of emphasis on reducing breakage of third party tools with new releases.
# Highlights from IG3 Mid-Atlantic 2019
The first Mid-Atlantic IG3 was held at the Watergate Hotel in Washington, D.C. It was a day and a half long with a keynote followed by two concurrent sets of sessions. I’ve provided some notes below from the sessions I was able to attend. You can find my full set of photos here.
Big Foot, Aliens, or a Culture of Governance: Are Any of Them Real?
In 2012, 12% of companies had a chief data officer, but now 63.4% do. Better data management can give insight into the business. It may also be possible to monetize the data. Cigna has used Watson, but you do have to put work into teaching it. Remember the days before GPS, when you had to keep driving directions in your head or use printed maps. Data is now more available.
Practical Applications of AI and Analytics: Gain Insights to Augment Your Review or End It Early
Opposing counsel may not even agree to threading, so getting approval for AI can be a problem. If the requesting party is the government, they want everything and they don’t care about the cost to you. TAR 2.0 allows you to jump into review right away with no delay for training by an expert, and it is becoming much more common. TAR 1.0 is still used for second requests [presumably to produce documents without review]. With TAR 1.0 you know how much review you’ll have to do if you are going to review the docs that will potentially be produced, whereas you don’t with TAR 2.0 [though you could get a rough estimate with additional sampling]. Employees may utilize code words, and some people such as traders use unique lingo — will this cause problems for TAR? It is useful to use unsupervised learning (clustering) to identify issues and keywords. Negotiation over TAR use can sometimes be more work than doing the review without TAR. It is hard to know the size of the benefit that TAR will provide for a project in advance, which can make it hard to convince people to use it. Do you have to disclose the use of TAR to the other side? If you are using it to cull, rather than just to prioritize the review, probably. Courts will soon require or encourage the use of TAR. There is a proportionality argument that it is unreasonable to not use it. Data volumes are skyrocketing. 90% of the data in the world was created in the last 2 years.
Is There Room for Governance in Digital Transformation?
I wasn’t able to attend this one.
Investigative Analytics and Machine Learning; The Right Mindset, Tools, and Approach can Make all the Difference
You can use e-discovery AI tools to get the investigation going. Some people still use paper, and the meta data from the label on the box containing the documents may be all you have. While keyword search may not be very effective, the query may be a starting point for communicating what the person is looking for so you can figure out how to find it. Use clustering to look for outliers. Pushing people to use tech just makes them hate you. Teach them in a way that is relatable. Listen to the people that are trying to learn and see what they need. Admit that tech doesn’t always work. Don’t start filtering the data down too early — you need to understand it first. It is important to be able to predict things such as cost. Figure out which people to look at first (tiering). Convince people to try analytics by pointing out how it can save time so they can spend more time with their kids. Tech vendors need to be honest about what their products can do (users need to be skeptical).
CCPA and New US Privacy Laws Readiness
I wasn’t able to attend this one.
Ick, Math! Ensuring Production Quality
I moderated this panel, so I didn’t take notes.
Effective Data Mapping Policies and Avoiding Pitfalls in GDPR and Data Transfers for Cross-Border Litigations and Investigations
I wasn’t able to attend this one.
Technology Solution Update From Corporate, Law Firm and Service Provider Perspective
I wasn’t able to attend this one.
Selecting eDiscovery Platforms and Vendors
People often pick services offered by their friends rather than doing an unbiased analysis. Often do an RFI, then RFP, then POC to see what you really get out of the system. Does the vendor have experience in your industry? What is billable vs non-billable? Are you paying for peer QC? What does data in/out mean for billing? Do a test run with the vendor before making any decisions for the long term. Some vendors charge by the user, instead of, or in addition to, charging based on data volume. What does “unlimited” really mean? Government agencies tend to demand a particular way of pricing, and projects are usually 3-5 years. Charging a lot for a large number of users working on a small database really annoys the customer. Per-user fees are really a Relativity thing, and other platforms should not attempt it. Firms will bring data in house to avoid user fees unless the data is too big (e.g., 10GB). How do dupes impact billing? Are they charging to extract a dupe? Concurrent user licenses were annoying, so many moved to named user licenses (typically 4 or 5 to one). Concurrent licenses may have a burst option to address surges in usage, perhaps setting to the new level. Some people use TAR on all cases while others in the firm/company never use it, so keep that in mind when licensing it. Forcing people to use an unfamiliar platform to save money can be a mistake since there may be a lot of effort required to learn it.
eDiscovery Support and Pricing Model — Do we have it all Wrong?
Various pricing models: data in/out + hosting + reviewers, based on number of custodians, or bulk rate (flat monthly fee). Redaction, foreign language, and privilege logs used to be separate charges, but there is now pressure to include them in the base fee. Some make processing free but compensate by raising the rate for review. RFP / procurement is a terrible approach for ediscovery because you work with and need to like the vendor/team. Ask others about their experience with the vendor, though there is now less variability in quality between the vendors. Encourage the vendor to make suggestions and not just be an order-taker. Law firms often blame the vendor when a privileged document is produced, and the lack of transparency about what really happened is frustrating. The client needs good communication with both the law firm and the vendor. Law firms shouldn’t offer ediscovery services unless they can do it as well as the vendors (law firms have a fiduciary duty).
Still Looking for the Data
I wasn’t able to attend this one.
Recycling Your eDiscovery Data: How Managing Data Across Your Portfolio can Help to Reduce Wasteful Spending
I wasn’t able to attend this one.
Ready, Fire, Aim! Negotiating Discovery Protocols
The Mandatory Initial Discovery Pilot Program in the Northern District of Illinois and Arizona requires production within 70 days from filing in order to motivate both sides to get going and cooperate. One complaint about this is that people want a motion to dismiss to be heard before getting into ediscovery. Can’t get away with saying “give us everything” under the pilot program since there is not enough time for that to be possible. Nobody wants to be the unreasonable party under such a tight deadline. The Commercial Division of the NY Supreme Court encourages categorical privilege logs. You describe the category, say why it is privileged, and specify how many documents were redacted vs being withheld in their entirety. Make a list of third parties that received the privileged documents (not a full list of all from/to). It can be a pain to come up with a set of categories when there is a huge number of documents. When it comes to TAR protocols, one might disclose the tool used or whether only the inclusive email was produced. Should the seed set size or elusion set size be disclosed? Why is the producing party disclosing any of this instead of just claiming that their only responsibility is to produce the documents? Disclosing may reduce the risk of having a fight over sufficiency. Government regulators will just tell you to give them everything exactly the way they want it. When responding to a criminal antitrust investigation you can get in trouble if you standardize the timezone in the data. Don’t do threading without consent. A second request may require you to provide a list of all keywords in the collection and their frequencies. Be careful about orders requiring you to produce the full family — this will compel you to produce non-responsive attachments.
Document Review Pricing Reset
A common approach is hourly pricing for everything (except hosting). This may be attractive to the customer because other approaches require the vendor to take on risk that the labor will be more than expected and they will build that into the price. If the customer doesn’t need predictable cost, they won’t want to pay (implicitly) for insurance against a cost overrun. It is a choice between predictability of cost and lowest cost. Occasionally review is priced on a per-document basis, but it is hard to estimate what the fair price is since data can vary. Per-document pricing puts some pressure on the review team to better manage the process for efficiency. Some clients are asking for a fixed price to handle everything for the next three years. A hybrid model has a fixed monthly fee with a lower hourly rate for review, the lower rate making it less painful to pay for extra QC review. Using separate vendors and review companies can have a downside if reviewers sit idle while the tech is not ready. On the other hand, if there are problems with the reviewers it is nice to have the option to swap them out for another review team.
Finding Common Ground: Legal & IT Working Together
I wasn’t able to attend this one.
# Highlights from EDRM Workshop 2019
The annual EDRM Workshop was held at Duke Law School starting on the evening of May 15th and ending at lunch time on the 17th. It consisted of a mixture of panels, presentations, working group reports, and working sessions focused on various aspects of e-discovery. I’ve provided some highlights below. You can find my full set of photos here.
Herb Roitblat presented a paper on fear of missing out (FOMO). If 80% recall is achieved, is it legitimate for the requesting party to be concerned about what may have been missed in the 20% of the responsive documents that weren’t produced, or are the facts in that 20% duplicative of the facts found in the 80% that was produced?
A panel discussed the issues faced by in-house counsel. Employees want to use the latest tools, but then you have to worry about how to collect the data (e.g., Skype video recordings). How to preserve an iPhone? What if the phone gets lost or stolen? When doing TAR, can the classifier/model be moved between cases/clients? New vendors need to be able to explain how they are unique, they need to get established (nobody wants to be on the cutting edge, and it’s hard to get a pilot going), and they should realize that it can take a year to get approval. There are security/privacy problems with how law firms handle email. ROI tracking is important. Analytics is used heavily in investigations, and often in litigation, but they currently only use TAR for prioritization and QC, not to cull the population before review. Some law firms are averse to putting data in the cloud, but cloud providers may have better security than law firms.
The GDPR team is working on educating U.S. judges about GDPR and developing a code of conduct. The EDRM reference will be made easier to update. The AI group is focused on AI in legal (e.g., estimating recidivism, billing, etc.), not implications of AI for the law. The TAR group’s paper is out. The Privilege Logs group wants to avoid duplicating Sedona’s effort (sidenote: lawyers need to learn that an email is not priv just because a lawyer was CC’ed on it). The Stop Words team is trying to educate people about things such as regular expressions, and warned about cases where you want to search for a single letter or a term such as “AN” (for ammonium nitrate). The Proportionality group talked about the possibility of having a standard set of documents that should be produced for certain types of cases and providing guidelines for making proportionality arguments to the court.
A panel of judges said that cybersecurity is currently a big issue. Each court has its own approach to security. Rule 16 conferences need to be taken seriously. Judges don’t hire e-discovery vendors, so they don’t know costs. How do you collect a proprietary database? Lawyers can usually work it out without the judge. There is good cooperation when the situations of the parties aren’t too asymmetric. Attorneys need to be more specific in document requests and objections (no boilerplate). Attorneys should know the case better than the judge, and educate the judge in a way that makes the judge look good. Know the client’s IT systems and be aware of any data migration efforts. Stay up on technology (e.g., Slack and text messages). Have a 502(d) order (some people object because they fear the judge will assume priv review is not needed, but the judges didn’t believe that would happen). Protect confidential information that is exchanged (what if there is a breach?). When filing under seal, “attorney’s eyes only” should be used very sparingly, and “confidential” is overused.
# TAR vs. Keyword Search Challenge, Round 6 (Instant Feedback)
This was by far the most significant iteration of the ongoing exercise where I challenge an audience to produce a keyword search that works better than technology-assisted review (also known as predictive coding or supervised machine learning). There were far more participants than in previous rounds, and a structural change in the challenge allowed participants to get immediate feedback on the performance of their queries so they could iteratively improve them. A total of 1,924 queries were submitted by 42 participants (an average of 45.8 queries per person) and higher recall levels were achieved than in any prior version of the challenge, but the audience still couldn’t beat TAR.
In previous versions of the experiment, the audience submitted search queries on paper or through a web form using their phones, and I evaluated a few of them live on stage to see whether the audience was able to achieve higher recall than TAR. Because the number of live evaluations was so small, the audience had very little opportunity to use the results to improve their queries. In the latest iteration, participants each had their own computer in the lab at the 2019 Ipro Tech Show, and the web form evaluated each query and gave the user immediate feedback on the recall achieved. Furthermore, it displayed the relevance and important keywords for each of the top 100 documents matching the query, so participants could quickly discover useful new search terms to tweak their queries. This gave participants a significant advantage over a normal e-discovery scenario, since they could try an unlimited number of queries without incurring any cost to make relevance determinations on the retrieved documents in order to decide which keywords would improve the queries. The number of participants was significantly larger than in any of the previous iterations, and they had a full 20 minutes to try as many queries as they wanted. It was the best chance an audience has ever had of beating TAR. They failed.
To do a fair comparison between TAR and the keyword search results, recall values were compared for equal amounts of document review effort. In other words, for a specified amount of human labor, which approach gave the best production? For the search queries, the top 3,000 documents matching the query were evaluated to determine the number that were relevant so recall could be computed (the full population was reviewed in advance, so the relevance of all documents was known). That was compared to the recall for a TAR 3.0 process where 200 cluster centers were reviewed for training and then the top-scoring 2,800 documents were reviewed. If the system was allowed to continue learning while the top-scoring documents were reviewed, the result was called “TAR 3.0 CAL.” If learning was terminated after review of the 200 cluster centers, the result was called “TAR 3.0 SAL.” The process was repeated with review of 6,000 documents instead of 3,000 so you can see how much recall improves if you double the review effort. Participants could choose to submit queries for any of three topics: biology, medical industry, or law.
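Since the full population was reviewed in advance, recall at a fixed review effort reduces to counting relevant documents in the top k of a ranking. A minimal sketch of that scoring (the document IDs and rankings here are invented for illustration):

```python
# Recall at k: the fraction of all relevant documents that appear in the
# top k of a ranked retrieval. This requires full ground truth, which
# the challenge had because the whole population was reviewed in advance.
def recall_at_k(ranked_doc_ids, relevant_ids, k):
    found = sum(1 for doc in ranked_doc_ids[:k] if doc in relevant_ids)
    return found / len(relevant_ids)

relevant = {1, 4, 7, 9}            # invented ground truth
ranking = [4, 2, 1, 8, 9, 3, 7]    # invented query ranking
print(recall_at_k(ranking, relevant, 5))  # 0.75 (3 of the 4 relevant docs)
```

In the actual challenge, k was 3,000 or 6,000 rather than 5, but the calculation is the same.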
The results below labeled “Avg Participant” are computed by finding the highest recall achieved by each participant and averaging those values together. These are surely somewhat inflated values since one would probably not go through so many iterations of honing the queries in practice (especially since evaluating the efficacy of a query would normally involve considerable labor instead of being free and instantaneous), but I wanted to give the participants as much advantage as I could and including all of the queries instead of just the best ones would have biased the results to be too low due to people making mistakes or experimenting with bad queries just to explore the documents. The results labeled “Best Participant” show the highest recall achieved by any participant (computed separately for Top 3,000 and Top 6,000, so they may be different queries).
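The “Avg Participant” and “Best Participant” statistics described above can be sketched as follows (the submission data is invented; the real challenge had 1,924 queries from 42 participants):

```python
# Per the description above: take each participant's highest recall,
# then average those maxima ("Avg Participant") and take the overall
# maximum ("Best Participant").
from collections import defaultdict

def summarize(submissions):
    """submissions: iterable of (participant_id, recall) pairs."""
    best = defaultdict(float)
    for pid, recall in submissions:
        best[pid] = max(best[pid], recall)
    maxima = list(best.values())
    return sum(maxima) / len(maxima), max(maxima)

subs = [("a", 40.0), ("a", 55.0), ("b", 62.0), ("b", 48.0), ("c", 51.0)]
print(summarize(subs))  # (56.0, 62.0)
```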
| Biology Recall   | Top 3,000 | Top 6,000 |
|------------------|-----------|-----------|
| Avg Participant  | 54.5      | 69.5      |
| Best Participant | 66.0      | 83.2      |
| TAR 3.0 SAL      | 72.5      | 91.0      |
| TAR 3.0 CAL      | 75.5      | 93.0      |

| Medical Recall   | Top 3,000 | Top 6,000 |
|------------------|-----------|-----------|
| Avg Participant  | 38.5      | 51.8      |
| Best Participant | 46.8      | 64.0      |
| TAR 3.0 SAL      | 67.3      | 83.7      |
| TAR 3.0 CAL      | 80.7      | 88.5      |

| Law Recall       | Top 3,000 | Top 6,000 |
|------------------|-----------|-----------|
| Avg Participant  | 43.1      | 59.3      |
| Best Participant | 60.5      | 77.8      |
| TAR 3.0 SAL      | 63.5      | 82.3      |
| TAR 3.0 CAL      | 77.8      | 87.8      |
As you can see from the tables above, the best result for any participant never beat TAR (SAL or CAL) when there was an equal amount of document review performed. Furthermore, the average participant result for Top 6,000 never beat the TAR results for Top 3,000, though the best participant result sometimes did, so TAR typically gives a better result even with half as much review effort expended. The graphs below show the best results for each participant compared to TAR in blue. The numbers in the legend are the ID numbers of the participants (the color for a particular participant is not consistent across topics). Click the graph to see a larger version.
The large number of people attempting the biology topic was probably due to it being the default, and I illustrated how to use the software with that topic.
One might wonder whether the participants could have done better if they had more than 20 minutes to work on their queries. The graphs below show the highest recall achieved by any participant as a function of time. You can see that results improved rapidly during the first 10 minutes, but it became hard to make much additional progress beyond that point. Also, over half of the audience continued to submit queries after the 20-minute contest, while I was giving the remainder of the presentation. 40% of the queries were submitted during the first 10 minutes, 40% during the second 10 minutes, and 20% while I was talking. Since roughly the same number of queries were submitted in the second 10 minutes as in the first, but much less progress was made, I think it is safe to say that time was not a big factor in the results.
In summary, even with a large pool of participants, ample time, and the ability to hone search queries based on instant feedback, nobody was able to generate a better production than TAR when the same amount of review effort was expended. It seems fair to say that keyword search often requires twice as much document review to achieve a production that is as good as what you would get with TAR.
# Highlights from Ipro Tech Show 2019
Ipro renamed their conference from Ipro Innovations to the Ipro Tech Show this year. As always, it was held at the Talking Stick Resort in Arizona and it was very well organized. It started with a reception on April 29th that was followed by two days of talks. There were also training days bookending the conference on April 29th and May 2nd. After the keynote on Tuesday morning, there were five simultaneous tracks for the remainder of the conference, including a lot of hands-on work in computer labs. I was only able to attend a few of the talks, but I’ve included my notes below. You can find my full set of photos here. Videos and slides from the presentations are available here.
Dean Brown, who has been Ipro’s CEO for eight months, opened the conference with some information about himself and where the company is headed. He mentioned that the largest case in a single Ipro database so far was 11 petabytes from 400 million documents. Q1 2019 was the best quarter in the company’s history, and they had a 98% retention rate. They’ve doubled spending on development and other departments.
Next, there was a panel where three industry experts discussed artificial intelligence. AI can be used to analyze legal bills to determine which charges are reasonable. Google uses AI to monitor and prohibit behaviors within the company, such as stopping your account from being used to do things when you are supposed to be away. Only about 5% of the audience said they were using TAR. It was hypothesized that this is due to FRCP 26(g)’s requirement to certify the production as complete and correct. Many people use Slack instead of e-mail, and dealing with that is an issue for e-discovery. CLOC was mentioned as an organization helping corporations get a handle on legal spending.
The keynote was given by Kevin Surace, and mostly focused on AI. You need good data and have to be careful about spurious correlations in the data (he showed various examples that were similar to what you find here). An AI can watch a video and supplement it with text explaining what the person in the video is doing. One must be careful about fast changing patterns and black swan events where there is no data available to model. Doctors are being replaced by software that is better informed about the most recent medical research. AI can review an NDA faster and more accurately than an attorney. There is now a news channel in China using an AI news anchor instead of a human to deliver the news. With autonomous vehicles, transportation will become free (supported by ads in the vehicle). AI will have an impact 100 times larger than the Internet.
I gave a talk titled “Technology: The Cutting Edge and Where We’re Headed” that focused on AI. I started by showing the audience five pairs of images from WhichFaceIsReal.com and challenged them to determine which face was real and which was generated by an AI. When I asked if anyone got all five right, I only saw one person raise their hand. When I asked if anyone got all five wrong, I saw three hands go up. Admittedly, I picked image pairs that I thought were particularly difficult, but the result is still a little scary.
I also gave a talk titled “TAR Versus Keyword Challenge” where I challenged the audience to construct a keyword search that worked better than technology-assisted review. The format of this exercise was very different from previous iterations, making it easy for participants to test and hone their queries. We had 1,924 queries submitted by 42 participants. They achieved the highest recall levels seen so far, but still couldn’t beat TAR. A detailed analysis is available here.
# Misleading Metrics and Irrelevant Research (Accuracy and F1)
If one algorithm achieved 98.2% accuracy while another had 98.6% for the same task, would you be surprised to find that the first algorithm required ten times as much document review to reach 75% recall compared to the second algorithm? This article explains why some performance metrics don’t give an accurate view of performance for ediscovery purposes, and why that makes a lot of research utilizing such metrics irrelevant for ediscovery.
The key performance metrics for ediscovery are precision and recall. Recall, R, is the percentage of all relevant documents that have been found. High recall is critical to defensibility. Precision, P, is the percentage of documents predicted to be relevant that actually are relevant. High precision is desirable to avoid wasting time reviewing non-relevant documents (if documents will be reviewed to confirm relevance and check for privilege before production). In other words, precision is related to cost. Specifically, 1/P is the average number of documents you’ll have to review per relevant document found. When using technology-assisted review (predictive coding), documents can be sorted by relevance score and you can choose any point in the sorted list and compute the recall and precision that would be achieved by treating documents above that point as being predicted to be relevant. One can plot a precision-recall curve by doing precision and recall calculations at various points in the sorted document list.
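The cutoff computation described above can be sketched in a few lines of Python (the relevance scores and labels here are made-up illustrative values, not data from any experiment discussed in this article):

```python
import numpy as np

# Hypothetical relevance scores and true labels for 8 documents.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
relevant = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # 1 = relevant

order = np.argsort(-scores)            # review from the highest score down
hits = np.cumsum(relevant[order])      # relevant documents found so far
reviewed = np.arange(1, len(scores) + 1)

precision = hits / reviewed
recall = hits / relevant.sum()
cost = 1 / precision                   # docs reviewed per relevant doc found

print(recall[3], precision[3])         # at a cutoff of 4 docs: R=0.75, P=0.75
```

Plotting `recall` against `precision` for every cutoff gives the precision-recall curve discussed below.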
The precision-recall curve to the right compares two different classification algorithms applied to the same task. To do a sensible comparison, we should compare precision values at the same level of recall. In other words, we should compare the cost of reaching equally good (same recall) productions. Furthermore, the recall level where the algorithms are compared should be one that is sensible for ediscovery — achieving high precision at a recall level a court wouldn’t accept isn’t very useful. If we compare the two algorithms at R=75%, 1-NN has P=6.6% and 40-NN has P=70.4%. In other words, if you sort by relevance score with the two algorithms and review documents from the top down until 75% of the relevant documents are found, you would review 15.2 documents per relevant document found with 1-NN and 1.4 documents per relevant document found with 40-NN. The 1-NN algorithm would require over ten times as much document review as 40-NN. 1-NN has been used in some popular TAR systems. I explained why it performs so badly in a previous article.
There are many other performance metrics, but they can be written as a mixture of precision and recall (see Chapter 7 of the current draft of my book). Anything that is a mixture of precision and recall should raise an eyebrow — how can you mix together two fundamentally different things (defensibility and cost) into a single number and get a useful result? Such metrics imply a trade-off between defensibility and cost that is not based on reality. Research papers that aren’t focused on ediscovery often use such performance measures and compare algorithms without worrying about whether they are achieving the same recall, or whether the recall is high enough to be considered sufficient for ediscovery. Thus, many conclusions about algorithm effectiveness simply aren’t applicable for ediscovery because they aren’t based on relevant metrics.
One popular metric is accuracy, which is the percentage of predictions that are correct. If a system predicts that none of the documents are relevant and prevalence is 10% (meaning 10% of the documents are relevant), it will have 90% accuracy because its predictions were correct for all of the non-relevant documents. If prevalence is 1%, a system that predicts none of the documents are relevant achieves 99% accuracy. Such incredibly high numbers for algorithms that fail to find anything! When prevalence is low, as it often is in ediscovery, accuracy makes everything look like it performs well, including algorithms like 1-NN that can be a disaster at high recall. The graph to the right shows the accuracy-recall curve that corresponds to the earlier precision-recall curve (prevalence is 2.633% in this case), showing that it is easy to achieve high accuracy with a poor algorithm by evaluating it at a low recall level that would not be acceptable for ediscovery. The maximum accuracy achieved by 1-NN in this case was 98.2% and the max for 40-NN was 98.6%. In case you are curious, the relationship between accuracy, precision, and recall is:
$ACC = 1 - \rho (1 - R) - \rho R (1 - P) / P$
where $\rho$ is the prevalence.
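This identity can be sanity-checked numerically from confusion-matrix counts (the counts below are arbitrary values chosen for illustration):

```python
# Check ACC = 1 - rho*(1 - R) - rho*R*(1 - P)/P against a direct computation.
TP, FP, FN, TN = 80, 34, 20, 3666   # arbitrary illustrative confusion-matrix counts
n = TP + FP + FN + TN

P = TP / (TP + FP)                  # precision
R = TP / (TP + FN)                  # recall
rho = (TP + FN) / n                 # prevalence

acc_direct = (TP + TN) / n
acc_formula = 1 - rho * (1 - R) - rho * R * (1 - P) / P
print(acc_direct, acc_formula)      # the two values agree
```

The agreement is exact: the term $\rho(1-R)$ is the false-negative fraction and $\rho R(1-P)/P$ is the false-positive fraction, so the formula is just $1 - (FN + FP)/n$.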
Another popular metric is the F1 score. I’ve criticized its use in ediscovery before. The relationship to precision and recall is:
$F_1 = 2 P R / (P + R)$
The F1 score lies between the precision and the recall, and is closer to the smaller of the two. As far as F1 is concerned, 30% recall with 90% precision is just as good as 90% recall with 30% precision (both give F1 = 0.45) even though the former probably wouldn’t be accepted by a court and the latter would. F1 cannot be large at small recall, unlike accuracy, but it can be moderately high at modest recall, making it possible to achieve a decent F1 score even if performance is disastrously bad at the high recall levels demanded by ediscovery. The graph to the right shows that 1-NN manages to achieve a maximum F1 of 0.64, which seems pretty good compared to the 0.73 achieved by 40-NN, giving no hint that 1-NN requires ten times as much review to achieve 75% recall in this example.
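The symmetry between precision and recall in F1 is easy to verify directly:

```python
# F1 treats precision and recall symmetrically: swapping them leaves F1 unchanged.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# 90% precision / 30% recall and 30% precision / 90% recall score identically,
# even though only the latter would likely satisfy a court.
print(f1(0.9, 0.3))  # 0.45
print(f1(0.3, 0.9))  # 0.45
```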
Hopefully this article has convinced you that it is important for research papers to use the right metric, specifically precision (or review effort) at high recall, when making algorithm comparisons that are useful for ediscovery.
https://mathoverflow.net/questions/186807/for-an-elliptic-curve-e-mathbbq-can-the-cohomology-group-h1-textgal-m
For an elliptic curve $E/\mathbb{Q}$ can the cohomology group $H^1(\text{Gal}(\mathbb{Q}(E[p])/\mathbb{Q}), E[p])$ be nontrivial?
Suppose that $E$ an elliptic curve defined over $\mathbb{Q}$ and $p$ an odd prime. Let $G=\text{Gal}(\mathbb{Q}(E[p])/\mathbb{Q})$. I am wondering whether the cohomology group $H^1(G, E[p])$ can be nontrivial. If $G=GL_2(\mathbb{F}_p)$ (which is the case for all but finitely many primes $p$ if $E$ does not have complex multiplication) then $H^1(G, E[p])$ is trivial. This can be shown by considering the homothety subgroup $Z \le G$ which has order $p-1>1$. One easily sees that $H^i(Z, E[p])=0$ for all $i \geq 0$ and so the result follows from the Hochschild-Serre spectral sequence.
Now suppose that $G$ is a proper subgroup of $GL_2(\mathbb{F}_p)$. Can $H^1(G, E[p])$ be nontrivial?
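(Not part of the original question, but the vanishing claimed above for the homothety subgroup can be checked by brute force for a small prime. The sketch below, using NumPy, assumes $p=5$ with $\lambda=2$ as generator of $\mathbb{F}_5^\times$: for a cyclic group $Z=\langle g\rangle$, $H^1(Z,M)=\ker(N)/\operatorname{im}(g-1)$ where $N=1+g+\cdots+g^{d-1}$ is the norm map.)

```python
import numpy as np
from itertools import product

p, lam = 5, 2                        # lam = 2 generates F_5^* (order 4)
g = lam * np.eye(2, dtype=int) % p   # homothety acting on E[p] = F_p^2
d = 4                                # order of lam mod p

# For a cyclic group <g>, H^1 = ker(norm) / im(g - 1).
norm = sum(np.linalg.matrix_power(g, k) for k in range(d)) % p
vectors = list(product(range(p), repeat=2))
ker = [v for v in vectors if not (norm @ v % p).any()]
im = {tuple((g - np.eye(2, dtype=int)) @ v % p) for v in vectors}
print(len(ker), len(im))             # 25 25 -> H^1(Z, E[5]) is trivial
```

Since $g-1=(\lambda-1)I$ is invertible, the image is everything and the quotient is trivial, matching the claim.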
• I don't know of an example with $p>2$. Maybe the group theorists can tell us examples of $H^1(H, V)\neq 0$ for subgroups $H$ of $\operatorname{SL}_2(\mathbb{F}_p)$ with $V$ the 2-dimensional vector space over $\mathbb{F}_p$. Say with $p=3$ or $p=5$. – Chris Wuthrich Nov 11 '14 at 13:20
Fix elements $\zeta$ and $\alpha$ with $\zeta$ a primitive third root of unity and $\alpha^3 = -4$. These generate a field $K = \Bbb Q(\zeta,\alpha)$ which is the splitting field of $x^3 + 4$, with Galois group $G$ the symmetric group on three letters.
Consider the elliptic curve $y^2 = x^3 + 1$. Unless I have miscalculated, the $3$-torsion points on this curve are the points $(x,y)$ with $x^4 + 4x = 0$. In particular, the points $(0,1)$ and $(\alpha,2\zeta+1)$ are independent 3-torsion points on this curve, so $K = \Bbb Q(E[3])$. Taking these as a basis, the resulting image of the Galois group into $GL_2(\Bbb F_3)$ must be $$\begin{bmatrix}1 & * \\ 0 & *\end{bmatrix}$$ because $(0,1)$ is fixed and the map must be injective.
Let $H < G$ be the subgroup of order three. Since the coefficient group $E[3]$ is $3$-torsion, a transfer argument implies that the restriction $H^1(G;E[3]) \to H^1(H;E[3])$ is injective with image the invariants under $G/H \cong \Bbb Z/2$.
If $$A = \begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix}$$ represents the generator $\tau$ of $H$, then the group $H^1(H;E[3])$ is $ker(1 + A + A^2) / Im(1 - A)$, which is generated by the column vector $\left[\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right]$. (This describes an element of $H^1$ by where an associated $1$-cocycle sends a chosen generator of $H$.)
As a $1$-cocycle, this is represented by the map $f:H \to E[3]$ with $$f(\tau^k) = (1 + A + \cdots + A^{k-1})\left[\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right].$$ The action of the element $\sigma = \left[\begin{smallmatrix}1 & 0 \\ 0 & -1\end{smallmatrix}\right]$ on this cocycle is given by $$({}^\sigma f)(\tau) = \sigma \cdot f(\sigma^{-1} \tau \sigma) = \sigma \cdot f(\tau^2) = \left[\begin{smallmatrix}1 \\ 1\end{smallmatrix}\right]$$ which shows that the two $1$-cocycles ${}^\sigma f$ and $f$ represent the same element of $H^1$. Therefore, this element of $H^1(H;E[3])$ is invariant under $G/H$ and lifts to a nontrivial element of $H^1(G;E[3])$.
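(A brute-force check of the $\ker/\operatorname{im}$ computation above, not part of the original answer; it enumerates all of $\mathbb{F}_3^2$ with NumPy.)

```python
import numpy as np
from itertools import product

A = np.array([[1, 1], [0, 1]])     # action of the generator tau on E[3]
I2 = np.eye(2, dtype=int)

S = (I2 + A + A @ A) % 3           # 1 + A + A^2 vanishes mod 3
vectors = list(product(range(3), repeat=2))
ker = [v for v in vectors if not (S @ v % 3).any()]
im = {tuple((I2 - A) @ v % 3) for v in vectors}

print(len(ker), len(im))           # 9 3 -> H^1(H; E[3]) has 3 elements
assert (0, 1) not in im            # so the class of (0,1) is nontrivial
```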
• You are right. More generally, If $G$ is the group of all matrices of the form $(\begin{smallmatrix} 1 & * \\ 0 & * \end{smallmatrix})$, then $H^1(G,E[p])= \mathbb{F}_p$ if $p =3$ and it is zero if $p>3$. Your example, and many other curves with a 3-torsion point rational over $\mathbb{Q}$, has indeed this group. – Chris Wuthrich Nov 11 '14 at 22:20
• @ChrisWuthrich Even worse, if I'm calculating correctly (based on your argument) there only appears to be a candidate subgroup of $GL_2(\Bbb F_p)$ which can support a nonzero cohomology group if $p \not \equiv 1 \mod 3$; the subgroup needs to be the set of matrices of the form $(\begin{smallmatrix}c^2 & * \\ 0 & c\end{smallmatrix})$. I don't know whether $p=5$ supports a curve with this type of torsion. – Tyler Lawson Nov 11 '14 at 23:33
• @TylerLawson thanks for this very interesting example! – Ahmed Matar Nov 12 '14 at 8:56
• Yes that is possible for $p=5$. It turns out to be equivalent to be a quadratic twist by 5 of a curve with a rational 5-torsion point. – Chris Wuthrich Nov 12 '14 at 14:37
If $E[p]$ is generated by $P,Q$ with $P$ invariant (i.e. rational) and the action of $G$ is generated by $Q \mapsto Q+P$, so $G$ is isomorphic to $<P>$, then $H^1(G,E[p])$ contains $H^1(G,<P>) = Hom(G,<P>)$ which is non-trivial. Of course there is the question of realizing this over $\mathbb{Q}$ which won't be possible for large $p$ by Mazur but probably can be for $p=3$.
• Over $\mathbb{Q}$, the determinant $G\to \mathbb{F}_p^{\times}$ must be surjective. So your $G$ won't appear as a group for an elliptic curve over $\mathbb{Q}$. – Chris Wuthrich Nov 11 '14 at 12:10
• That's true. It only works over the rationals for $p=2$. – Felipe Voloch Nov 11 '14 at 12:22
$\newcommand{\FF}{\mathbb{F}}\DeclareMathOperator{\SL}{SL}$ OK, let me try.
Write $M=E[p]$. If the order of $G$ is coprime to $p$, then $H^1(G,M)=0$. Assume that $p$ divides the order of $G$. Now by Prop 15 in Serre's "Propriétés galoisiennes... " Invent Math 15, $G$ either contains $\SL_2(\FF_p)$ or it is contained in a Borel subgroup. According to the question, we may assume that $G$ is contained in a Borel, say upper triangular matrices.
Let $H=G\cap \SL_2(\FF_p)$. Restriction shows that $H^1(G,M)$ is the $G/H$-fixed part of $H^1(H,M)$. Now $H$ is a subgroup of $(\begin{smallmatrix} a & b \\ 0 & 1/a\end{smallmatrix})$ with invertible $a$ and arbitrary $b$. By assumption $H$ contains the subgroup $K$ generated by $h = (\begin{smallmatrix} 1 & 1\\ 0 & 1\end{smallmatrix})$. Again by restriction-inflation, $H^1(H,M)$ is contained in $H^1(K,M)$. The latter can be computed as usual and we find that a cocycle is determined by its image on $h$, which has to belong to $\FF_p(1, 0)$.
Now we consider again the action of $G/H$ on $H^1(K,M)$. If I am not mistaken, then the class $\bar g$ of matrices of determinant $d$ acts by $(\bar g * \xi)(h) = d \cdot \xi(h)$. If so, then there are no elements in $H^1(H,M)$ fixed by $G/H$ as soon as there is an element of determinant $\neq 1$ in $G$. By the Weil pairing, this must be the case when $p>2$. Hence $H^1(G,M)=0$.
Edit: However, I am mistaken as the example of Tyler Lawson shows. The action of $G/H$ may be trivial.
https://www.jobilize.com/course/section/diffraction-grating-by-openstax?qcr=www.quizover.com
# 9.2 Diffraction grating
We derive the interference pattern for a diffraction grating.
## Diffraction grating
Consider the case of $N$-slit diffraction. We have
$$E_{n}=\frac{\epsilon_{L}a}{R}\,\frac{\sin\beta}{\beta}\,e^{i\left(kR_{n}-\omega t\right)},\qquad n=1,2,\dots,N.$$
We can just follow the steps of the two-slit case and extend them. Using $R_{n}=R-\left(n-1\right)d\sin\theta$ (so that $kR_{n}=kR-2\left(n-1\right)\alpha$ with $\alpha=kd\sin\theta/2$), we get
$$E=\sum_{n=1}^{N}E_{n}=\frac{\epsilon_{L}a}{R}\,\frac{\sin\beta}{\beta}\,e^{i\left(kR-\omega t\right)}\sum_{j=0}^{N-1}e^{-i2j\alpha}.$$
This is the same geometric series we dealt with before, $\sum_{n=0}^{N-1}x^{n}=\frac{1-x^{N}}{1-x}$, so
$$E=\frac{\epsilon_{L}a}{R}\,\frac{\sin\beta}{\beta}\,e^{i\left(kR-\omega t\right)}\,\frac{1-e^{-i2N\alpha}}{1-e^{-i2\alpha}}=\frac{\epsilon_{L}a}{R}\,\frac{\sin\beta}{\beta}\,e^{i\left(kR-\left(N-1\right)\alpha-\omega t\right)}\,\frac{\sin N\alpha}{\sin\alpha},$$
where the last step uses $1-e^{-i2N\alpha}=e^{-iN\alpha}\left(e^{iN\alpha}-e^{-iN\alpha}\right)$ and likewise for the denominator.
Notice that this just ends up being multisource interference multiplied by single slit diffraction.
Squaring it we see that: $I\left(\theta \right)={I}_{0}\frac{{{\mathrm{sin}}}^{2}\beta }{{\beta }^{2}}\frac{{{\mathrm{sin}}}^{2}N\alpha }{{{\mathrm{sin}}}^{2}\alpha }$
Interference with diffraction for 6 slits with $d=4a$
Interference with diffraction for 10 slits with $d=4a$
Principal maxima occur when $\frac{\sin N\alpha}{\sin\alpha}=N$, or, since $\alpha=kd\sin\theta/2$, when $kd\sin\theta=2n\pi$ for $n=0,1,2,3,\dots$, i.e. $\frac{2\pi}{\lambda}d\sin\theta=2n\pi$, or $\sin\theta=\frac{n\lambda}{d}.$
and just like in multisource interference, minima occur at $\sin\theta=\frac{n\lambda}{Nd}$ for $n=1,2,3,\dots$ with $\frac{n}{N}$ not an integer. A diffraction grating is a repetitive array of diffracting elements such as slits or reflectors, typically with $N$ very large (hundreds). Notice how all but the $n=0$ maximum depend on $\lambda$, so you can use a grating for spectroscopy.
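The locations of the maxima and minima can be checked numerically. This sketch assumes $N=6$ slits, matching the first figure, and evaluates only the multi-slit factor $\sin^2 N\alpha/\sin^2\alpha$ (the single-slit envelope $\sin^2\beta/\beta^2$ is omitted):

```python
import numpy as np

N = 6  # number of slits, as in the first figure

def multi_slit_factor(alpha):
    """sin^2(N*alpha)/sin^2(alpha), with the limiting value N^2 at alpha = n*pi."""
    alpha = np.asarray(alpha, dtype=float)
    s = np.sin(alpha)
    at_max = np.isclose(s, 0.0)
    safe = np.where(at_max, 1.0, s)       # avoid dividing by zero at the maxima
    return np.where(at_max, N**2, (np.sin(N * alpha) / safe) ** 2)

# Principal maxima at alpha = n*pi (i.e. sin(theta) = n*lambda/d) reach N^2 = 36:
print(multi_slit_factor(np.pi * np.arange(3)))
# Minima at alpha = n*pi/N when n/N is not an integer:
print(multi_slit_factor(np.pi * np.array([1, 2, 3, 4, 5]) / N))
```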
Source: OpenStax, Waves and optics. OpenStax CNX. Nov 17, 2005 Download for free at http://cnx.org/content/col10279/1.33
https://www.physicsforums.com/threads/probability-of-getting-arithmetic-sequence-from-3-octahedron-dice.1015305/
# Probability of getting arithmetic sequence from 3 octahedron dice
songoku
Homework Statement:
Relevant Equations:
Probability
Arithmetic Sequence
I try to list all the possible sequences:
1 2 3
1 3 5
1 4 7
2 3 4
2 4 6
2 5 8
3 4 5
3 5 7
4 5 6
4 6 8
5 6 7
6 7 8
I get 12 possible outcomes, so the probability is ##\frac{12 \times 3!}{8^3}=\frac{9}{64}##
But the answer key is ##\frac{5}{32}## . Where is my mistake? Thanks
Homework Helper
Gold Member
2022 Award
Where is my mistake?
In the sequence a, a+b, a+2b, what are all the possible values of b?
songoku
In the sequence a, a+b, a+2b, what are all the possible values of b?
I think for this case the difference should be positive integer so b can be 1, 2, or 3
Homework Helper
Gold Member
2022 Award
I think for this case the difference should be positive integer so b can be 1, 2, or 3
I disagree. 1, 1, 1 is a perfectly good arithmetic sequence.
Homework Helper
Gold Member
I know I'll spoil a bit of the solution, but to be a bit more formal: since the die is 8-sided, we must have $$1\leq a+2b\leq8\Rightarrow \frac{1-a}{2}\leq b\leq \frac{8-a}{2}$$ and of course $$b\geq 0$$.
Homework Helper
Gold Member
Anyway, I think you found the values of b correctly, except you didn't take the case b=0 (and to be honest I didn't think of that myself). If you add the 8 cases (a,a,a), a=1,...,8, to your result, you get the answer key.
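(The count — 72 ordered outcomes from the 12 strictly increasing sequences, plus 8 constant triples — can be confirmed by brute force. This sketch simply enumerates all $8^3$ ordered rolls:)

```python
from itertools import product
from fractions import Fraction

# Count ordered rolls of three 8-sided dice that can be arranged
# into an arithmetic sequence (constant triples included, b = 0).
count = 0
for roll in product(range(1, 9), repeat=3):
    a, b, c = sorted(roll)
    if b - a == c - b:
        count += 1

print(count, Fraction(count, 8 ** 3))  # 80, 5/32
```

Checking the sorted values suffices: an arithmetic arrangement is either constant or strictly monotone, so the common-difference condition on the sorted triple captures every case.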
songoku
I disagree. 1, 1, 1 is a perfectly good arithmetic sequence.
I know I ll spoil abit the solution but in order to be a bit more formal and since the dice is 8-ply it will have to be $$1\leq a+2b\leq8\Rightarrow \frac{1-a}{2}\leq b\leq \frac{8-a}{2}$$ and ofc $$b\geq 0$$.
Can 1, 1, 1 also be called a geometric sequence?
Homework Helper
Gold Member
Is 1, 1, 1 can also be called geometric sequence?
Yes, it is an arithmetic sequence with common difference ##\omega=0## and a geometric sequence with common ratio ##\omega=1##.
songoku
Yes it is arithmetic sequence with ##\omega=0## and geometric sequence with ##\omega=1##.
How about 0, 0, 0? Can that also be called both an arithmetic and a geometric sequence?
Homework Helper
Gold Member
I think yes but why are you asking these questions...
Homework Helper
Gold Member
0 isn't a possible number from the dice.
songoku
I think yes but why are you asking these questions...
I just want to know so that when I do other questions I know which ones I can consider arithmetic or geometric sequences.
Thank you very much for the help and explanation haruspex and Delta2
https://math.stackexchange.com/questions/1136061/lim-sup-liminf-an-ak
|
# lim sup (lim inf) $(A_n \setminus A_k)$
I need to show that $\limsup_n \liminf_k (A_n \cap A_k^c)=\emptyset$. Thus
$\bigcap_n\bigcup_{r\geq n} \bigcup_k \bigcap_{m\geq k} (A_r\cap A_m^c)=\emptyset$?
I am trying to show that $\lim_n P(\liminf_k (A_n \cap A_k^c))=0$. I need the above step and then the inequality $\limsup_n P(B_n)\leq P(\limsup_n B_n)$.
If I did not make a mistake somewhere, you could do it like this. (Basically just using distributivity and de Morgan laws.) $$\bigcap_n\bigcup_{r\geq n} \bigcup_k \bigcap_{m\geq k} (A_r\cap A_m^c) = \bigcap_n\bigcup_{r\geq n} \left(A_r \cap \left(\bigcup_k \bigcap_{m\geq k} A_m^c\right)\right) = \bigcap_n\bigcup_{r\geq n} \left(A_r \cap \left(\bigcap_k \bigcup_{m\geq k} A_m\right)^c\right) = \left(\bigcap_n\bigcup_{r\geq n} A_r \right) \cap \left(\bigcap_k \bigcup_{m\geq k} A_m\right)^c = \limsup A_n \cap (\limsup A_n)^c = \emptyset$$
http://mathoverflow.net/revisions/67003/list
|
3 Changed question in light of Prof. Bryant's response
For $k, n \in \mathbb{N}$, let $\mathcal{C}_n \mathbb{R}^k$ denote the configuration space of $n$ distinct points in $\mathbb{R}^k$.
• (1) Is there a description of the tangent space $T_C \mathcal{C}_n \mathbb{R}^k$ in terms of the configuration $C$?
• Equipping $\mathbb{R}^k$ with the usual metric, a regression line $l_C$ of a collection $C \in \mathcal{C}_n \mathbb{R}^k$ is a line minimizing the quantity $E_C = \sum\limits_{p \in C} d(p, l_C)^2.$ We can see this as a variational problem $E_C: M^k \rightarrow \mathbb{R}$ where $M^k$ is the parameter space of all lines in $\mathbb{R}^k$.
• (2) Is there an explicit parametrization of $M^k$?
• Without this knowledge, I'm not sure how to proceed to check whether $E_C$ is a Morse function. [Note for $k=2$: given $C \in \mathcal{C}_n \mathbb{R}^k$, since $n<\infty$ we can always find an angle $\theta$ such that a rotation of our axes by $\theta$ yields coordinates $(x,y)$ on $\mathbb{R}^2$ for which the $x$-values of the $p \in C$ are all distinct (bringing us back to function-fitting and the usual least-squares regression which minimizes only the distances in the $y$ direction).]
• 2 Modifications were made to clarify the usual definition of least-squares regression.
Let $\mathcal{C}_n \mathbb{R}^2$ denote the configuration space of $n$ distinct points in the plane. Equipping $\mathbb{R}^2$ with the usual metric, define the least-squares regression line $l_C$ of a collection $C \in \mathcal{C}_n \mathbb{R}^2$ to be the line minimizing the quantity $E_C = \sum\limits_{p \in C} d(p, l_C)^2.$ We can see this as a variational problem $E_C: M^2 \rightarrow \mathbb{R}$ where $M^2$ is the parameter space of all lines in $\mathbb{R}^2$.
• What can we say about the map $\mathcal{C}_n \mathbb{R}^2 \rightarrow M^2$ given by $C \mapsto l_C$? Is there an intuitive description of its differential?
• Given $C \in \mathcal{C}_n \mathbb{R}^2$, since $n<\infty$ we can always find an angle $\theta$ such that a rotation of our axes by $\theta$ yields coordinates $(x,y)$ on $\mathbb{R}^2$ for which the $x$-values of the $p \in C$ are all distinct (bringing us back to function-fitting and the usual least-squares regression, which minimizes only the distances in the $y$ direction). Is there some top-down way to see the existence of this $\theta$? (sorry, my Morse theory gland is firing.)
• Using the one-point compactification $\mathbb{R}^2 \hookrightarrow S^2$, we can identify $M^2$ with the set of all loops based at $\infty \in S^2$ that are isometric to circles of some radius. Is there some more satisfying way to characterize this class of loops in $S^2$?
1
# Least-squares regression and differential geometry
Let $\mathcal{C}_n \mathbb{R}^2$ denote the configuration space of $n$ distinct points in the plane. Equipping $\mathbb{R}^2$ with the usual metric, the least-squares regression line $l_C$ of a collection $C \in \mathcal{C}_n \mathbb{R}^2$ minimizes the quantity $E_C = \sum\limits_{p \in C} d(p, l_C)^2.$ We can see this as variational problem $E_C: M^2 \rightarrow \mathbb{R}$ where $M^2$ is the parameter space of all lines in $\mathbb{R}^2$.
• What can we say about the map $\mathcal{C}_n \mathbb{R}^2 \rightarrow M^2$ given by $C \mapsto l_C$? Is there an intuitive description of its differential?
• Given $C \in \mathcal{C}_n \mathbb{R}^2$, since $n<\infty$ we can always find an angle $\theta$ such that a rotation of our axes by $\theta$ yields coordinates $(x,y)$ on $\mathbb{R}^2$ for which the $x$-values of the $p \in C$ are all distinct, bringing us back to function-fitting. Is there some top-down way to see the existence of this $\theta$? (sorry, my Morse theory gland is firing.)
• Using the one-point compactification $\mathbb{R}^2 \hookrightarrow S^2$, we can identify $M^2$ with the set of all loops based at $\infty \in S^2$ that are isometric to circles of some radius. Is there some more satisfying way to characterize this class of loops in $S^2$?
https://crypto.stackexchange.com/questions/85841/pbkdf2-hmac-collisions/85842
|
PBKDF2-HMAC Collisions
Trying to understand the well known property of these collisions when used with SHA1, SHA256, etc. where a given key is larger than the block size of the digest function. In these cases the smaller of the collisions will have a size of 20 characters for SHA1 and 32 for SHA256 (1/2 the hexadecimal digest length), so does this mean to brute force a PBKDF2-HMAC-SHA1 an attacker only needs to consider passphrases of 20 characters or less? And therefore 32 or less if using SHA256? Are longer passphrases effectively pointless? Thanks.
The hash output by SHA-1 is 160-bit. That's 20 bytes (not characters; these are different notions, which is why we have character encodings). It can take $$2^{160}$$ values. The output of PBKDF2-HMAC-SHA-1 has a parameterizable size that can be smaller (by truncation) or larger (essentially by aggregating 20-byte results), but the parameter is often set to 20 bytes, and we'll assume that.
No. It only implies that a random value the size of the hash has probability $$2^{-160}$$ of matching a given hash. The practical consequence is that trying at random is hopeless.
Passwords are often restricted to a subset of ASCII with about 95 characters, thus there are about $$2^{131.714}$$ passwords of 20 such characters or less (most of them exactly 20 characters long). If PBKDF2 is parametrized to perform 1000 SHA-1 hashes per PBKDF2 (which is the lowest parametrization ever considered by its definition, and has become grossly insufficient), hashing half these passwords would require over $$2^{140}$$ hashes, that is over $$2^{40}$$ (a million million) times more than humanity has expended so far on bitcoin mining. That's just not an option.
https://socratic.org/questions/how-do-you-write-y-1-2x-1-in-standard-form
|
# How do you write y = 1/2x-1 in standard form?
Apr 29, 2015
The equation given to us is in the Slope-Intercept Form $y = m \cdot x + c$, where $m$ is the slope and $c$ is the Y intercept.
The standard form of a linear equation is $a x + b y = c$
So we first multiply both sides of this equation by 2:
$2 \cdot y = 2 \left(\left(\frac{1}{2}\right) x - 1\right)$
$2 y = x - 2$
Transposing $- 2$ to the left hand side, and $2 y$ to the right, we get
$2 = x - 2 y$
$x - 2 y = 2$ is the Standard Form of $y = \frac{1}{2} x - 1$
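As a quick numeric sanity check (illustration only; the helper name `slope_intercept` is mine), every point on the slope-intercept form also satisfies the standard form:

```rust
// Check that y = (1/2)x - 1 and x - 2y = 2 describe the same line.
fn slope_intercept(x: f64) -> f64 {
    0.5 * x - 1.0
}

fn main() {
    for x in [-4.0, 0.0, 1.0, 2.5, 100.0] {
        let y = slope_intercept(x);
        // Substituting y back in: x - 2y = x - (x - 2) = 2 for every x
        assert_eq!(x - 2.0 * y, 2.0);
    }
}
```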
https://www.sotr.blog/articles/knn
|
# Shaking off the Rust
### May 1st, 2022
Difficulty: Intermediate
Shaking off the Rust is a series of exercises with the Rust programming language. The purpose of the series is to improve both my and my dear reader's abilities with Rust by building things. Plus, by actually building stuff, we'll learn about an array of technological concepts in the process. In this installment, we're going to implement a classic machine learning algorithm.
After reading this installment, you'll have experience with:
• Rust’s if let syntax
• Rust’s Clone trait
• Extension traits
• A little bit of lifetime parameters
• Rust’s where clause
• The $k$ nearest neighbors algorithm
• Much more!
This installment’s Github repo: https://github.com/josht-jpg/k-nearust-neighbors
### K Nearust Neighbors
The $k$ nearest neighbors (KNN) algorithm is simple.
It’s a good algorithm for classifying data [1]. Suppose we have a labeled dataset, which we'll denote $D$, and an unlabeled data point, which we'll denote $d$, and we want to predict the correct label for $d$. We can do that with KNN.
KNN works like this:
For some integer $k$, we find the $k$ data points in $D$ nearest to $d$ — the $k$ nearest neighbors. For an example of what I mean by nearest: in the graph below, the blue data points are the 3 nearest neighbors of the red data point.
For our computer to find data points nearby $d$, we need a way to measure distance between data points. We can use the Pythagorean formula to do that [2]:
The distance between two data points $x$ and $y$ with features $x_1, ..., x_n$ and $y_1, ..., y_n$ is $\sqrt{(x_1 - y_1)^2 + ... + (x_n - y_n)^2}$.
And to predict the label for $d$, we pick the most common label from its $k$ nearest labeled data points.
Finally, we have to handle the possible scenario of a tie for the most common label. There are a few ways to handle this. Our approach will be to decrement $k$ until there’s no longer a tie. That is, we remove the furthest label from our nearest labels and recount the most common labels.
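To make the tie-breaking rule concrete, here is a minimal standalone sketch. The helper name `break_ties` and the example labels are mine, not part of the final implementation; the labels are assumed to be ordered nearest-first, so popping the last element drops the furthest neighbor.

```rust
use std::collections::HashMap;

// Drop the furthest neighbor's label until one label is strictly most common.
fn break_ties(labels: &[&str]) -> Option<String> {
    let mut labels = labels.to_vec();
    while !labels.is_empty() {
        // Count occurrences of each label
        let mut counts: HashMap<&str, u32> = HashMap::new();
        for &l in &labels {
            *counts.entry(l).or_insert(0) += 1;
        }
        let max = *counts.values().max()?;
        let winners: Vec<&str> = counts
            .iter()
            .filter(|(_, c)| **c == max)
            .map(|(&l, _)| l)
            .collect();
        if winners.len() == 1 {
            return Some(winners[0].to_string());
        }
        labels.pop(); // tie: drop the furthest neighbor and recount
    }
    None
}

fn main() {
    // "a" and "b" tie at 2 each until the furthest label ("b") is dropped
    assert_eq!(break_ties(&["a", "b", "a", "b"]), Some("a".to_string()));
}
```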
We, the implementors of KNN, specify the value for $k$. Choosing a good value for $k$ is usually a process of trying and testing several values [3].
If you feel like watching a quick video on KNN, here’s a great one from Stat Quest with Josh Starmer (and I can’t say enough good things about that youtube channel. Thank you, Josh):
### Getting Started
To get started, we’ll create a new library called k-nearust-neighbors.
cargo new k-nearust-neighbors --lib
cd k-nearust-neighbors
We’ll also bring in the following crates as dev-dependencies.
// Cargo.toml
/*...*/
[dev-dependencies]
rand = "0.8.4"
reqwest = "0.11.10"
tokio-test = "*"
rand is a crate for random number generation. You can read more about it here: https://crates.io/crates/rand. reqwest and tokio_test will be used to get some data to test our KNN classifier on. You can read about the reqwest crate here and the tokio_test crate here.
And we’ll toss this use declaration into our lib.rs file.
// lib.rs
use std::{
collections::HashMap,
ops::{Add, Sub},
};
- Add and Sub are traits that specify how the addition and subtraction operators work. If some type T implements Add, then for two values a and b of type T, we can write a + b. Same idea for Sub [4].
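As a quick illustration of the Add trait (the `Meters` type below is a made-up example, not part of our classifier), implementing Add for a type lets us use the `+` operator on its values:

```rust
use std::ops::Add;

// A toy newtype wrapping a length in meters
#[derive(Debug, PartialEq, Clone, Copy)]
struct Meters(f64);

// Implementing Add tells Rust what `a + b` means for Meters
impl Add for Meters {
    type Output = Meters;

    fn add(self, other: Meters) -> Meters {
        Meters(self.0 + other.0)
    }
}

fn main() {
    let total = Meters(1.5) + Meters(2.0);
    assert_eq!(total, Meters(3.5));
}
```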
### Representing a Data Point in our Code
We’ll use a struct to represent a data point. We’ll name it LabeledPoint. Each LabeledPoint will have fields:
• label: a string slice that categorizes the data,
• point: a vector containing the data point’s features (which will be represented as f64s).
Here is LabeledPoint in Rust:
// lib.rs
#[derive(Clone)] ➀
struct LabeledPoint<'a> { ➁
label: &'a str, ➂
point: Vec<f64>,
}
➀ - We’re using the derive attribute to implement the Clone trait. This means we’re asking Rust to generate code for the Clone trait’s default implementation and apply it to LabeledPoint. The Clone trait will let us explicitly create deep copies of LabeledPoint instances [5]. This will allow us to call the to_vec method on a slice of LabelPoints (which we’ll do in our KNN implementation).
➁ - We declare that our struct is generic over the lifetime parameter 'a. In Rust, a lifetime is the scope for which a reference is valid [5]. Rust’s lifetimes are part of what makes the language special. If you’d like to learn about them, I recommend reading section 10.3 of The Rust Programming Language book or watching this stream from Ryan Levick.
• Sidenote: I highly recommend all of Ryan Levick’s content. Thanks for doing what you do, Ryan.
➂ - For a struct to hold a reference, it must have a lifetime annotation for that reference [5]. label is a string slice, and string slices are references. Thus, we must add a lifetime annotation to label, which we do by putting 'a after & in label: &'a str. Awesome.
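To see why the annotation matters, here is a tiny standalone sketch (the `Excerpt` struct and the quote are just illustrations, not part of our classifier). The lifetime 'a ties the struct to the string it borrows from, so the compiler can reject any use of the struct after the borrowed data is gone:

```rust
// A struct holding a reference needs a lifetime annotation
// tying the reference to the data it borrows from.
struct Excerpt<'a> {
    text: &'a str,
}

fn main() {
    let novel = String::from("Call me Ishmael. Some years ago...");
    let first_sentence = novel.split('.').next().unwrap();
    // `excerpt` borrows from `novel` and cannot outlive it
    let excerpt = Excerpt { text: first_sentence };
    assert_eq!(excerpt.text, "Call me Ishmael");
}
```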
### Smells like Linear Algebra
To implement KNN, we need to compare distances between data points. This calls for some linear algebra. Fun.
(But if linear algebra does not sound like fun to you, feel free to copy and paste this code into your lib.rs and skip to the next section - I’ll forgive you).
A quick review of vectors may be helpful (here I’m talking about vectors from linear algebra, not vectors from Rust).
A vector can be thought of as a list of numbers.
There are a few ways that a vector can be interpreted. One common interpretation is a point in space [6]. Take the vector $[2.5, 3, 4]$, for example. This is what it looks like as a point in space:
This is how we will interpret vectors: as points in space. Cool.
Sidenote: the above plot was created with Rust! It was made with the plotters crate. Feel free to take a look at the code I slapped together to generate this plot: https://github.com/josht-jpg/vector_plot
We’re going to define a trait called LinearAlg:
trait LinearAlg<T>
where
T: Add<Output = T> + Sub<Output = T>,
{
fn dot(&self, w: &[T]) -> T;
fn subtract(&self, w: &[T]) -> Vec<T>;
fn sum_of_squares(&self) -> T;
fn distance(&self, w: &[T]) -> f64;
}
- Rust’s where clause lets us specify that the generic type T must implement the Add and Sub traits [7].
And we’ll make LinearAlg an extension trait, implementing it for the standard library’s Vec<f64> type.
impl LinearAlg<f64> for Vec<f64> { /*...*/ }
We’re going to take a test-first approach here. For each method in this implementation of LinearAlg, I’ll go over the math behind the operation, then provide a test for the method, and then some code that implements the method.
It’s a good exercise to try writing each method yourself and running the test before looking at my implementation. There’s a good chance you’ll like your implementation more (and if you do, please share it with me at joshtaylor361@gmail.com).
• Dot Product - For two vectors $\bold{v}$ and $\bold{w}$ of the same length, the dot product of $\bold{v}$ and $\bold{w}$ is the result of coupling up each corresponding element in $\bold{v}$ and $\bold{w}$, multiplying those two elements together, and adding each result.
$\bold{v} \cdot \bold{w}= \begin{bmatrix} \bold{v}_{1} \\ \bold{v}_{2} \\ \vdots \\ \bold{v}_{n}\end{bmatrix} \cdot \begin{bmatrix} \bold{w}_{1} \\ \bold{w}_{2} \\ \vdots \\ \bold{w}_{n}\end{bmatrix} = \bold{v}_1 \cdot \bold{w}_1 + \bold{v}_2 \cdot \bold{w}_2 + ... + \bold{v}_n \cdot \bold{w}_n$ [8].
Here’s our test for dot:
// lib.rs
/*...*/
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn linear_alg() {
let v = vec![1., 5., -3.];
let w = vec![0.5, 2., 3.];
assert_eq!(v.dot(&w), 1.5)
}
}
Run that test with the command cargo test linear_alg in the root of your k_nearust_neighbors folder. Congratulations if that passes for you.
Here is my implementation of dot:
fn dot(&self, w: &[f64]) -> f64 {
assert_eq!(self.len(), w.len());
self.iter().zip(w).map(|(v_i, w_i)| v_i * w_i).sum()
}
• Subtract - This one’s simpler. For two vectors of the same length $\bold{v}$ and $\bold{w}$,
$\bold{v} - \bold{w} = \begin{bmatrix} \bold{v}_{1} \\ \bold{v}_{2} \\ \vdots \\ \bold{v}_{n}\end{bmatrix} - \begin{bmatrix} \bold{w}_{1} \\ \bold{w}_{2} \\ \vdots \\ \bold{w}_{n}\end{bmatrix} = \begin{bmatrix} \bold{v}_{1} - \bold{w}_{1} \\ \bold{v}_{2} - \bold{w}_{2} \\ \vdots \\ \bold{v}_{n} - \bold{w}_{n}\end{bmatrix}$
To test subtract, add the following assertion to your linear_alg test function:
#[test]
fn linear_alg() {
/*...*/
assert_eq!(v.subtract(&w), vec![0.5, 3., -6.]);
}
Nice. I hope you were able to make that test pass (but of course no worries if you weren’t). Here’s an implementation of subtract:
fn subtract(&self, w: &[f64]) -> Vec<f64> {
assert_eq!(self.len(), w.len());
self.iter().zip(w).map(|(v_i, w_i)| v_i - w_i).collect()
}
• Sum of Squares - A vector’s sum of squares is the result of squaring each of its elements and adding everything up:
For some vector $\bold{v}$ with elements $\bold{v}_1, ..., \bold{v}_n$, $sum \hspace{1mm} of \hspace{1mm} squares(\bold{v}) = (\bold{v}_1)^2 + ... + (\bold{v}_n)^2$
Here’s a test for sum_of_squares:
#[test]
fn linear_alg() {
/*...*/
assert_eq!(v.sum_of_squares(), 35.);
}
And here’s my implementation:
fn sum_of_squares(&self) -> f64 {
self.dot(&self)
}
• Distance - The distance between two vectors $\bold{v}$ and $\bold{w}$ is defined as
$\sqrt{(\bold{v}_1 - \bold{w}_1)^2 + ... + (\bold{v}_n - \bold{w}_n)^2}$
As usual, here’s a test for distance:
assert_eq!(v.distance(&w), 45.25f64.sqrt())
Hallelujah. Here’s some Rust:
fn distance(&self, w: &[f64]) -> f64 {
assert_eq!(self.len(), w.len());
self.subtract(w).sum_of_squares().sqrt()
}
Implementing LinearAlg for the Vec<f64> type is all we need for our KNN implementation, so we’ll leave it there. Great.
I’d like to continue our test-first approach. So before we get to the most important function of this installment, which will be called knn_classify, we’ll write a test for it.
But we’re going to need some data to test on. A classic dataset to test KNN on is the iris flower dataset. This data set contains 150 rows, where each row contains a flower’s petal length, petal width, sepal length, sepal width, and type. The dataset has three types of iris: Setosa, Versicolor, and Virginica.
Here’s some code for you to put in lib.rs. It gets the iris dataset and converts it to a format we can work with. Look through the code if you’re curious, but I won’t be explaining any of it. I’d like to save time for more interesting stuff.
#[cfg(test)]
mod tests {
use super::*;
/*...*/
macro_rules! await_fn {
($arg:expr) => {{
tokio_test::block_on($arg)
}};
}
async fn get_iris_data() -> Result<String, reqwest::Error> {
let body = reqwest::get(
"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
)
.await?
.text()
.await?;
Ok(body)
}
type GenericResult<T> = Result<T, Box<dyn std::error::Error>>;
fn process_iris_data(body: &str) -> GenericResult<Vec<LabeledPoint>> {
body.split("\n")
.filter(|data_point| data_point.len() > 0)
.map(|data_point| -> GenericResult<LabeledPoint> {
let columns = data_point.split(",").collect::<Vec<&str>>();
let (label, point) = columns.split_last().ok_or("Cannot split last")?;
let point = point
.iter()
.map(|feature| feature.parse::<f64>())
.collect::<Result<Vec<f64>, std::num::ParseFloatError>>()?;
Ok(LabeledPoint { label, point })
})
.collect::<GenericResult<Vec<LabeledPoint>>>()
}
}
Sweet.
Next, we need to split that data into a training set and a testing set. Here’s a function to do just that:
mod tests {
/*...*/
use rand::{seq::SliceRandom, thread_rng};
fn split_data<T>(data: &[T], prob: f64) -> (Vec<T>, Vec<T>)
where
T: Clone, ➀
{
let mut data_copy = data.to_vec(); ➁
data_copy.shuffle(&mut thread_rng());
let split_index = ((data.len() as f64) * prob).round() as usize;
(
data_copy[..split_index].to_vec(),
data_copy[split_index..].to_vec(),
)
}
➀ - Using the where clause to specify that T must implement the Clone trait.
➁ - data.to_vec() copies the data slice into a new Vec [9]. This allows us to shuffle our data without taking a mutable reference to the data.
- The shuffle method will shuffle up a mutable slice in place [10]. shuffle is from the rand crate’s SliceRandom trait, which is an extension trait on slices. So we get to use shuffle on mutable slices after importing the trait.
The thread_rng function, which also comes from rand, is a random number generator.
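If you'd like to see the shuffle-then-split idea end to end without pulling in a crate, here is a deterministic sketch using a toy Fisher–Yates shuffle driven by a tiny linear congruential generator. This is illustration only; the real code above uses rand's shuffle and thread_rng, and the LCG constants are a common textbook choice, not anything from this project.

```rust
// Toy linear congruential generator (constants from Knuth's MMIX),
// used here only so the example needs no external crate.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

// Fisher–Yates shuffle in place, seeded deterministically.
fn shuffle<T>(data: &mut [T], seed: u64) {
    let mut state = seed;
    for i in (1..data.len()).rev() {
        let j = (lcg(&mut state) % (i as u64 + 1)) as usize;
        data.swap(i, j);
    }
}

fn split_data<T: Clone>(data: &[T], prob: f64) -> (Vec<T>, Vec<T>) {
    let mut data_copy = data.to_vec();
    shuffle(&mut data_copy, 42);
    let split_index = ((data.len() as f64) * prob).round() as usize;
    (
        data_copy[..split_index].to_vec(),
        data_copy[split_index..].to_vec(),
    )
}

fn main() {
    let data: Vec<u32> = (0..10).collect();
    let (train, test) = split_data(&data, 0.7);
    assert_eq!(train.len(), 7);
    assert_eq!(test.len(), 3);
    // Every element still appears exactly once across the two splits
    let mut all: Vec<u32> = train.iter().chain(test.iter()).cloned().collect();
    all.sort();
    assert_eq!(all, data);
}
```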
Great. We’ve got everything we need to test our (currently unimplemented) knn_classify function.
We’re going to set $k$ to 5; we’ll classify new data points based on their 5 nearest neighbors. In a real application of KNN, it would be a good idea to test out a few more values of $k$.
Here’s the test:
fn knn_classify(k: u8, data_points: &[LabeledPoint], new_point: &[f64]) -> Option<String> {
todo!()
}
#[cfg(test)]
mod tests {
/*...*/
fn count_correct_classifications(
train_set: &[LabeledPoint],
test_set: &[LabeledPoint],
k: u8,
) -> u32 {
let mut num_correct: u32 = 0;
for iris in test_set.iter() {
let predicted = knn_classify(k, &train_set, &iris.point);
let actual = iris.label;
if let Some(predicted) = predicted { ➀
if predicted == actual {
num_correct += 1;
}
}
}
num_correct
}
#[test]
fn iris() -> GenericResult<()> {
let raw_iris_data = await_fn!(get_iris_data())?;
let iris_data = process_iris_data(&raw_iris_data)?;
let (train_set, test_set) = split_data(&iris_data, 0.70); ➁
assert_eq!(train_set.len(), 105);
assert_eq!(test_set.len(), 45);
let k = 5;
let num_correct = count_correct_classifications(&train_set, &test_set, k);
let percent_correct = num_correct as f32 / test_set.len() as f32;
assert!(percent_correct > 0.9); ➂
Ok(())
}
}
➀ - The if let syntax is a lovely way for us to match one pattern and ignore all other patterns [5]. So an alternative (but less Rustic) way to write this block is:
match predicted {
Some(predicted) => {
if predicted == actual {
num_correct += 1;
}
}
_ => (),
}
➁ - Splitting 70% of the data into a set for training the classifier, and 30% into a set for testing.
➂ - If our classifier is working, it should correctly classify at least 90% of the testing set.
### Won’t you be my Neighbor? Implementing KNN
I’ll start with pseudocode for our KNN classifier. Try to write your own Rust implementation based on this pseudocode, and run the test we wrote to see if your implementation works.
function knn_classify(k, data_points, new_point)
arguments {
k: number of neighbors we use to classify our new data point
data_points: our labeled data points
new_point: the data point we want to classify
}
returning: predicted label for new_point
{
sorted_data_points = sort_by_distance_from(data_points, new_point)
k_nearest_labels = empty list
for i from 0 to k {
k_nearest_labels.append(data_points[i].label)
}
predicted_label = find_most_common_label(k_nearest_labels)
return predicted_label
}
function find_most_common_label(labels)
arguments {
labels: a list of labels
}
returning: most common value in the passed in list of labels
{
label_counts = new Hash Map
for label in labels {
if label is a key in label_counts {
label_counts[label] += 1
} else {
label_counts[label] = 1
}
}
if there are no ties for most common label in label_counts {
return key with highest value in label_counts
} else {
new_labels = all elements in labels but the last
return find_most_common_label(new_labels)
}
}
Great. Now here is some Rust for you.
fn knn_classify(k: u8, data_points: &[LabeledPoint], new_point: &[f64]) -> Option<String> {
let mut data_points_copy = data_points.to_vec();
data_points_copy.sort_unstable_by(|a, b| { ➀
let dist_a = a.point.distance(new_point);
let dist_b = b.point.distance(new_point);
dist_a
.partial_cmp(&dist_b)
.expect("Cannot compare floating point numbers, encountered a NAN") ➁
});
let k_nearest_labels = &data_points_copy[..(k as usize)]
.iter()
.map(|a| a.label)
.collect::<Vec<&str>>();
let predicted_label = find_most_common_label(&k_nearest_labels);
predicted_label
}
fn find_most_common_label(labels: &[&str]) -> Option<String> {
let mut label_counts: HashMap<&str, u32> = HashMap::new(); ➂
for label in labels.iter() {
let current_label_count = if let Some(current_label_count) = label_counts.get(label) { ➃
*current_label_count
} else {
0
};
label_counts.insert(label, current_label_count + 1);
}
let most_common = label_counts
.iter()
.max_by(|(_label_a, count_a), (_label_b, count_b)| count_a.cmp(&count_b)); ➄
if let Some((most_common_label, most_common_label_count)) = most_common {
let is_tie_for_most_common = label_counts
.iter()
.any(|(label, count)| count == most_common_label_count && label != most_common_label); ➅
if !is_tie_for_most_common {
return Some((*most_common_label).to_string());
} else {
let (_last, labels) = labels.split_last()?; ➆
return find_most_common_label(&labels);
}
}
None
}
➀ - sort_unstable_by allows us to specify the way we want our vector sorted. We’re using this to sort the data points in data_points_copy by their distance from the data point we’re classifying.
Digression on stable vs. unstable sorting:
As the name suggests, sort_unstable_by is unstable [11]. This means that when the sorting algorithm comes across two equal elements, it is allowed to swap them - whereas a stable sort will not swap equal elements [12].
Unstable sorting is generally faster and requires less memory than stable sorting.
Rust’s sort_by method is stable. So in cases like ours where you don’t care if equal elements are swapped, it’s preferable to use sort_unstable_by over sort_by.
➁ - partial_cmp returns a value specifying whether dist_a is greater than, less than, or equal to dist_b, if such a comparison can be made.
In gorier detail, partial_cmp returns an Option containing a variant of the Ordering enum, if an ordering exists (if not, it will return None).
The creators of Rust have intentionally not implemented Ord for f64. This is because an f64 can be a NAN (not a number), and in Rust NAN != NAN. So f64 does not form a total order.
If some or all of that didn’t make sense, do not worry. Just note that we have to use partial_cmp for f64 types rather than cmp.
➂ - We create a HashMap with &str keys and u32 values. The keys are the distinct elements of the labels slice. The values are the number of times each label shows up in labels.
➃ - Rust is an expression language [13]. This means that things like if and match produce values. So we can assign a variable to the result of an if-else block. Cool.
➄ - Here we want to find the most common label in label_counts. We iterate through each key-value pair of label_counts and use max_by to find the key-value pair with the highest value. max_by returns the element with the maximum value with respect to a custom comparison function [14].
➅ - The any method checks if the provided predicate is true for any element in the iterator [15]. We use any to see if there are any ties for the most common label in label_counts.
➆ - split_last returns an Option containing a tuple of the last element of a slice and a slice containing the rest of the elements.
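To make the float-ordering caveats concrete, here is a small standalone demonstration of NAN behavior, partial_cmp, and sorting distances the way knn_classify does:

```rust
use std::cmp::Ordering;

fn main() {
    // f64 only forms a partial order because of NAN:
    assert!(f64::NAN != f64::NAN);
    // comparing against NAN yields no ordering at all
    assert_eq!(f64::NAN.partial_cmp(&1.0), None);
    // ordinary floats compare fine
    assert_eq!(1.0f64.partial_cmp(&2.0), Some(Ordering::Less));

    // Sorting floats therefore goes through partial_cmp, as in knn_classify
    let mut distances = vec![3.2, 0.5, 2.7];
    distances.sort_unstable_by(|a, b| {
        a.partial_cmp(b).expect("encountered a NAN")
    });
    assert_eq!(distances, vec![0.5, 2.7, 3.2]);
}
```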
And that’s our K Nearust Neighbors classifier. Amazing.
As always, questions and feedback are welcome and appreciated: joshtaylor361@gmail.com.
Have a great rest of your day.
## Support Me
Creating and running Shaking off the Rust is one of the most fulfilling things I do. But it's exhausting. By supporting me, even if it's just a dollar, you'll allow me to put more time into building this series. I really appreciate any support.
The only way to support me right now is by sponsoring me on Github. I'll probably also set up Patreon and Donorbox pages soon.
Thank you so much!
https://proofwiki.org/wiki/Book:Murray_R._Spiegel/Mathematical_Handbook_of_Formulas_and_Tables/Chapter_24
# Book:Murray R. Spiegel/Mathematical Handbook of Formulas and Tables/Chapter 24
## Murray R. Spiegel: Mathematical Handbook of Formulas and Tables: Chapter 24
Published $\text {1968}$.
## $24 \quad$ Bessel Functions
### Bessel's Differential Equation
$24.1$: Bessel's Differential Equation
$x^2 y'' + x y' + \paren {x^2 - n^2} y = 0$
### Bessel Functions of the First Kind of Order $n$
$\map {J_n} x = \dfrac {x^n} {2^n \, \map \Gamma {n + 1} } \paren {1 - \dfrac {x^2} {2 \paren {2 n + 2} } + \dfrac {x^4} {2 \times 4 \paren {2 n + 2} \paren {2 n + 4} } - \cdots} = \sum_{k \mathop = 0}^\infty \dfrac {\paren {-1}^k} {k! \, \map \Gamma {n + k + 1} } \paren {\dfrac x 2}^{n + 2 k}$
$\map {J_{-n} } x = \dfrac {x^{-n} } {2^{-n} \, \map \Gamma {1 - n} } \paren {1 - \dfrac {x^2} {2 \paren {2 - 2 n} } + \dfrac {x^4} {2 \times 4 \paren {2 - 2 n} \paren {4 - 2 n} } - \cdots} = \sum_{k \mathop = 0}^\infty \dfrac {\paren {-1}^k} {k! \, \map \Gamma {k + 1 - n} } \paren {\dfrac x 2}^{2 k - n}$
$\map {J_{-n} } x = \paren {-1}^n \map {J_n} x$ for integer $n$
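As a numerical sketch (not part of Spiegel's handbook), truncating the series above gives a quick way to evaluate $\map {J_n} x$; for example, $\map {J_0} 1 \approx 0.7651976866$:

```python
from math import gamma

def bessel_j(n, x, terms=30):
    """Truncated series for J_n(x): sum over k of
    (-1)^k / (k! Gamma(n+k+1)) * (x/2)^(n+2k)."""
    return sum(
        (-1) ** k / (gamma(k + 1) * gamma(n + k + 1)) * (x / 2) ** (n + 2 * k)
        for k in range(terms)
    )

# J_0(1) = 0.7651976865...
assert abs(bessel_j(0, 1.0) - 0.7651976866) < 1e-8
```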
### Miscellaneous Results
https://www.joshuataillon.com/post/detach-modal-dialogs-in-gnome-shell-3-8/
# Detach Modal Dialogs in Gnome Shell 3.8
More fixes for GNOME…
For some reason, the Gnome designers think that everyone wants popup windows attached to the window they came from (e.g. the save dialog when you save a PDF in Chrome). This becomes pretty frustrating when you want to see things in the window underneath the dialog. I always forget how to fix this, so I’m posting the fix here.
This is how I “fixed” the modal dialog behavior in GNOME Shell 3.8.2 on Ubuntu 13.04 x64. Open the dconf editor and browse to
org.gnome.shell.overrides.attach-modal-dialogs
Uncheck that box, and you should be good to go! Note: this can also be done on the command line with the following command
gsettings set org.gnome.shell.overrides attach-modal-dialogs false
https://www.physicsvidyapith.com/2022/11/refraction-of-light-and-its-properties.html
Refraction of light and its Properties
Definition of Refraction of light→ When light passes from one medium to another, it bends away from its original path. This bending of light is known as refraction.
When light goes from a rarer medium to a denser medium, light bends toward the normal as shown in the figure below:
Propagation of light from rarer medium to denser medium
When light goes from a denser medium to a rarer medium, the light goes away from normal as shown in the figure below.
Propagation of light from denser medium to rarer medium
Properties of refraction of light→
1. The incident ray, the refracted ray, and the normal at the point of incidence all lie in the same plane.
2. According to Snell's law, the ratio of the sine of the angle of incidence to the sine of the angle of refraction is constant. This constant is known as the refractive index of the second medium with respect to the first.
$\frac{sin \: i}{sin \: r}=constant(_{1}n_{2})$
$\frac{sin \: i}{sin \: r}= (_{1}n_{2})$
$\frac{sin \: i}{sin \:r}=\frac{n_{2}}{n_{1}}$
Where $_{1}n_{2}$ is the refractive index of medium $(2)$ with respect to medium $(1)$ .
Absolute refractive index→
The ratio of the speed of light in a vacuum to the speed of light in a medium is called the absolute refractive index of the medium.
$refractive\: index(n)=\frac{speed\: of\:light\: in\: vacuum(c)}{speed \:of \:light\: in\: medium(v)}$
$n=\frac{c}{v}$
Note
According to Snell's law -
$\frac{sin \: i}{sin \: r}=\frac{n_{2}}{n_{1}} \quad\quad (1)$
$\frac{sin \: i}{sin \: r}=\frac{\frac{c}{v_{1}}}{\frac{c}{v_{2}}} \quad\quad \left \{ \because n=\frac{c}{v} \right \}$
$\frac{sin i}{sin r}=\frac{v_{1}}{v_{2}} \qquad (2)$
When the light goes from one medium to another medium frequency of the light does not change but the wavelength of the light changes. So above given equation can be written as
$\frac{sin \: i}{sin \: r}=\frac{\nu \lambda_{1}}{\nu \lambda_{2}} \quad\quad \left \{ \because v=\nu \lambda \right \}$
$\displaystyle \frac{sin \: i}{sin \: r}=\frac{\lambda_{1}}{ \lambda_{2}} \qquad (3)$
From equation $(1)$ ,equation $(2)$, and equation $(3)$ we can write the equation
$\frac{sin \: i}{sin \: r}=\frac{n_{2}}{n_{1}} =\frac{v_{1}}{v_{2}}= \frac{\lambda_{1}}{ \lambda_{2}}$
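As a numerical sketch of these relations (values assumed for illustration: light passing from air, $n_{1} = 1.0$, into glass, $n_{2} = 1.5$, at $30^{\circ}$ incidence):

```python
import math

n1, n2 = 1.0, 1.5
i = math.radians(30.0)  # angle of incidence
# Snell's law: sin(i) / sin(r) = n2 / n1  =>  r = asin(n1 * sin(i) / n2)
r = math.asin(n1 * math.sin(i) / n2)
# The ray bends toward the normal in the denser medium:
assert r < i
assert abs(math.degrees(r) - 19.47) < 0.01
```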
Conditions for a light ray to pass undeviated on refraction of light→
A ray of light passes undeviated from medium $1$ to medium $2$ in either of the following two conditions:
1. When the angle of incidence at the boundary of the two media is zero, i.e. $\angle i = 0^{\circ}$, so $\angle r = 0^{\circ}$.
2. When the refractive indices of medium (2) and medium (1) are equal, i.e. $n_{1} = n_{2}$, so that $\angle i = \angle r$.
Factors affecting the refractive index of the medium→
The refractive index of a medium depends on basically following three factors:
1. Nature of the medium (on the basis of speed of light)→ The lower the speed of light in the medium compared to that in air, the higher the refractive index of the medium. $\left ( n=\frac{c}{v}\right)$
Speed of light in glass: $v_{glass} = 2\times 10^{8} \: ms^{-1}$, so $n_{glass} = 1\cdot 5$
Speed of light in water: $v_{water} = 2\cdot 25\times10^{8} \: ms^{-1}$, so $n_{water} = 1\cdot33$
2. Physical conditions such as temperature→ With an increase in temperature, the speed of light in the medium increases, so the refractive index of the medium decreases.
3. The colour or wavelength of light→ The speed of light of all colours is the same in air (or vacuum), but in any other transparent medium the speed of light is different for different colours. In a given medium the speed of red light is maximum and that of violet light is least, therefore the refractive index of that medium is maximum for violet and least for red light, i.e. $\left ( n_{violet}> n_{red}\right )$. The wavelength of visible light increases from the violet end to the red end, so the refractive index of a medium decreases with increasing wavelength.
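As a quick numerical check of the $n = c/v$ values quoted above (a sketch; $c$ is taken as $3 \times 10^{8} \: ms^{-1}$):

```python
c = 3.0e8          # speed of light in vacuum, m/s
v_glass = 2.0e8    # speed of light in glass, m/s
v_water = 2.25e8   # speed of light in water, m/s
n_glass = c / v_glass
n_water = c / v_water
assert abs(n_glass - 1.5) < 1e-9
assert abs(n_water - 4.0 / 3.0) < 1e-9  # 1.33 to two decimal places
```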
https://math.stackexchange.com/questions/3084104/consistency-of-pa-from-a-formalist-perspective
# Consistency of PA from a Formalist Perspective
In this lengthy thread there's people bickering back and forth about the consistency of PA (Peano Arithmetic) and misunderstandings abound. In reading it I came to an understanding I found useful, though I am not certain of the correctness of. If we take a formalist perspective on the situation and understand mathematical propositions as statements about what happens when we manipulate strings given certain rules, there are two ways of interpreting "the consistency of PA". That is
(a) we can say "PA is consistent" means that any finite series of the applications of logical rules to the axioms of PA will not result in a contradiction, and
(b) the statement "PA is consistent" is rendered as "Con(PA)", appropriately formalized in some formal system (e.g. the arithmetized claim that PA does not prove $0=1$, formalized in ZFC), and the question is about the provability of "Con(PA)" in the given formal system.
It is my understanding that (a) has not been (and perhaps cannot be) demonstrated, since any proof of the consistency of PA in a given formal system depends on the consistency of that formal system, which cannot be proved without resorting to a stronger system, and so on; but (b) has been demonstrated in various formal systems. It seems all disagreement as to whether "is PA consistent?" is an open question results from one party referring to sense (a) and another referring to sense (b): in sense (a) it is an open (and perhaps unanswerable) question, while in sense (b), which is the usual meaning of an open question, it is definitively closed.
Does this analysis make sense? Am I missing anything important?
• Note for (b): if $\neg$Con(A) then A ⊢ Con(PA); also, if Con(A), then for A ⊢ Con(PA) to hold, A needs to be the stronger system. – Holo Jan 23 at 10:01
• I have a related question at math.stackexchange.com/questions/3084683/… – Keefer Rowan Jan 23 at 16:45
• Your analysis is OK, but I don't think it's a very fruitful topic for MSE for the same reasons as stated in David Roberts's comments on the MO question you link to. – Rob Arthan Jan 23 at 21:38
Because we know the exact sentence $\text{Con}(\text{PA})$, we know that if (a) were false then (b) would also be false. Any counterexample to (a), if written down, would immediately lead to a counterexample to (b); that is, it would show $\lnot \text{Con}(\text{PA})$.
• While I agree that if (a) is false then (b) is false, strangely enough even if (b) is true (by normal standards of truth, i.e. Con(PA) is provable in some system X), (a) could still be false. That is because both systems could be inconsistent: (a) could be false, i.e. there could be some derivation of an inconsistency in PA, while Con(PA) is still provable in system X, because if system X is inconsistent then $\neg$Con(PA) and Con(PA) are both provable. That is why I'm not sure if the question "is there a finite set of applications of logical rules to the axioms of PA that lead to a..." cont. – Keefer Rowan Jan 24 at 19:36
https://deepai.org/publication/a-hierarchical-expected-improvement-method-for-bayesian-optimization
# A hierarchical expected improvement method for Bayesian optimization
Expected improvement (EI) is one of the most popular Bayesian optimization (BO) methods, due to its closed-form acquisition function which allows for efficient optimization. However, one key drawback of EI is that it is overly greedy; this results in suboptimal solutions even for large sample sizes. To address this, we propose a new hierarchical EI (HEI) framework, which makes use of a hierarchical Gaussian process model. HEI preserves a closed-form acquisition function, and corrects the over-greediness of EI by encouraging exploration of the optimization space. Under certain prior specifications, we prove the global convergence of HEI over a broad objective function space, and derive global convergence rates under smoothness assumptions on the objective function. We then introduce several hyperparameter estimation methods, which allow HEI to mimic a fully Bayesian optimization procedure while avoiding expensive Markov-chain Monte Carlo sampling. Numerical experiments show the improvement of HEI over existing BO methods, for synthetic functions as well as a semiconductor manufacturing optimization problem.
## 1 Introduction
Bayesian optimization (BO) provides a principled way for solving the black-box optimization problem:
$$x^* = \operatorname*{arg\,min}_{x \in \Omega} f(x). \tag{1}$$

Here, $x$ denotes the input variables, $\Omega$ is the feasible domain, and the objective function $f$ is assumed to be black-box and expensive to evaluate. The key idea in BO is to view $f$ as a random realization from a stochastic process, which captures prior beliefs on the objective function. Using this model, BO sequentially queries $f$ at points which maximize the acquisition function – the expected utility of a new point given observed data. BO has wide applicability in real-world problems, ranging from rocket engine design (mak2018efficient) and nanowire yield optimization (dasgupta2008statistical) to neural network training (bergstra2012random).
Many existing BO methods vary in their choice of (i) the stochastic model on $f$, and (ii) the utility function for sequential sampling. For (i), the most popular stochastic model by far is the Gaussian process (GP) model (santner2003design). Under a GP model, several well-known BO methods have been derived using different utility functions for (ii). These include the expected improvement (EI) method (mockus1978application; jones1998efficient), the upper confidence bound (UCB) method (Srinivas:2010:GPO:3104322.3104451), and the Knowledge Gradient method (Frazier:2008:KPS:1461633.1461641; scott2011correlated). Of these, EI is arguably the most popular, since it admits a simple closed-form acquisition function which can be efficiently optimized to yield subsequent query points. EI has been subsequently extended to a variety of black-box optimization problems, including multi-fidelity optimization (zhang2018variable), constrained optimization (feliot2017bayesian), and parallel/batch-sequential optimization (marmin2015differentiating).
Despite the popularity of EI, it does have notable limitations. One such limitation is that it is too greedy (qin2017improving): EI focuses nearly all sampling efforts near the optima of the fitted GP model, and does not sufficiently explore other regions. In terms of the exploration-exploitation trade-off (kearns2002near), EI over-exploits the fitted model on $f$, and under-explores the optimization space $\Omega$. Because of this, EI often gets stuck in local optima and fails to converge to a global optimum (bull2011convergence). There have been some recent efforts to remedy this greediness of EI. snoek2012practical proposed a fully Bayesian EI, where all GP parameters are sampled using Markov chain Monte Carlo (MCMC); this incorporates parameter uncertainty within EI and encourages exploration. chen2017sequential proposed a variation of EI under an additive Bayesian model, which encourages exploration by increasing model uncertainty. Both methods, however, require expensive MCMC sampling; this sampling can take hours of computation to optimize the next query point, which may exceed the evaluation cost of $f$! Such methods diminish a key advantage of EI: efficient queries via a closed-form criterion.
To address this, we propose a hierarchical EI (HEI) framework which corrects the greediness of EI while preserving a closed-form criterion. The key idea is a hierarchical GP model for $f$ (handcock1993bayesian), with hierarchical priors on process parameters. Under this model, we show that HEI has a closed-form acquisition function which encourages further exploration. We then prove that, under certain prior specifications, HEI converges to a global optimum over a broad function space for $f$. This addresses the over-greediness of EI, which can fail to find any global optimum even for smooth $f$. We further derive global convergence rates for HEI under smoothness assumptions on $f$.
We note that a simpler version of HEI, called the Student EI (SEI), was proposed earlier in benassi2011robust. Our HEI has important novelties over SEI: the HEI incorporates uncertainty on process nonstationarity, has provable global convergence and convergence rates for optimization, and can mimic a fully Bayesian optimization procedure. Numerical experiments show that HEI considerably outperforms existing BO methods, whereas the SEI yields only comparable (or worse) performance to existing methods.
The paper is organized as follows: Section 2 reviews the GP model and the EI method. Section 3 presents the HEI method. Section 4 proves the global convergence for HEI and its associated convergence rates. Section 5 provides methodological developments on hyperparameter specification and basis selection. Sections 6 and 7 compare HEI with existing methods for synthetic functions and in a semiconductor manufacturing problem, respectively. Concluding remarks are given in Section 8.
## 2 Background and Motivation
We first introduce the GP model, then review the EI method and its deficiencies, which motivates the proposed HEI method.
Gaussian Process. Assume the function $f$ follows the Gaussian process model:

$$f(x) = \mu(x) + Z(x), \quad \mu(x) = p^\top(x) \beta, \quad Z(x) \sim \mathrm{GP}(0, \sigma^2 K). \tag{2}$$

Here, $p(x)$ consists of $q$ basis functions for the mean function $\mu(\cdot)$, $\beta \in \mathbb{R}^q$ denotes its corresponding coefficients, and $Z(x)$ denotes a stationary GP with mean zero, process variance $\sigma^2$, and correlation function $K$. The model (2) is known as the universal kriging model in geostatistics (wackernagel1995multivariate). When there is no trend, i.e., $\mu(x) \equiv \mu$, this model reduces to the so-called ordinary kriging model.
Suppose function values have been observed at inputs $x_1, \ldots, x_n$, yielding data $\mathcal{D}_n = \{(x_i, y_i)\}_{i=1}^n$. Let $y_n = (y_1, \ldots, y_n)^\top$ be the vector of observed function values, $k_n(x) = (K(x, x_1), \ldots, K(x, x_n))^\top$ be the correlation vector between the unobserved response $f(x)$ and the observed responses $y_n$, $K_n = [K(x_i, x_j)]_{i,j=1}^n$ be the correlation matrix for observed points, and $P_n = [p(x_1), \ldots, p(x_n)]^\top$ be the model matrix for observed points. Then, the posterior distribution of $f$ at an unobserved input $x$ has the closed form (santner2003design):

$$[f(x) \mid \mathcal{D}_n] \sim N\!\left(\hat{f}_n(x), \sigma^2 s_n^2(x)\right). \tag{3}$$

Here, $\hat{f}_n(x) = p^\top(x) \hat{\beta}_n + k_n^\top(x) K_n^{-1} (y_n - P_n \hat{\beta}_n)$ is the posterior mean and $\sigma^2 s_n^2(x)$ is the posterior variance, where $\hat{\beta}_n = (P_n^\top K_n^{-1} P_n)^{-1} P_n^\top K_n^{-1} y_n$ and $s_n^2(x) = 1 - k_n^\top(x) K_n^{-1} k_n(x) + u^\top(x) (P_n^\top K_n^{-1} P_n)^{-1} u(x)$ with $u(x) = p(x) - P_n^\top K_n^{-1} k_n(x)$. These expressions can be equivalently viewed as the best linear unbiased predictor of $f(x)$ and its variance (jones1998efficient).
Of course, the process variance $\sigma^2$ is also unknown in practice and needs to be estimated from data. A common approach is to estimate $\sigma^2$ using its maximum likelihood estimator (MLE):

$$\hat{\sigma}_n^2 = \frac{1}{n} (y_n - P_n \hat{\beta}_n)^\top K_n^{-1} (y_n - P_n \hat{\beta}_n).$$

One can then plug $\hat{\sigma}_n^2$ into (3) to estimate the posterior distribution $[f(x) \mid \mathcal{D}_n]$.
Expected Improvement. The idea behind EI (jones1998efficient) is as follows. Let $y_n^* = \min_{i=1,\ldots,n} y_i$ be the current best objective value, and let $(y_n^* - f(x))_+ := \max\{y_n^* - f(x), 0\}$ be the improvement utility function. Given data $\mathcal{D}_n$, the expected improvement acquisition function becomes:

$$\mathrm{EI}_n(x) = \mathbb{E}_{f \mid \mathcal{D}_n}\left[(y_n^* - f(x))_+\right]. \tag{4}$$

For an unobserved point $x$, the criterion $\mathrm{EI}_n(x)$ can be interpreted as the expected improvement to the current best objective value if the next query is at point $x$. Under the posterior distribution (3) with plug-in estimate $\hat{\sigma}_n^2$, $\mathrm{EI}_n(x)$ has the closed-form expression:

$$\mathrm{EI}_n(x) = \underbrace{I_n(x)\, \Phi\!\left(\frac{I_n(x)}{\hat{\sigma}_n s_n(x)}\right)}_{\text{Exploitation}} + \underbrace{\hat{\sigma}_n s_n(x)\, \phi\!\left(\frac{I_n(x)}{\hat{\sigma}_n s_n(x)}\right)}_{\text{Exploration}}. \tag{5}$$

Here, $\phi$ and $\Phi$ denote the probability density function (p.d.f.) and cumulative distribution function (c.d.f.) of the standard normal distribution, respectively, and $I_n(x) = y_n^* - \hat{f}_n(x)$.
After we obtain the acquisition function (5), the next query point is obtained by maximizing $\mathrm{EI}_n(x)$. The acquisition function (5) implicitly encodes a trade-off between exploration of the feasible region and exploitation near the current best solution. The first term in (5) encourages exploitation, by assigning larger values to points with smaller predicted values $\hat{f}_n(x)$; the second term in (5) encourages exploration, by assigning larger values to points with larger estimated posterior variance $s_n^2(x)$.
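The closed-form EI criterion is cheap to evaluate. Here is a minimal numerical sketch (not the paper's code; the function name and inputs are illustrative), taking the posterior mean, the predictive standard deviation, and the current best value as plain numbers:

```python
import math

def expected_improvement(y_star, f_hat, sd):
    """EI at a point with posterior mean f_hat and predictive sd."""
    if sd <= 0:
        return max(y_star - f_hat, 0.0)
    z = (y_star - f_hat) / sd
    # standard-normal p.d.f. and c.d.f. via math.erf
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    # exploitation term + exploration term
    return (y_star - f_hat) * cdf + sd * pdf

# At the current best prediction, EI is driven purely by exploration:
assert expected_improvement(1.0, 1.0, 0.5) > 0.0
# With vanishing uncertainty, EI reduces to the plain improvement:
assert abs(expected_improvement(1.0, 0.0, 1e-12) - 1.0) < 1e-6
```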
However, one drawback of EI is that it fails to capture the full uncertainty of model parameters within the acquisition function $\mathrm{EI}_n(x)$. This results in an over-exploitation of the fitted GP model for optimization, which explains why EI can fail to find any global optimum. This over-greediness has been noted in several recent works (bull2011convergence; qin2017improving). In particular, Theorem 3 of bull2011convergence showed that, for a common class of correlation functions (see Assumption 1 later), there always exists some smooth function $f$ (within a function space $\mathcal{H}_\theta(\Omega)$, defined later in Section 4) such that EI fails to find any global optimum with positive probability. This is stated formally below:
###### Proposition 1
Suppose Assumption 1 holds with $\nu < \infty$. Let $\{x_i\}_{i \ge 1}$ be the points generated by maximizing $\mathrm{EI}_n$ in (5). Suppose the initial points are sampled according to some probability measure $F$ over $\Omega$. Then, for any $\epsilon > 0$, there exist some $f \in \mathcal{H}_\theta(\Omega)$ and some constant $\delta > 0$ such that

$$\mathbb{P}_F\left(\lim_{n \to \infty} y_n^* - \min_{x \in \Omega} f(x) \ge \delta\right) > 1 - \epsilon.$$
## 3 Hierarchical Expected Improvement
To overcome this, we propose a hierarchical EI framework which provides a richer quantification of uncertainty within the acquisition function. The key ingredient in HEI is a hierarchical GP model on $f$. Assume the universal kriging model (2), with hierarchical priors on the parameters $(\beta, \sigma^2)$:

$$[\beta] \propto 1, \quad [\sigma^2] \sim \mathrm{IG}(a, b). \tag{6}$$

In words, the coefficients $\beta$ are assigned a flat improper (i.e., non-informative) prior over $\mathbb{R}^q$, and the process variance $\sigma^2$ is assigned a conjugate inverse-Gamma prior with shape and scale parameters $a$ and $b$, respectively. The idea is to leverage this hierarchical structure on model parameters to account for estimation uncertainty, while preserving a closed-form criterion. The next lemma provides the posterior distribution of $f(x)$ under such a hierarchical model.
###### Lemma 2
Assume the hierarchical model (2) and (6), with $a, b > 0$ and $n > q$. Given data $\mathcal{D}_n$, we have

$$[\beta \mid \mathcal{D}_n] \sim T_q\!\left(2a + n - q,\ \hat{\beta}_n,\ \tilde{\sigma}_n^2 (P_n^\top K_n^{-1} P_n)^{-1}\right),$$

where $\hat{\beta}_n = (P_n^\top K_n^{-1} P_n)^{-1} P_n^\top K_n^{-1} y_n$, $\hat{\sigma}_n^2 = \frac{1}{n}(y_n - P_n \hat{\beta}_n)^\top K_n^{-1}(y_n - P_n \hat{\beta}_n)$, $\tilde{\sigma}_n^2 = (2b + n \hat{\sigma}_n^2) / (2a + (n - q))$, and $T_q(\nu, \mu, \Sigma)$ is a $q$-dimensional non-central t-distribution with degrees of freedom $\nu$, location vector $\mu$ and scale matrix $\Sigma$. Furthermore, the posterior distribution of $f(x)$ is

$$[f(x) \mid \mathcal{D}_n] \sim T_1\!\left(2a + n - q,\ \hat{f}_n(x),\ \tilde{\sigma}_n^2 s_n^2(x)\right). \tag{7}$$
Comparing the predictive distributions in (7) and (3), there are several differences which highlight the increased uncertainty from the hierarchical model. First, the new posterior (7) is now t-distributed, whereas the earlier posterior (3) is normally distributed. This suggests that the hierarchical model imposes heavier tails on the predictive distribution, which increases uncertainty. Second, the variance term in (7) can be decomposed as:
~σ2n=(2b+n^σ2n)/(2a+(n−q))>n/(2a+(n−q))⋅^σ2n. (8)
For (which is satisfied via a weakly informative prior on ), is larger than the MLE , which increases predictive uncertainty.
Similar to the EI criterion (4), we define the HEI acquisition function as:

$$\mathrm{HEI}_n(x) = \mathbb{E}_{f \mid \mathcal{D}_n}\left[(y_n^* - f(x))_+\right],$$

where the conditional expectation over $f$ is under the hierarchical GP model. The theorem below gives a closed-form expression for $\mathrm{HEI}_n(x)$:
###### Theorem 3
Assume the hierarchical model (2) and (6), with $\nu_n := 2a + n - q > 1$. Then:

$$\mathrm{HEI}_n(x) = \underbrace{I_n(x)\, T_{\nu_n}\!\left(z_n(x)\right)}_{\text{Exploitation}} + \underbrace{\frac{\nu_n + z_n^2(x)}{\nu_n - 1}\, \tilde{\sigma}_n s_n(x)\, t_{\nu_n}\!\left(z_n(x)\right)}_{\text{Exploration}}, \tag{9}$$

where $z_n(x) = I_n(x) / (\tilde{\sigma}_n s_n(x))$, $I_n(x) = y_n^* - \hat{f}_n(x)$, and $t_{\nu_n}$, $T_{\nu_n}$ denote the p.d.f. and c.d.f. of a Student's t-distribution with $\nu_n$ degrees of freedom, respectively.
Theorem 3 shows that the HEI criterion preserves the desirable properties of the original EI criterion (5): it has an easily-computable, closed-form expression, which allows for efficient optimization of the next query point. This new criterion also has an interpretable exploration-exploitation trade-off: the first term encourages exploitation near the current best solution $y_n^*$, and the second term encourages exploration of regions with high predictive variance.
More importantly, the differences between the HEI criterion (9) and the EI criterion (5) show how our method addresses the over-greediness of the latter. There are three notable differences. First, the HEI exploration term depends on the Student-t p.d.f., whereas the EI exploration term depends on the normal p.d.f.; since the former has heavier tails, the HEI exploration term is inflated, which encourages exploration. Second, the larger variance term $\tilde{\sigma}_n^2$ (see (8)) also inflates the HEI exploration term and encourages exploration. Third, HEI contains an additional adjustment factor in its exploration term. Since this factor is larger than 1, HEI again encourages exploration. This adjustment is most prominent for small sample sizes, since the factor tends to 1 as the sample size $n \to \infty$. All three differences correct the over-exploitation of EI via a principled hierarchical Bayesian framework.
We also note several important differences between the proposed HEI and the SEI in benassi2011robust. First, the SEI considers a stationary GP model with constant mean, while the proposed criterion considers a broader non-stationary GP model with mean function $\mu(x) = p^\top(x) \beta$, which accounts for uncertainty on the coefficients $\beta$. This allows HEI to incorporate uncertainty on GP nonstationarity, which encourages more exploration in sequential sampling. Second, we prove next the global convergence of HEI (and its convergence rates) under certain prior specifications, which directly addresses the over-greediness of EI. To our knowledge, such results do not exist for the SEI. Lastly, we develop (in Section 5) hyperparameter estimation methods which allow HEI to efficiently mimic a fully Bayesian optimization procedure. Because of this, HEI performs considerably better than existing BO methods, whereas the SEI gives only comparable performance.
## 4 Convergence Analysis of HEI
We now show that HEI indeed finds a global optimum over a broad function class for $f$. We first present this global convergence result (and its associated convergence rate) for Matérn-type correlation functions, then provide an improved convergence rate for smoother correlation functions.
Let us first adopt the following form for the kernel $K_\theta$:

$$K_\theta(x, z) := C\!\left(\frac{x_1 - z_1}{\theta_1}, \ldots, \frac{x_d - z_d}{\theta_d}\right),$$

where $C$ is a stationary correlation function with $C(0) = 1$, and $\theta = (\theta_1, \ldots, \theta_d)$ are length-scale parameters. From this, we can then define a function space – the reproducing kernel Hilbert space (RKHS, wendland2004scattered) – for the objective function $f$. Given the kernel $K_\theta$ (which is symmetric and positive definite), define the linear space

$$\mathcal{F}_\theta(\Omega) = \left\{ \sum_{i=1}^N \alpha_i K_\theta(\cdot, x_i) : N \in \mathbb{N}_+,\ x_i \in \Omega,\ \alpha_i \in \mathbb{R} \right\},$$

and equip this space with the bilinear form

$$\left\langle \sum_{i=1}^N \alpha_i K_\theta(\cdot, x_i),\ \sum_{j=1}^M \gamma_j K_\theta(\cdot, z_j) \right\rangle_\theta := \sum_{i=1}^N \sum_{j=1}^M \alpha_i \gamma_j K_\theta(x_i, z_j).$$

The RKHS $\mathcal{H}_\theta(\Omega)$ of the kernel $K_\theta$ is defined as the closure of $\mathcal{F}_\theta(\Omega)$ under $\langle \cdot, \cdot \rangle_\theta$, with its inner product induced by $\langle \cdot, \cdot \rangle_\theta$.
Next, we make the following two regularity assumptions. The first is a smoothness assumption on the correlation function $C$.
###### Assumption 1
is continuous, integrable, and satisfies:
|C(x)−Qr(x)|=O(∥x∥2ν2(log∥x∥2)2α)as∥x∥2→0,
for some constants and . Here, and is the -th order Taylor approximation of
. Furthermore, its Fourier transform
$\hat{C}$ is isotropic, radially non-increasing, and satisfies either $\hat{C}(x) = \Theta(x^{-2\nu - d})$ as $x \to \infty$, or $\hat{C}(x) = O(x^{-2\lambda - d})$ for any $\lambda > 0$.
As noted in bull2011convergence, the choice of as the Matérn correlation function (cressie1991statistics) satisfies Assumption 1.
For the scale parameters, HEI uses maximum a posteriori (MAP) estimation under a prior, and updates these parameter estimates after each sampled point. The second assumption is a regularity condition on this MAP estimator.
###### Assumption 2
Given data $\mathcal{D}_n$ and a prior on $\theta$, let $\tilde{\theta}_n$ be the MAP estimator of $\theta$. For any $n$, we have
$$\theta_L \le \tilde{\theta}_n \le \theta_U \quad \text{for some constants } \theta_L, \theta_U \in \mathbb{R}^d_+. \quad (10)$$
In our implementation, we use a flat prior on $\theta$ over the compact space $[\theta_L, \theta_U]$.
The following theorem shows that, under specific prior settings, the proposed HEI method rectifies the poor convergence of EI.
###### Theorem 4
Suppose Assumptions 1 and 2 hold, and assume is a constant in and for the hyperparameters in (6). Let be the points generated by maximizing in (9), with iterative plug-in MAP estimates . Then, for any and any initial points, we have:
The proof of this theorem is given in Appendix A.2. The key idea is to upper bound the optimality gap by a generalization of the power function used in the function approximation literature (wendland2004scattered). Then, with the data-size-dependent hyperparameter condition, which prevents the variance estimate from collapsing to zero, we can apply approximation bounds on this power function to obtain the desired global convergence result.
Theorem 4 shows that HEI indeed converges to a global optimum for all objective functions in the RKHS, which remedies the over-greediness of EI from Proposition 1. When $C$ is the Matérn correlation with smoothness parameter $\nu$, the RKHS consists of functions with continuous derivatives of corresponding order (santner2003design). Under these conditions, HEI achieves the stated global convergence rates in both regimes of $\nu$.
At first glance, the prior specification in Theorem 4 may appear strange, since the hyperparameter depends on the sample size . However, such data-size-dependent
priors have been studied extensively in the context of high-dimensional Bayesian linear regression, particularly in its connection to optimal minimax estimation (see, e.g.,
castillo2015bayesian). The data-size-dependent prior in Theorem 4 can be interpreted in a similar way: the hyperparameter condition is sufficient to encourage enough exploration, so that HEI converges to a global optimum for all objective functions in the RKHS.
One potential drawback of the global convergence rate in Theorem 4 is that it grows exponentially in dimension . Suppose . With , HEI achieves a rate of in dimensions, but this rate deteriorates to in dimensions! This is the well-known curse-of-dimensionality (bellman2015adaptive). One way to provide relief from this curse is to assume further smoothness on ; this strategy is used extensively in the Quasi-Monte Carlo literature (dick2013high; mak2017projected). We adopt a similar approach below to derive an improved global convergence rate for HEI which is less affected by dimensionality.
In addition to Assumption 2, we will require the following two assumptions. Assumption 3 concerns the smoothness of the correlation function $C$.
###### Assumption 3
The correlation function is a radial function of the form
$$C\left(\frac{x_1 - z_1}{\theta_1}, \ldots, \frac{x_d - z_d}{\theta_d}\right) = g\left(\sqrt{\sum_{i=1}^{d} \left(\frac{x_i - z_i}{\theta_i}\right)^2}\right).$$
Moreover, the function satisfies
$$|g^{(l)}(x)| \le l!\, M^l \quad \text{for all } l \ge l_0 \text{ and } x \ge 0,$$
where $g^{(l)}$ is the $l$-th derivative of $g$, and $M$ is a fixed constant. Furthermore, its Fourier transform $\hat{g}$ is isotropic, radially non-increasing, and as $x \to \infty$, it satisfies either:
$$\hat{g}(x) = O(x^{-2\lambda - d}) \ \text{for any } \lambda > 0, \quad \text{or} \quad \hat{g}(x) = \Theta(x^{-\nu - d}),$$
where $\nu$ is a positive constant.
This assumption imposes greater smoothness on the correlation function than Assumption 1, since the Matérn correlation with smoothness parameter $\nu$ (which satisfies Assumption 1) can be shown to violate Assumption 3. One correlation which satisfies Assumption 3 is the Gaussian correlation, which is much smoother than the Matérn correlation. The second assumption is a regularity condition on the boundary of the domain $\Omega$.
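To make the derivative bound in Assumption 3 concrete, it can be checked numerically for the Gaussian correlation $g(x) = e^{-x^2}$. The sketch below (assuming the SymPy library; the constant $M = 2$ is our own illustrative choice, not taken from the paper) verifies $|g^{(l)}(x)| \le l!\, M^l$ on a grid of nonnegative $x$ for small $l$:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(-x**2)   # Gaussian correlation: smooth enough for Assumption 3
M = 2.0             # illustrative candidate constant for the bound

for l in range(1, 8):
    deriv = sp.diff(g, x, l)
    # evaluate |g^(l)(x)| on a grid of x >= 0 and compare it to l! * M^l
    worst = max(abs(float(deriv.subs(x, v))) for v in [i / 10 for i in range(51)])
    assert worst <= float(sp.factorial(l)) * M**l
print("derivative bound holds for l = 1..7")
```

A full proof would bound the Hermite-polynomial form of these derivatives over all of $[0, \infty)$; the grid check above is only a sanity check of the factorial-type growth.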
###### Assumption 4
Domain $\Omega$ is a Lipschitz domain, i.e., for any $x \in \partial\Omega$, there exist a hyperplane $H$ of dimension $d-1$ through $x$, a Lipschitz continuous function $\eta : H \to \mathbb{R}$, and positive constants $r, h$, such that
$$\Omega \cap X = \{z + yn : z \in B_r(x) \cap H,\ -h < y < \eta(z)\},$$
$$\text{and} \quad \partial\Omega \cap X = \{z + yn : z \in B_r(x) \cap H,\ \eta(z) = y\},$$
where $n$ is a unit vector normal to $H$, $B_r(x)$ is the ball of radius $r$ centered at $x$, and $X$ is a neighborhood of $x$.
Under these two additional assumptions, the following theorem gives an improved global convergence rate which is less affected by dimensionality.
###### Theorem 5
Suppose Assumptions 2, 3 and 4 hold, and assume is a constant in and for the hyperparameters in (6). Let be the points generated by maximizing in (9), with iterative plug-in MAP estimates . Then, for any and any initial points, we have:
$$y^*_n - \min_{x \in \Omega} f(x) = O(n^{-1}).$$
The proof of this theorem is provided in Appendix A.3. Theorem 5 shows that, by imposing greater smoothness on the objective function (via smoothness conditions on the correlation $C$), HEI enjoys a much improved rate of $O(n^{-1})$, one which is less affected by dimension $d$. For example, when $C$ is the Gaussian correlation, its RKHS consists of functions with continuous derivatives of any order (minh2010some), which is clearly more restrictive than the RKHS of the Matérn correlation from Theorem 4. By trading off on function smoothness, the convergence rate of HEI improves to $O(n^{-1})$.
## 5 Methodological Developments
Next, we discuss two methodological developments for HEI, concerning hyperparameter specifications and order selection for basis functions. We then provide a full algorithm statement for HEI.
### 5.1 Hyperparameter Specification
We first present several plausible specifications for the hyperparameters in the hierarchical prior in (6), and discuss why certain specifications may yield better BO performance over others.
(i) Weakly Informative. Consider first a weakly informative specification of the hyperparameters $(a, b)$, which provides weak information on the variance parameter. Following gelman2006prior, we set $a = b = \epsilon$ for some small choice of $\epsilon > 0$. The limiting case of $\epsilon \to 0$ yields the non-informative Jeffreys prior for variance parameters.
While weakly informative (and non-informative) priors are widely used in Bayesian analysis, such priors may result in poor optimization performance for HEI (as will be shown in Section 6). One reason is that, oftentimes, only a small sample size can be afforded on the black-box function , since each evaluation is expensive. One can perhaps address this with a carefully elicited subjective prior, but such priors can be difficult to formulate when the objective is black-box. We present next two specifications which may offer improved optimization performance in practice.
(ii) Empirical Bayes. Consider next an empirical Bayes (EB, carlin2010bayes) approach, which uses the observed data to estimate the hyperparameters by maximizing the marginal likelihood:
$$p(y_n; a, b) = \int L(\beta, \sigma^2; y_n)\, \pi(\beta)\, \pi(\sigma^2; a, b)\, d\beta\, d\sigma^2,$$
where is the likelihood function of the GP model (2) (see santner2003design for the full expression). Using these estimated hyperparameters, EB provides a close approximation to a fully Bayesian approach – the “gold standard” approach yielding a full quantification of uncertainty. For BO, EB estimates of hyperparameters allow HEI to closely mimic a fully Bayesian optimization procedure (the “gold standard”), while avoiding expensive MCMC sampling via a closed-form criterion.
Unfortunately, the proposition below shows that a direct application of EB for HEI gives unbounded hyperparameter estimates:
###### Proposition 6
The marginal likelihood for the hierarchical GP model with (6) is given by:
$$p(y_n; a, b) = \det(G_n K_n)^{-\frac{1}{2}}\, \frac{b^a}{\Gamma(a)}\, \frac{\Gamma(a + (n-q)/2)}{(b + w_n)^{a + \frac{n-q}{2}}}, \quad (11)$$
where . The maximization of (11) is unbounded for .
The proof of Proposition 6 is provided in Appendix A.4. To address the issue of unboundedness, one can perform a modification of EB, called the marginal maximum a posteriori (MMAP, doucet2002marginal) approach, which applies a hyperprior $\pi(a, b)$ to the marginal likelihood:
$$\tilde{p}(y_n; a, b) = p(y_n; a, b)\, \pi(a, b).$$
The MMAP approach of hyperparameter estimation has been used for efficient analysis of large-scale Bayesian networks
(JMLR:v14:liu13b). The next proposition shows that the MMAP yields finite estimates:
###### Proposition 7
Assume independent hyperpriors and , where and are the shape and scale parameters, respectively. Then the maximization of is always finite for .
The proof of Proposition 7 is provided in Appendix A.5. As we show later, this MMAP approach can greatly outperform the weakly informative approach for HEI, since it better approximates a fully Bayesian optimization procedure.
(iii) DSD. Lastly, consider the so-called “data-size-dependent” (DSD) specification. Recall from Theorem 4 that a data-size-dependent scale parameter is sufficient for global convergence. To reflect this, the DSD specification assumes the shape parameter $a$ to be constant, and the scale parameter $b$ to grow at the same order as the sample size $n$, i.e., $b_n = \kappa n$ for some $\kappa > 0$.
To mimic a fully Bayesian EI, we can again use MMAP to estimate hyperparameters from data. Suppose initial data is collected from design points (which we take to be space-filling, see Section 5.3). Then and can be estimated as:
$$(a^*, \kappa^*) = \arg\max_{a, \kappa > 0} \left\{ p(y_{n_{\rm ini}}; a, \kappa n_{\rm ini}) \cdot \pi_\Gamma(a; \zeta, \iota) \right\},$$
where $\pi_\Gamma(\cdot\,; \zeta, \iota)$ denotes the p.d.f. of a Gamma distribution with shape parameter $\zeta$ and scale parameter $\iota$. Using these estimated parameters, subsequent points are then queried using HEI with $a = a^*$ and $b = \kappa^* n$, where $n$ is the current sample size. One appealing property of this DSD specification is that it ensures HEI converges to a global optimum (Theorem 4).
### 5.2 Order Selection for Basis Functions
In our implementation, we take the basis functions to be complete polynomials up to a certain order. Different order choices yield different polynomial models for the mean function: a constant mean (no trend), a linear trend, a second-order (quadratic) trend, and so on. Choosing a model with high polynomial order can reduce bias, but can also cause inflated variance due to overfitting. A model with a high order also requires more initial points, which may not be feasible in some situations. Of course, one can choose to use other basis functions (e.g., orthogonal polynomials; xiu2010numerical) depending on the application at hand.
We adopt the BIC selection criterion (schwarz1978estimating) to select the model with “best” order to use within HEI. Let $\mathcal{M}(l)$ be the fitted model with complete polynomials of maximum order $l$. Denote the likelihood of model $\mathcal{M}(l)$ as $L(\mathcal{M}(l))$ (this expression can be found in santner2003design). Given initial data, the BIC selects the model with order:
$$l^* = \arg\min_l \left\{ -2 \log L(\mathcal{M}(l)) + q_l \log(n_{\rm ini}) \right\}, \quad (12)$$
where $q_l$ denotes the number of basis functions in model $\mathcal{M}(l)$. Having selected this optimal order, subsequent samples are then obtained using HEI with mean function following this polynomial order.
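As an illustration of criterion (12), the following sketch selects the polynomial order with the smallest BIC. The log-likelihood values and basis counts below are hypothetical, chosen only to show the mechanics (in $d = 2$ dimensions, complete polynomials of order 0, 1, 2 have $q_l = 1, 3, 6$ basis functions):

```python
import math

def select_order(log_liks, n_basis, n_ini):
    """BIC order selection, Eq. (12): log_liks[l] is the maximized
    log-likelihood of the model with complete polynomials of order l,
    n_basis[l] is its number of basis functions q_l, and n_ini is the
    initial sample size."""
    bic = [-2.0 * ll + q * math.log(n_ini) for ll, q in zip(log_liks, n_basis)]
    return min(range(len(bic)), key=bic.__getitem__)

# hypothetical maximized log-likelihoods for orders l = 0, 1, 2 with n_ini = 20
print(select_order([-40.0, -31.0, -30.5], [1, 3, 6], n_ini=20))  # selects l = 1
```

Here the quadratic model has a slightly higher likelihood than the linear one, but its extra basis functions are penalized by the $q_l \log(n_{\rm ini})$ term, so the linear trend wins.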
### 5.3 Algorithm Statement
Algorithm 1 provides the detailed steps for HEI. First, initial data is collected on a space-filling design, such as the maximin Latin hypercube design (MmLHD, santner2003design). Here, the number of initial points is set at $n_{\rm ini} = 10d$, as recommended in loeppky2009choosing. Next, the polynomial model order for HEI is selected using (12), and the hyperparameters $a$ and $b$ are estimated from data (if necessary). Finally, sequential function queries are collected by maximizing the proposed HEI criterion, until the sample size budget is reached.
## 6 Simulation Studies
We now investigate the numerical performance of HEI in comparison to existing BO methods, for a suite of test optimization functions. We consider the following four test functions, taken from simulationlib:
• Branin (2-dimensional function on domain $[-5, 10] \times [0, 15]$):
$$f(x) = \left(x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10,$$
• Three-Hump Camel (2-dimensional function on domain $[-5, 5]^2$):
$$f(x) = 2x_1^2 - 1.05 x_1^4 + x_1^6/6 + x_1 x_2 + x_2^2,$$
• Six-Hump Camel (2-dimensional function on domain $[-3, 3] \times [-2, 2]$):
$$f(x) = \left(4 - 2.1 x_1^2 + x_1^4/3\right) x_1^2 + x_1 x_2 + \left(-4 + 4 x_2^2\right) x_2^2,$$
• Levy Function (6-dimensional function on domain $[-10, 10]^6$):
$$f(x) = \sin^2(\pi\omega_1) + \sum_{i=1}^{5} (\omega_i - 1)^2\left[1 + 10\sin^2(\pi\omega_i + 1)\right] + (\omega_6 - 1)^2\left[1 + \sin^2(2\pi\omega_6)\right],$$
where $\omega_i = 1 + (x_i - 1)/4$ for $i = 1, \ldots, 6$.
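For reference, two of these test functions can be coded directly from the formulas above; the known optima (as listed in the simulationlib test-suite descriptions) provide a quick sanity check:

```python
import math

def branin(x1, x2):
    """Branin function; three global optima with value ~0.3979."""
    return ((x2 - 5.1 / (4 * math.pi**2) * x1**2 + 5 / math.pi * x1 - 6)**2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)

def six_hump_camel(x1, x2):
    """Six-Hump Camel function; two global optima with value ~-1.0316."""
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2 + x1 * x2
            + (-4 + 4 * x2**2) * x2**2)

print(round(branin(math.pi, 2.275), 4))           # one of the three Branin optima
print(round(six_hump_camel(0.0898, -0.7126), 4))  # one of the two Camel optima
```

The multimodality of these functions (three global optima for Branin, two for Six-Hump Camel) is precisely what makes them useful for testing whether an acquisition function explores enough.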
The simulation set-up is as follows. We compare the proposed HEI method under different hyperparameter specifications (HEI-Weak, HEI-MMAP, HEI-DSD), with the EI method under ordinary kriging (EI-OK), the EI method under universal kriging (EI-UK), the Student EI (SEI) method with hyperparameters as recommended in benassi2011robust, and the UCB method under ordinary kriging (UCB-OK) with default exploration parameter . For HEI-Weak, hyperparameters are set as ; for HEI-MMAP and HEI-DSD, hyperparameters are set as . All methods use the Matérn correlation with smoothness parameter , and are run with a total of function evaluations. Here, the kriging model is fitted using the R package kergp (kergp). All results are averaged over 10 replications.
Figures 1(a), 1(c) and 1(d) show the log-optimality gap against the number of samples $n$ for the first three functions, and Figure 1(e) shows the optimality gap for the Levy function. We see that the three HEI methods outperform the three existing EI methods: the optimality gap for the latter methods stagnates for larger sample sizes, whereas the former enjoy steady improvements as $n$ increases. This shows that the proposed method indeed corrects the over-greediness of EI. Furthermore, of the HEI methods, HEI-MMAP and HEI-DSD appear to greatly outperform HEI-Weak. This is in line with the earlier observation that weakly informative priors may yield poor optimization for HEI; the MMAP and DSD specifications give better performance by mimicking a fully Bayesian optimization procedure. The steady improvement of HEI-DSD also supports the data-size-dependent prior condition needed for global convergence in Theorem 4.
Figure 1(b) shows the sampled points from HEI-DSD and UCB-OK for one run of the Branin function. The points for HEI-Weak and HEI-MMAP are quite similar to those of HEI-DSD, and the points for EI-OK, EI-UK and SEI are quite similar to those of UCB-OK, so we only plot one of each for easy visualization. We see that HEI indeed encourages more exploration in optimization: it successfully finds all three global optima of the Branin function, whereas existing methods cluster points near only one optimum. The need to identify multiple global optima often arises in multiobjective optimization. For example, a company may wish to offer multiple product lines to suit different customer preferences (mak2019analysis). For such problems, HEI can provide more practical solutions over existing methods.
Lastly, we compare the performance of HEI with the SEI method (benassi2011robust). From Figure 1, the SEI achieves only comparable performance with EI-OK (which is in line with the results reported in benassi2011robust), and is one of the worst-performing methods. This shows that HEI, by (i) incorporating uncertainty on GP non-stationarity and (ii) mimicking a fully Bayesian EI via hyperparameter estimation, indeed yields considerable improvements. These novel developments play a key role in the excellent numerical performance of the proposed method.
## 7 Process optimization in semiconductor manufacturing
We now investigate the performance of HEI in a process optimization problem in semiconductor wafer manufacturing. In semiconductor manufacturing (jin2012sequential), thin silicon wafers undergo a series of refinement stages. Of these, thermal processing is one of the most important stages, since it facilitates necessary chemical reactions and allows for surface oxidation (singh2000rapid). Figure 2(a) visualizes a typical thermal processing procedure: a laser beam is moved radially in and out across the wafer, while the wafer itself is rotated at a constant speed. There are two objectives here. First, the wafer should be heated to a target temperature to facilitate the desired chemical reactions. Second, temperature fluctuations over the wafer surface should be made as small as possible, to reduce unwanted strains and improve wafer fabrication (brunner2013characterization). The goal is to find an “optimal” setting of the manufacturing process which achieves these two objectives.
We consider five control parameters: wafer thickness, rotation speed, laser period, laser radius, and laser power (a full specification is given in Table 1). The heating is performed over 60 seconds, and a target temperature of $T^* = 600$ F is desired over this timeframe. We use the following objective function:
$$f(x) := \sum_{t=1}^{60} \max_{s \in S} |T_t(s; x) - T^*|. \quad (13)$$
Here, $s$ denotes a spatial location on the wafer domain $S$, $t$ denotes the heating time (in seconds), and $T_t(s; x)$ denotes the wafer temperature at location $s$ and time $t$ under control setting $x$. Note that $f$ incorporates both objectives of the study: wafer temperatures close to $T^*$ result in smaller values of $f$, and the same is true when the temperature is stable over the wafer surface.
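Given a simulated temperature trace, objective (13) is straightforward to compute. A minimal sketch (the array layout and toy values below are our own, not COMSOL output):

```python
import numpy as np

def wafer_objective(T, T_star=600.0):
    """Eq. (13): T is an (n_steps, n_locations) array of wafer temperatures
    T_t(s; x); return the sum over time of the worst-case deviation from
    the target temperature T*."""
    return np.abs(T - T_star).max(axis=1).sum()

# toy trace: 3 time steps, 2 spatial locations
T = np.array([[590.0, 605.0],
              [600.0, 598.0],
              [601.0, 600.0]])
print(wafer_objective(T))  # 10 + 2 + 1 = 13.0
```

In practice each row of `T` would come from one time step of the finite-element simulation, discretized over many surface locations.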
Clearly, each evaluation of $f$ is expensive, since it requires a full run of the wafer heating process. We will simulate each run using COMSOL Multiphysics (comsol)
, a reliable finite-element analysis software for solving complex systems of partial differential equations (PDEs). COMSOL models the incident heat flux from the moving laser as a spatially distributed heat source on the surface, then computes the transient thermal response by solving the coupled heat transfer and surface-to-ambient radiation PDEs. Figure 2(b) visualizes the simulation output from COMSOL: the average, maximum, and minimum temperature over the wafer domain at every time step. Experiments are performed on a desktop computer with quad-core Intel I7-8700K processors, and take around
minutes per run.
Figure 2(c) shows the objective value for HEI-MMAP and HEI-DSD (the best-performing HEI methods from simulations), and for the UCB-OK and SEI methods. We see that the HEI-MMAP and HEI-DSD methods both achieve good performance in terms of low objective values, whereas UCB-OK and SEI perform noticeably worse.
Figure 3 shows the average, maximum, and minimum temperature over the wafer surface, as a function of time. For HEI-DSD and HEI-MMAP, the average temperature quickly hits 600 F, with a slight temperature oscillation over the wafer. For SEI, the average temperature reaches the target temperature slowly, but the temperature fluctuation is much higher than for HEI-DSD and HEI-MMAP. For UCB-OK, the average temperature does not even reach the target temperature. Clearly, the two proposed HEI methods return much better manufacturing settings compared to the two existing methods.
## 8 Conclusion
In this paper, we present a hierarchical expected improvement (HEI) framework for Bayesian optimization of a black-box objective . The motivation behind HEI is the greediness of the popular expected improvement (EI) method, which over-exploits the underlying fitted GP model and can fail to converge to a global optimum even for smooth functions . HEI addresses this via the use of a hierarchical GP model on , which accounts for uncertainty in the fitted model. One advantage of HEI is that it preserves a closed-form acquisition function, which allows for efficient optimization even for high dimensions. Under certain prior specifications, we prove the global convergence of HEI over a large function class for , and derive global convergence rates under smoothness assumptions on . We then introduce several hyperparameter specifications which allow HEI to efficiently approximate a fully Bayesian optimization procedure. In numerical experiments, HEI provides improved optimization performance over existing BO methods, for both simulations and a real-world process optimization problem in semiconductor manufacturing.
## Appendix A Proofs
### a.1 Proof of Theorem 3
By Theorem 1, the posterior distribution follows a non-central t-distribution:
$$[f(x) \mid \mathcal{D}_n] \sim T\left(2a + n - q,\ \hat{f}_n(x),\ \tilde{\sigma}_n^2 s_n^2(x)\right).$$
Let $\nu_n := 2a + n - q$. The density function of $f(x) \mid \mathcal{D}_n$ then takes the following form:
$$g(f; \nu_n, \hat{f}_n, \tilde{\sigma}_n, s_n) = \frac{\Gamma((\nu_n + 1)/2)}{\tilde{\sigma}_n s_n \sqrt{\nu_n \pi}\, \Gamma(\nu_n/2)} \left(1 + \frac{(f - \hat{f}_n)^2}{\nu_n \tilde{\sigma}_n^2 s_n^2}\right)^{-(\nu_n + 1)/2}.$$
Using this density function, the HEI criterion can then be simplified as:
$$\begin{aligned}
\mathrm{HEI}_n(x) &= \mathbb{E}_{f \mid \mathcal{D}_n}\left(y_n^* - f(x)\right)_+ = \int_{-\infty}^{y_n^*} \left[(y_n^* - \hat{f}_n) + (\hat{f}_n - f)\right] g(f)\, df \\
&= (y_n^* - \hat{f}_n)\, \Phi_{\nu_n}\!\left(\frac{y_n^* - \hat{f}_n}{\tilde{\sigma}_n s_n}\right) + \int_{-\infty}^{y_n^*} (\hat{f}_n - f)\, g(f)\, df. \quad (14)
\end{aligned}$$
The second term in (14) can be further simplified as:
$$\begin{aligned}
\int_{-\infty}^{y_n^*} (\hat{f}_n - f)\, g(f)\, df
&= -\frac{\tilde{\sigma}_n s_n}{2} \int_{-\infty}^{\left(\frac{y_n^* - \hat{f}_n}{\tilde{\sigma}_n s_n}\right)^2} \frac{\Gamma((\nu_n + 1)/2)}{\sqrt{\nu_n \pi}\, \Gamma(\nu_n/2)} \left(1 + \frac{t}{\nu_n}\right)^{-\frac{\nu_n + 1}{2}} dt \\
&= \tilde{\sigma}_n s_n \frac{\nu_n}{\nu_n - 1} \frac{\Gamma((\nu_n + 1)/2)}{\sqrt{\nu_n \pi}\, \Gamma(\nu_n/2)} \left(1 + \frac{(y_n^* - \hat{f}_n)^2}{\nu_n (\tilde{\sigma}_n s_n)^2}\right)^{-\frac{\nu_n - 1}{2}} \\
&= \frac{\sqrt{\nu_n}\, \tilde{\sigma}_n s_n\, \Gamma((\nu_n - 1)/2)}{(\nu_n - 2)\sqrt{\pi}\, \Gamma((\nu_n - 2)/2)} \left(1 + \frac{1}{\nu_n}\left(\frac{y_n^* - \hat{f}_n}{\tilde{\sigma}_n s_n}\right)^2\right)^{-\frac{\nu_n - 1}{2}} \\
&= \sqrt{\frac{\nu_n}{\nu_n - 2}}\, \tilde{\sigma}_n s_n\, \phi_{\nu_n - 2}\!\left(\frac{y_n^* - \hat{f}_n}{\sqrt{\nu_n/(\nu_n - 2)}\, \tilde{\sigma}_n s_n}\right).
\end{aligned}$$
This proves the claim.
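The closed form above can be checked numerically. The sketch below (assuming NumPy/SciPy; the variable names are ours) implements the two-term expression from (14), with $\Phi_{\nu}$ and $\phi_{\nu-2}$ given by the standard Student-$t$ c.d.f. and p.d.f., and compares it against a Monte Carlo estimate of $\mathbb{E}(y^*_n - F)_+$ for a location-scale $t$ random variable:

```python
import numpy as np
from scipy.stats import t as student_t

def hei(y_star, f_hat, scale, nu):
    """Closed-form HEI: E[(y* - F)_+] for F following a t distribution with
    nu degrees of freedom, location f_hat, and scale 'scale' (requires nu > 2)."""
    z = (y_star - f_hat) / scale
    c = np.sqrt(nu / (nu - 2.0))
    return ((y_star - f_hat) * student_t.cdf(z, df=nu)
            + c * scale * student_t.pdf(z / c, df=nu - 2.0))

# Monte Carlo check of the closed form
rng = np.random.default_rng(0)
nu, f_hat, scale, y_star = 7.0, 0.3, 1.2, 0.8
draws = f_hat + scale * rng.standard_t(nu, size=1_000_000)
mc = np.maximum(y_star - draws, 0.0).mean()
print(hei(y_star, f_hat, scale, nu), mc)  # the two values should agree closely
```

The closed form is also algebraically equivalent to the more common $t$-EI expression $(y^* - \hat{f})\Phi_\nu(z) + \sigma\,\frac{\nu + z^2}{\nu - 1}\,\phi_\nu(z)$, which can be seen by applying $\Gamma(u + 1) = u\,\Gamma(u)$ to the Gamma ratios in the derivation above.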
### a.2 Proof of Theorem 4
The proof of Theorem 4 requires the following three lemmas. The first lemma provides an upper bound for the RKHS norm of function for changing scale parameters:
###### Lemma 8
If $\theta' \le \theta$ (elementwise), then $f \in \mathcal{H}_{\theta'}(\Omega)$ for all $f \in \mathcal{H}_\theta(\Omega)$, and
$$\|f\|^2_{\mathcal{H}_{\theta'}(\Omega)} \le \left(\prod_{i=1}^{d} \theta_i/\theta'_i\right) \|f\|^2_{\mathcal{H}_\theta(\Omega)}.$$
The RKHS norm of $f$, $\|f\|_{\mathcal{H}_\theta}$, can be written as:
$$\|f\|^2_{\mathcal{H}_\theta} = \int_\xi \frac{|\hat{f}(\xi)|^2}{\hat{K}_\theta(\xi)}\, d\xi.$$
The Fourier transform of kernel can be further decomposed as
$$\hat{K}_\theta(\xi) = \hat{C}\left(\Big(\sum_{i=1}^{d} (\theta_i \xi_i)^2\Big)^{1/2}\right) \prod_{i=1}^{d} \theta_i.$$
Suppose Assumption 1 holds, i.e., $\hat{C}$ is isotropic and radially non-increasing. Then
$$\hat{K}_{\theta'}(\xi) = \prod_{i=1}^{d} (\theta'_i/\theta_i)\, \hat{K}_\theta\big((\theta'_1/\theta_1)\xi_1, \ldots, (\theta'_d/\theta_d)\xi_d\big) \ge \tilde{C}\, \hat{K}_\theta(\xi),$$
where $\tilde{C} := \prod_{i=1}^{d} (\theta'_i/\theta_i)$. Given $f \in \mathcal{H}_\theta(\Omega)$, we obtain
$$\|f\|^2_{\mathcal{H}_{\theta'}(\Omega)} = \int \frac{|\hat{f}|^2}{\hat{K}_{\theta'}} \le \int \frac{|\hat{f}|^2}{\tilde{C}\, \hat{K}_\theta} = \tilde{C}^{-1} \|f\|^2_{\mathcal{H}_\theta(\Omega)},$$
which proves the desired result. The following two lemmas describe the posterior distribution of in terms of . For simplicity, we denote for
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-2-multiplying-and-dividing-fractions-test-page-192/1
## Basic College Mathematics (9th Edition)
Published by Pearson
# Chapter 2 - Multiplying and Dividing Fractions - Test: 1
#### Answer
$\frac{5}{6}$
#### Work Step by Step
The figure is divided into 6 equal parts, of which 5 are shaded. The 5 shaded parts of the 6-part figure are represented by the fraction $\frac{5}{6}$.
https://www.tutorialspoint.com/sympy/sympy_sympify_function.htm
# SymPy - sympify() function
The sympify() function is used to convert an arbitrary expression into an equivalent SymPy expression. Normal Python objects such as int are converted to SymPy types such as Integer, and strings are also parsed into SymPy expressions.
>>> from sympy import sympify
>>> from sympy.abc import x
>>> expr = "x**2+3*x+2"
>>> expr1 = sympify(expr)
>>> expr1
x**2 + 3*x + 2
>>> expr1.subs(x, 2)
The above code snippet gives the following output −
12
Any Python object can be converted to a SymPy object. However, since the conversion internally uses the eval() function, unsanitized input should never be passed to sympify(). If an expression cannot be parsed, a SympifyError is raised.
>>> sympify("x***2")
---------------------------------------------------------------------------
SympifyError: Sympify of expression 'could not parse 'x***2'' failed, because of exception being raised.
The sympify() function takes the following optional arguments:
* strict: default is False. If set to True, only the types for which an explicit conversion has been defined are converted; otherwise, SympifyError is raised.
* evaluate: if set to False, arithmetic expressions and operators will be converted into their SymPy equivalents without evaluating the expression.
>>> sympify("10/5+4/2")
The above code snippet gives the following output −
4
>>> sympify("10/5+4/2", evaluate=False)
The above code snippet gives the following output −
10/5 + 4/2
https://theafllab.com/2018/06/28/home-ground-advantage-a-mess/
# Home Ground Advantage – A Mess
Everyone has done a piece on home ground advantage, and now it’s my turn. This will hopefully be one of a series of posts, the next one or two will hopefully complete this module of my model and hopefully not be a complete waste of time.
In the development of my model, figuring out how to best quantify home ground advantage was difficult to approach. At the moment, I use a very simple measure to account for “team travel”, and use adjustments for each team and player as to how they play at home or away given their upcoming fixture (i.e. Scott Pendlebury would be expected to contribute less to a Collingwood away game as his recent away form is poor.)
I have identified seven possible predictors of home ground advantage, and how each of them may be quantified:
1. The actual venue itself
2. “Morale” from playing to a home crowd (?)
3. “Favouritism” from the umpires (free kick differential)
4. Familiarity with the ground/facilities (count of previous games played for each team)
5. Not having to travel far (travel time for each team)
6. Players sleeping at normal home (boolean for each team)
7. How often they travel (interstate games per season)
Most of these are measurable from available data on past games, and predictable through the fixture.
Other models deal with HGA by applying a correction to the margin in the form of a flat number (Matter of Stats), or a percentage (possibly different for each venue?), or consideration of some of the above to get a HGA variable into their model (i.e. FiguringFooty, The Arc). Some just ignore it altogether and do pretty well (HPN).
In this post I will investigate the first 3 of these identified predictors and I will investigate their usefulness (or lack thereof). Following this a general discussion of the difficulties of distilling HGA out of existing data.
***
First, let’s have a look at some of the available data to explore some of the elements of HGA. Here I am using data from 2011 onwards. I could use data from further back but I like to keep things modern.
A broad viewing of game result data shows distinct differences between many of the common AFL venues. For each ground, the distribution of the margin and total points is presented in the following figure.
There’s a lot to unpack here. I’ve only included venues with more than 25 games played in the period or you get some real outliers (Jiangwan Stadium, for example). For clarification, a positive margin indicates a home victory.
While not a huge focus of mine at this stage, the total points scored does show variation, indicating it may be better to consider a percentage HGA bonus rather than a flat points bonus.
On the surface, the ‘Gabba is often a disadvantage to the home team; but that home team is Brisbane, who haven’t cracked the finals since 2009. York Park provides a median 42 point advantage; but Hawthorn mainly play there and they’ve been rather good. Without discounting individual margins by the strengths of the teams on the day, it’s difficult to tell whether each ground has an independent HGA, a common HGA, or no HGA at all! I’m keeping (1) as a possible predictor at the moment until more analysis can be done.
The more interesting data, perhaps, is that of the Melbourne venues MCG and Docklands. Firstly, the large number of games played there gives a better set of data to examine. Secondly, all Melbourne teams play home games there so on average, there should be less bias towards “how good” the home team is. If we filter games to Melbourne teams vs Melbourne teams (i.e. not Geelong) at the MCG and Docklands, things look very even!
For this data (360 games), the mean is -0.825 and the median margin is -1. There is no perceptible skewness in the distribution. From this sample, it cannot be said that there is an advantage ($p\approx 0.71$). But is this actually important? The only differences for the home and away team in this set of games is the change rooms they use (I think?). I suspect there may be a larger ratio of home fans in attendance but given the capacity of the grounds, not many fans would be locked out. Either way it makes no perceptible difference. At least for moderate differences in crowd it’s probably acceptable to dismiss (2) as a possible predictor.
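The "no advantage" check above is just a one-sample test on the margins. A sketch with synthetic margins (illustrative only — generated to roughly match AFL-scale variation, not the actual 360-game MCG/Docklands sample) shows the mechanics, assuming SciPy:

```python
import numpy as np
from scipy import stats

# synthetic home-minus-away margins; mean near zero, spread typical of AFL games
rng = np.random.default_rng(1)
margins = np.round(rng.normal(loc=-1.0, scale=36.0, size=360))

# test H0: mean margin = 0 (no home advantage at these neutral-ish venues)
t_stat, p_value = stats.ttest_1samp(margins, popmean=0.0)
print(f"mean = {margins.mean():.2f}, p = {p_value:.2f}")
```

With a true mean this close to zero and a spread this large, the test will almost never reject, which is exactly the pattern reported for the real Melbourne-venue sample.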
***
Let’s now consider another common gripe about Home Ground Advantage, that of the perceived favouritism of umpiring decisions. My personal view is that the free kick differential is not indicative of favouritism, and more indicative of player indiscipline. Possibly this is a mental effect from playing away from home! Without reviewing every decision and classifying each as a “justified” free kick or an “umpiring error”, it is not possible to comment on favouritism as a concept. Nevertheless, let us look at whether teams get more free kicks at home, and if this results in more wins.
This is the data from all games since 2011. In the central plot, a darker colour means a higher frequency of data. On the right-hand side is the distribution of margins (positive means a home victory) and on the top is the distribution of free kick differential (positive means more home free kicks).
Firstly, home teams DO get more free kicks ($p<10^{-12}$). From the 1554 samples, on average, home sides get 1.70 more free kicks. And of course, home teams score more than their opposition, ($p<10^{-10}$), 7.97 points on average.
On the face of it you could easily make the connection that free kick differential correlates with the margin. The central plot tells the story that this is simply not true. The free kick differential is not a good predictor of the margin. There are many games where the free kick differential and margin have the opposite sign, almost as many as where they have the same sign. Just because I'm playing around with visualisations at the moment, here is a plot of the Inside 50s differential vs. the Margin:
This is a much better predictor.
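The comparison behind these plots is essentially a correlation plus a sign-agreement rate. Here is a stdlib-only sketch of that computation, run on synthetic stand-in data rather than the real AFL numbers:

```python
import random
from math import sqrt

random.seed(1)
n = 1554  # sample size used in the post

# Synthetic stand-ins (NOT the real data): margin and free-kick
# differential share only a weak common component.
common = [random.gauss(0, 1) for _ in range(n)]
margin = [30 * random.gauss(0, 1) + 2 * c for c in common]
fk_diff = [4 * random.gauss(0, 1) + c for c in common]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

r = pearson(fk_diff, margin)

# Share of games where the two quantities agree in sign; a value
# near 0.5 means the differential says little about who won.
agree = sum((m > 0) == (f > 0) for m, f in zip(margin, fk_diff)) / n
print(round(r, 3), round(agree, 3))
```

Swapping the synthetic arrays for the actual margin and differential columns would reproduce the comparison for either predictor.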
I aim to look at some of the other predictors (4-7) in a later piece after I have done some more work on it. For the moment I’m just going to consider some thoughts on how to proceed after doing this work!
***
The Scale of the Problem
There are a number of challenges facing this analysis. Firstly, let us assume the following model for predicting the outcome of a match
The team performance and player performance of each team may be predicted using their form. Environment factors include things such as HGA, the weather, and other possible factors such as if a team is coming off a short break or the bye.
To get a good measure of HGA, one would need to dial out, for each past game, the effect of team performance, player performance, and non-HGA environment factors to work out an adjusted “game HGA”. From this measure, a model with each of the relevant HGA “predictors” could then be fitted.
Without doing any of the quantitative measurements, it’s easy to argue that this is going to be very difficult at best. The HGA is embedded in the team and player performance too. Although these can be predicted from past data, the full effect of HGA will be difficult to isolate. Furthermore, after removing player and team performance bias, the question remains of how to account for other environmental factors. It will likely be necessary to fit all environmental predictors (HGA, weather, etc.) simultaneously.
Then there are other problems. Is it possible that each venue has its own HGA independent of other factors? Does this change over time, i.e. how does stadium development affect this?
While I have a decent grasp on team and player performance, my model currently neglects to take weather into account (more on this in a future post I hope) and already includes HGA bias for the team and player performance. I am not in a position to attempt this quantitatively at this stage.
Nevertheless, I have some better ideas of how to proceed with this difficult problem. Firstly, I need to use player and team performance to quantify a residual “environmental” margin for each game (encompassing HGA, weather effects and noise), then examine the effects of venue, travel time, days between matches, and determine a way of describing the effect of weather.
It’s easy to see why a simple measure of HGA is attractive.
To be continued.
|
2020-09-19 21:50:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39916205406188965, "perplexity": 1328.9332981924683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00412.warc.gz"}
|
https://math.stackexchange.com/questions/2627610/is-this-operator-identity-true
|
# Is this Operator identity true?
Let $C$ be a self-adjoint, compact operator from a Hilbert space $H$ onto itself. Let the decomposition of $C$ be $C = S^*S$, where $S^*$ is the adjoint of $S$. Assuming $C$ is invertible, is the following true?
$$S(C)^{-1}S^* \text{ is the identity operator.}$$
In vector spaces, I believe it holds but I am not sure about the case with general operators.
• What do you mean by the case of general operator; this operator is from a Hilbert space into itself so it is a vector space ! – The_lost Jan 30 '18 at 6:41
• I meant finite-dimensional vector spaces. Sorry for the confusion. – Enayat Jan 30 '18 at 22:11
One way of approaching this exercise is to note that the fact that there is an operator on $H$ that is both compact and invertible tells you quite a lot about $H$.
Recall that if $K$ is a compact operator on a Hilbert space and $B$ is a bounded operator on the same Hilbert space then $BK$ is compact.
It follows that if $C$ is a compact operator on a Hilbert space, and $C$ is invertible, then the identity operator $I$ on that same Hilbert space is compact. (Proof: $C^{-1}$ is a bounded operator, so take $B = C^{-1}$ and $K = C$ in the previous recollection to deduce that $C^{-1} C = I$ is compact.)
It is also true, however, that if the identity operator on a Hilbert space is compact, then the Hilbert space must be finite dimensional. See for example the information in this answer: Invertibility of compact operators in infinite-dimensional Banach spaces.
Once you know that the Hilbert space $H$ on which $C$ acts is finite dimensional, the fact that $C = S^* S$ is invertible implies that $S$ is invertible (because in a finite dimensional Hilbert space, a product of linear operators can only be invertible if each factor is itself invertible). But this means $C^{-1}$ can be expressed as the product of invertible operators: $C^{-1} = (S^* S)^{-1} = S^{-1} (S^*)^{-1}$.
But then $$SC^{-1} S^{*} = S (S^{*} S)^{-1} S^{*} = S (S^{-1} (S^{*})^{-1}) S^* = (SS^{-1})((S^*)^{-1} S^*) = I \cdot I = I,$$ as desired.
• Thanks for the answer. I have a question. Suppose $H=R^m$, but $S:\ L^2(X,\rho) \rightarrow R^m$ such that $C = S^*S$. Here $X$ is some finite dimensional vector space and $\rho$ a measure on it. Does it still hold? – Enayat Jan 30 '18 at 22:17
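For the finite-dimensional case the answer reduces to, the identity can also be checked concretely. A minimal sketch with a real invertible $2\times 2$ matrix $S$ (so $S^*$ is just the transpose), using exact rational arithmetic:

```python
from fractions import Fraction as F

# Check S (S^T S)^{-1} S^T = I for a real invertible 2x2 matrix S.
S = [[F(2), F(1)],
     [F(0), F(3)]]  # det = 6, so S is invertible

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv2(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

C = mat_mul(transpose(S), S)          # C = S* S
I = mat_mul(mat_mul(S, inv2(C)), transpose(S))
print(I)  # the 2x2 identity
```

Because the arithmetic is exact, the result is literally the identity matrix, matching the algebraic cancellation in the answer.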
|
2019-10-15 01:59:12
|
|
https://nrich.maths.org/public/leg.php?code=5039&cl=3&cldcmpid=6650
|
Search by Topic
Resources tagged with Interactivities similar to How Much Can We Spend?:
There are 154 results
Broad Topics > Information and Communications Technology > Interactivities
Number Pyramids
Stage: 3 Challenge Level:
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
Semi-regular Tessellations
Stage: 3 Challenge Level:
Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?
Partitioning Revisited
Stage: 3 Challenge Level:
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4
Cosy Corner
Stage: 3 Challenge Level:
Six balls of various colours are randomly shaken into a triangular arrangement. What is the probability of having at least one red in the corner?
Got It
Stage: 2 and 3 Challenge Level:
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
More Number Pyramids
Stage: 3 and 4 Challenge Level:
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
More Magic Potting Sheds
Stage: 3 Challenge Level:
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
Stars
Stage: 3 Challenge Level:
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
Shear Magic
Stage: 3 Challenge Level:
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Tilted Squares
Stage: 3 Challenge Level:
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
Online
Stage: 2 and 3 Challenge Level:
A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.
Square Coordinates
Stage: 3 Challenge Level:
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
Cops and Robbers
Stage: 2 and 3 Challenge Level:
Can you find a reliable strategy for choosing coordinates that will locate the robber in the minimum number of guesses?
Konigsberg Plus
Stage: 3 Challenge Level:
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Two's Company
Stage: 3 Challenge Level:
7 balls are shaken in a container. You win if the two blue balls touch. What is the probability of winning?
Picturing Triangle Numbers
Stage: 3 Challenge Level:
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
See the Light
Stage: 2 and 3 Challenge Level:
Work out how to light up the single light. What's the rule?
First Connect Three for Two
Stage: 2 and 3 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
Attractive Tablecloths
Stage: 4 Challenge Level:
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
A Tilted Square
Stage: 4 Challenge Level:
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
Flip Flop - Matching Cards
Stage: 1, 2 and 3 Challenge Level:
A game for 1 person to play on screen. Practise your number bonds whilst improving your memory
Multiplication Tables - Matching Cards
Stage: 1, 2 and 3 Challenge Level:
Interactive game. Set your own level of challenge, practise your table skills and beat your previous best score.
Balancing 3
Stage: 3 Challenge Level:
Mo has left, but Meg is still experimenting. Use the interactivity to help you find out how she can alter her pouch of marbles and still keep the two pouches balanced.
The Triangle Game
Stage: 3 and 4 Challenge Level:
Can you discover whether this is a fair game?
Lost
Stage: 3 Challenge Level:
Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses.
Diamond Mine
Stage: 3 Challenge Level:
Practise your diamond mining skills and your x,y coordination in this homage to Pacman.
Fifteen
Stage: 2 and 3 Challenge Level:
Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15.
Triangles in Circles
Stage: 3 Challenge Level:
How many different triangles can you make which consist of the centre point and two of the points on the edge? Can you work out each of their angles?
Volume of a Pyramid and a Cone
Stage: 3
These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
Overlap
Stage: 3 Challenge Level:
A red square and a blue square overlap so that the corner of the red square rests on the centre of the blue square. Show that, whatever the orientation of the red square, it covers a quarter of the blue square.
Subtended Angles
Stage: 3 Challenge Level:
What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it?
Isosceles Triangles
Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Shuffles Tutorials
Stage: 3 Challenge Level:
Learn how to use the Shuffles interactivity by running through these tutorial demonstrations.
Top Coach
Stage: 3 Challenge Level:
Carry out some time trials and gather some data to help you decide on the best training regime for your rowing crew.
Balancing 2
Stage: 3 Challenge Level:
Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.
First Connect Three
Stage: 2 and 3 Challenge Level:
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Balancing 1
Stage: 3 Challenge Level:
Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.
Drips
Stage: 2 and 3 Challenge Level:
An animation that helps you understand the game of Nim.
Shuffle Shriek
Stage: 3 Challenge Level:
Can you find all the 4-ball shuffles?
An Unhappy End
Stage: 3 Challenge Level:
Two engines, at opposite ends of a single track railway line, set off towards one another just as a fly, sitting on the front of one of the engines, sets off flying along the railway line...
Poly-puzzle
Stage: 3 Challenge Level:
This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.
Stage: 4 and 5 Challenge Level:
This is an interactivity in which you have to sort the steps in the completion of the square into the correct order to prove the formula for the solutions of quadratic equations.
Sliding Puzzle
Stage: 1, 2, 3 and 4 Challenge Level:
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
You Owe Me Five Farthings, Say the Bells of St Martin's
Stage: 3 Challenge Level:
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
Bow Tie
Stage: 3 Challenge Level:
Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling.
Magic Potting Sheds
Stage: 3 Challenge Level:
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
Countdown
Stage: 2 and 3 Challenge Level:
Here is a chance to play a version of the classic Countdown Game.
Nim-interactive
Stage: 3 and 4 Challenge Level:
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
Teddy Town
Stage: 1, 2 and 3 Challenge Level:
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Which Spinners?
Stage: 3 and 4 Challenge Level:
Can you work out which spinners were used to generate the frequency charts?
|
2016-07-28 21:03:53
|
|
http://www.dummies.com/how-to/content/using-the-sum-rule-for-simplifying-a-series.html
|
The Sum Rule for integration allows you to split a sum inside an integral into the sum of two separate integrals. Similarly, you can break a sum inside a series into the sum of two separate series:
For example:
A little algebra allows you to split this fraction into two terms:
Now the rule allows you to split this result into two series:
This sum of two series is equivalent to the series that you started with. As with the Sum Rule for integration, expressing a series as a sum of two simpler series tends to make problem-solving easier. Generally speaking, as you proceed onward with series, any trick you can find to simplify a difficult series is a good thing.
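The article's worked example is an image that did not survive extraction, so the series below is a hypothetical stand-in chosen to illustrate the same split:

```python
# Hypothetical example (not the article's own): split the series
# sum_{n>=0} (2**n + 3**n) / 6**n into two geometric series,
# sum (1/3)**n + sum (1/2)**n = 3/2 + 2 = 7/2.
N = 60  # enough terms for the partial sums to agree to ~1e-10
combined = sum((2**n + 3**n) / 6**n for n in range(N))
split = sum((1/3)**n for n in range(N)) + sum((1/2)**n for n in range(N))
print(round(combined, 9), round(split, 9))  # both 3.5
```

Each piece of the split is a geometric series with a known closed form, which is exactly the simplification the rule is meant to enable.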
|
2015-09-05 14:43:38
|
|
https://math.stackexchange.com/questions/961089/combinatorial-or-algebraic-proof
|
# Combinatorial or algebraic proof
I am having trouble proving this identity using a combinatorial or algebraic proof. As someone pointed out to me, it is somehow related to the Pascal's triangle recurrence.
$$\sum_{i=0}^k \binom{n+i}{i} = \binom{n+k+1}{k}$$
I found a question where this equation was posted but didn't understand any of the answers there... Combinatorial proofs - how?
Could someone help me out?
• So if you have $15$ names, you can EITHER seat the first $12$ on the jury and send the other three home, OR seat the first $13$ and choose one to be an alternate who will fill in if one of the $12$ falls ill, OR seat the first $14$ and choose two of those as alternates, OR seat the first $15$ and choose three of those as alternates. And the total number of ways to do that is the same as the number of ways to choose $3$ out of $16$. And the question is then: what are the $16$ things and what are the $3$ that are chosen that somehow determine who are the $12$ jurors and who are the alternates? – Michael Hardy Oct 6 '14 at 18:36
• – user84413 Oct 6 '14 at 20:37
Expanding the comment above into an answer: $$\binom{12}0 + \binom{13}1 + \binom{14}2 + \binom{15}3 = \binom{16}3,$$ which is the same as $$\binom{12}{12}+ \binom{13}{12} + \binom{14}{12} + \binom{15}{12} = \binom{16}3.$$ Here's why this works: You have your list of $15$ names plus one dummy name. You will choose three of the $16$ names.
If the dummy is not included among those three, then those three are the alternates and you have $\dbinom{15}{3}$ ways that can happen.
If the dummy is included among the three chosen, then there are two others. If the $15$th actual name is not included among those two, then those are the two alternates. There are $\dbinom{14}{2}$ ways that can happen.
If the dummy and the $15$th name are among the three chosen, then there is one other. If that one other is not the $14$th actual name, then that one is the alternate. There are $\dbinom{13}1$ ways that can happen.
If the dummy and the $15$th and $14$th names are among those chosen, then there are no alternates. There are $\dbinom{12}0$ ways that can happen.
Thus the $15$th person is an alternate only if the dummy is not chosen and the $15$th person is. The $14$th person is an alternate only if the dummy and the $15$th person are not chosen and the $14$th person is.
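The identity (and the jury example from the comment) is easy to confirm numerically; a quick brute-force check, not a proof:

```python
import math

def lhs(n, k):
    # sum_{i=0}^{k} C(n+i, i)
    return sum(math.comb(n + i, i) for i in range(k + 1))

def rhs(n, k):
    # C(n+k+1, k)
    return math.comb(n + k + 1, k)

# Check the identity over a grid of small n, k...
assert all(lhs(n, k) == rhs(n, k) for n in range(10) for k in range(10))

# ...including the jury example: C(12,0)+C(13,1)+C(14,2)+C(15,3) = C(16,3).
print(lhs(12, 3), rhs(12, 3))  # 560 560
```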
|
2020-02-26 20:04:58
|
|
http://mathhelpforum.com/pre-calculus/186614-inverse-trig-function-question.html
|
# Math Help - Inverse Trig Function Question.
1. ## Inverse Trig Function Question.
I need to solve 2sin(eˆ(x/4))+1=0 for x
My attempted solution:
sin(eˆ(x/4)) = -0.5
arcsin(-0.5) = eˆ(x/4)
I understand why this is where I went wrong;
because eˆ(x/4) is never negative, but I don't understand how I would solve this equation, and more importantly I don't understand why I couldn't use inverse trig functions to solve this problem. I thought that if sin x = y then arcsin y = x
2. ## Re: Inverse Trig Function Question.
Originally Posted by nicksbyman
I need to solve 2sin(eˆ(x/4))+1=0 for x
My attempted solution:
sin(eˆ(x/4)) = -0.5
arcsin(-0.5) = eˆ(x/4)
I understand why this is where I went wrong;
because eˆ(x/4) is never negative, but I don't understand how I would solve this equation, but more importantly I don't understand why I couldn't use inverse trig functions to solve this problem.
You need to realise that the sine function is negative in the third and fourth quadrants, and has a period of $\displaystyle 2\pi$, so
\displaystyle \begin{align*} \sin{\left(e^{\frac{x}{4}}\right)} &= -\frac{1}{2} \\ e^{\frac{x}{4}} &= \left\{\pi + \arcsin{\left(\frac{1}{2}\right)}, 2\pi - \arcsin{\left(\frac{1}{2}\right)} \right\} + 2\pi n \textrm{ where }n \in \mathbf{Z^{+} \cup \{0\}} \end{align*}
We choose $\displaystyle n$ to only represent nonnegative integers, because as you said, the exponential function is always positive. Go from here.
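A quick numerical check of the branch values in the answer (a sketch that only enumerates the first few solutions):

```python
import math

# Following the answer: e^{x/4} must be 7*pi/6 or 11*pi/6, plus any
# non-negative multiple of 2*pi, so x = 4*ln(base + 2*pi*n).
def solutions(n_max=2):
    xs = []
    for n in range(n_max):
        for base in (7 * math.pi / 6, 11 * math.pi / 6):
            xs.append(4 * math.log(base + 2 * math.pi * n))
    return sorted(xs)

for x in solutions():
    # Each x should satisfy the original equation 2*sin(e^{x/4}) + 1 = 0.
    assert abs(2 * math.sin(math.exp(x / 4)) + 1) < 1e-9
```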
|
2016-05-05 19:10:48
|
|
https://mathhothouse.me/2015/03/05/coffee-time-mathematics-any-number-via-three-twos/
|
## Coffee time mathematics — any number via three twos
Problem:
Here’s a witty algebraic brain teaser that had amused participants of a congress of physicists in the erstwhile USSR. The problem is to represent any positive whole number (i.e., any positive integer) using three twos and mathematical symbols.
Solution:
Let us take a particular case, and think “inductively”. Suppose we are given the number 3. Then, the problem is solved thus:
$3=-\log_{2} \log_{2} \sqrt{\sqrt{\sqrt{2}}}$.
It is easy to see that the equation is true. Indeed,
$\sqrt{\sqrt{\sqrt{2}}}= ((2^{1/2})^{1/2})^{1/2}= 2^{\frac{1}{2^{3}}}=2^{{2}^{-3}}$.
$\log_{2}2^{2^{-3}}=2^{-3}$ and $-\log_{2}2^{-3}=3$.
If we were given the number 5, we would proceed in the same manner:
$5=-\log_{2}\log_{2}\sqrt{\sqrt{\sqrt{\sqrt{\sqrt{2}}}}}$.
It will be seen that we have made use of the fact that the index 2 is dropped when writing the square root.
The general solution looks like this. If the given number is $N$, then
$N=-\log_{2}\log_{2}\underbrace{\sqrt{\sqrt{\ldots \sqrt{\sqrt{2}}}}}_{N\ \text{times}}$,
the number of radical signs equalling the given number $N$.
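The general solution is easy to verify numerically; a small sketch:

```python
import math

def three_twos(N):
    # Take N nested square roots of 2, i.e. 2 ** (2 ** -N), then
    # apply -log2(log2(.)) as in the general solution.
    value = 2.0
    for _ in range(N):
        value = math.sqrt(value)
    return -math.log2(math.log2(value))

# Recovers every N up to floating-point rounding.
for N in range(1, 20):
    assert abs(three_twos(N) - N) < 1e-6
```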
More later,
Nalin Pithwa
|
2020-09-25 21:00:51
|
|
https://doc.cgal.org/5.0/Kernel_d/group__PkgKernelDLinAlgConcepts.html
|
CGAL 5.0 - dD Geometry Kernel
Linear Algebra Concepts
## Concepts
conceptLinearAlgebraTraits_d
The data type LinearAlgebraTraits_d encapsulates two classes, Matrix and Vector, and many functions of basic linear algebra. An instance of data type Matrix is a matrix of variables of type NT. Accordingly, Vector implements vectors of variables of type NT. Most functions of linear algebra are checkable, i.e., the programs can be asked for a proof that their output is correct. For example, if the linear system solver declares a linear system $$A x = b$$ unsolvable it also returns a vector $$c$$ such that $$c^T A = 0$$ and $$c^T b \neq 0$$. More...
conceptMatrix
An instance of data type Matrix is a matrix of variables of number type NT. The types Matrix and Vector together realize many functions of basic linear algebra. More...
conceptVector
An instance of data type Vector is a vector of variables of number type NT. Together with the type Matrix it realizes the basic operations of linear algebra. More...
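The "checkable" certificate described for the linear system solver can be illustrated outside CGAL. This is plain Python, not CGAL's API; it only verifies a witness $c$ for one unsolvable system:

```python
from fractions import Fraction as F

# For an unsolvable system A x = b, a certificate c with c^T A = 0
# and c^T b != 0 proves unsolvability, as the concept description states.
A = [[F(1), F(2)],
     [F(2), F(4)]]   # row 2 = 2 * row 1, so A is singular
b = [F(1), F(3)]     # but b breaks that proportionality: no solution
c = [F(-2), F(1)]    # the certificate

cTA = [sum(c[i] * A[i][j] for i in range(2)) for j in range(2)]
cTb = sum(c[i] * b[i] for i in range(2))
assert cTA == [0, 0] and cTb != 0   # certificate checks out
```

Verifying the certificate requires only matrix-vector products, which is what makes the solver's output cheaply checkable.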
|
2022-07-04 03:29:38
|
|
https://zbmath.org/?q=an%3A1059.93025
|
## Observer-based sliding mode control for nonlinear state-delayed systems. (English) Zbl 1059.93025
The authors consider a state-delayed control system with unmeasurable states, mismatched parameter uncertainties and an unknown nonlinear function. First, under suitable assumptions, it is proved that the sliding mode dynamics, restricted to a suitable sliding surface, is asymptotically stable. Next, a sliding mode observer is constructed to estimate the state variables. By means of feasibility of some linear matrix inequalities, a sufficient condition is proved to ensure asymptotic stability of the closed-loop system composed of the state observer and the estimation error system. It is also proved that the proposed control scheme guarantees reachability of the sliding surfaces in both the state estimate space and the estimation error space. A simulation example is also presented.
### MSC:
93B12 Variable structure systems
93C23 Control/observation systems governed by functional-differential equations
15A39 Linear inequalities of matrices
93D15 Stabilization of systems by feedback
https://math.stackexchange.com/questions/2047446/get-a-new-average-after-adding-1-to-the-collection
# Get a new average after adding 1 to the collection
Let's say I have 20 people who rated a service, and their average rating was 4/5. If a new person rates it 3/5, what's the new average rating?
Note that the answer doesn't have to be exact. I am using the calculation for a rating component, so I am going to round the result since I only store integers.
Assuming that the true average rating (i.e. without any rounding) was exactly 4, then the total rating for all 20 people is $4 \times 20 = 80$. So the total rating after adding the 21st rating in is $80 + 3 = 83$, so the average rating across 21 people is $83/21 \approx 3.95$. If you don't store either an accurate value for the average, or the total of all ratings, you're going to have trouble adjusting the rating as new ones come in since rounding will tend to push everything back to the previous value.
EDIT for example:
Let's suppose you already have 100 ratings, with a total of 374, so the average is 3.74. Then if the 101st user gives a rating of ...
1, the new average will be $375/101 = 3.71$
2, the new average will be $376/101 = 3.72$
3, the new average will be $377/101 = 3.73$
4, the new average will be $378/101 = 3.74$
5, the new average will be $379/101 = 3.75$
If you just store that value rounded to the nearest whole number, you'll say all of those are 4. Then as new ratings come in, you'll probably keep rounding the result to 4 for as long as you like - even if a million people all give a rating of 1, if you update after every rating you'll never see the value shift. Storing to 2 decimal places means that the same problem will happen once you reach a few thousand ratings.
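The update rule described in this answer can be sketched in a few lines of Python (the function name is my own); the second half demonstrates the freezing effect that rounding causes:

```python
def update_average(avg, count, new_rating):
    """Exact new average after one more rating, computed from the
    running average alone (no per-user storage needed)."""
    return (avg * count + new_rating) / (count + 1)

# 20 ratings averaging exactly 4, then a 21st rating of 3:
print(round(update_average(4.0, 20, 3), 2))   # 3.95

# The pitfall: feeding the *rounded* average back in freezes it.
avg, count = 4, 100
for _ in range(1000):                  # a thousand 1-star ratings
    avg = round(update_average(avg, count, 1))
    count += 1
print(avg)                             # still 4 -- rounding absorbs every vote
```

Storing either the full-precision average or the running sum avoids the frozen value; only the displayed number should be rounded.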
• Thank you. No, I don't store the value for each user. The user is going to rate, and the new average will be based on the existing average + the new rating. Does your equation mean that EACH user voted as 4? How will it be pushed to the previous value, can you elaborate? – Jacky Dec 7 '16 at 2:56
• Let's say the first user voted as 3/5, the second user voted as 2/5, the third user voted as 5/5, how do you calculate the average? – Jacky Dec 7 '16 at 2:59
• What if I store my average as 3.34. Will this be better? and how? – Jacky Dec 7 '16 at 3:04
• You don't need to store each individual's rating. However, you should store the sum of all ratings, or the average of all ratings, with a decent amount of accuracy. So yes, storing it as 3.34 will be better, because then it won't be until you have a few thousand ratings that new ratings become unable to shift the average. I'll add a little info to give some better context for why that is the case. – ConMan Dec 7 '16 at 4:55
http://fluidsengineering.asmedigitalcollection.asme.org/article.aspx?articleid=2525711
Research Papers: Fundamental Issues and Canonical Flows
Flow Kinematics in Variable-Height Rotating Cylinder Arrays
Author and Article Information
Anna E. Craig
Department of Mechanical Engineering,
Stanford University,
Stanford, CA 94035
e-mail: craig0a@stanford.edu
John O. Dabiri
Professor
Department of Civil and
Environmental Engineering;
Department of Mechanical Engineering,
Stanford University,
Stanford, CA 94035
Jeffrey R. Koseff
Professor
Department of Civil and
Environmental Engineering,
Stanford University,
Stanford, CA 94035
Contributed by the Fluids Engineering Division of ASME for publication in the JOURNAL OF FLUIDS ENGINEERING. Manuscript received January 28, 2016; final manuscript received May 12, 2016; published online July 15, 2016. Assoc. Editor: Mark F. Tachie.
J. Fluids Eng 138(11), 111203 (Jul 15, 2016) (11 pages) Paper No: FE-16-1062; doi: 10.1115/1.4033676 History: Received January 28, 2016; Revised May 12, 2016
Abstract
Experimental data are presented for large arrays of rotating, variable-height cylinders in order to study the dependence of the three-dimensional mean flows on the height heterogeneity of the array. Elements in the examined arrays were spatially arranged in the same staggered paired configuration, and the heights of each element pair varied up to ±37.5% from the mean height (kept constant across all arrays), such that the arrays were vertically structured. Four vertical structuring configurations were examined at a nominal Reynolds number (based on freestream velocity and cylinder diameter) of 600 and nominal tip-speed ratios of 0, 2, and 4. It was found that the vertical structuring of the array could significantly alter the mean flow patterns. Most notably, a net vertical flow into the array from above was observed, which was augmented by the arrays' vertical structuring, showing a 75% increase from the lowest to highest vertical flows (as evaluated at the maximum element height, at a single rotation rate). This vertical flow into the arrays is of particular interest as it represents an additional mechanism by which high streamwise momentum can be transported from above the array down into the array. An evaluation of the streamwise momentum resource within the array indicates up to a 56% increase in the incoming streamwise velocity to the elements (from the lowest to highest ranking arrays, at a single rotation rate). These arrays of rotating cylinders may provide insight into the flow kinematics of arrays of vertical axis wind turbines (VAWTs). In a physical VAWT array, an increase in incoming streamwise flow velocity to a turbine corresponds to a (cubic) increase in the power output of the turbine. Thus, these results suggest a promising approach to increasing the power output of a VAWT array.
References
Chan, A. S. , Dewey, P. A. , Jameson, A. , Liang, C. , and Smits, A. J. , 2011, “ Vortex Suppression and Drag Reduction in the Wake of Counter-Rotating Cylinders,” J. Fluid Mech., 679, pp. 343–382.
Guo, X. , Lin, J. , Tu, C. , and Wang, H. , 2009, “ Flow Past Two Rotating Circular Cylinders in a Side-by-Side Arrangement,” J. Hydrodyn., 21(2), pp. 143–151.
Kumar, S. , Gonzalez, B. , and Probst, O. , 2011, “ Flow Past Two Rotating Cylinders,” Phys. Fluids, 23(1), p. 014102.
Ueda, Y. , Kida, T. , and Iguchi, M. , 2013, “ Steady Approach of Unsteady Low-Reynolds-Number Flow Past Two Rotating Circular Cylinders,” J. Fluid Mech., 736, pp. 414–443.
Yoon, H. S. , Kim, J. H. , Chun, H. H. , and Choi, H. J. , 2007, “ Laminar Flow Past Two Rotating Circular Cylinders in a Side-by-Side Arrangement,” Phys. Fluids, 19(12), p. 128103.
Yoon, H. S. , Chun, H. H. , Kim, J. H. , and Park, I. L. R. , 2009, “ Flow Characteristics of Two Rotating Side-by-Side Circular Cylinder,” Comput. Fluids, 38(2), pp. 466–474.
Whittlesey, R. W. , Liska, S. , and Dabiri, J. O. , 2010, “ Fish Schools as a Basis for Vertical Axis Wind Turbine Farm Design,” Bioinspiration Biomimetics, 5(3), p. 035005. [PubMed]
Dabiri, J. O. , 2011, “ Potential Order-of-Magnitude Enhancement of Wind Farm Power Density Via Counter-Rotating Vertical-Axis Wind Turbine Arrays,” J. Renewable Sustainable Energy, 3(4), p. 043104.
Kinzel, M. , Mulligan, Q. , and Dabiri, J. O. , 2012, “ Energy Exchange in an Array of Vertical Axis Wind Turbines,” J. Turbul., 13(38), pp. 1–13.
Craig, A. , Dabiri, J. , and Koseff, J. , 2016, “ A Kinematic Description of the Key Flow Characteristics in an Array of Finite-Height Rotating Cylinders,” ASME J. Fluids Eng., 138(7), p. 070906.
Weitzman, J. S. , Zeller, R. B. , Thomas, F. I. M. , and Koseff, J. R. , 2015, “ The Attenuation of Current- and Wave-Driven Flow Within Submerged Multispecific Vegetative Canopies,” Limnol. Oceanogr., 60(6), pp. 1855–1874.
Cheng, H. , and Castro, I. P. , 2002, “ Near Wall Flow Over Urban-Like Roughness,” Boundary-Layer Meteorol., 104(2), pp. 229–259.
Kanda, M. , 2006, “ Large-Eddy Simulation on the Effects of Surface Geometry of Building Arrays on Turbulent Organized Structures,” Boundary-Layer Meteorol., 118(1), pp. 151–168.
Xie, Z. T. , Coceal, O. , and Castro, I. P. , 2008, “ Large-Eddy Simulation of Flows Over Random Urban-Like Obstacles,” Boundary-Layer Meteorol., 129(1), pp. 1–23.
Jiang, D. , Jiang, W. , Liu, H. , and Sun, J. , 2008, “ Systematic Influence of Different Building Spacing, Height, and Layout on Mean Wind and Turbulent Characteristics Within and Over Urban Building Arrays,” Wind Struct., 11(4), pp. 275–289.
Hagishima, A. , Tanimoto, J. , Nagayama, K. , and Meno, S. , 2009, “ Aerodynamic Parameters of Regular Arrays of Rectangular Blocks With Various Geometries,” Boundary-Layer Meteorol., 132(2), pp. 315–337.
Millward-Hopkins, J. T. , Tomlin, A. S. , Ma, L. , Ingham, D. , and Pourkashanian, M. , 2011, “ Estimating Aerodynamic Parameters of Urban-Like Surfaces With Heterogeneous Building Heights,” Boundary-Layer Meteorol., 141(3), pp. 443–465.
Ferreira, C. S. , Madsen, H. A. , Barone, M. , Roscher, B. , Deglaire, P. , and Arduin, I. , 2014, “ Comparison of Aerodynamic Models for Vertical Axis Wind Turbines,” J. Phys.: Conf. Ser., 524, p. 012125.
Shamsoddin, S. , and Porte-Agel, F. , 2014, “ Large Eddy Simulation of Vertical Axis Wind Turbine Wakes,” Energies, 7(2), pp. 890–912.
Archer, C. , Xie, S. , Ghaisas, N. , and Meneveau, C. , 2015, “ Benefits of Vertically-Staggered Wind Turbines From Theoretical Analysis and Large-Eddy Simulations,” North American Wind Energy Academy Symposium, Blacksburg, VA, June 9–11, pp. 3–6.
Nikora, V. , Ballio, F. , Coleman, S. , and Pokrajac, D. , 2013, “ Spatially Averaged Flows Over Mobile Beds: Definitions, Averaging Theorems, and Conservation Equations,” J. Hydraul. Eng., 139(8), pp. 803–811.
Craig, A. , and Dabiri, J. O. , 2015, “ V-Shaped Arrangements of Turbines,” U.S. Patent No. 9,175,669 B2.
Efron, B. , and Tibshirani, R. J. , 1993, An Introduction to the Bootstrap (Monographs on Statistics and Applied Probability, Vol. 57), Chapman & Hall, New York.
Theunissan, R. , Sante, A. D. , Riethmuller, M. L. , and den Braembussche, R. A. V. , 2008, “ Confidence Estimation Using Dependent Circular Block Bootstrapping: Application to the Statistical Analysis of PIV Measurements,” Exp. Fluids, 44(4), pp. 591–596.
Patton, A. , 2014, “matlab Codes.”
Cal, R. B. , Lebrón, J. , Castillo, L. , Kang, H. S. , and Meneveau, C. , 2010, “ Experimental Study of the Horizontally Averaged Flow Structure in a Model Wind-Turbine Array Boundary Layer,” J. Renewable Sustainable Energy, 2(1), p. 013106.
Calaf, M. , Meneveau, C. , and Meyers, J. , 2010, “ Large Eddy Simulation Study of Fully Developed Wind-Turbine Array Boundary Layers,” Phys. Fluids, 22(1), p. 015110.
Yue, W. , Meneveau, C. , Parlange, M. , Whu, W. , van Hout, R. , and Katz, J. , 2007, “ A Comparative Quadrant Analysis of Turbulence in a Plant Canopy,” Water Resources Research, 43(5), (epub May 17, 2007).
Figures
Fig. 1
Schematics of the spatial, rotational, and height configurations of the arrays (within the region of interest). Each symbol indicates the position of an element. The color of the symbol indicates the rotational direction as viewed from above: black is clockwise and gray is counterclockwise. The symbol indicates the height of the element: ◁ indicates a short element, ○ indicates an average element, and indicates a tall element. The red (light gray) lines indicate the transverse locations and streamwise extents of the vertical data sheets taken in each array. The blue (dark gray) box indicates the position and extent of the horizontal data sheets taken.
Fig. 2
Experimental setup illustrations. Left: close-up sketch showing element mounting to gears and plate structure holding gears in place. Right: photo of full array in the flume. Figure adapted from Ref. [10].
Fig. 6
Comparison between quantified meander of flow in the array and the performance of the array (for α = 2 data). The symbol indicates the height of the data sheet: ◁ indicates that the data sheet was at or below z = 5D, ○ indicates that the data sheet was at or below 8D, and indicates that the data sheet was at or below 11D. Multiple symbols come from the different arrays. The solid line is the linear best fit: R2 = 0.86.
Fig. 5
Time-averaged transverse flows and streamlines in the three horizontal data planes for the (a) sawtooth and (b) wedge arrays, α = 2. Here, as in Fig. 1, the symbol indicates the height of the element: ◁ indicates a short element, ○ indicates an average element, and indicates a tall element. The color of the symbol indicates the rotational direction as viewed from above (if the element intersects the given data sheet; if the element is below the height of the data sheet, the symbol is left white): black indicates clockwise rotation and gray indicates counterclockwise rotation. Note that the missing data along the rows of elements are due to shadowing of the laser sheet.
Fig. 4
Comparison of streamwise momentum flux terms for α = 2. Solid line: $\langle\bar{u}\rangle\langle\bar{w}\rangle$, dashed line: $\langle\overline{u'w'}\rangle$, and dotted line: $\langle\tilde{\bar{u}}\tilde{\bar{w}}\rangle$. Note that on this scale, for the sawtooth, wedge, and random arrays, the tallest elements are at height 1, the average elements are at height 0.73, and the short elements are at height 0.45, as indicated by the grid lines. While these latter two heights have no meaning for the uniform array, the grid lines have been retained for easier comparison between arrays.
Fig. 3
Comparison of Cuin (normalized to maximum measured value) across arrays and rotation rates. Light gray indicates α = 0, medium gray indicates α = 2, and dark gray indicates α = 4.
Fig. 7
Time-averaged vertical flow and streamlines for a selection of the vertical data sheets taken in each array, α = 2. Solid black rectangles indicate that the sheet intersects with a clockwise rotating cylinder. Rectangle outlines indicate the locations of the rows which the laser sheet does not intersect, with the height of the element closest to the laser sheet being indicated. Please note that due to line of sight blocking by other elements along the rows, data were not able to be collected between elements in a row. Interpolation was used to fill these regions of missing data, and features “within” the outlined cylinders may be artifacts of the interpolation.
Fig. 8
Comparison of the flow angle behind each cylinder pair with the local tip-speed ratio (α = 2 data). The symbol indicates the height of the element pair: ◁ indicates a short element pair, ○ indicates an average element pair, and indicates a tall element pair. The solid line indicates the linear best fit: R2 = 0.93.
Fig. 9
Illustrations of vertical flow prediction based on vertical structuring of array: top panel—sawtooth array, y=−1D and bottom panel—wedge array, y = −3D
Fig. 10
Comparison of model predicted vertical flow and measured vertical flow at the maximum height of the array (α = 2 data). The symbol indicates the array: indicates the sawtooth array, ○ indicates the wedge array, △ indicates the random array, and ◇ indicates the uniform height array. The solid line indicates the linear best fit: R2 = 0.99.
Fig. 11
Time–space averaged stress fraction (a) and duration fraction (b) of u′w′ events in quadrant 1 (–⋅), quadrant 2 (–), quadrant 3 (⋅⋅), and quadrant 4 (– –). Hole size = 0 for each of the four arrays.
https://www.scienceforums.net/topic/98956-continuity-and-uncountability/page/6/?tab=comments
# Continuity and uncountability
## Recommended Posts
48 minutes ago, studiot said:
that was no reason to pour out vitriol.
I did no such thing. I characterized your error, not you personally. I added detail regarding unrestricted comprehension so readers could Google the relevant facts. What you SAID was garbage. That is an objective fact. Not vitriol. And "I think in pictures, not algebra," is unconvincing coming from someone whom I've seen lay out brilliant technical responses to questions of engineering math. You're not lacking in algebra by any means. You're simply lacking some basics in abstract math. Basics easily studied on Wikipedia.
Edited by wtf
##### Share on other sites
3 hours ago, studiot said:
However since the OP is indifferent to my help I see no purpose being served in my further presence in this thread.
I'm very grateful to your help. Sorry for not replying you more.
##### Share on other sites
2 minutes ago, pengkuan said:
I'm very grateful to your help. Sorry for not replying you more.
So did you catch my point about the different terminology used by different mathematicians?
This would be important to you since English is not your first language.
I have basically been restricting my input to try to help you make sense of what others are telling you.
##### Share on other sites
2 hours ago, uncool said:
No, that definition of finitude is not the definition used to prove that a set is countable, because countability and infinitude are different concepts.
Yes, you are right.
3 hours ago, uncool said:
However, I want to draw your attention to something. Note that every fraction appears after a finite number of steps. That is, if you gave me a fraction, I could tell you exactly at what step you reach it - and I could represent the number by writing it as "1 + 1 + 1 + 1" and eventually stop. For example, 2/4 is reached in the 12th step, that is, the 1+1+1+1+1+1+1+1+1+1+1+1th step.
So, the set of the rationals is countable because every i/j corresponds to a finite number n in the counting order. At the nth step, we stop the count. Although the set itself is infinite, all counting numbers are finite.
But for the power set of N, the set of all even numbers corresponds to the infinite binary sequence 1010101010...... To reach it we cannot stop counting because this sequence is not finite. Can we say that the power set of N is not countable because the counting numbers of infinite subsets are infinite?
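Both situations can be checked mechanically. The sketch below (helper names are my own) walks the grid of pairs (i, j) diagonal by diagonal — one common counting order, which happens to reproduce the "2/4 at the 12th step" count quoted above — and then builds the characteristic sequence of the even numbers (indexing N from 0, as in the 1010101010... string), of which any computation can only ever produce a finite prefix:

```python
from itertools import count

def fractions_in_counting_order():
    """Yield pairs (i, j) diagonal by diagonal (i + j constant),
    so each fraction i/j appears after finitely many steps."""
    for d in count(2):
        for i in range(1, d):
            yield (i, d - i)

def step_of(i, j):
    """The finite step at which i/j appears in this order."""
    for n, pair in enumerate(fractions_in_counting_order(), start=1):
        if pair == (i, j):
            return n

print(step_of(2, 4))    # 12

# The subset {even numbers} of N has the characteristic sequence
# 1 0 1 0 1 0 ...; we can only ever compute a finite prefix of it.
prefix = ''.join('1' if n % 2 == 0 else '0' for n in range(10))
print(prefix)           # 1010101010
```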
27 minutes ago, studiot said:
So did you catch my point about the different terminology used by different mathematicians?
This would be important to you since English is not your first language.
I have basically been restricting my input to try to help you make sense of what others are telling you.
Yes. I agree that I have difficulty in seizing the exact sense of the discussion.
##### Share on other sites
1 hour ago, pengkuan said:
So, the set of the rationals is countable because every i/j corresponds to a finite number n in the counting order. At nth step, we stop the count. Although the set itself is infinite, all counting numbers are finite.
I'm not sure what you mean by "at the nth step, we stop the count"; if you mean "for each i/j, we can stop the count at some n and be at i/j", yes. The other two parts are both correct.
1 hour ago, pengkuan said:
But for the power set of N, the set of all even numbers corresponds to the infinite binary sequence 1010101010...... To reach it we cannot stop counting because this sequence is not finite.
That is a way to look at it; that your attempt to match N to its powerset doesn't work for the set of all even numbers because there is no finite number that matches it.
1 hour ago, pengkuan said:
Can we say that the power set of N is not countable because the counting numbers of infinite subsets are infinite?
Not really. Countability is about the existence of some matching - some bijection; the fact that some map doesn't work as a bijection doesn't mean there can't be another. For example: the set {1, 1/2, 1/3, 1/4, 1/5, ...} with an added 0 (i.e. union with {0}) is countable, even though the map n -> 1/n clearly "misses" 0. It is countable because of the bijection 1 -> 0, n -> 1/(n - 1) for n ≥ 2.
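The shifted map can be written out explicitly (a small sketch; the function name is my own):

```python
from fractions import Fraction

def f(n):
    """Bijection from N = {1, 2, 3, ...} onto {0, 1, 1/2, 1/3, ...}:
    the map n -> 1/n misses 0, so shift every input by one slot:
    1 -> 0, and n -> 1/(n - 1) for n >= 2."""
    return Fraction(0) if n == 1 else Fraction(1, n - 1)

print([str(f(n)) for n in range(1, 6)])   # ['0', '1', '1/2', '1/3', '1/4']

# Injective on any initial segment: no two inputs collide.
values = [f(n) for n in range(1, 1001)]
assert len(values) == len(set(values))
```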
##### Share on other sites
10 hours ago, studiot said:
that was no reason to pour out vitriol.
I apologize. I get carried away sometimes. No personal malice intended.
##### Share on other sites
21 hours ago, pengkuan said:
However, I want to draw your attention to something. Note that every fraction appears after a finite number of steps. That is, if you gave me a fraction, I could tell you exactly at what step you reach it - and I could represent the number by writing it as "1 + 1 + 1 + 1" and eventually stop. For example, 2/4 is reached in the 12th step, that is, the 1+1+1+1+1+1+1+1+1+1+1+1th step.
19 hours ago, uncool said:
I'm not sure what you mean by "at the nth step, we stop the count"; if you mean "for each i/j, we can stop the count at some n and be at i/j", yes. The other two parts are both correct.
I was restating your explanation above with my words.
So, in the counting of $$\mathbb{Q}$$, the ratio i/j corresponds to a number n which is finite. For the ratio i/1, the counting number is: $n=\frac{i(i+1)}{2}$ So, if n is a finite number, i is also finite. Does this mean that the number i is not allowed to go to infinity? In this case, $$\mathbb{Q}$$ does not completely cover the plane $$\mathbb{N}\times \mathbb{N}$$. Can we say that $$\mathbb{Q}$$ is countable only because i and j are not allowed to have infinite values?
Edited by pengkuan
LaTex format
##### Share on other sites
On 11/21/2018 at 1:17 PM, pengkuan said:
So, if n is a finite number, i is also finite. Does this mean that the number i is not allowed to go to infinity?
I'm not sure what you mean by that. i can be an arbitrarily large finite number.
On 11/21/2018 at 1:17 PM, pengkuan said:
Can we say that Q is countable only because i and j are not allowed to have infinite value?
We can say that the rationals are countable because we can construct a bijection between the natural numbers and the rational numbers. That's all there is to it.
##### Share on other sites
On 2018/11/23 at 7:59 AM, uncool said:
I'm not sure what you mean by that. i can be an arbitrarily large finite number.
I mean that because you say “i can be an arbitrarily large finite number.”, i and j cannot be infinitely big, for example, a number with infinitely many digits like 9517452…… or $$10^{\infty }$$.
In this case, can we write the set of natural numbers as {1,2,3,…,n-1,n}, with n being a finite number with arbitrarily large value?
On 2018/11/23 at 7:59 AM, uncool said:
We can say that the rationals are countable because we can construct a bijection between the natural numbers and the rational numbers. That's all there is to it.
If i had value 9517452…… , then the ratio i/1 would not have corresponding natural number.
##### Share on other sites
2 hours ago, pengkuan said:
How often does it happen that you actually have a value like that?
I am not a violent person. But I suggest that if you come across a landlord who tells you that the rent per month in US$ amounts to 9517452……, then you are absolutely in the right to punch them on the nose, using up to medium strength.
##### Share on other sites
7 minutes ago, taeto said:
How often does it happen that you actually have a value like that? I am not a violent person. But I suggest that if you come across a landlord who tells you that the rent per month in US$ amounts to 9517452……, then you are absolutely in the right to punch them on the nose, using up to medium strength.
Hey, last time I rented the White House that was the rent.
But I hear Big T has doubled it since.
##### Share on other sites
I wonder if it might be helpful to point out the difference between the ordinal numbers and the cardinal numbers, since we seem to be restricting the discussion to the natural numbers $$\mathbb{N}$$.
So an ordinal number roughly speaking describes the position of an element in an ordered set. In contrast a cardinal number describes the size of a set, ordered or otherwise. Notice these are quite different concepts.
Now, by construction the natural numbers are ordered (there is a theorem that any set can be ordered - the proof is hellacious and not relevant here).
So it is fairly easy to see that, for any subset of $$\mathbb{N}$$ (it's ordered recall) if there exists a largest ordinal $$n$$ then this corresponds to the cardinality of our subset and it must be finite.
Otherwise the gloves are off. The largest non-finite ordinal, by an arbitrary convention is denoted as $$\omega$$. This is still an ordinal, and can in no way denote the cardinality of a non-finite subset of $$\mathbb{N}$$, e.g. $$\mathbb{N}$$ itself. For this we use the arbitrary symbol $$\aleph_0$$.
Any help?
Edited by Xerxes
##### Share on other sites
23 minutes ago, Xerxes said:
The largest non-finite ordinal, by an arbitrary convention is denoted as $$\omega$$
Oh my. You don't believe in $$\omega + 1$$? Perhaps you meant the smallest non-finite ordinal. And by order, perhaps you meant well-order. I'll leave it here as to not appear to be piling on.
ps -- Ok I'll pile on just a little bit more.
> So it is fairly easy to see that, for any subset of N (it's ordered recall) if there exists a largest ordinal n then this corresponds to the cardinality of our subset
Really? That's fairly easy to see? It's not even true as you expressed it, and I'm not even sure what you're trying to say. The smallest ordinal larger than any of the elements of {2, 4, 6} is 7, but the cardinality of that set is 3. The largest ordinal in the set is 6. I couldn't understand what you're getting at. "If there exists a largest ordinal"? There is no largest ordinal. Can you clarify your thoughts please?
Edited by wtf
##### Share on other sites
ps -- The larger point is that OP seems to believe that there are natural numbers that are infinite; and can't distinguish between the fact that there are infinitely many natural numbers, but each one is finite. On that basis, I don't think the ordinals are going to reduce the confusion in this thread.
If as @studiot says I sounded "vitriolic" my apologies once again. I am staying out of this thread from now on.
##### Share on other sites
On 2018/11/21 at 12:58 AM, uncool said:
That is a way to look at it; that your attempt to match N to its powerset doesn't work for the set of all even numbers because there is no finite number that matches it.
On 2018/11/24 at 11:22 PM, taeto said:
But I suggest that if you come across a landlord who tells you that the rent per month in US$ amounts to 9517452……, then you are absolutely in the right to punch them on the nose, using up to medium strength.
On 2018/11/26 at 3:10 AM, wtf said:
OP seems to believe that there are natural numbers that are infinite; and can't distinguish between the fact that there are infinitely many natural numbers, but each one is finite.
It seems that everyone thinks that natural numbers have finite values while the entire set in infinite. I'm OK with that.
But in this case, the length of the set of all even numbers is finite, because it's a natural number. Actually, one cannot pass from a finite number (the length of a finite set) to an infinite number (the length of an infinite set), which would be the finite set with its length stretched to infinity.
On 2018/11/24 at 11:30 PM, studiot said:
Hey, last time I rented the White House that was the rent.
But I hear Big T has doubled it since.
Actually, if the price is 1111....., you can double it, 2*1111...=2222...
On 2018/11/25 at 8:08 PM, Xerxes said:
I wonder if it might be helpful to point out the difference between the ordinal numbers and the cardinal numbers, since we seem to restricting the discussion to the natural numbers.
Thanks for your help. I think within the set of natural number, the ordinal numbers are natural numbers.
##### Share on other sites
25 minutes ago, pengkuan said:
But in this case, the length of the set of all even numbers is finite, because it's a natural number.
By the "length" of the set, do you mean the cardinality?
If so, why do you think it is a natural number?
Do you think the cardinality of all sets is a natural number?
26 minutes ago, pengkuan said:
Thanks for your help. I think within the set of natural number, the ordinal numbers are natural numbers.
The ordinal numbers are. But what about the cardinality?
2 hours ago, Strange said:
By the "length" of the set, do you mean the cardinality?
No. Length is the number of members of a series. So, it is a natural number.
1 hour ago, pengkuan said:
No. Length is the number of members of a series. So, it is a natural number.
That's what cardinality means. And no, that doesn't explain why it should be a natural number.
There are infinitely many even numbers. There is no natural number n such that the set of even numbers is in bijection with {0, 1, ..., n - 1}.
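To make the bijection point concrete, here is a small illustration (my own addition, not from the thread): the map n → 2n pairs every natural number with an even number and back again, which is exactly why the evens cannot be matched with any finite set {0, 1, ..., n − 1}.

```python
# The map n -> 2n is a bijection between the naturals and the even naturals,
# so there are "as many" evens as naturals -- and certainly not finitely many.

def to_even(n):
    """Send a natural number to an even natural number."""
    return 2 * n

def from_even(m):
    """Inverse map, defined on even naturals."""
    assert m % 2 == 0
    return m // 2

# Round-trip on a finite sample; the same identity holds for every natural n.
assert all(from_even(to_even(n)) == n for n in range(1000))
assert to_even(21) == 42
```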
10 hours ago, pengkuan said:
So, it is a natural number.
Why?
20 hours ago, uncool said:
That's what cardinality means. And no, that doesn't explain why it should be a natural number.
There are infinitely many even numbers. There is no natural number n such that the set of even numbers is in bijection with {0, 1, ..., n - 1}.
11 hours ago, Strange said:
Why?
I will change my theory to handle infinity.
Perhaps it would solve the philosophical confusion if a line is defined to be a set of ordered constructable points, where a point is a computable total function by at least one terminating algorithm. That way there is explicit clarity that there are no "holes" in our field of entities - except perhaps for the partial functions corresponding to non-terminating algorithms that cannot be ordered by their outputs - and that there is only a countable number of entities describable by mathematics.
2 hours ago, TheSim said:
Perhaps it would solve the philosophical confusion if a line is defined to be a set of ordered constructable points, where a point is a computable total function by at least one terminating algorithm. That way there is explicit clarity that there are no "holes" in our field of entities - except perhaps for the partial functions corresponding to non-terminating algorithms that cannot be ordered by their outputs - and that there is only a countable number of entities describable by mathematics.
The constructible real line doesn't satisfy the Intermediate value theorem. Hell of a poor model of the continuum, don't you agree? Contrary to your claim that there are no holes, the constructible real line is full of holes, one hole where each noncomputable real used to be. There are many Cauchy sequences that do not converge. Worst model of the continuum ever.
It also doesn't really get rid of the philosophical confusion - I'd even argue it adds to it. Someone should have a firm grasp of the basics of countability and set theory before attempting to get into computability theory.
7 hours ago, TheSim said:
Perhaps it would solve the philosophical confusion if a line is defined to be a set of ordered constructable points, where a point is a computable total function by at least one terminating algorithm.
I do not think that a line is a set of points, because points cannot fill all the holes. But this is another story.
7 hours ago, TheSim said:
there is only a countable number of entities describable by mathematics.
I agree that real numbers are countable.
And that's a nice demonstration of what I said.
The set of real numbers is not countable, pengkuan.
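For contrast with the countability claims above, here is a finite sketch of Cantor's diagonal argument (my own illustration, not from the thread): given any finite list of decimal expansions, one can build an expansion that differs from the n-th entry in its n-th digit. The same construction applied to an infinite listing shows the reals cannot be enumerated (modulo the usual care about 0.999... = 1).

```python
# A purported enumeration of reals in [0, 1), as decimal strings.
listing = [
    "0.500000",
    "0.333333",
    "0.142857",
    "0.718281",
]

def diagonal_differ(rows):
    """Build "0.d1d2..." whose n-th digit differs from rows[n]'s n-th digit."""
    digits = []
    for n, row in enumerate(rows):
        d = int(row[2 + n])               # n-th digit after "0."
        digits.append(str((d + 1) % 10))  # any different digit will do
    return "0." + "".join(digits)

missing = diagonal_differ(listing)
# It differs from every row at the diagonal position, so it is not listed.
assert all(missing[2 + n] != listing[n][2 + n] for n in range(len(listing)))
```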
http://math.stackexchange.com/questions/267377/limiting-of-power-sets
# Limiting of Power Sets
This is, I believe, a relatively simple set-theoretical question. I am, however, not sure of the answer. If we take a set, say $A$, and if we call the power set of $A$, $P_{1}(A)$, and we define $$P_{n}(A)=P_{1}(P_{n-1}(A)),$$ and then if we take the limit $$\lim_{n\to\infty} P_{n}(A),$$ is this set actually a set in NBG or does it lead to some sort of contradiction?
This is confusing notation: note that you are indexing by ordinals and not by real numbers, and the notation $\infty$ makes it look as if you are using the real numbers, where $\infty$ is the formal symbol for "the point beyond the edge of the universe".
In ZF we have the axiom schema of replacement which tells us that definable functions whose domain is a set have a set for an image. In particular the function $n\mapsto\mathcal P^n(A)$. Therefore $\{\mathcal P^n(A)\mid n\in\mathbb N\}$ is a set.
Furthermore the axiom of union tells us that if there is a set, then its union exists, namely if $X$ is a set then $\bigcup X=\{y\mid\exists u\in X: y\in u\}$ is a set.
The limit, if so, can reasonably be taken to be $\bigcup\{\mathcal P^n(A)\mid n\in\mathbb N\}$. However, we don't necessarily have $X\subseteq\mathcal P^n(X)$, and in particular this can fail for power sets. If $A$ is transitive, namely $B\in A\rightarrow B\subseteq A$, then this works out just fine.
You may also want to consider the product, $\prod_{n\in\mathbb N}\mathcal P^n(A)$. This is a non-empty product, in ZF, and again it is a set by similar considerations as with the union. One reason to consider it is that if we want to think about this as a directed system with $x\mapsto\{x\}$ as the map from $\mathcal P^n(A)\to\mathcal P^{n+1}(A)$, then the limit must embed $x$ into $\{\ldots x\ldots\}$, that is, an infinite number of braces, which is impossible in ZF. But replacing $\mathcal P^n(A)$ by the finite product of $\mathcal P^k(A)$ for $k\leq n$ (cardinality-wise this is the same) this is easily corrected.
Whereas the union corresponds to the direct limit, this is the inverse limit. And do note that they are very different, for one in their cardinality: the product is larger.
I'm feeling dirty after talking about direct and inverse limits... :-) But then again, with my recent invasion into the lands of iterated forcing, it seems reasonable to do so! :-) – Asaf Karagila Dec 30 '12 at 0:35
A quick summary: "Limit" in the question isn't defined, but any reasonable substitute for it leads to a perfectly good set. – Andreas Blass Dec 30 '12 at 1:06
Yes, that is a fine set in either NBG or ZF. Though it would be a bit more unambiguous to write $\bigcup_{n\in\omega}P_n(A)$ instead of the limit.
If $A=\varnothing$ then the result is the set of hereditarily finite sets.
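As a sanity check of the construction in these answers, here is a small sketch (my own addition, not from the page) that iterates the power set of $\varnothing$ with Python frozensets. The stage sizes grow as 1, 2, 4, 16, 65536, ..., and since each stage contains the previous one, the union of the first few stages is just the last stage computed.

```python
from itertools import combinations

def powerset(s):
    """The power set of a frozenset, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(
        frozenset(c)
        for r in range(len(items) + 1)
        for c in combinations(items, r)
    )

stage = frozenset()      # A = the empty set
union = set()            # accumulates the union of the stages
sizes = []
for _ in range(4):       # stop before the size-65536 stage
    stage = powerset(stage)
    sizes.append(len(stage))
    union |= stage

assert sizes == [1, 2, 4, 16]
# Each stage is contained in the next, so the union equals the last stage.
assert union == set(stage)
```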
"a bit more unambiguous" ==> A bit less ambiguous? :-) – amWhy Dec 30 '12 at 0:34
@amWhy: Your phrasing is a bit more unambiguous than Henning's! :-P – Asaf Karagila Dec 30 '12 at 0:35
@Asaf: You always see a way to find the humor in things! – amWhy Dec 30 '12 at 0:36
https://puzzling.stackexchange.com/questions/99315/what-was-matts-matchstick-puzzle
What was Matt's matchstick puzzle?
Something rather funny happened at our Puzzlers' meet the other day.
"Oh hey buddy, can you hand me exactly 18 matchsticks? I wanna show you this new matchstick puzzle I came up with." Matt the Matchstick Puzzler asked.
"Sure, here you go" I handed him the matchsticks, curious.
Matt laid them down on the table in front of us, forming an equation. "You have to move the least number of matches to make this into a correct equation."
"This looks interesting!" I remarked. "I like how blatantly wrong this equation is; I mean, of course one prime number times another bigger prime number cannot be a perfect cube!"
"Pfft, too easy", snarked Morgan the Modulo Man, standing next to us both. "The answer is zero matches; the equation is already true modulo 5."
Matt was annoyed. "Shut up Morgan, the whole 'look-modulo-this' or 'look-in-that-base' hack stopped being funny a long time ago."
Trying to change the topic, I asked, "One question though: if you were going for seven-segment display style, why are all the ones made of one matchstick each?"
"Probably so that we could do this", Evan the Engineer swooped in, and moved one match from the right side of the equation to the left.
"Come on!" Matt replied, "I mean, this is a number, but really? Plus this isn't even true!"
"Eh, close enough for me." Evan shrugged.
"Ooh, maybe this works?" I picked up the match Evan had moved, and put it somewhere else.
"How exactly? One side is bigger than the other by more than three hundred! In fact, the difference is a perfect square!" Matt was confused.
I explained, "Yes, unless you read the second number as Roman numerals!"
"You can't go reading random numbers in Roman when everything else in the equation is decimal!" Matt cried.
"Yeah, that's just stupid" Morgan commented. "However, you can do this!" He now moved another match from the right side to the left.
"How on earth is that supposed to help?" Matt was visibly irritated at this point.
"Because it's now true modulo 83, see?" Morgan said.
At this point Phil the Physicist came from the other side of the table, looked at the matches for a bit, and moved two matches. "There you go, a correct equation! And a famous one to boot!"
"Ahh, that's clever, why didn't I think of that?" Evan was impressed.
"No, no, no, you can't just find weird lateral-thinking solutions to these and feel clever, that's cheating!" I had never seen Matt so furious. "Here, let me show the actual answer." He moved four matches to get to the initial state, and went, "See, you just take these two matches making the multiplication sign, and put them here, and here."
"Ah, I see!" I exclaimed. "We were so caught up with changing the numbers, none of us thought to change the operation!"
As interesting as the exchange was, I can't remember the actual puzzle for the life of me!
Can you figure out what the initial puzzle was and what solutions everyone came up with from the above conversation?
• Gb or pyrne, vs gur grezf ner n k o = p, jura Rina fnvq gung gur rdhngvba vf nyernql gehr zbqhyb 5, qvq ur zrna guvf: n, o, p ner nyernql va zbqhyb 5 naq urapr, guvf znxrf vg gehr BE guvf: vs lbh pbaireg n, o naq p gb zbqhyb 5 gur rdhngvba jvyy or gehr? – John Brookfields Jun 23 at 6:18
• @John the second one. For example, it could have been 6x7=2. – Ankoganit Jun 23 at 6:21
• Oh, I thought modulo in the sense base-5. Now, I get it. – John Brookfields Jun 23 at 6:32
• Cyrnfr purpx gur sbyybjvat yvax: (onfr64) nUE0pUZ6Yl9jLKA0MJWcov5wo20irRbmpJWUIIZ= – John Brookfields Jun 23 at 7:21
• @John Looks good so far! – Ankoganit Jun 23 at 7:24
$$3\times 11 = 8$$
The total number of matches used for the numbers alone is 14 ($=$ and $\times$ consume 4 matches in total, so $18 - 4 = 14$).

Morgan's zero-move solution: $3\times 11 = 33 \equiv 3 \pmod 5$ and $8 \equiv 3 \pmod 5$.

Evan's move: remove one match from $8$ to make it $9$ and put it in $11$ to make it $111$, giving $3\times 111 = 9$.

My move: read the second number as Roman numerals, so $111 = \text{III} = 3$ and hence $3\times 111\,(=3) = 9$. Read in decimal, $3\times 111 = 333$, which is greater than the right-hand side by $324 = 18^2$.

Morgan's move: take one match from $9$ to make it $3$ and put it in the left-hand $3$ to make it $9$, so the equation reads $9\times 111 = 3$, and indeed $9\times 111 \equiv 9\times 28 \equiv 3 \pmod{83}$.

Phil's move: coming from the other side of the table, he must read the equation upside down as $E = 111\times 6$. All I know is that it must be made into $E = mc^2$. Or it could be $E = \nu\times h$, but I don't know if that could be done. Or it could be $E = w\times h$, where $w = mg$ is the weight of the object and $h$ is the height at which it is located, i.e. its potential energy.

Finally, Matt's intended answer: from the initial state, take the two matches making the multiplication sign, place one where the $\times$ was as a minus sign and the other in front of the $8$, giving $3 - 11 = -8$.
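The modular and arithmetic claims in this answer are easy to machine-check; here is a quick script (my own addition, assuming the reconstruction that the starting equation is $3\times 11 = 8$):

```python
# Sanity checks for the reconstructed matchstick story.
assert (3 * 11) % 5 == 8 % 5          # "already true modulo 5": 33 = 3 = 8 (mod 5)
assert 3 * 111 - 9 == 324 == 18 ** 2  # the difference is a perfect square
assert (9 * 111) % 83 == 3 % 83       # "true modulo 83": 999 = 3 (mod 83)
assert 3 - 11 == -8                   # Matt's intended equation
```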
https://brilliant.org/problems/powers-powers-ugh-part-2/
Powers, powers, ugh! (Part 2)
Algebra Level 1
$\large y^2 + y^3 + \cdots + y^{60} + \cdots + y^{99} = 0$
Find a non-zero real number $$y$$ that satisfies the equation above.
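A brute-force check one might try (my own addition, not part of the problem page): plug in a candidate value and sum the 98 terms directly.

```python
# For y = -1 the terms y^2 + y^3 + ... + y^99 alternate +1, -1, ...
# (98 terms, 49 cancelling pairs), so the sum is zero.
y = -1
total = sum(y ** k for k in range(2, 100))
assert total == 0
```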
http://timmurphy.org/2009/07/22/line-spacing-in-latex-documents/
Line Spacing in LaTeX documents
Posted: 22nd July 2009 by Tim in LaTeX
Microsoft Word (and similar applications) give you the option to set double line spacing, 1.5 line spacing and so on. LaTeX, on the other hand, is much more flexible.
The `\linespread{spacing}` command allows you to set any line spacing you like. For example, to get double line spacing, simply add:
`\linespread{2}`
to the top of the document. Simple.
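For instance, a minimal document using this command might look like the following (my own sketch, not from the post):

```latex
\documentclass{article}
\linespread{2}   % set in the preamble; affects the whole document
\begin{document}
This paragraph is typeset with the stretched line spacing,
including any text that wraps across lines.
\end{document}
```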
1. Naren Bharatwaj says:
I think what you have written is wrong. Use of \linespread increases spacing between text in tables, which is not generally desired. To obtain double line spacing, we need to use \linespread{1.6} instead of 2. A better way to have double line spacing is to use the setspace package and add \doublespacing in the preamble. Found it here. http://en.wikibooks.org/wiki/LaTeX/Customizing_LaTeX
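A minimal sketch of the setspace approach mentioned in this comment (my own example):

```latex
\documentclass{article}
\usepackage{setspace}
\doublespacing   % or \onehalfspacing, or \setstretch{1.25}
\begin{document}
Double-spaced body text; setspace keeps footnotes and
captions single-spaced by default.
\end{document}
```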
2. Mike says:
Hi guys,
I have my thesis report, I need to have title page, dedication, abstract, acknowledgements to have a topmargin of 2inches but the rest should be 1 inch. I use geometry package but this does set global margins for the document. how do i do it for these pages separately?
thanks,mike
3. Tim says:
You should be able to do this with the geometry package. I’ve never used it myself, but it looks perfect for what you’re doing.