url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://socratic.org/questions/how-do-you-factor-48y-2-72xy-27x-2
# How do you factor 48y^2 - 72xy + 27x^2? The answer is $3 {\left(3 x - 4 y\right)}^{2}$ In the given expression we can see that the coefficients of the polynomials are $48 , 72$ and $27$ these numbers are divisible by $3$ $3 \left(9 {x}^{2} - 24 x y + 16 {y}^{2}\right)$ and when we factorise $9 {x}^{2} - 24 x y + 16 {y}^{2}$ we get ${\left(3 x - 4 y\right)}^{2}$ so finally we get $3 {\left(3 x - 4 y\right)}^{2}$ as our answer.
2019-12-15 15:56:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033942222595215, "perplexity": 118.41000286182027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00472.warc.gz"}
http://www.satmath4u.com/archive/index.php/t-92.html?s=2cce425dcf2c3520b1f58d22db150593
PDA View Full Version : sat math permutations Jack 10-10-2013, 11:27 AM How many different 4-digit numbers can be formed using all of the following numbers? 0,1,1,9 (A) 4 (B) 12 (C) 16 (D) 24 thanks miranda 10-10-2013, 11:31 AM In this case we have permutations since the order matters. Additionally we must divide the number of permutations by 2! Since the digit “1” appears twice. Therefore the answer is \frac{^4P_{4}}{2!}=\frac{4!}{2!}=12 (B). Jack 10-10-2013, 11:32 AM thank you!!
2018-11-20 23:49:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.821739137172699, "perplexity": 2464.0660313436038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746847.97/warc/CC-MAIN-20181120231755-20181121013755-00182.warc.gz"}
https://www.jobilize.com/course/section/terminology-introduction-circle-geometry-by-openstax?qcr=www.quizover.com
# Introduction, circle geometry Page 1 / 3 ## Discussion : discuss these research topics Research one of the following geometrical ideas and describe it to your group: 1. taxicab geometry, 2. spherical geometry, 3. fractals, 4. the Koch snowflake. ## Terminology The following is a recap of terms that are regularly used when referring to circles. • An arc is a part of the circumference of a circle. • A chord is defined as a straight line joining the ends of an arc. • The radius, $r$ , is the distance from the centre of the circle to any point on the circumference. • The diameter is a special chord that passes through the centre of the circle. The diameter is the straight line from a point on the circumference to another point on the circumference, that passes through the centre of the circle. • A segment is the part of the circle that is cut off by a chord. A chord divides a circle into two segments. • A tangent is a line that makes contact with a circle at one point on the circumference. ( $AB$ is a tangent to the circle at point $P$ ). ## Axioms An axiom is an established or accepted principle. For this section, the following are accepted as axioms. 1. The Theorem of Pythagoras, which states that the square on the hypotenuse of a right-angled triangle is equal to the sum of the squares on the other two sides. In $▵ABC$ , this means that ${\left(AB\right)}^{2}+{\left(BC\right)}^{2}={\left(AC\right)}^{2}$ 2. A tangent is perpendicular to the radius, drawn at the point of contact with the circle. ## Theorems of the geometry of circles A theorem is a general proposition that is not self-evident but is proved by reasoning (these proofs need not be learned for examination purposes). Theorem 1 The line drawn from the centre of a circle, perpendicular to a chord, bisects the chord. Proof : Consider a circle, with centre $O$ . Draw a chord $AB$ and draw a perpendicular line from the centre of the circle to intersect the chord at point $P$ . The aim is to prove that $AP$ = $BP$ 1. $▵OAP$ and $▵OBP$ are right-angled triangles. 2. $OA=OB$ as both of these are radii and $OP$ is common to both triangles. Apply the Theorem of Pythagoras to each triangle, to get: $\begin{array}{ccc}\hfill O{A}^{2}& =& O{P}^{2}+A{P}^{2}\hfill \\ \hfill O{B}^{2}& =& O{P}^{2}+B{P}^{2}\hfill \end{array}$ However, $OA=OB$ . So, $\begin{array}{ccc}\hfill O{P}^{2}+A{P}^{2}& =& O{P}^{2}+B{P}^{2}\hfill \\ \hfill \therefore A{P}^{2}& =& B{P}^{2}\hfill \\ \hfill \mathrm{and AP}& =& BP\hfill \end{array}$ This means that $OP$ bisects $AB$ . Theorem 2 The line drawn from the centre of a circle, that bisects a chord, is perpendicular to the chord. Proof : Consider a circle, with centre $O$ . Draw a chord $AB$ and draw a line from the centre of the circle to bisect the chord at point $P$ . The aim is to prove that $OP\perp AB$ In $▵OAP$ and $▵OBP$ , 1. $AP=PB$ (given) 2. $OA=OB$ (radii) 3. $OP$ is common to both triangles. $\therefore ▵OAP\equiv ▵OBP$ (SSS). $\begin{array}{ccc}\hfill \stackrel{^}{OPA}& =& \stackrel{^}{OPB}\hfill \\ \hfill \stackrel{^}{OPA}+\stackrel{^}{OPB}& =& {180}^{\circ }\phantom{\rule{1.em}{0ex}}\left(\mathrm{APB}\phantom{\rule{2pt}{0ex}}\mathrm{is a str. line}\right)\hfill \\ \hfill \therefore \stackrel{^}{OPA}& =& \stackrel{^}{OPB}={90}^{\circ }\hfill \\ \hfill \therefore OP& \perp & AB\hfill \end{array}$ Theorem 3 The perpendicular bisector of a chord passes through the centre of the circle. Proof : Consider a circle. Draw a chord $AB$ . Draw a line $PQ$ perpendicular to $AB$ such that $PQ$ bisects $AB$ at point $P$ . Draw lines $AQ$ and $BQ$ . The aim is to prove that $Q$ is the centre of the circle, by showing that $AQ=BQ$ . In $▵OAP$ and $▵OBP$ , 1. $AP=PB$ (given) 2. $\angle QPA=\angle QPB$ ( $QP\perp AB$ ) 3. $QP$ is common to both triangles. $\therefore ▵QAP\equiv ▵QBP$ (SAS). From this, $QA=QB$ . Since the centre of a circle is the only point inside a circle that has points on the circumference at an equal distance from it, $Q$ must be the centre of the circle. #### Questions & Answers Application of nanotechnology in medicine what is variations in raman spectra for nanomaterials Jyoti Reply I only see partial conversation and what's the question here! Crow Reply what about nanotechnology for water purification RAW Reply please someone correct me if I'm wrong but I think one can use nanoparticles, specially silver nanoparticles for water treatment. Damian yes that's correct Professor I think Professor what is the stm Brian Reply is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.? Rafiq industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong Damian How we are making nano material? LITNING Reply what is a peer LITNING Reply What is meant by 'nano scale'? LITNING Reply What is STMs full form? LITNING scanning tunneling microscope Sahil how nano science is used for hydrophobicity Santosh Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq Rafiq what is differents between GO and RGO? Mahi what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq Rafiq if virus is killing to make ARTIFICIAL DNA OF GRAPHENE FOR KILLED THE VIRUS .THIS IS OUR ASSUMPTION Anam analytical skills graphene is prepared to kill any type viruses . Anam what is Nano technology ? Bob Reply write examples of Nano molecule? Bob The nanotechnology is as new science, to scale nanometric brayan nanotechnology is the study, desing, synthesis, manipulation and application of materials and functional systems through control of matter at nanoscale Damian Is there any normative that regulates the use of silver nanoparticles? Damian Reply what king of growth are you checking .? Renato What fields keep nano created devices from performing or assimulating ? Magnetic fields ? Are do they assimilate ? Stoney Reply why we need to study biomolecules, molecular biology in nanotechnology? Adin Reply ? Kyle yes I'm doing my masters in nanotechnology, we are being studying all these domains as well.. Adin why? Adin what school? Kyle biomolecules are e building blocks of every organics and inorganic materials. Joe anyone know any internet site where one can find nanotechnology papers? Damian Reply research.net kanaga sciencedirect big data base Ernesto Introduction about quantum dots in nanotechnology Praveena Reply hi Loga what does nano mean? Anassong Reply nano basically means 10^(-9). nanometer is a unit to measure length. Bharti Got questions? Join the online conversation and get instant answers! Jobilize.com Reply ### Read also: #### Get the best Algebra and trigonometry course in your pocket! Source:  OpenStax, Siyavula textbooks: grade 12 maths. OpenStax CNX. Aug 03, 2011 Download for free at http://cnx.org/content/col11242/1.2 Google Play and the Google Play logo are trademarks of Google Inc. Notification Switch Would you like to follow the 'Siyavula textbooks: grade 12 maths' conversation and receive update notifications? By Joli Julianna By Angelica Lito By Candice Butts By Katy Keilers By Sean WiffleBoy By Tess Armstrong By Hoy Wen By OpenStax By Courntey Hub By Steve Gibbs
2020-07-03 11:32:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 49, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34235459566116333, "perplexity": 1369.1929190101005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881984.34/warc/CC-MAIN-20200703091148-20200703121148-00161.warc.gz"}
https://phys.libretexts.org/TextBooks_and_TextMaps/Book%3A_Electricity_and_Magnetism_(Tatum)/13%3A_Alternating_Current/13.04%3A_Resistance_and_Inductance_in_Series
$$\require{cancel}$$ # 13.4: Resistance and Inductance in Series The impedance is just the sum of the resistance of the resistor and the impedance of the inductor: $\label{13.4.1}Z=R+jl\omega .$ Thus the impedance is a complex number, whose real part $$R$$is the resistance and whose imaginary part $$L\omega$$ is the reactance. For a pure resistance, the impedance is real, and $$V$$ and $$I$$ are in phase. For a pure inductance, the impedance is imaginary (reactive), and there is a 90o phase difference between $$V$$ and $$I$$. The voltage and current are related by $\label{13.4.2}V=IZ = (R+jL\omega )I.$ Those who are familiar with complex numbers will see that this means that $$V$$ leads on $$I$$, not by 90o, but by the argument of the complex impedance, namely $$\tan^{-1}(L\omega /R)$$. Further the ratio of the peak (or RMS) voltage to the peak (or RMS) current is equal to the modulus of the impedance, namely $$\sqrt{R^2+L^2\omega^2}$$.
2018-12-17 17:22:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536781311035156, "perplexity": 380.7486910264342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828697.80/warc/CC-MAIN-20181217161704-20181217183704-00054.warc.gz"}
http://accesssurgery.mhmedical.com/content.aspx?bookid=430&sectionid=42074502
Plate 84 Figure 9 The peritoneum over the region of the left kidney is divided as gentle traction is maintained downward and medially on the splenic flexure of the colon. There is a tendency to grasp the colon and to encircle it completely with the fingers. This tends to puncture the thinned out mesentery. Rents can be avoided if a gauze pack is used to gently sweep the splenic flexure downward and medially (Figure 6). Usually, it is unnecessary to divide and ligate any vessels during this procedure. The peritoneum in the left lumbar gutter is divided, and the entire descending colon is swept medially. The rectosigmoid is freed from the hollow of the sacrum as shown in Plates 70 and 71, Total Mesorectal Excision. The sigmoid is first separated from any attachments to the iliac fossa on the left side, and the left gonadal vessels and the ureter are identified throughout their course in the field of operation (Figure 7). Often, especially in the female, a very low-lying lesion can be mobilized and lifted up well into the wound. After the bowel has been freed from the hollow of the sacrum, the fingers of the left hand should separate the right ureter from the overlying peritoneum by blunt dissection (Figure 8). The peritoneum is incised some distance from the tumor, and the rectum is freed further down to the region of the levator muscles using the mesorectal dissection (Plates 70 and 71). Division of the middle hemorrhoidal vessels with the suspensory ligaments may be necessary to ensure the needed length of bowel to be resected below the tumor. The surgeon should not hesitate to divide the peritoneal attachments in the region of the pouch of Douglas, to free the rectum from the prostate gland in the male and from the posterior wall of the vagina in the female. The inferior mesenteric artery is freed from the underlying aorta to near its point of origin (Figure 9). Three curved clamps are applied to the inferior mesenteric artery, and the vessel is divided and ligated with 00 silk. The inferior mesenteric vein should be ligated at this time, before the tumor has been palpated and compressed due to the manipulation required during resection. Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. You must be logged in while authenticated at least once every 90 days to maintain this remote access. Ok Subscription Options AccessSurgery Full Site: One-Year Subscription Connect to the full suite of AccessSurgery content and resources including more than 160 instructional videos, 16,000+ high-quality images, interactive board review, 20+ textbooks, and more.
2017-02-28 14:35:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20895880460739136, "perplexity": 3517.4924924107772}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00019-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/96364-derivative-1-1-function-print.html
# Derivative of 1-1 function • July 28th 2009, 10:19 PM goldenroll Derivative of 1-1 function I'm having trouble on this problem Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following a. f'(2) b. f'(3) I got this so far 3/4 x^2 +1 then 3/4 (2)^2 +1 = 3+1 = 4 My first question is am I right by getting the derivative and sub the 2 in? my second question is if I am right then what do i do afterward? • July 28th 2009, 10:27 PM pickslides Quote: Originally Posted by goldenroll Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function Are you sure? Quote: Originally Posted by goldenroll Find the following a. f'(2) b. f'(3) $f'(x) = \frac{x}{2}+1$ $f'(2) = \frac{2}{2}+1 = \dots$ $f'(3) = \frac{3}{2}+1 = \dots$ • July 28th 2009, 10:31 PM VonNemo19 Quote: Originally Posted by goldenroll I'm having trouble on this problem Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following a. f'(2) b. f'(3) I got this so far 3/4 x^2 +1 then 3/4 (2)^2 +1 = 3+1 = 4 My first question is am I right by getting the derivative and sub the 2 in? my second question is if I am right then what do i do afterward? The way you've writtenthis question confuses me a litle bit. The derivative of a function is not dependent on whether or not it has an inverse. But if you want $f'(2)\text{ and }f'(3)$, then $f'(x)=\frac{1}{2}x+1\Rightarrow{f'(2)}=2\text{ and }f'(3)=\frac{5}{2}$ A function $f$ is said to have an inverse function $f^{-1}$, if for every element in the range of $f$, there exists one, and only one corresponding element in the domain of $f$ Is the definition satisfied in your case? • July 29th 2009, 07:07 AM HallsofIvy Quote: Originally Posted by goldenroll I'm having trouble on this problem Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following a. f'(2) b. f'(3) No, it's not 1-1 and does not have an inverse. Or are you restricting f to a given interval? [/quote]I got this so far 3/4 x^2 +1[/quote] What is equal to this? Certainly Not the derivative! f'(x)= (1/2)x+ 1. Quote: then 3/4 (2)^2 +1 = 3+1 = 4 My first question is am I right by getting the derivative and sub the 2 in? my second question is if I am right then what do i do afterward?
2015-01-29 08:50:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.794477105140686, "perplexity": 1068.3714929567932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122022571.56/warc/CC-MAIN-20150124175342-00217-ip-10-180-212-252.ec2.internal.warc.gz"}
https://ask.opendaylight.org/questions/5645/revisions/
# Revision history [back] ### Is 'leafref' condition enforced? Hi, Previously I posted this question, and this current one somehow relates to that. So it seems that 'leafref' condition is not enforced by MD-SAL in the master branch. Example: I have a model, which contains list of links, and list of services. A service contains reference to the links it is built on. Although the model specifies that the service link must be part of the link list, this condition is not enforced by the controller. container configuration { list connections { key id; } list services { config true; key id; leaf-list supportingConnections { type leafref { path "/configuration/connections/id"; } } } } So I can create service with "supportingConnections": [ 1, 2, 3]" without creating those links beforehand. Is it so by design? Is it a missing feature which will be added later to the code? Regards, peter
2019-04-21 00:07:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27649959921836853, "perplexity": 4576.4270714415115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421022555-00354.warc.gz"}
https://mmore500.com/2020/06/26/regulation-dishtiny.html
## 🔗 Introduction The DISHTINY artificial life system [Moreno and Ofria, 2019] is designed to study fraternal transitions in individuality, evolutionary events where independently replicating entities unify into a higher-level individual. Multicellular organisms and eusocial insect colonies, which survive and propagate through the cooperation of lower-level kin, are typical exemplars of fraternal transitions [Smith and Szathmary, 1997]. ## 🔗 Methods DISHTINY tracks autonomous self-replicating digital cells with heritable behavioral strategies situated on a two-dimensional toroidal grid. Cells can form spatial kin groups to cooperatively collect resources. The morphology of these groups is determined by where offspring are placed and whether they remain part of the parent’s group or are expelled to bud a new group. These conditions yield evolved strategies where individual cells perform actions contrary to their immediate interests (\textit{e.g.}, suppressing reproduction, giving away resources, or even self-destructing) to benefit their kin group [Moreno and Ofria, in prep]. In current work with DISHTINY, SignalGP programs manage cellular behavior. Within each cell, four interconnected hardware instances independently execute a single genetic program; each instance controls cell behavior with respect to a cardinal direction. Sensor instructions and event-driven cues allow cells to react to local information. Likewise, special instructions enable local actions (such as reproduction, resource sharing, and messaging). SignalGP function regulation operates as in the repeated- and changing-signal tasks, except function regulation decays to a neutral baseline unless explicitly renewed. Evolved DISHTINY populations analyzed here were drawn from ongoing work investigating cell-cell interconnects. We analyze 64 replicate populations each evolved independently for 200 compute-hours under identical evolutionary conditions (mean 62912, S.D. 32840 cellular generations elapsed). (See [Moreno and Ofria, 2020b] for details.) ## 🔗 Implementation We implemented our experimental system using the Empirical library for scientific software development in C++, available at https://github.com/devosoft/Empirical [Ofria et al., 2019]. We used OpenMP to parallelize our main evolutionary replicates, distributing work over two threads. The code used to perform and analyze our experiments, our figures, data from our experiments, and a live in-browser demo of our system is available via the Open Science Framework at https://osf.io/kqvmn/ [Foster and Deardorff, 2017]. ## 🔗 Results We used knockout experiments to measure the fraction of replicates that adaptively employ genetic regulation. We harvested a representative genotype from each of the 64 evolved DISHTINY populations. For each independently-evolved population, we performed 16 competition experiments between the representative genotype and a corresponding strain where gene regulation was knocked out. We seeded these runs half-and-half with wild-type and derived knockout genotypes. After one hour of compute time, we assessed the relative abundance of cells descended from wild-type and knockout ancestors. Within the one-hour window, a mean of 91 (S.D. 38) cellular generations elapsed and 17\% of competition runs had coalesced to only the wild type or the knockout strain. For 10 out of 64 populations, all 16 competition runs exhibited greater abundance of cells descended from the wild-type strain than the knockout strain. Gene regulation contributed significantly to fitness in these runs (Fisher’s exact test with Bonferroni correction, each $p < 0.001$). Among these 10 independently-evolved genotypes, gene regulation plays an adaptive role. However, it is possible that in these cases, gene regulation represents non-plastic contingency rather than contributing to meaningful functionality. To address this question, we compared the fitnesses of strains that relied on regulation to strains that did not. We measured strain fitness by competing it against a randomly sampled panel of 20 evolved genotypes. We did not find evidence that strains that used gene regulation outperformed strains that did not. Figure PVOGR: PCA visualization of gene regulation in case-study evolved genotype. Shifting our approach, we selected a strain from among the 10 to evaluate further as a case study to test for a functional role of gene regulation (Figure PVOGR; live in-browser viewer at mmore500.com/hopto/5). We confirmed that in this strain gene regulation plays a bona fide functional role facilitating coordination among same-cell SignalGP instances. Figure FCOGR(a) summarizes the functional context of gene regulation in this strain. We inferred relationships between components in this diagram by monitoring hardware execution in a monoculture population with mutation disabled. Component-by-component knockout experiments confirmed the adaptive significance of each element of the interaction diagram. We hypothesize that gene regulation mediates intracell mutual exclusion (see Figure FCOGR(b)). (a) Regulation governs cell division based on environmental state and intracellular state. In baseline conditions, stimulus 1 would activate module A, expressing cell division instructions. However, when module B represses module A, this stimulus instead activates module C — leaving module A unexpressed. This mechanism conditions expression of module A on stimulus 1, but neither of stimulus 2 or 3. Module A also contains instructions to broadcast an intracellular message that activates module B in a cell’s other independently-executing SignalGP instances. (b) Intracellular broadcast delivers a message to each of a cell’s other SignalGP instances, activating their module 14 and subsequently repressing their module 2. Figure FCOGR: Functional context of gene regulation in a case-study evolved genotype. Figure SPOGR shows the spatial expression and regulation patterns of modules A, B, and C. Indeed, in some instances, we can observe expression of module A in a single hardware instance within cells next to two different-group neighbors. However, we can also observe the opposite in other instances, presumably due to interfering cellular mechanisms and virtual hardware limitations (e.g., inbox capacity, thread capacity, concurrency effects, latency effects, conditional message blocking, and thread requisition). (a) Module 0 expression snapshot. (b) Module 2 expression snapshot. (c) Module 14 expression snapshot. (d) Module 2 regulation snapshot. Figure SPOGR: Spatial patterns of gene regulation and module expression. Square tiles represent individual cells. Black borders divide cells belonging to different organisms. Cells are divided into four triangle slices, each corresponding to a independently-executing SignalGP instance. In module expression snapshots, white denotes no expression, green denotes a module running on a single hardware core, blue on two, purple on three, red on four, and silver on five. In module regulation shapshots, white denotes baseline state and blue denotes repression. ## 🔗 Let’s Chat I would love to hear your thoughts on artificial life models of gene regulation and studying major transitions in evolution!! I started a twitter thread (right below) so we can chat Pop on there and drop me a line or make a comment ## 🔗 Cite This Post APA Lalejini, A., Moreno, M. A., & Ofria, C. (2020, June 27). Case Study of Adaptive Gene Regulation in DISHTINY. https://doi.org/10.17605/OSF.IO/KQVMN MLA Lalejini, Alexander, Matthew A Moreno, and Charles Ofria. “Case Study of Adaptive Gene Regulation in DISHTINY.” OSF, 27 June 2020. Web. Chicago Lalejini, Alexander, Matthew A Moreno, and Charles Ofria. 2020. “Case Study of Adaptive Gene Regulation in DISHTINY.” OSF. June 27. doi:10.17605/OSF.IO/KQVMN. BibTeX @misc{Lalejini_Moreno_Ofria_2020, title={Case Study of Adaptive Gene Regulation in DISHTINY}, url={osf.io/kqvmn}, DOI={10.17605/OSF.IO/KQVMN}, publisher={OSF}, author={Lalejini, Alexander and Moreno, Matthew A and Ofria, Charles}, year={2020}, month={Jun} } ## 🔗 References Foster, E. D. and Deardorff, A. (2017). Open science framework (osf). Journal of the Medical Library Association: JMLA, 105(2):203. Lalejini, A. and Ofria, C. (2018). Evolving event-driven programs with signalgp. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1135–1142. Moreno, M. A. and Ofria, C. (2019). Toward open-ended fraternal transitions in individuality. Artificial life, 25(2):117–133. Moreno, M. A. and Ofria, C. (2020a). Case Study of Adaptive Gene Regulation in DISHTINY. DOI: 10.17605/OSF.IO/KQVMN; URL: https://osf.io/kqvmn. Moreno, M. A. and Ofria, C. (in prep.). Spatial constraints and kin recognition can produce open-ended major evolutionary transitions in a digital evolution system. https://doi.org/10.17605/OSF.IO/G58XK. Ofria, C., Dolson, E., Lalejini, A., Fenton, J., Moreno, M. A., Jorgensen, S., Miller, R., Stredwick, J., Zaman, L., Schossau, J., Gillespie, L., G, N. C., and Vostinar, A. (2019). Empirical. Smith, J. M. and Szathmary, E. (1997). The major transitions in evolution. Oxford University Press. ## 🔗 Acknowledgements This research was supported in part by NSF grants DEB-1655715 and DBI-0939454, and by Michigan State University through the computational resources provided by the Institute for Cyber-Enabled Research. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1424871. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2020-12-04 20:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19729068875312805, "perplexity": 8408.05369517178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00088.warc.gz"}
https://socratic.org/questions/how-do-you-solve-x-1-2-3-13
# How do you solve (x+1)^2-3=13? Sep 6, 2016 $x = 3 \mathmr{and} x = - 5$ #### Explanation: Although this is a quadratic equation, we do not have to use the normal method of making it equal to 0. This is a special case - there is no x-term. Move all the constants to the right hand side and then find the square root of each side. ${\left(x + 1\right)}^{2} = 16$ $x + 1 = \pm \sqrt{16} = \pm 4$ $\mathmr{if} x + 1 = + 4 \text{ } \rightarrow x = 3$ $\mathmr{if} x + 1 = - 4 \text{ } \rightarrow x = - 5$
2019-11-17 22:18:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020100831985474, "perplexity": 167.4997433723228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669352.5/warc/CC-MAIN-20191117215823-20191118003823-00454.warc.gz"}
http://rosalind.info/problems/ba1e/
# Find Patterns Forming Clumps in a String solved by 996 July 29, 2015, 12:29 a.m. by Rosalind Team Given integers L and t, a string Pattern forms an (L, t)-clump inside a (larger) string Genome if there is an interval of Genome of length L in which Pattern appears at least t times. For example, $\color{green}{\text{TGCA}}$ forms a (25,3)-clump in the following Genome: $\text{gatcagcataagggtccc}\color{green}\textbf{TGCA}\color{black}\text{A}\color{green}\textbf{TGCA}\color{black}\text{TGACAAGCC}\color{green}\textbf{TGCA}\color{black}\text{gttgttttac}$. ## Clump Finding Problem Find patterns forming clumps in a string. Given: A string Genome, and integers k, L, and t. Return: All distinct k-mers forming (L, t)-clumps in Genome. ## Sample Dataset CGGACTCGACAGATGTGAAGAAATGTGAAGACTGAGTGAAGAGAAGAGGAAACACGACACGACATTGCGACATAATGTACGAATGTAATGTGCCTATGGC 5 75 4 ## Sample Output CGACA GAAGA AATGT
2020-04-03 08:16:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2529464662075043, "perplexity": 11814.107249669332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00355.warc.gz"}
http://p4est.org/
## p4est: Parallel AMR on Forests of Octrees The p4est software library enables the dynamic management of a collection of adaptive octrees, conveniently called a forest of octrees. p4est is designed to work in parallel and scales to hundreds of thousands of processor cores. It is free software released under GNU General Public Licence version 2, or (at your option) any later version. ### Source code Please see the github repository of p4est or download the latest release tarball. The source comes with commented example programs and test cases. You can also download older stable releases. Please note that the so-called releases auto-generated by github do not work (they are lacking the subdirectory sc and some generated files). ### Binary packages Contributed packages of p4est are available for Gentoo Linux (these are also available on the deal.ii download page) and the Homebrew distribution. ### Autogenerated API documentation This is the doxygen output for p4est. You can recreate it with make doxygen after calling configure. ### Howto document and step-by-step examples This as a howto document that documents the basic interface design of p4est and comments on the step-by-step examples included in the source code. ### Questions / Get involved We appreciate comments, bug reports, and suggestions for adding features. To this end, we recommend using the issue tracker. We will also consider pull requests. For further questions, please email us at info@p4est.org. We had previously used a now-defunct mailing list that is archived. ### Technical papers / Citations If you use p4est for your publications, please cite it as follows [1a]. The reference [1b] is for people specifically using the topology iterator, the high-order node numbering, or the top-down search. [1c] is for people interested in the 2:1 balance details, the strong scaling limit and/or memory footprint. [1a] Carsten Burstedde, Lucas C. Wilcox, and Omar Ghattas, p4est: Scalable Algorithms for Parallel Adaptive Mesh Refinement on Forests of Octrees. Published in SIAM Journal on Scientific Computing 33 no. 3 (2011), pages 1103-1133 (download). @ARTICLE{BursteddeWilcoxGhattas11, author = {Carsten Burstedde and Lucas C. Wilcox and Omar Ghattas}, title = {{\texttt{p4est}}: Scalable Algorithms for Parallel Adaptive Mesh Refinement on Forests of Octrees}, journal = {SIAM Journal on Scientific Computing}, volume = {33}, number = {3}, pages = {1103-1133}, year = {2011}, doi = {10.1137/100791634} } [1b] Tobin Isaac, Carsten Burstedde, Lucas C. Wilcox, and Omar Ghattas, Recursive algorithms for distributed forests of octrees. Published in SIAM Journal on Scientific Computing 37 no. 5 (2015), pages C497-C531 (download). @ARTICLE{IsaacBursteddeWilcoxEtAl15, author = {Tobin Isaac and Carsten Burstedde and Lucas C. Wilcox and Omar Ghattas}, title = {Recursive algorithms for distributed forests of octrees}, journal = {SIAM Journal on Scientific Computing}, volume = {37}, number = {5}, pages = {C497--C531}, year = {2015}, doi = {10.1137/140970963} } [1c] Tobin Isaac, Carsten Burstedde, and Omar Ghattas, Low-Cost Parallel Algorithms for 2:1 Octree Balance. Published in Proceedings of the 26th IEEE International Parallel & Distributed Processing Symposium, 2012 (download). Errata: In Algorithm 7, line 3 reads $$\text{for all}\ o\in R\ \text{do}$$; it should read $$\text{for all}\ o\in R\cup R^{\text{new}}\ \text{do}$$. p4est uses libsc written by the same authors and others for basic helper functionality such as logging, array and hash data structures, parallel statistics, and more. libsc also integrates the third-party libraries zlib and lua. libsc is free software under LGPL v2.1 (or later) and hosted under github as well. May I copy and modify p4est source code for internal use? yes Will my source that contains copied or modified p4est code automatically be GPL? yes Will my source that includes p4est header files and is supposed to be linked against p4est automatically be GPL? no May I distribute my source that includes p4est header files and is supposed to be linked against p4est? yes May I distribute a binary executable that links against p4est if I distribute my source code along with it? yes May I distribute a binary executable that links against p4est without distributing my source code along with it? no, but: May I contact UT Austin OTC to negotiate permission to distribute a binary executable without the source? yes The ForestClaw project is an ongoing collaboration with Donna Calhoun to solve hyperpolic PDEs. The generic adaptive finite element software deal.II now interfaces to p4est to obtain distributed mesh information [2]. The corresponding algorithms are described in this article. If you use deal.II with p4est for your publications, please cite it as: [2] Wolfgang Bangerth, Carsten Burstedde, Timo Heister, and Martin Kronbichler, Algorithms and Data Structures for Massively Parallel Generic Adaptive Finite Element Codes. Published in ACM Transactions on Mathematical Software 38 No. 2 (2011), pages 14:1-14:28 (download). @ARTICLE{BangerthBursteddeHeisterEtAl11, author = {Wolfgang Bangerth and Carsten Burstedde and Timo Heister and Martin Kronbichler}, title = {Algorithms and Data Structures for Massively Parallel Generic Adaptive Finite Element Codes}, journal = {ACM Transactions on Mathematical Software}, volume = {38}, number = {2}, pages = {14:1-14:28}, year = {2011} } The p4est authors: Carsten Burstedde Lucas C. Wilcox Tobin Isaac Thanks to our contributors! Please see the AUTHORS file for details. The development of p4est was partially supported by the US National Science Foundation (NSF Grants No. CCF-0427985, CMMI-1028889, CNS-0540372, CNS-0619838, DMS-0724746, OCI-0749334, OPP-0941678) and the US Department of Energy (DOE Grants No. 06ER25782, 08ER25860, SC0002710). The authors thank the Texas Advanced Computing Center (TACC) for providing them with access to the Ranger supercomputer under NSF TeraGrid award MCA04N026, and the National Center for Computational Science (NCCS) for early-user access to the Jaguar Cray XT5 supercomputer. Any opinions, findings and conclusions or recomendations expressed on this web page or in the source code and documentation are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF).
2017-07-23 06:36:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26768413186073303, "perplexity": 6860.522945669195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424287.86/warc/CC-MAIN-20170723062646-20170723082646-00472.warc.gz"}
https://www.ams.org/publicoutreach/feature-column/fcarc-eulers-formula
# Euler's Polyhedral Formula A theorem which would make both my list of 10 favorite theorems and my list of 10 most influential theorems . . . Joseph Malkevitch York College (CUNY) malkevitch at york.cuny.edu 1. Introduction It's coming to the end of the calendar year and a lot of people are producing lists. What were the 10 largest box-office blockbusters? What were the 10 best movies of the year? Who are the 10 best dressed men and 10 worst dressed women? One can also construct more grandiose lists. Who were the 10 best pitchers of all time or what were the 10 greatest movies? What appears on a list constructed by the same person can change dramatically with slight wording changes. Thus, the list of my 10 favorite movies might not coincide with my list of the 10 greatest movies ever made. Does it make sense to construct lists related to mathematics? What about a list of the 10 greatest mathematicians? 10 greatest women mathematicians? The 10 most influential theorems? The 10 niftiest theorems? On the one hand constructing lists is perhaps silly. How can one make a list of the 10 greatest composers of classical music? Must I leave Tchaikovsky out to include Mahler or Handel? Yet, from another perspective constructing lists of this kind makes one think about a wide variety of value-laden issues. What makes a composer great? Should a composer of a few great pieces be put on a short list of greats while another composer who perhaps composed nothing that rose to the heights of the first person, yet composed 100 times as many pieces at a very high level of inspiration, is omitted? This being the first of my last two columns as solo editor of the Feature Column, perhaps readers will indulge me if I write two columns about a theorem which would make both my list of 10 favorite theorems and my list of 10 most influential theorems. This theorem involves Euler's polyhedral formula (sometimes called Euler's formula). Today we would state this result as: The number of vertices V, faces F, and edges E in a convex 3-dimensional polyhedron, satisfy V + F - E = 2. Aspects of this theorem illustrate many of the themes that I have tried to touch on in my columns. 2. Basic ideas Polyhedra drew the attention of mathematicians and scientists even in ancient times. The Egyptians built pyramids and the Greeks studied "regular polyhedra," today sometimes referred to as the Platonic Solids . What is a polyhedron? This question is remarkably hard to answer simply! Part of the problem is that as scholars have studied "polyhedral" objects in greater detail, the idea of what is a "legal" polyhedron has changed and evolved. "Traditional polyhedra" consist of flat faces, straight edges, and vertices, but exactly what rules do these individual parts have to obey, and what additional global conditions should be imposed? Which of the following objects below should be allowed to qualify as polyhedra? a. A cube with a triangular tunnel bored through it. (Problem: The "faces" that lie in planes are not always polygons.) b. The portion of the surface of three pairwise intersecting vertical planes (e.g. "triangular cylinder"). (Problem: This surface does not have any vertices.) c. The surface formed based on three rays which meet at a point as shown below. (Problem: The "faces" are not polygons but unbounded portions of planes.) Almost certainly, in the early days of the study of polyhedra, the word referred to convex polyhedra. A set is convex if the line segment joining any two points in the set is also in the set. Among the nice properties of convex sets is the fact that the set of points in common to a collection of convex sets is convex. Loosely speaking, non-convex sets in two dimensions have either notches or holes. The diagram below shows a non-convex polygon and a convex polygon. The planar set below is not convex, but note that it does not satisfy the usual definition of a polygon, even through it is bounded by sections of straight lines. The faces of a convex polyhedron consist of convex polygons. However, this approach to defining polyhedra rules out a "polyhedron" which goes off to infinity, such as the surface below: This polyhedron has three rays (which, if extended, should meet at a point) and three line segments as edges of the polyhedron, rather than having edges which are line segments. One can also have examples where there are only rays (see earlier diagram). Traditionally, what is essential for a polyhedron is that it consist of pieces of flat surfaces, but as time has gone on, the definition of "legal" polyhedra has been broadened. Typically this has resulted in a dramatic improvement in the insights that have been obtained concerning the original "narrower" class of polyhedra and the newer class of polyhedra under the broader definition. For example, in Euclid's Elements there is a "proof" that there are 5 regular polyhedra, based on the implicit assumption that the faces of the polyhedra are convex polygons. A regular polyhedron is one in which all faces are congruent regular (convex) polygons and all vertices are "alike." This means that there are the same number of regular polygons at every vertex. Using this definition one finds there are 5 regular polyhedra. If one has a supply of regular pentagonal polygons such as the one below, one can assemble 12 of them, three at each vertex, to form the solid known as the regular dodecahedron. However, the Greeks knew about the polygon that today is called the pentagram: This polygon has a good claim to be called a regular polygon because all its sides have equal length and the angle between two consecutive sides of the polygon is always the same. However, this polygon, when drawn in the plane, does not define a convex set and the sides of the polygon intersect each other. Nonetheless, when mathematicians considered the idea of "non-convex" regular polyhedra where such "star polygons" were permitted, they discovered some new examples of "regular polyhedra," now known as the Kepler-Poinsot polyhedra . Later H.S.M. Coxeter and Branko Grünbaum broadened the rules for polyhedra and discovered other "regular polyhedra." The systematic study of polyhedra began relatively early in the history of mathematics as shown by the place they hold in Euclid's Elements . Yet it remained until the 18th century before there was a statement of the result that V + F - E = 2. 3. Euler's contribution It appears that Leonard Euler (1707-1783) was the first person to notice the fact that for convex 3-dimensional polyhedra V + F - E = 2. Euler mentioned his result in a letter to Christian Goldbach (of Goldbach's Conjecture fame) in 1750. He later published two papers in which he described what he had done in more detail and attempted to give a proof of his new discovery. It is sometimes claimed that Descartes (1596-1650) discovered Euler's polyhedral formula earlier than Euler. Though Descartes did discover facts about 3-dimensional polyhedra that would have enabled him to deduce Euler's formula, he did not take this extra step. With hindsight it is often difficult to see how a talented mathematician of an earlier era did not make a step forward that with today's insights seems natural, however, it often happens. If one is wandering through a maze of tunnels in a cave, perhaps one comes across a large hall of impressive formations with many tunnels. You may explore some of these without successfully discovering any impressive new formations and go back home contented. However, it might have turned out that one of the many tunnels you did not have time to explore would have led you to an even more spectacular chamber. Descartes' Theorem is a very lovely result in its own right, and in 3 dimensions it is equivalent to Euler's polyhedral formula. It is tempting to speculate about why all the able mathematicians, artists, and scholars who investigated polyhedra in the years before Euler did not notice the polyhedral formula. There certainly are results in Euclid's Elements and in the work of later Greek geometers that appear more complex than Euler's polyhedral formula. Presumably a major factor, in addition to the lack of attention paid to counting problems in general up to relatively recent times, was that people who thought about polyhedra did not see them as structures with vertices, edges, and faces. It appears that Euler did not view them this way either! He seems to have adopted a fairly conventional view of polyhedra. In his view the "vertex" of a polyhedron is a solid angle or a part of a "polyhedral cone" that starts at the vertex. Given that today Euler is credited with being the "father" of graph theory for having solved the Königsberg bridge problem using graph theory ideas, one might have expected him to see the graph associated with a polyhedron. However, he does not appear to have thought of polyhedra as graphs, a step not exploited until rather later by Cauchy (1789-1856). Because he did not look at the polyhedra he was studying as graphs, Euler attempted to give a proof of the formula based on decomposing a polyhedron into simpler pieces. This attempt does not meet modern standards for a proof. His argument was not correct. However, results proven later make it possible to use Euler's technique to prove the polyhedral formula. Although Euler did not give the first correct proof of his formula, one can not prove conjectures that have not been made. It appears to have been the French mathematician Adrian Marie Legendre (1752-1833) who gave the first proof, though he did not use combinatorial methods. Ironically, this quintessentially combinatorial theorem was given a metric proof by Legendre. Despite using metrical methods the proof is very insightful and clever. While Euler first formulated the polyhedral formula as a theorem about polyhedra, today it is often treated in the more general context of connected graphs (e.g. structures consisting of dots and line segments joining them that are in one piece). Cauchy seems to have been the first person to make this important connection. He in essence noticed that the graph of a convex polyhedron (or more generally what today we would call a polyhedron which is topologically homeomorphic to a sphere) has a planar connected graph. A plane graph is one which has been drawn in the plane in such a way that the edges meet (intersect) only at a vertex. A graph which has the potential to be drawn in this way is known as a planar graph. A typical plane graph is shown in the diagram below. The number of edges at a vertex in such a connected graph (e.g. one piece) is called the degree or valence of the vertex. For example, in the diagram below the vertices on the vertical mirror line of the drawing have valence 3, 4, 5, 4, from top to bottom. Do you think there is some convex 3-dimensional polyhedron whose graph is isomorphic to this graph? (You will be able to answer this question using Steinitz's Theorem, which will be treated in the continuation of this column next month.) An intuitive way to see that convex polyhedra give rise to plane graphs (it's the converse that is the difficult part of Steinitz's Theorem) follows. Cut out one of the polyhedron's faces along the edges that bound the face, and transform the remaining faces into stretchable rubber. Now stretch the polyhedron with one face removed to lay it flat in the plane so that the removed face becomes the unbounded region associated with the resulting plane graph. To help you understand this process look at the box-like polyhedron below, which from a combinatorial point of view is a "cube." The lines which are red are those edges of the box which are "hidden" when the box is viewed from the front. In the diagram below we have highlighted in turquoise the face of the polyhedron that we remove from this box, and now pretend that the remaining structure is made of rubber. Figure 1 We now stretch and flatten out the remaining surface so that the turquoise face becomes the unbounded region of the resulting graph that can be drawn in the plane (see Figure 2 below). The position of the red edges is shown to help you visualize what is happening. Of course the turquoise region should be shown as going off to "infinity," even though it appears only as a bounded face in the original polyhedron. The back face of the box is now shown as having a vertical and horizontal red edge. The plane that contains this back face can be thought of as the plane into which we stretch and flatten out the surface of the box with the turquoise face removed. It's fairly clear that the faces of the original polyhedron are preserved by this process. It is also clear where these faces appear in the plane graph representation of the polyhedron, except perhaps for the turquoise face in Figure 1. It is common for people to forget to count this face when they are counting the faces of a graph drawn in the plane. Plane graphs always have at least one face, the "infinite face." Figure 2 Although there are other ways of seeing that the graphs of convex 3-dimensional polyhedra or graphs on the sphere have planar graphs (i.e. at some stage use stereographic projection), this argument hopefully makes clear that the graphs that can be drawn on a sphere (polyhedra whose graphs are homeomorphic to a sphere) where edges meet only at vertices are the same as the graphs that can be drawn in a plane where edges cross only at vertices. 4. Proofs of the polyhedral formula There are many proofs of the Euler polyhedral formula, and, perhaps, one indication of the importance of the result is that David Eppstein has been able to collect 17 different proofs . In a sense the most straightforward proofs are ones using mathematical induction. One can prove the result by doing induction on either the number of edges, faces or vertices of the graph. The following proof is especially attractive. It follows the development given by H. Rademacher and O. Toeplitz based on an approach of Von Staudt (1798-1867). Suppose that G is a connected graph that has been embedded in the plane. Think of the infinite face of the graph as being the ocean, the edges of the graphs being dikes made of earth, and the faces of the graph other than the infinite face as being dry fields. A typical example of such a connected plane graph is shown below. The blue in this diagram does not go off to infinity but you should think of it as doing that. In the diagram below an edge of one circuit in the graph which bounds a face is shown in red, and this edge is adjacent to the ocean. When the dike represented by this red edge is "breached," the field on the other side of the dike is opened up to the ocean and becomes flooded, as indicated by the turquoise (for illustrative purposes - the color of the water really should be the same as the ocean!). If a connected graph has circuits that bound faces, one can eliminate one edge from each of these circuits in a way that preserves the property that the graph remains connected. There is a one-to-one correspondence between the edges removed to get to a connected graph without circuits (because of the Jordan curve theorem) and the faces of the graph which are not the infinite face. The edges necessary to do this are shown in red below, and the fact that the fields flood is shown in turquoise. The resulting graph of black edges forms a spanning tree of the original graph. A spanning tree of a connected graph is a subgraph which is a tree (e.g. a graph which is connected and has no circuit) and includes all the vertices of the original graph. Thus, a spanning tree of a connected graph has the same number of vertices as the original graph, but fewer edges (unless the original graph is a tree). Note that for any tree, we have: Now, let us count the edges of the original graph by adding the number of edges removed to get a tree, the red edges above, to the number of black edges above. Thus, Now, we know that ERED is equal to one less than the number of faces of the original graph. This is true because we needed one red edge to flood each dry field (e.g. the non-infinite faces), and the graph had one more face, the infinite face. The number of EBLACK edges is given by one less than the number of vertices in the graph, because the black edges are a spanning tree of the original graph. Hence we have: Rearranging the terms in this equation gives: Euler's formula does not hold for any graph embedded on a surface. It holds for graphs embedded so that edges meet only at vertices on a sphere (or in the plane), but not for graphs embedded on the torus, a one-holed donut. The fundamental theorem that is used directly or indirectly in a proof of the Euler polyhedra formula for graphs depends on the Jordan Curve theorem , which states that any simple closed curve divides the plane into three sets, those points on the curve, those inside the curve, and those outside the curve. As interest in Euler's polyhedral formula evolved in the 19th century, many attempts were made to generalize the formula. This research laid the foundation for the development of topology and algebraic topology and for a theory of surfaces. It was soon discovered that some surfaces could be "oriented" in a consistent way only locally but not globally. The famous Möbius (1790-1868) Band serves as an illustration of a surface of this kind. (This surface was independently discovered somewhat earlier by Johann Listing (1808-1882).) This research led to a classification of "orientable surfaces" in terms of what is known as the Euler Characteristic . This concept involves the notion of the genus of a graph: the smallest number of handles g that must be added to the surface of a sphere so that a graph can be embedded on the augmented surface in such a way that edges meet only at vertices. It turns out that every orientable surface in Euclidean space can be thought of as a sphere with a certain number of handles. To construct a handle on a sphere so as to eliminate an edge crossing for a graph on the surface, one can cut out two circles from the surface and join these circles by a cylindrical tube, the handle, as shown below. If two edges in a graph drawn on a surface cross, one can eliminate the crossing by having one of the edges go over a small handle to avoid the crossing. It turns out that if a graph can be embedded without edges meeting anywhere other than at vertices on a surface with g handles, but on no surface with a smaller number of handles, then: Since the sphere has no handles, g = 0 for the sphere, and the formula above reduces to Euler's formula. The connection between Euler's polyhedral formula and the mathematics that led to a theory of surfaces, both the orientable and unorientable surfaces, is still being pursued to this day. In addition, motivated by many problems involving the design of computer chips (integrated circuits), there has been an explosion of research about crossing number problems for graphs in the plane. This involves finding the minimum number of crossings when an abstractly defined graph is drawn in the plane. It turns out that the crossing number depends on whether or not the edges are allowed to be straight or curved. Many of these questions involve the use of Euler's formula to get estimates for the smallest number of crossings. Furthermore, Euler's formula is intimately connected with coloring problems of the faces of a graph. These coloring problems include not only the famous 4-color conjecture but also problems about coloring the faces of graphs on surfaces such as spheres with many handles. Next month, I will continue my discussion of Euler's formula and point out some of the many lovely results that it has led to. Joseph Malkevitch York College (CUNY) malkevitch at york.cuny.edu 5. References Aigner, M. and G. Ziegler, Proofs from the Book, 3rd. edition, Springer-Verlag. Appel, K. ; Haken, W. Every planar map is four colorable. Bull. Amer. Math. Soc. 82 (1976), no. 5, 711--712. MR0424602 (54 #12561) Barnette, David . Trees in polyhedral graphs. Canad. J. Math. 18 1966 731--736. MR0195753 (33 #3951) Barnette, David . $W\sb{v}$ paths on 3-polytopes. J. Combinatorial Theory 7 1969 62--70. MR0248636 (40 #1887) Barnette, David . On $p$-vectors of $3$-polytopes. J. Combinatorial Theory 7 1969 99--103. MR0244851 (39 #6165) Barnette, David W. Projections of 3-polytopes. Israel J. Math. 8 1970 304--308. MR0262923 (41 #7528) Barnette, David . Map coloring, polyhedra, and the four-color problem. The Dolciani Mathematical Expositions, 8. Mathematical Association of America, Washington, DC, 1983. x+168 pp. ISBN: 0-88385-309-4 MR0741465 (85e:05001). Barnette, David W. ; Grünbaum, Branko . On Steinitz's theorem concerning convex $3$-polytopes and on some properties of planar graphs. 1969 The Many Facets of Graph Theory (Proc. Conf., Western Mich. Univ., Kalamazoo, Mich., 1968) pp. 27--40 Springer, Berlin MR0250916 (40 #4148) Barnette, David ; Grünbaum, Branko . Preassigning the shape of a face. Pacific J. Math. 32 1970 299--306. MR0259744 (41 #4377) Beck, Anatole ; Bleicher, Michael N. ; Crowe, Donald W. Excursions into mathematics. The millennium edition. With a foreword by Martin Gardner. A K Peters, Ltd., Natick, MA, 2000. xxvi+499 pp. ISBN: 1-56881-115-2 MR1744676 (2000k:00002) Graph connections. Relationships between graph theory and other areas of mathematics. Edited by Lowell W. Beineke and Robin J. Wilson. Oxford Lecture Series in Mathematics and its Applications, 5. The Clarendon Press, Oxford University Press, New York, 1997. xii+291 pp. ISBN: 0-19-851497-2 MR1634542 (99a:05001) Biggs, Norman L. ; Lloyd, E. Keith ; Wilson, Robin J. Graph theory. 1736--1936. Second edition. The Clarendon Press, Oxford University Press, New York, 1986. xii+239 pp. ISBN: 0-19-853916-9 MR0879117 (88e:01035) Bruggesser, H. ; Mani, P. Shellable decompositions of cells and spheres. Math. Scand. 29 (1971), 197--205 (1972). MR0328944 (48 #7286) Cauchy, A., Recherches sur les Polyèdres - Premier Mémoire, Journal de l'École Polytechnique, 9 (Cahier 16) (1813) 68-86. Dress, Andreas W. M. A combinatorial theory of Grünbaum's new regular polyhedra. II. Complete enumeration. Aequationes Math. 29 (1985), no. 2-3, 222--243. MR0819312 (87e:51028) Eberhard, V., Zur Morphologie der Polyeder, Leipzig, 1891. Euler, L., Elementa doctrinae solidorum, Novi Comm. Acad. Sci. Imp. Petropol., 4 (1752-3) 109-140 (published 1758), Opera Omnia (1) Volume 26, 72-93. Euler, L., Demonstratio nonnullarum insignium proprietatum quibas solida hedris planis inclusa sunt praedita, Novi Comm. Acad. Sci. Imp. Petropol., 4 (1752-1753), 140-160 (published 1758) Opera Omnia (1) Volume 26, 94-108. Fáry, István . On straight line representation of planar graphs. Acta Univ. Szeged. Sect. Sci. Math. 11, (1948). 229--233. MR0026311 (10,136f) Möbius and his band. Mathematics and astronomy in nineteenth-century Germany. Edited by John Fauvel, Raymond Flood and Robin Wilson. The Clarendon Press, Oxford University Press, New York, 1993. vi+172 pp. ISBN: 0-19-853969-X MR1249066 (94m:01023) Federico, Pasquale Joseph . Descartes on polyhedra. A study of the De solidorum elementis . Sources in the History of Mathematics and Physical Sciences, 4. Springer-Verlag, New York-Berlin, 1982. viii+145 pp. ISBN: 0-387-90760-2 MR0680214 (84c:01024) Felsner, Stefan . Convex drawings of planar graphs and the order dimension of 3-polytopes. Order 18 (2001), no. 1, 19--37. MR1844514 (2002f:05061) Fisher, J. C. An existence theorem for simple convex polyhedra. Discrete Math. 7 (1974), 75--97. MR0333984 (48 #12303) Fisher, J. C. Five-valent convex polyhedra with prescribed faces. J. Combinatorial Theory Ser. A 18 (1975), 1--11. MR0365360 (51 #1612) Handbook of discrete and computational geometry. Second edition. Edited by Jacob E. Goodman and Joseph O'Rourke. Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL, 2004. xviii+1539 pp. ISBN: 1-58488-301-4 MR2082993 Grünbaum, B. Some analogues of Eberhard's theorem on convex polytopes. Israel J. Math. 6 1968 398--411 (1969). MR0244854 (39 #6168) Grünbaum, Branko . Planar maps with prescribed types of vertices and faces. Mathematika 16 1969 28--36. MR0245460 (39 #6768) Grünbaum, Branko . Polytopes, graphs, and complexes. Bull. Amer. Math. Soc. 76 1970 1131--1201. MR0266050 (42 #959) Grünbaum, Branko . Polytopal graphs. Studies in graph theory, Part II, pp. 201--224. Studies in Math., Vol. 12, Math. Assoc. Amer., Washington, D. C., 1975. MR0406868 (53 #10654) Grünbaum, Branko . Regular polyhedra---old and new. Aequationes Math. 16 (1977), no. 1-2, 1--20. MR0467497 (57 #7353) Grünbaum, Branko . A convex polyhedron which is not equifacettable. Geombinatorics 10 (2001), no. 4, 165--171. MR1825338 Grünbaum, B., Convex Polytopes, 2nd. edition, Springer-Verlag, New York, 2003 (first edition, 1967, Wiley-Interscience). Grünbaum, B. ; Johnson, N. W. The faces of a regular-faced polyhedron. J. London Math. Soc. 40 1965 577--586. MR0181935 (31 #6161) Grünbaum, Branko ; Malkevitch, Joseph . Pairs of edge-disjoint Hamiltonian circuits. Aequationes Math. 14 (1976), no. 1/2, 191--196. MR0414443 (54 #2544b) Grünbaum, B. ; Motzkin, T. S. Longest simple paths in polyhedral graphs. J. London Math. Soc. 37 1962 152--160. MR0139161 (25 #2598) Grünbaum, Branko ; Motzkin, Theodore S. On polyhedral graphs. 1963 Proc. Sympos. Pure Math., Vol. VII pp. 285--290 Amer. Math. Soc., Providence, R.I. MR0153005 (27 #2976) Grünbaum, B. ; Motzkin, T. S. The number of hexagons and the simplicity of geodesics on certain polyhedra. Canad. J. Math. 15 1963 744--751. MR0154182 (27 #4133) Grünbaum, Branko ; Shephard, G. C. Tilings and patterns. W. H. Freeman and Company, New York, 1987. xii+700 pp. ISBN: 0-7167-1193-1 MR0857454 (88k:52018) Grünbaum, Branko ; Zaks, Joseph . The existence of certain planar maps. Discrete Math. 10 (1974), 93--115. MR0349455 (50 #1949) Haussner, R., Abhandlungen über die regelmäßigen Sternkörper, Ostwald's Klassiker de Exakten Wissenschaften, Nr. 151, Wilhelm Engelmann, Leipzig, 1906. Hilton, Peter ; Pedersen, Jean . Descartes, Euler, Poincaré, Pólya---and polyhedra. Enseign. Math. (2) 27 (1981), no. 3-4, 327--343 (1982). MR0659155 (83g:52008) Jendrol', Stanislav . A new proof of Eberhard's theorem. Acta Fac. Rerum Natur. Univ. Comenian. Math. 31 (1975), 1--9. MR0385718 (52 #6577) Jendro\soft l, Stanislav . On the face-vectors of trivalent convex polyhedra. Math. Slovaca 33 (1983), no. 2, 165--180. MR0699086 (85f:52018) Jendro\soft l, Stanislav . On face vectors and vertex vectors of convex polyhedra. Discrete Math. 118 (1993), no. 1-3, 119--144. MR1230057 (94h:52017) Johnson, Norman W. Convex polyhedra with regular faces. Canad. J. Math. 18 1966 169--200. MR0185507 (32 #2973) Jucovi\v c, Ernest . On the number of hexagons in a map. J. Combinatorial Theory Ser. B 10 1971 232--236. MR0278179 (43 #3910) Jucovi\v c, E. On the face vector of a $4$-valent $3$-polytope. Studia Sci. Math. Hungar. 8 (1973), 53--57. MR0333985 (48 #12304) Kalai, Gil . A simple way to tell a simple polytope from its graph. J. Combin. Theory Ser. A 49 (1988), no. 2, 381--383. MR0964396 (89m:52006) Polytopes---combinatorics and computation. Including papers from the DMV-Seminar "Polytopes and Optimization" held in Oberwolfach, November 1997. Edited by Gil Kalai and Günter M. Ziegler. DMV Seminar, 29. Birkhäuser Verlag, Basel, 2000. vi+225 pp. ISBN: 3-7643-6351-7 MR1785290 (2001d:52002) Klee, Victor . The Euler characteristic in combinatorial geometry. Amer. Math. Monthly 70 1963 119--127. MR0146101 (26 #3627) Lakatos, Imre . Proofs and refutations. The logic of mathematical discovery. Edited by John Worrall and Elie Zahar. Cambridge University Press, Cambridge-New York-Melbourne, 1976. xii+174 pp. MR0479916 (58 #122) Legendre, A., Élements de géometrie, Firmin Didot, Paris, 1794. Lebesgue, H., Remarques sur les deux premières demonstrations du théorèm d'Euler, rélatif aux polyèdres, Bulletin de la Socièté Mathématique de France, 52 (1924) 315-336. Lebesgue, Henri . Quelques conséquences simples de la formule d'Euler. (French) J. Math. Pures Appl. 19, (1940). 27--43. MR0001903 (1,316i) L'Huilier, S., Démonstration immédiate d'un théorème est susceptible, Mém. Acad. Imp. Sci. St. Pétersb., 4 (1811) 271-301. L'Huilier, S., Mémoire sur la Polyédrométrie, Annales de Mathématiques, 3 (1812-1813) 169-189. Malkevitch, Joseph . Properties of planar graphs with uniform vertex and face structure. Memoirs of the American Mathematical Society, No. 99 American Mathematical Society, Providence, R.I. 1970 iv+116 pp. MR0260616 (41 #5240) Malkevitch, Joseph . The first proof of Euler's formula. Mitt. Math. Sem. Giessen No. 165, (1984), 77--82. MR0745872 (86b:01015) Malkevitch, Joseph . Polytopal graphs. Selected topics in graph theory, 3, 169--188, Academic Press, San Diego, CA, 1988. MR1205401 McMullen, Peter . On simple polytopes. Invent. Math. 113 (1993), no. 2, 419--444. MR1228132 (94d:52015) McMullen, Peter ; Schulte, Egon . Abstract regular polytopes. Encyclopedia of Mathematics and its Applications, 92. Cambridge University Press, Cambridge, 2002. xiv+551 pp. ISBN: 0-521-81496-0 MR1965665 (2004a:52020) McMullen, P. ; Shephard, G. C. Convex polytopes and the upper bound conjecture. Prepared in collaboration with J. E. Reeve and A. A. Ball. London Mathematical Society Lecture Note Series, 3. Cambridge University Press, London-New York, 1971. iv+184 pp. MR0301635 (46 #791) Mohar, Bojan ; Thomassen, Carsten . Graphs on surfaces. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD, 2001. xii+291 pp. ISBN: 0-8018-6689-8 MR1844449 (2002e:05050) Motzkin, Theodore S. The evenness of the number of edges of a convex polyhedron. Proc. Nat. Acad. Sci. U.S.A. 52 1964 44--45. MR0173198 (30 #3411) Robertson, Neil ; Sanders, Daniel ; Seymour, Paul ; Thomas, Robin . The four-colour theorem. J. Combin. Theory Ser. B 70 (1997), no. 1, 2--44. MR1441258 (98c:05065) Poinsot, L., Sur les polygones and les polyèdres, J. École Polytech., 4 (cahier 10) (1810) 16-48. Rademacher, Hans ; Toeplitz, Otto . The enjoyment of math. Translated from the second (1933) German edition and with additional chapters by H. Zuckerman. Princeton Science Library. Princeton University Press, Princeton, NJ, 1994. iv+205 pp. ISBN: 0-691-02351-4 MR1300411 (95h:00002) Read, Ronald C. ; Wilson, Robin J. An atlas of graphs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1998. xii+454 pp. ISBN: 0-19-853289-X MR1692656 (2000a:05001) Shaping space. A polyhedral approach. Proceedings of the conference held in Northampton, Massachusetts, April 6--8, 1984. Edited by Marjorie Senechal and George Fleck. Design Science Collection. A Pro Scientia Viva Title. Birkhäuser Boston, Inc., Boston, MA, 1988. xx+284 pp. ISBN: 0-8176-3351-0 MR0937069 (89b:52016) Shephard, G. C. Combinatorial properties of associated zonotopes. Canad. J. Math. 26 (1974), 302--321. MR0362054 (50 #14496) Shephard, G. C. Convex polytopes with convex nets. Math. Proc. Cambridge Philos. Soc. 78 (1975), no. 3, 389--403. MR0390915 (52 #11738) Steinitz, E., Über die Eulerschen Polyederrelationen, Archiv. für Mathematik und Physik, 11 (1906) 86-88. Steinitz, Ernst ; Rademacher, Hans . Vorlesungen über die Theorie der Polyeder unter Einschluss der Elemente der Topologie. Reprint der 1934 Auflage. Grundlehren der Mathematischen Wissenschaften, No. 41. Springer-Verlag, Berlin-New York, 1976. viii+351 pp. MR0430958 (55 #3962) Thomassen, Carsten . Planarity and duality of finite and infinite graphs. J. Combin. Theory Ser. B 29 (1980), no. 2, 244--271. MR0586436 (81j:05056) . Thomassen, Carsten . Kuratowski's theorem. J. Graph Theory 5 (1981), no. 3, 225--241. MR0625064 (83d:05039) Thomassen, C. Plane representations of graphs. Progress in graph theory (Waterloo, Ont., 1982), 43--69, Academic Press, Toronto, ON, 1984. MR0776790 (86g:05032) Thomassen, Carsten . A refinement of Kuratowski's theorem. J. Combin. Theory Ser. B 37 (1984), no. 3, 245--253. MR0769367 (86e:05059) Thomassen, Carsten . Rectilinear drawings of graphs. J. Graph Theory 12 (1988), no. 3, 335--341. MR0956195 (89i:05111) Thomassen, Carsten . The graph genus problem is NP-complete. J. Algorithms 10 (1989), no. 4, 568--576. MR1022112 (91d:68054) Thomassen, Carsten . A link between the Jordan curve theorem and the Kuratowski planarity criterion. Amer. Math. Monthly 97 (1990), no. 3, 216--218. MR1048433 (91g:55005) Thomassen, Carsten . Every planar graph is $5$-choosable. J. Combin. Theory Ser. B 62 (1994), no. 1, 180--181. MR1290638 (95f:05045) Thomassen, Carsten . Trees in triangulations. J. Combin. Theory Ser. B 60 (1994), no. 1, 56--62. MR1256583 (95e:05039) Thomassen, Carsten . $3$-list-coloring planar graphs of girth $5$. J. Combin. Theory Ser. B 64 (1995), no. 1, 101--107. MR1328294 (96c:05070) Tutte, W. T. Convex representations of graphs. Proc. London Math. Soc. (3) 10 1960 304--320. MR0114774 (22 #5593) Von Staudt, G., Geometrie der Lage, Nürenberg, 1847. West, Douglas B. Introduction to graph theory. Prentice Hall, Inc., Upper Saddle River, NJ, 1996. xvi+512 pp. ISBN: 0-13-227828-6 MR1367739 (96i:05001) Wilson, Robert A. Graphs, colourings and the four-colour theorem. Oxford University Press, Oxford, 2002. viii+141 pp. ISBN: 0-19-851061-6 MR1888337 (2003c:05095) Zaks, Joseph . Non-Hamiltonian simple planar graphs. Theory and practice of combinatorics, 255--263, North-Holland Math. Stud., 60, North-Holland, Amsterdam, 1982. MR0806988 (86j:05095) Ziegler, Günter M. Lectures on polytopes. Graduate Texts in Mathematics, 152. Springer-Verlag, New York, 1995. x+370 pp. ISBN: 0-387-94365-X MR1311028 (96a:52011) Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some these materials. Some of the items above can be accessed via the ACM Portal , which also provides bibliographic services. Welcome to the Feature Column! These web essays are designed for those who have already discovered the joys of mathematics as well as for those who may be uncomfortable with mathematics.
2018-04-25 22:21:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6590315699577332, "perplexity": 654.6143539260806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947968.96/warc/CC-MAIN-20180425213156-20180425233156-00181.warc.gz"}
https://math.libretexts.org/Bookshelves/Combinatorics_and_Discrete_Mathematics/Combinatorics_(Morris)/02%3A_Enumeration/03%3A_Permutations_Combinations_and_the_Binomial_Theorem/3.01%3A_Permutations
# 3.1: Permutations We begin by looking at permutations, because these are a straightforward application of the product rule. The word “permutation” means a rearrangement, and this is exactly what a permutation is: an ordering of a number of distinct items in a line. Sometimes even though we have a large number of distinct items, we want to single out a smaller number and arrange those into a line; this is also a sort of permutation. Definition: Permutation A permutation of $$n$$ distinct objects is an arrangement of those objects into an ordered line. If $$1 ≤ r ≤ n$$ (and $$r$$ is a natural number) then an r-permutation of $$n$$ objects is an arrangement of $$r$$ of the $$n$$ objects into an ordered line. So a permutation involves choosing items from a finite population in which every item is uniquely identified, and keeping track of the order in which the items were chosen. Since we are studying enumeration, it shouldn’t surprise you that what we’ll be asking in this situation is how many permutations there are, in a variety of circumstances. Let’s begin with an example in which we’ll calculate the number of $$3$$-permutations of ten objects (or in this case, people). Example $$\PageIndex{1}$$ Ten athletes are competing for Olympic medals in women’s speed skating (1000 metres). In how many ways might the medals end up being awarded? Solution There are three medals: gold, silver, and bronze, so this question amounts to finding the number of $$3$$-permutations of the ten athletes (the first person in the $$3$$-permutation is the one who gets the gold medal, the second gets the silver, and the third gets the bronze). To solve this question, we’ll apply the product rule, where the aspects that can vary are the winners of the gold, silver, and bronze medals. We begin by considering how many different athletes might get the gold medal. The answer is that any of the ten athletes might get that medal. No matter which of the athletes gets the gold medal, once that is decided we move our consideration to the silver medal. Since one of the athletes has already been awarded the gold medal, only nine of them remain in contention for the silver medal, so for any choice of athlete who wins gold, the number of choices for who gets the silver medal is nine. Finally, with the gold and silver medalists out of contention for the bronze, there remain eight choices for who might win that medal. Thus, the total number of ways in which the medals might be awarded is $$10 · 9 · 8 = 720$$. We can use the same reasoning to determine a general formula for the number of $$r$$-permutations of $$n$$ objects: Theorem $$\PageIndex{1}$$ The number of $$r$$-permutations of $$n$$ objects is $$n(n − 1). . .(n − r + 1)$$ Proof There are $$n$$ ways in which the first object can be chosen (any of the $$n$$ objects). For each of these possible choices, there remain $$n-1$$ objects to choose for the second object, etc. Note We use $$n!$$ to denote the number of permutations of $$n$$ objects, so $n! = n(n − 1). . . 1$. By convention, we define $$0! = 1$$. Definition: Factorial We read $$n!$$ as “$$n$$ factorial,” so $$n$$ factorial is $$n(n − 1). . . 1$$. Thus, the number of r-permutations of $$n$$ objects can be re-written as $$\dfrac{n!}{(n − r)!}$$. When $$n = r$$ this gives $$\dfrac{n!}{0!} = n!$$, making sense of our definition that $$0! = 1$$. Example $$\PageIndex{2}$$ There are 36 people at a workshop. They are seated at six round tables of six people each for lunch. The Morris family (of three) has asked to be seated together (side-by-side). How many different seating arrangements are possible at the Morris family’s table? Solution First, there are $$3! = 6$$ ways of arranging the order in which the three members of the Morris family sit at the table. Since the tables are round, it doesn’t matter which specific seats they take, only the order in which they sit matters. Once the Morris family is seated, the three remaining chairs are uniquely determined by their positions relative to the Morris family (one to their right, one to their left, and one across from them). There are 33 other people at the conference; we need to choose three of these people and place them in order into the three vacant chairs. There are $$\dfrac{33!}{(33 − 3)!} = \dfrac{33!}{30!}$$ ways of doing this. In total, there are $$6 \left( \dfrac{33!}{30!} \right) = 196,416$$ different seating arrangements possible at the Morris family’s table. By adjusting the details of the preceding example, it can require some quite different thought processes to find the answer. Example $$\PageIndex{3}$$ At the same workshop, there are three round dinner tables, seating twelve people each. The Morris family members (Joy, Dave, and Harmony) still want to sit at the same table, but they have decided to spread out (so no two of them should be side-by-side) to meet more people. How many different seating arrangements are possible at the Morris family’s table now? Solution Let’s begin by arbitrarily placing Joy somewhere at the table, and seating everyone else relative to her. This effectively distinguishes the other eleven seats. Next, we’ll consider the nine people who aren’t in Joy’s family, and place them (standing) in an order clockwise around the table from her. There are $$\dfrac{33!}{(33 − 9)!}$$ ways to do this. Before we actually assign seats to these nine people, we decide where to slot in Dave and Harmony amongst them. (In the above diagram, the digits $$1$$ through $$9$$ represent the nine other people who are sitting at the Morris family’s table, and the $$J$$ represents Joy’s position.) Dave can sit between any pair of non-Morrises who are standing beside each other; that is, in any of the spots marked by small black dots in the diagram above. Thus, there are eight possible choices for where Dave will sit. Now Harmony can go into any of the remaining seven spots marked by black dots. Once Dave and Harmony are in place, everyone shifts to even out the circle (so the remaining black dots disappear), and takes their seats in the order determined. We have shown that there are $$\dfrac{33!}{24!} · 8 · 7$$ possible seating arrangements at the Morris table. That’s a really big number, and it’s quite acceptable to leave it in this format. However, in case you find another way to work out the problem and want to check your answer, the total number is $$783$$, $$732$$, $$837$$, $$888$$, $$000$$. Exercise $$\PageIndex{1}$$ Use what you have learned about permutations to work out the following problems. The sum and/or product rule may also be required. 1. Six people, all of whom can play both bass and guitar, are auditioning for a band. There are two spots available: lead guitar, and bass player. In how many ways can the band be completed? 2. Your friend Garth tries out for a play. After the auditions, he texts you that he got one of the parts he wanted, and that (including him) nine people tried out for the five roles. You know that there were two parts that interested him. In how many ways might the cast be completed (who gets which role matters)? 3. You are creating an $$8$$-character password. You are allowed to use any of the $$26$$ lowercase characters, and you must use exactly one digit (from $$0$$ through $$9$$) somewhere in the password. You are not allowed to use any character more than once. How many different passwords can you create? 4. How many $$3$$-letter “words” (strings of characters, they don’t actually have to be words) can you form from the letters of the word STRONG? How many of those words contain an s? (You may not use a letter more than once.) 5. How many permutations of $$\{0, 1, 2, 3, 4, 5, 6\}$$ have no adjacent even digits? For example, a permutation like 5034216 is not allowed because $$4$$ and $$2$$ are adjacent.
2022-01-27 20:20:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6522488594055176, "perplexity": 404.3353901223706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00124.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-12-parametric-equations-polar-coordinates-and-conic-sections-chapter-review-exercises-page-638/21
## Calculus (3rd Edition) The speed function: $f\left( t \right) = \frac{{ds}}{{dt}} = \sqrt {3 + 2\cos t - 2\sin t}$. The maximal speed is $f\left( { - \frac{\pi }{4}} \right) = \sqrt {3 + 2\sqrt 2 } \simeq 2.414$ We have $x\left( t \right) = \sin t + t$, ${\ \ }$ $x'\left( t \right) = \cos t + 1$, $y\left( t \right) = \cos t + t$, ${\ \ }$ $y'\left( t \right) = - \sin t + 1$. By Theorem 2 of Section 12.2, the speed along $c\left(t\right)$ is $\frac{{ds}}{{dt}} = \sqrt {{{\left( {\cos t + 1} \right)}^2} + {{\left( { - \sin t + 1} \right)}^2}}$ $\frac{{ds}}{{dt}} = \sqrt {{{\cos }^2}t + 2\cos t + 1 + {{\sin }^2}t - 2\sin t + 1}$ $\frac{{ds}}{{dt}} = \sqrt {3 + 2\cos t - 2\sin t}$ Write $f\left( t \right) = \frac{{ds}}{{dt}} = \sqrt {3 + 2\cos t - 2\sin t}$. To find the maximal speed we take the derivatives of $f\left(t\right)$: $f'\left( t \right) = \frac{{ - 2\cos t - 2\sin t}}{{2\sqrt {3 + 2\cos t - 2\sin t} }}$ $f{\rm{''}}\left( t \right) = \frac{{ - 3 - 3\cos t + 3\sin t + \sin 2t}}{{{{\left( {3 + 2\cos t - 2\sin t} \right)}^{3/2}}}}$ We find the critical point of $f\left(t\right)$ by solving $f'\left( t \right) = 0$: $f'\left( t \right) = \frac{{ - 2\cos t - 2\sin t}}{{2\sqrt {3 + 2\cos t - 2\sin t} }} = 0$ This occurs when $- 2\cos t - 2\sin t = 0$. So, $\sin t = - \cos t$, ${\ \ \ }$ $\tan t = - 1$. The solutions are $t = - \frac{\pi }{4} + \pi n$, for $n = 0, \pm 1. \pm 2, \pm 3,...$ $f\left(t\right)$ is maximal if $f{\rm{''}}\left( t \right) = \frac{{ - 3 - 3\cos t + 3\sin t + \sin 2t}}{{{{\left( {3 + 2\cos t - 2\sin t} \right)}^{3/2}}}} < 0$ At critical points, the denominator is always positive, so we solve the equation: $- 3 - 3\cos t + 3\sin t + \sin 2t < 0$ Substituting $t = - \frac{\pi }{4} + \pi n$ in the equation we get $- 3 - 3\cos \left( { - \frac{\pi }{4} + \pi n} \right) + 3\sin \left( { - \frac{\pi }{4} + \pi n} \right) + \sin \left( { - \frac{\pi }{2} + 2\pi n} \right) < 0$ $- 3 - 3\cos \left( { - \frac{\pi }{4}} \right)\cos \left( {\pi n} \right) + 3\sin \left( { - \frac{\pi }{4}} \right)\cos \left( {\pi n} \right) + \sin \left( { - \frac{\pi }{2}} \right) < 0$ $- 3 - \frac{3}{2}\sqrt 2 \cos \left( {\pi n} \right) - \frac{3}{2}\sqrt 2 \cos \left( {\pi n} \right) - 1 < 0$ The left-hand side is negative if $n$ is even. So, the the particle's speed is maximal when $t = - \frac{\pi }{4} + \pi k$ ${\ \ }$ for $k = 0, \pm 2, \pm 4,...$ We choose $t = - \frac{\pi }{4}$ and substitute it in the speed function $f\left( t \right) = \frac{{ds}}{{dt}} = \sqrt {3 + 2\cos t - 2\sin t}$. The maximal speed is $f\left( { - \frac{\pi }{4}} \right) = \frac{{ds}}{{dt}}{|_{t = - \pi /4}} = \sqrt {3 + 2\sqrt 2 } \simeq 2.414$
2021-04-21 08:03:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9753652215003967, "perplexity": 87.98272559661723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00213.warc.gz"}
https://intomath.org/area-perimeter
CA$2.00 We are often required to calculate the area and perimeter of objects or spaces in 2 dimensions (flat surface like a piece of paper or a field). In real life, this skill is helpful when designing living spaces or calculating the amount of fencing required to enclose a certain piece of land, etc. Perimeter is the length and width around the object – the sum of all outer sides. Area – the 2 dimensional space surrounded by the sides of the shape. Grade 7 Workbook (22 pages with answers) CA$4.00 The two shapes we are looking at in this lesson are rectangle and square. A square is also a rectangle, but a regular rectangle. “Regular” means all sides and angles are equal. We develop formulas for both area and perimeter and analyze different situations where those formulas can be applied. It is important to not only memorize the formulas or substitute numbers into them. It is important to understand where the formulas are coming from, how they were derived. Making connections and understanding relationships is one of the key components of being successful in math.
2020-10-28 13:47:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7425743341445923, "perplexity": 317.13487347737924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00254.warc.gz"}
https://www.doubtnut.com/question-answer/factorise-x2-a2-1-ax-1-644442476
Home > English > Class 9 > Maths > Chapter > Factorisation > Factorise : <br> x^(2) + (a^(2... Updated On: 27-06-2022 Get Answer to any question, just click a photo and upload the photo and get the answer completely free, Text Solution Answer : (x-(1)/(a)) (x+a) Step by step solution by experts to help you in doubt clearance & scoring excellent marks in exams. Transcript hello students in this question we have to factorise the equation X square + a square minus one upon a into x minus 1 by 4 tractor rising it let's device what is factorization of factorization is a method of breaking and equation in two factors such that the multiplication of fact that factors will result in the original equation so in this question you have to factorise this equation so equation is X square + a square minus 1 whole Upon A into x minus 1 find the next step will open this bracket and multiply with X so we have a square by a into x minus 1 by 2 into x minus 1 so this a will be cancelled out with one of the se now we have X square + X - 1 by x minus 1 if we take X common from here the remaining part will be X + A minus we will take one by a common from this so we have left parties X hundred taking minus 1 by 1 this will become Plus and it is not present here so it will be multiplied with a so now we can write it in two factor forms x minus one by X + these are the factors of equation X square + a square minus one upon a x x minus one because when we multiply these two factors we will get the original equation so hence this is the answer thank you students
2022-07-07 10:56:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7541857957839966, "perplexity": 634.0229456841611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00697.warc.gz"}
https://friedlander.io/publications/2018-perturbation-view-level-set/
# A perturbation view of level-set methods for convex optimization R. Estrin, M. P. Friedlander arXiv:2001.06511, 2020 ## Abstract Level-set methods for convex optimization are predicated on the idea that certain problems can be parameterized so that their solutions can be recovered as the limiting process of a root-finding procedure. This idea emerges time and again across a range of algorithms for convex problems. Here we demonstrate that strong duality is a necessary condition for the level-set approach to succeed. In the absence of strong duality, the level-set method identifies ε-infeasible points that do not converge to a feasible point as ε tends to zero. The level-set approach is also used as a proof technique for establishing sufficient conditions for strong duality that are different from Slater’s constraint qualification. ## BiBTeX @misc{2001.06511, Author = {Ron Estrin and Michael P. Friedlander}, Title = {A perturbation view of level-set methods for convex optimization}, Year = {2020}, Eprint = {arXiv:2001.06511} }
2020-01-29 20:24:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5632930994033813, "perplexity": 738.4897454311225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251802249.87/warc/CC-MAIN-20200129194333-20200129223333-00071.warc.gz"}
http://stackoverflow.com/questions/11810102/eclipse-juno-startup-error-log-file/11846566
# Eclipse Juno Startup error log file I've been using Eclipse Juno perfectly fine until recently when I started it up I received the following error: An error has occurred. See the log file C:\Users\Quinn\workspace.metadata.log I've relatively new to programming so any help in layman's terms would be greatly appreciated. Thanks! !SESSION 2012-08-04 12:08:30.616 ----------------------------------------------- eclipse.buildId=I20120608-1400 java.version=1.6.0_33 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.core.resources 2 10035 2012-08-04 12:08:32.307 !MESSAGE The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes. !ENTRY org.eclipse.equinox.preferences 4 2 2012-08-04 12:08:34.434 !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.equinox.preferences". !STACK 0 java.lang.ExceptionInInitializerError at org.eclipse.wb.internal.core.preferences.PreferenceInitializer.initializeDefaultPreferences(PreferenceInitializer.java:50) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper$1.run(PreferenceServiceRegistryHelper.java:300) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper.runInitializer(PreferenceServiceRegistryHelper.java:303) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper.applyRuntimeDefaults(PreferenceServiceRegistryHelper.java:131) at org.eclipse.core.internal.preferences.PreferencesService.applyRuntimeDefaults(PreferencesService.java:368) at org.eclipse.core.internal.preferences.DefaultPreferences.applyRuntimeDefaults(DefaultPreferences.java:166) at org.eclipse.core.internal.preferences.DefaultPreferences.load(DefaultPreferences.java:237) at org.eclipse.core.internal.preferences.EclipsePreferences.create(EclipsePreferences.java:410) at org.eclipse.core.internal.preferences.EclipsePreferences.internalNode(EclipsePreferences.java:663) at org.eclipse.core.internal.preferences.EclipsePreferences.node(EclipsePreferences.java:805) at org.eclipse.core.internal.preferences.AbstractScope.getNode(AbstractScope.java:38) at org.eclipse.core.runtime.preferences.DefaultScope.getNode(DefaultScope.java:76) at org.eclipse.ui.preferences.ScopedPreferenceStore.getDefaultPreferences(ScopedPreferenceStore.java:250) at org.eclipse.ui.preferences.ScopedPreferenceStore.getPreferenceNodes(ScopedPreferenceStore.java:285) at org.eclipse.ui.preferences.ScopedPreferenceStore.internalGet(ScopedPreferenceStore.java:475) at org.eclipse.ui.preferences.ScopedPreferenceStore.getBoolean(ScopedPreferenceStore.java:387) at org.eclipse.wb.internal.core.editor.describer.JavaSourceUiDescriber.isGUISource(JavaSourceUiDescriber.java:65) at org.eclipse.wb.internal.core.editor.describer.JavaSourceUiDescriber.describe(JavaSourceUiDescriber.java:52) at org.eclipse.core.internal.content.ContentTypeCatalog.describe(ContentTypeCatalog.java:218) at org.eclipse.core.internal.content.ContentTypeCatalog.collectMatchingByContents(ContentTypeCatalog.java:190) at org.eclipse.core.internal.content.ContentTypeCatalog.internalFindContentTypesFor(ContentTypeCatalog.java:403) at org.eclipse.core.internal.content.ContentTypeCatalog.internalFindContentTypesFor(ContentTypeCatalog.java:450) at org.eclipse.core.internal.content.ContentTypeCatalog.getDescriptionFor(ContentTypeCatalog.java:346) at org.eclipse.core.internal.content.ContentTypeCatalog.getDescriptionFor(ContentTypeCatalog.java:360) at org.eclipse.core.internal.content.ContentTypeMatcher.getDescriptionFor(ContentTypeMatcher.java:86) at org.eclipse.core.internal.resources.ContentDescriptionManager.readDescription(ContentDescriptionManager.java:445) at org.eclipse.core.internal.resources.ContentDescriptionManager.getDescriptionFor(ContentDescriptionManager.java:355) at org.eclipse.core.internal.resources.File.internalGetCharset(File.java:246) at org.eclipse.core.internal.resources.File.getCharset(File.java:207) at org.eclipse.core.internal.resources.File.getCharset(File.java:194) at org.eclipse.jdt.internal.core.util.Util.getResourceContentsAsCharArray(Util.java:1156) at org.eclipse.jdt.internal.core.builder.SourceFile.getContents(SourceFile.java:79) at org.eclipse.jdt.internal.compiler.ReadManager.run(ReadManager.java:173) at java.lang.Thread.run(Unknown Source) Caused by: org.eclipse.swt.SWTException: Invalid thread access at org.eclipse.swt.SWT.error(SWT.java:4361) at org.eclipse.swt.SWT.error(SWT.java:4276) at org.eclipse.swt.SWT.error(SWT.java:4247) at org.eclipse.swt.widgets.Display.error(Display.java:1258) at org.eclipse.swt.widgets.Display.checkDevice(Display.java:764) at org.eclipse.swt.widgets.Display.getSystemFont(Display.java:2459) at org.eclipse.jface.preference.PreferenceConverter.<clinit>(PreferenceConverter.java:84) ... 35 more !ENTRY org.eclipse.osgi 4 0 2012-08-04 12:08:35.102 !MESSAGE Application error !STACK 1 java.lang.NoClassDefFoundError: Could not initialize class org.eclipse.jface.preference.PreferenceConverter at org.eclipse.ui.internal.themes.ThemeElementHelper.installFont(ThemeElementHelper.java:103) at org.eclipse.ui.internal.themes.ThemeElementHelper.populateRegistry(ThemeElementHelper.java:59) at org.eclipse.ui.internal.Workbench$27.runWithException(Workbench.java:1550) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4144) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3761) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2478) at org.eclipse.ui.internal.Workbench.access$7(Workbench.java:2386) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:583) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:540) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574) at org.eclipse.equinox.launcher.Main.run(Main.java:1407) !ENTRY com.android.ide.eclipse.adt 4 0 2012-08-04 12:08:36.083 !MESSAGE parseSdkContent failed !STACK 0 java.lang.NullPointerException at com.android.ide.eclipse.adt.AdtPlugin.getDisplay(AdtPlugin.java:334) at com.android.ide.eclipse.adt.AdtPlugin$7.run(AdtPlugin.java:1422) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54) !SESSION 2012-08-04 12:25:48.967 ----------------------------------------------- eclipse.buildId=I20120608-1400 java.version=1.6.0_33 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.core.resources 2 10035 2012-08-04 12:25:50.607 !MESSAGE The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes. !ENTRY org.eclipse.equinox.preferences 4 2 2012-08-04 12:25:52.846 !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.equinox.preferences". !STACK 0 java.lang.ExceptionInInitializerError at org.eclipse.wb.internal.core.preferences.PreferenceInitializer.initializeDefaultPreferences(PreferenceInitializer.java:50) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper$1.run(PreferenceServiceRegistryHelper.java:300) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper.runInitializer(PreferenceServiceRegistryHelper.java:303) at org.eclipse.core.internal.preferences.PreferenceServiceRegistryHelper.applyRuntimeDefaults(PreferenceServiceRegistryHelper.java:131) at org.eclipse.core.internal.preferences.PreferencesService.applyRuntimeDefaults(PreferencesService.java:368) at org.eclipse.core.internal.preferences.DefaultPreferences.applyRuntimeDefaults(DefaultPreferences.java:166) at org.eclipse.core.internal.preferences.DefaultPreferences.load(DefaultPreferences.java:237) at org.eclipse.core.internal.preferences.EclipsePreferences.create(EclipsePreferences.java:410) at org.eclipse.core.internal.preferences.EclipsePreferences.internalNode(EclipsePreferences.java:663) at org.eclipse.core.internal.preferences.EclipsePreferences.node(EclipsePreferences.java:805) at org.eclipse.core.internal.preferences.AbstractScope.getNode(AbstractScope.java:38) at org.eclipse.core.runtime.preferences.DefaultScope.getNode(DefaultScope.java:76) at org.eclipse.ui.preferences.ScopedPreferenceStore.getDefaultPreferences(ScopedPreferenceStore.java:250) at org.eclipse.ui.preferences.ScopedPreferenceStore.getPreferenceNodes(ScopedPreferenceStore.java:285) at org.eclipse.ui.preferences.ScopedPreferenceStore.internalGet(ScopedPreferenceStore.java:475) at org.eclipse.ui.preferences.ScopedPreferenceStore.getBoolean(ScopedPreferenceStore.java:387) at org.eclipse.wb.internal.core.editor.describer.JavaSourceUiDescriber.isGUISource(JavaSourceUiDescriber.java:65) at org.eclipse.wb.internal.core.editor.describer.JavaSourceUiDescriber.describe(JavaSourceUiDescriber.java:52) at org.eclipse.core.internal.content.ContentTypeCatalog.describe(ContentTypeCatalog.java:218) at org.eclipse.core.internal.content.ContentTypeCatalog.collectMatchingByContents(ContentTypeCatalog.java:190) at org.eclipse.core.internal.content.ContentTypeCatalog.internalFindContentTypesFor(ContentTypeCatalog.java:403) at org.eclipse.core.internal.content.ContentTypeCatalog.internalFindContentTypesFor(ContentTypeCatalog.java:450) at org.eclipse.core.internal.content.ContentTypeCatalog.getDescriptionFor(ContentTypeCatalog.java:346) at org.eclipse.core.internal.content.ContentTypeCatalog.getDescriptionFor(ContentTypeCatalog.java:360) at org.eclipse.core.internal.content.ContentTypeMatcher.getDescriptionFor(ContentTypeMatcher.java:86) at org.eclipse.core.internal.resources.ContentDescriptionManager.readDescription(ContentDescriptionManager.java:445) at org.eclipse.core.internal.resources.ContentDescriptionManager.getDescriptionFor(ContentDescriptionManager.java:355) at org.eclipse.core.internal.resources.File.internalGetCharset(File.java:246) at org.eclipse.core.internal.resources.File.getCharset(File.java:207) at org.eclipse.core.internal.resources.File.getCharset(File.java:194) at org.eclipse.jdt.internal.core.util.Util.getResourceContentsAsCharArray(Util.java:1156) at org.eclipse.jdt.internal.core.builder.SourceFile.getContents(SourceFile.java:79) at org.eclipse.jdt.internal.compiler.ReadManager.run(ReadManager.java:173) at java.lang.Thread.run(Unknown Source) Caused by: org.eclipse.swt.SWTException: Invalid thread access at org.eclipse.swt.SWT.error(SWT.java:4361) at org.eclipse.swt.SWT.error(SWT.java:4276) at org.eclipse.swt.SWT.error(SWT.java:4247) at org.eclipse.swt.widgets.Display.error(Display.java:1258) at org.eclipse.swt.widgets.Display.checkDevice(Display.java:764) at org.eclipse.swt.widgets.Display.getSystemFont(Display.java:2459) at org.eclipse.jface.preference.PreferenceConverter.<clinit>(PreferenceConverter.java:84) ... 35 more !ENTRY org.eclipse.osgi 4 0 2012-08-04 12:25:53.650 !MESSAGE Application error !STACK 1 java.lang.NoClassDefFoundError: Could not initialize class org.eclipse.jface.preference.PreferenceConverter at org.eclipse.ui.internal.themes.ThemeElementHelper.installFont(ThemeElementHelper.java:103) at org.eclipse.ui.internal.themes.ThemeElementHelper.populateRegistry(ThemeElementHelper.java:59) at org.eclipse.ui.internal.Workbench$27.runWithException(Workbench.java:1550) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4144) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3761) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2478) at org.eclipse.ui.internal.Workbench.access$7(Workbench.java:2386) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:583) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:540) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574) at org.eclipse.equinox.launcher.Main.run(Main.java:1407) !ENTRY com.android.ide.eclipse.adt 4 0 2012-08-04 12:25:54.453 !MESSAGE parseSdkContent failed !STACK 0 java.lang.NullPointerException at com.android.ide.eclipse.adt.AdtPlugin.getDisplay(AdtPlugin.java:334) at com.android.ide.eclipse.adt.AdtPlugin$7.run(AdtPlugin.java:1422) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54) - what is in the log file? – Mark Aug 4 '12 at 16:16 Show C:\Users\Quinn\workspace.metadata.log – mishadoff Aug 4 '12 at 16:16 This isn't just a Juno error - I got the same error with Kepler. Gabriel's accepted answer below solved it. – 8bitjunkie Feb 17 '14 at 12:41 I am having the same problem with Eclipse Juno, which I use for Android development. As a workaround for Windows, I launch it from the console like this: C:\path\to\eclipse\eclipse -clean As hinted here, you can also try to delete the file YOUR_WORKSPACE/.metadata/.plugins/org.eclipse.core.resources/.snap This also worked for me and (so far) seemed to fix the problem permanently, unlike the first option. - Thanks for the help but unfortunately it is still not working at all. – Q Liu Aug 9 '12 at 3:59 Removing the .snap file solved the issue for me as well. Thank you Gabriel. – Sergio del Amo Aug 18 '12 at 9:17 I wasn not able to fix this using the above given solution. I had to reinstall eclipse and import all my plugins – pravat Aug 2 '13 at 18:59 There is no .snap ! And clean not working. so what can i do now?! – Mr.Hyde Mar 2 '14 at 16:21 Hi, Mr. Hyde. If you're using Eclipse Kepler, it appears that there is indeed no .snap file anymore. Since I cannot reproduce your problem (it all works well for me), could you tell me what error you are getting exactly? If you try @jayeffkay's answer below, does it fix your problem? – Gabriel Mar 2 '14 at 20:21 I got my workspace starting again by deleting org.eclipse.e4.workbench from the .plugins folder. Then starting with the -clean flag. - This removed all my projects in the workspace. You could as well just delete the .metadata folder... – Daverix Jun 17 '13 at 18:53 Thank you so much! This worked for me, but not the previous tips I've seen here! I just lost one class, don't really understand why... But it's an easy one to "recode" – rsy Jun 27 '14 at 9:07 In Mac you need to delete this file of your workspace .metadata/.plugins/org.eclipse.core.resources/.snap - Delete: /[PATH TO WORKSPACE]/.metadata/.plugins/org.eclipse.core.resources It will get eclipse back to work - it didn't work for me. Eclipse did start but without any project libraries. – Kiran S Sep 30 '13 at 2:35 Cut the plugin folder out org.eclipse.core.resources [may be to the desktop] and start the eclipse. Now, you will see the eclipse open with out the projects. Now close the eclipse and Copy the plugin folder back again and start the eclipse. Now you should see all your projects with eclipse working open. - Tried the other solutions but eclipse still did not start. Tried this solution and success! Thanks @NareshVankuru – Stevko Sep 8 '14 at 19:12 Go to your eclipse workspace and locate the .snap file in the given path .metadata/.plugins/org.eclipse.core.resources/.snap deleting the snap file should do the magic, what are .snap files well AFAIK .snap files contains all the changes in workspace state of the IDE during the runtime and are used for eclipse crash recovery plan. - I have deleted metadata folder and restart eclipse and imported all project from same workspace folder. Problem solved. - Had the same error on Windows 8. Running eclipse as Administrator solved the problem. - Just delete the org.eclipse.core.resources folder at Workspace.metadata.plugins. But you will have to import all projects to the eclipse again hence we delete the resource handling folder of eclipse. Hope this helps - Simple delete .metadata file in your worksapce and again create a workspace using the same path. it worked for me. - ## protected by Community♦Dec 5 '15 at 19:24 Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site.
2016-02-09 20:40:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.705613374710083, "perplexity": 7392.319390185235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157472.18/warc/CC-MAIN-20160205193917-00125-ip-10-236-182-209.ec2.internal.warc.gz"}
https://labs.tib.eu/arxiv/?author=Kirill%20Pushkin
• ### Reflectance dependence of polytetrafluoroethylene on thickness for xenon scintillation light(1608.01717) March 27, 2017 physics.ins-det Many rare event searches including dark matter direct detection and neutrinoless double beta decay experiments take advantage of the high VUV reflective surfaces made from polytetrafluoroethylene (PTFE) reflector materials to achieve high light collection efficiency in their detectors. As the detectors have grown in size over the past decade, there has also been an increased need for ever thinner detector walls without significant loss in reflectance to reduce dead volumes around active noble liquids, outgassing, and potential backgrounds. We report on the experimental results to measure the dependence of the reflectance on thickness of two PTFE samples at wavelengths near 178 nm. No change in reflectance was observed as the thickness of a cylindrically shaped PTFE vessel immersed in liquid xenon was varied between 1 mm and 9.5 mm. • ### Dark Matter Search Results from the Commissioning Run of PandaX-II(1602.06563) June 5, 2016 hep-ex, physics.ins-det We present the results of a search for WIMPs from the commissioning run of the PandaX-II experiment located at the China Jinping underground Laboratory. A WIMP search data set with an exposure of 306$\times$19.1 kg-day was taken, while its dominant $^{85}$Kr background was used as the electron recoil calibration. No WIMP candidates are identified, and a 90\% upper limit is set on the spin-independent elastic WIMP-nucleon cross section with a lowest excluded cross section of 2.97$\times$10$^{-45}$~cm$^2$ at a WIMP mass of 44.7~GeV/c$^2$. • ### Low-mass dark matter search results from full exposure of PandaX-I experiment(1505.00771) We report the results of a weakly-interacting massive particle (WIMP) dark matter search using the full 80.1\;live-day exposure of the first stage of the PandaX experiment (PandaX-I) located in the China Jin-Ping Underground Laboratory. The PandaX-I detector has been optimized for detecting low-mass WIMPs, achieving a photon detection efficiency of 9.6\%. With a fiducial liquid xenon target mass of 54.0\,kg, no significant excess event were found above the expected background. A profile likelihood analysis confirms our earlier finding that the PandaX-I data disfavor all positive low-mass WIMP signals reported in the literature under standard assumptions. A stringent bound on the low mass WIMP is set at WIMP mass below 10\,GeV/c$^2$, demonstrating that liquid xenon detectors can be competitive for low-mass WIMP searches. • ### First dark matter search results from the PandaX-I experiment(1408.5114) We report on the first dark-matter (DM) search results from PandaX-I, a low threshold dual-phase xenon experiment operating at the China Jinping Underground Laboratory. In the 37-kg liquid xenon target with 17.4 live-days of exposure, no DM particle candidate event was found. This result sets a stringent limit for low-mass DM particles and disfavors the interpretation of previously-reported positive experimental results. The minimum upper limit, $3.7\times10^{-44}$\,cm$^2$, for the spin-independent isoscalar DM-particle-nucleon scattering cross section is obtained at a DM-particle mass of 49\,GeV/c$^2$ at 90\% confidence level. • ### Measurements of W-value, Mobility and Gas Gain in Electronegative Gaseous CS2 and CS2 Gas Mixtures(0811.4194) W-value, mobility and gas gain measurements have been carried out in electronegative gaseous CS2 and CS2 gas mixtures at a pressure of 40 Torr making use of a single electron proportional counter method. The experimental results have revealed that W-values obtained for CS2 (40 Torr), CS2-CF4 (30 Torr - 10 Torr), CS2-Ar (35 Torr - 5 Torr), CS2-Ne (35 Torr - 5 Torr) and CS2-He (35 Torr - 5 Torr) gas mixtures are 21.1+/-2.7(stat)+/-3(syst) eV, 16.4+/-1.8(stat)+/-2(syst) eV, 13.1+/-1.5(stat)+/-2(syst) eV, 16.3+/-3.0(stat)+/-3(syst) eV and 17.3+/-3.0(stat)+/-3(syst) eV. The mobility for all CS2 gas mixtures was found to be slightly greater and the gas gain was found to be significantly greater relative to pure CS2.
2020-10-22 04:14:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.636483371257782, "perplexity": 5377.544676631255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00172.warc.gz"}
https://www.gamedev.net/forums/topic/550045-decomposition-of-a-2d-spectrum/
• 12 • 12 • 9 • 10 • 13 # Decomposition of a 2D spectrum This topic is 3082 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I have a problem for those with experience in 2D signal processing. Suppose I have a large 2D hermitian spectrum representing a 2D real signal (S). The straightforward way to calculate the signal S is by applying an IFFT to the entire spectrum. What I am trying to do instead is to subdivide the spectrum into non-overlapping bands, calculate the corresponding signals (Si) again with an IFFT, and then build S by combining Si. A schematic representation of the spectrum, subdivided into numbered bands, is (excuse my bad ASCII art): ^ | |-----------| | 4 | 3 | |_____|_____| | | | | 1 | 2 | |---------------------> | | | | 5 | 6 | |-----------| | 7 | 8 | |-----------| The left side of the spectrum is the conjugate of the right side and is ignored. Let's start with the bands (1,5). We can calculate the signal (S1+S5) by zeroing all other bands and calculating an IFFT of the spectrum (1,5), with half the size as the full one. This is the simple part. The problem comes in applying the same approach to the remaining couples of high-frequency bands - for example { (4,7), (2,6), (3,8) } . First, they have to be shifted to the low-frequencies (previously occupied by (1,5)) in order to use a half-sized IFFT. Second, the corresponding signal has to be brought back to the correct frequencies before being combined with the others to reconstruct the original signal S. But how?? I can't figure out the correct way to do this, considering the implicit periodic nature of the spectrum and signals, the hermitian requirement, etc. I am 100% positive this problem is shown and solved, at least in the 1D case, in my books on digital processing but unfortunately they're in other country. Can anybody help me in this or maybe point me to another forum? Thanks! Stefano Lanza ##### Share on other sites I am not really an expert and I don't have the time to work out the details in your case, but it looks like the Wikipedia page on the discrete Fourier transform has a theorem that might help.
2018-03-21 15:08:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45403769612312317, "perplexity": 504.2417691325418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00558.warc.gz"}
https://or.stackexchange.com/tags/matlab/hot
Tag Info Accepted MATLAB vs. Python in industry Regardless of what completes the phrase "Python vs ...", the answer is always going to be Python. Very few people who are serious about using optimisation in production use MATLAB, and the ones who ... • 11.2k MATLAB vs. Python in industry I agree with everything Nikos said and I add some colors to some of the reasons: Python is free and open-source but Matlab is not. Anyone can write codes in Python and share it with others who can ... • 5,756 MATLAB vs. Python in industry Nikos Kazazakis and EhsanK have given you great reasons for using Python. I will focus on the point from you about needing to use an additional package/library in Python for matrix and vector ... • 311 MATLAB vs. Python in industry I work for a company that offers a commercial optimization solver. The solver offers interfaces to both MATLAB and Python for solving problems defined in those languages. We only get one or two ... • 141 Accepted Solver rounding precision vs programming language rounding precision Numerical stability (computations going sideways) and numerical tolerances are related but not identical. Floating point arithmetic being subject to rounding and truncation errors (unavoidably), every ... • 31.1k Accepted Matlab fmincon for a problem with many nonlinear constraints The problem might be @(x) in the first line of the function. Adding this creates an anonymous function, while MATLAB simply expects a numerical vector as output. ... • 5,893 MATLAB vs. Python in industry MATLAB is a language built on top of a library. Python (with NumPy & numba) is a language with a library built under it. Neither is ideal. Like all languages, both have a few quirks, due to their ... MATLAB vs. Python in industry I did my PhD on a topic involving numerical simulations of mechanical systems. I worked primarily in MATLAB, which I already had experience in and seemed to have some good 'out of the box' ... • 81 When should I use a solver for IP and MIP and can I just use a library from Python, R, Matlab, etc...? I assume the solver you're referring to in Python/R/Matlab, are the open-source solvers such as CBC or GLPK (you can find out more in this question: Where can I find open source LP solvers?). If that'... • 5,756 Matlab fmincon for a problem with many nonlinear constraints Following Kevin Dalmeijer's answer (Accepted), I found the following approach to solve the problem that I had. (composing this answer for future similar questions): The general form of Fmincon ... • 8,340 Advantages of IBM CPLEX Studio over CPLEX in MATLAB? Since you are asking from a MSc student's point of view and actually need to use CPLEX, I assume that your research mainly focuses on the applications of OR. Therefore, two things are required to be ... • 512 Accepted What underlies intlinprog in MATLAB? Mention of intlinprog, without further specification, generally means the intlinprog of the MATLAB Optimization Toolbox. However, Gurobi also has a function called <... • 11.2k Solver rounding precision vs programming language rounding precision (1) Numerical stability is a real issue but not so common in my area of discrete optimisation (for obvious reasons). I only get it in two cases. Sometimes I get preprocessed data that has been rounded,... • 1,225 MATLAB vs. Python in industry I am geophysics professor and have been solving scientific computing problems in Matlab since 2000. In the last ~8 years graduate students have been preferring to work in Python. I have the following ... • 61 Accepted MILP Minimum set Vertex cover coding by Python or MATLAB? In Python, with pulp and networkx : ... • 10.4k Accepted Is it possible to change the underlying field from Euclidean to $\mathbb{F}_p$? Introduce an integer variable $y$ and replace the RHS with $k+py$. • 22.7k Accepted Following code doesn't work in matlab with CVX Use CVX's entr function. $\sum_{i=1}^ 4x_i\ln(x_i)$ can be entered as -sum(entr(x)) ... • 11.2k Branch and bound algorithm programming code If you want to work inside Excel (or LibreOffice), you might look at OpenSolver. Google's OR-Tools includes the CBC solver and the option to use GLPK, SCIP or Gurobi. (Gurobi is commercial software ... • 31.1k Branch and bound algorithm programming code Depending on the type of your MIP, there are numerous open-source options: MILP: CBC Convex MINLP: Bonmin Non-convex MINLP: Couenne All of the above: SCIP (free for academics) • 11.2k Branch and bound algorithm programming code There are several open-source software packages that use branch-and-bound to solve integer programming, for example: GLPK: https://www.gnu.org/software/glpk CBC: https://github.com/coin-or/Cbc • 2,275 Transform nonlinear cost function to get LP or MILP Assuming you meant $C=\max() - \min()$ instead of $C=\max() + \min()$, you can introduce nonnegative variables $X_\text{import}$ and $X_\text{export}$ and linear constraints (P_\text{charge} + P_\... • 22.7k Accepted Transform nonlinear cost function to get LP or MILP Taking a different tack from Rob, I'm going to assume that the original objective function $C$ is correct as stated and is being minimized, with $C_\text{export}>0$ being a per-unit compensation ... • 31.1k When should I use a solver for IP and MIP and can I just use a library from Python, R, Matlab, etc...? The main reasons are performance and quality of numerics. Non-professional stuff tend to lack the polish professionals spend time doing to ensure that numerical issues don't compromise the solving ... • 11.2k Matlab fmincon for a problem with many nonlinear constraints You can specify many nonlinear constraints and objectives without having to define functions with the problem-based modeling approach starting with MATLAB R2019a. "Many" are those that are polynomial ... MATLAB vs. Python in industry My first language I have learned was MATLAB. After learning C++ I realized MATLAB is bad for really learning about programming in such. I would recommend you also Python as language because it is ... • 41 MATLAB vs. Python in industry Sounds like you are intending to build solvers. You will be better off in MATLAB. As you noted, matrix and vector operations are part of the language. You'll need matrix decompositions and maybe ... Accepted How to normalize the objective functions of multi-objective optimization for a MPC? The reason for minimizing fuel consumption is presumably to minimize fuel cost. If you can come up with a cost rate for travel time, then you can combine the two objectives into an overall cost ... • 31.1k Accepted Why is the Lagrange Multiplier not equal the Shadow Price (Excel solver, Matlab linprog, Gurobi)? If you look at the "allowable decrease" in the RHS of the highlighted constraint, it's zero. A number of the binding constraints have either allowable increase or allowable decrease zero. ... • 31.1k Accepted How do I solve this Optimization problem? Here is the model you can write using LocalSolver, our global optimization solver. Please note that LocalSolver is a commercial software like MATLAB. Nevertheless, you can benefit from free trial or ... • 2,717
2022-08-14 19:27:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28895270824432373, "perplexity": 2358.972057116882}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00629.warc.gz"}
http://www.a-c-design.de/steeltank/croatia-vertical-cylindrical-tank-buildin_3478.html
Hire an Expert : 0086-371-86151527 ### call us 0086-371-86151527 ### Email us carbonsteels@hotmail.com ### croatia vertical cylindrical tank building technology • Home • Steel Tank • croatia vertical cylindrical tank building technology # croatia vertical cylindrical tank building technology ### 6.4 Physics Applications - Work, Force, and Pressure croatia vertical cylindrical tank building technology Aug 12, 2020Consider a vertical cylindrical tank of radius 2 meters and depth 6 meters. Suppose the tank is filled with 4 meters of water of mass density 1000 kg/m 3, and the top 1 meter of water is pumped over the top of the tank. Consider a hemispherical tank with a radius of 10 feet.Cited by: 6Page Count: 88File Size: 1MBAuthor: D. S. Dickey, J. B. Fasanoabout us - hraninvest - hmc ad food processingbulgaria vertical cylindrical tank building technology . Used Tanks Buy Sell EquipNet EquipNet is the world's leading provider of used tanks and various other industrial equipment Our exclusive contracts with our clients yield a wide range of used tanks from a number of OEMs including Coop Tech Lee Industries Pfaudler Feldmeier APV and many others EquipNet is constantly receiving a variety of croatia vertical cylindrical tank building technologyTank Shell - an overview ScienceDirect TopicsTank shells shall be finished with galvanized steel jacketing applied directly over the insulation. The metal sheet may be corrugated or plain. Corrugations shall run vertically with vertical seams lapped a minimum of 2 corrugations and horizontal seams lapped a minimum of 75 mm and supported on S clips. ### Title: Bachelor of Science - BS at Location: CroatiaConnections: 107Some results are removed in response to a notice of local law requirement. For more information, please see here.AC Physics Applications Work, Force, and Pressure Consider a vertical cylindrical tank of radius 2 meters and depth 6 meters. Suppose the tank is filled with 4 meters of water of mass density 1000 kg/m$$^3\text{,}$$ and the top 1 meter of water is pumped over the top of the tank. Consider a hemispherical tank with a radius of 10 feet.API 2550 - Method of Measurement and Calibration of croatia vertical cylindrical tank building technologyscope This standard describes the procedures for calibrating upright cylindrical tanks larger than a barrel or drum. It is Tesented in two parts Part I (Sections to 41) outlines procedures for making necessary measurements to determine total and incremental tank volumes; Part II (Sections 42 to 58) presents the recommended procedure for computing volumes. ### API 2550 - Method of Measurement and Calibration of croatia vertical cylindrical tank building technology scope This standard describes the procedures for calibrating upright cylindrical tanks larger than a barrel or drum. It is Tesented in two parts Part I (Sections to 41) outlines procedures for making necessary measurements to determine total and incremental tank volumes; Part II (Sections 42 to 58) presents the recommended procedure for computing volumes.API 2550 - Method of Measurement and Calibration of croatia vertical cylindrical tank building technologyscope This standard describes the procedures for calibrating upright cylindrical tanks larger than a barrel or drum. It is Tesented in two parts Part I (Sections to 41) outlines procedures for making necessary measurements to determine total and incremental tank volumes; Part II (Sections 42 to 58) presents the recommended procedure for computing volumes.API SPEC 12P Specification for Fiberglass Reinforced croatia vertical cylindrical tank building technologyOnly shop-fabricated, vertical, cylindrical tanks are covered. Tanks covered by this specification are intended for aboveground and atmospheric pressure service. Unsupported cone bottom tanks are outside the scope of this specification. This specification is designed to provide the petroleum industry with various standard sizes of FRP tanks. Toptank was established in 2007 under the Tile & Carpet Centre group.Toptank is manufactured in Kenya using the rotational moulding process and produced from food grade polyethylene.It has been approved by Kenya Bureau of Standards and has also been awarded the Diamond mark, reflecting the quality of its products and excellent performance. croatia vertical cylindrical tank building technologyBS EN 14015:2004 - Specification for the design and croatia vertical cylindrical tank building technologyFluid systems and components for general use > Fluid storage devices > Stationary containers and tanks Petroleum and related technologies > Petroleum products and natural gas handling equipment. Related Standards. 17/30350291 DC BS EN 14015. Specification for the design and manufacture of site built, vertical, cylindrical, flat-bottomed, above croatia vertical cylindrical tank building technologyBozidar Saso - Croatia Professional Profile LinkedInView Bozidar Sasos profile on LinkedIn, the world's largest professional community. Bozidar has 3 jobs listed on their profile. See the complete profile on LinkedIn and discover Bozidars connections and jobs at similar companies. ### CHAPTER 21 Mechanical Design of Mixing Equipment 1250 MECHANICAL DESIGN OF MIXING EQUIPMENT Figure 21-1 Direct-drive portable mixer. (Courtesy of Lightnin.) mixers are mounted on the vertical centerline of a tank with bafes, but may be off-center or off-center, angle mounted.Choosing the Right Water Storage for Your Community is an croatia vertical cylindrical tank building technologyThe purpose of water storage tanks is usually to meet peak demands, such as fire flows and times of the day when water use is high. There are a few items to consider when selecting a new water storage tank for your community or industry. Traditionally, it has been a common practice of many waterDESIGN OF LIQUID-STORAGE TANK RESULTS OF They can be vertical and horizontal, aboveground, semi underground, and underground, carry static and dynamic loads, and work under vacuum or over pressure, upon the wind, seismic, and temperature influences [2]. The largest segment corresponds to the aboveground steel vertical tanks, which are sheet structures with cylindrical form [3]. ### DESIGN OF LIQUID-STORAGE TANK RESULTS OF They can be vertical and horizontal, aboveground, semi underground, and underground, carry static and dynamic loads, and work under vacuum or over pressure, upon the wind, seismic, and temperature influences [2]. The largest segment corresponds to the aboveground steel vertical tanks, which are sheet structures with cylindrical form [3].DESIGN RECOMMENDATION FOR STORAGE TANKS AND A4 Above-ground, Vertical, Cylindrical Storage Tanks ----- 154 Appendix B Assessment of Seismic Designs for Under-ground Storage Tanks ----- 160 . Chapter2 1 1. General 1.1 Scope This Design Recommendation is applied to the structural design of water storage croatia vertical cylindrical tank building technologyDIESEL FUELS & DIESEL FUEL SYSTEMSJul 13, 2016Foreword This section of the Application and Installation Guide generally describes Diesel Fuels and Diesel Fuel Systems for Cat&engines listed on the cover of ### DIESEL FUELS & DIESEL FUEL SYSTEMS Jul 13, 2016Foreword This section of the Application and Installation Guide generally describes Diesel Fuels and Diesel Fuel Systems for Cat&engines listed on the cover ofDesign Calculations of Venting in Atmospheric and Low croatia vertical cylindrical tank building technologyAug 30, 2017Storage Tanks A storage tank is a container, usually for holding liquids, sometimes for compressed gases (gas tank). Storage tanks are available in many shapes vertical and horizontal cylindrical; open top and closed top; flat bottom, cone bottom. Choice of storage tanks o Tanks for a particular fluid are chosen according to the flash-point of croatia vertical cylindrical tank building technologyDevelopment of Construction Technique of LNG Storage EN14620 D 2006 Manufacture of Site Built, Vertical, Cylindrical, Flat-bottomed Steel Tanks for the Storage of Refrigerated, Liquefied Gases with Operating Temperatures between 0 ### Dissolved Air Flotation (DAF) Systems Dissolved air flotation (DAF) is a proven and effective physical/chemical technology for treating a variety of industrial and municipal process and wastewater streams. DAF systems are commonly used for the removal of oils & greases and suspended solids to meet a variety of treatment goals including:Dissolved Air Flotation (DAF) SystemsDissolved air flotation (DAF) is a proven and effective physical/chemical technology for treating a variety of industrial and municipal process and wastewater streams. DAF systems are commonly used for the removal of oils & greases and suspended solids to meet a variety of treatment goals including:EEMUA PUB NO 183 - Prevention of tank bottom leakage A croatia vertical cylindrical tank building technologyPrevention of tank bottom leakage A guide for the design and repair of foundations and bottoms of vertical, cylindrical, steel storage tanks This Publication gives guidance on a range of topics affecting tank bottom integrity including Types of tank foundations (including guidelines for required soil investigations, design and croatia vertical cylindrical tank building technology ### Environmental Protection Agency Wastewater cylindrical tank in which the flow enters tangentially, creating a vortex flow pattern. Grit settles by gravity into the bottom of the tank (in a grit hopper) while effluent exits at the top of the tank. The grit that settles into the grit hopper may be removed by a grit pump or an air lift pump. Detritus Tank A detritus tank (or square tank croatia vertical cylindrical tank building technologyFireguard - Highland TankFireguard &tanks are thermally protected, double-wall steel storage tanks and are the best alternative for safe storage of motor fuels and other flammable and combustible liquids aboveground. They are used where a fire-protected tank is needed because of setback limitations or regulatory requirements. Each tank is constructed with a minimum 3 interstice around the inner tank.Individual Component Libraries - TESS Libraries TRNSYS croatia vertical cylindrical tank building technologyThis subroutine models a vertical cylindrical storage tank with an immersed cylindrical storage tank and optional immersed heat exchangers. This routine solves the coupled differential equations imposed by considering the mass of the fluid in the main storage tank, the mass of the fluid in the smaller immersed storage tank and the mass of the croatia vertical cylindrical tank building technology ### Individual Component Libraries - TESS Libraries TRNSYS croatia vertical cylindrical tank building technology This subroutine models a vertical cylindrical storage tank with an immersed cylindrical storage tank and optional immersed heat exchangers. This routine solves the coupled differential equations imposed by considering the mass of the fluid in the main storage tank, the mass of the fluid in the smaller immersed storage tank and the mass of the croatia vertical cylindrical tank building technologyLiquid Hydrogen - an overview ScienceDirect TopicsHowever, some vertical cylindrical tanks and spherical tanks are in use. Standard tank sizes range from 1500 gallons to 25,000 gallons. Tanks are vacuum insulated. Pressure relief valves protect the tanks and are designed to ASME (American Society of Mechanical Engineers) specifications for Mixing 101 Baffled by Baffles? Dynamix AgitatorsOct 19, 2012Lets look at a common tank configuration an un-baffled cylindrical tank. If a mixer is center-mounted in this tank, what we see is a very inefficient flow pattern the tangential velocities coming from the impeller cause the entire fluid mass to spin (Fig. 1). Basically, the entire fluid (and its solids) moves like a merry-go-round. ### Mixing 101 Optimal Tank Design Dynamix Agitators Vertical Cylindrical Tanks. Vertical cylindrical tanks are the most common type of tank in use. A key consideration for cylindrical tanks is to ensure that they are either baffled or offset-mounted to prevent swirling from occurring. Refer to section 2 below (The Use of Baffling) for details.PROCESS FABRICATORS LUu)containment type tanks are available in both vertical and horizontal configurations, with a maximum capacity of 30,000 gallons. Transport Tanks Process Fabricators, Inc. fabricates transport tanks in a wide variety of shapes and sizes. * Obround truck tanks * Cylindrical truck tanks * Rectangular truck tanks * Trailer tanks * Pallet tanksPROCESS FABRICATORS LUu)containment type tanks are available in both vertical and horizontal configurations, with a maximum capacity of 30,000 gallons. Transport Tanks Process Fabricators, Inc. fabricates transport tanks in a wide variety of shapes and sizes. * Obround truck tanks * Cylindrical truck tanks * Rectangular truck tanks * Trailer tanks * Pallet tanks ### Reinforced Concrete Water Tank Design Requirements Reinforced concrete water tanks are constructed for storing water. The design of reinforced concrete water tank is based on IS 3370 2009 (Parts I IV). The design depends on the location of tanks, i.e. overhead, on ground or underground water tanks. The tanks can be made in different shapes usually circular and rectangular shapes []Request a Quote - Brawn MixerRequest a mixer quote. Step 1 Please specify industry/application category. Application:*Some results are removed in response to a notice of local law requirement. For more information, please see here.Mixing 101 Baffled by Baffles? Dynamix AgitatorsOct 19, 2012Lets look at a common tank configuration an un-baffled cylindrical tank. If a mixer is center-mounted in this tank, what we see is a very inefficient flow pattern the tangential velocities coming from the impeller cause the entire fluid mass to spin (Fig. 1). Basically, the entire fluid (and its solids) moves like a merry-go-round. ### Stability of Cylindrical Oil Storage Tanks During an croatia vertical cylindrical tank building technology building tanks on flexible foundations is more appropriate than on solid foundations, as the foundation softness extends the relocation period of tanks against hydrodynamic forces [4]. Martin Koeller and Peravin Malhotra (2003) published an article called Seismic Evaluation of Unanchored CylindricalStorage tanks, Fixed-roof tanks, Floating roof tanks croatia vertical cylindrical tank building technologyA typical fixed-roof tank consists of a cylindrical steel shell with a cone- or dome-shaped roof that is permanently affixed to the tank shell. Storage tanks are usually fully welded and designed for both liquid and vapor tight, while older tanks are often have a riveted or bolted construction and are not vapor tight.Ultrasonic Level Sensors in Above Ground Bulk Storage TanksOct 10, 2018Vertical cylindrical tanks are the most common. Depending upon the liquid, they may also have a chemically resistant inner lining. There are many environmental regulations associated with the design and operation of bulk storage tanks, and above ground tanks have different regulations than below ground tanks. · 20 Years of experience in the Steel field · 500,000 MT production capacity per year. · 100+ exporting markets covering global main countries and regions. · 2000+ MT stock per month with different materials and sizes. · 150+ projects each year, covering oil tank, shipbuilding, oil&gas pipeline, drilling, offshore, energy, construction industries. Our Steel Products' Certificates: Package:
2021-01-24 11:47:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42989104986190796, "perplexity": 6174.332137837227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00102.warc.gz"}
https://docs.analytica.com/index.php?title=Ceil&oldid=39473
# Ceil (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) ## Ceil(x, digits, dateUnit) Ceil(x) returns the smallest integer that is greater than or equal to «x». Ceil(174.001) → 175 Ceil(-89.95) → -89 When the optional «digits» parameter is given, cuts the number off at «digits» to the right of the decimal, returning the smallest multiple of 10-«digits» that is equal to or larger than «x». Ceil( 3.141592, 2) → 3.150000 Ceil(-3.141592, 2) → -3.140000 Ceil( 3.141592, 5) → 3.141600 «digits» may also be negative, in which case the least significant -«digits» digits will be '0': Ceil(123456.789, -1) → 123460.00 Ceil(123456,789, -4) → 130000.00 If you specify the «dateUnit» parameter, Ceil rounds a date-time number upward to the earliest date-time at the «dateUnit» increment that is equal to or comes after the date-time specified in «x». For example, you can round up to the beginning of the following year, quarter, or month, or to the beginning (meaning midnight) of the next weekday, day, or up to the next hour, minute or second. The value you pass to «dateUnit» must be one of the following: 'Y', 'year', 'Q', 'quarter', 'M', 'month', 'WD', 'weekday', 'D', 'day', 'h', 'hour', 'm', 'minute', 's', or 'second'. Note that 'M' (for Month) and 'm' (for minute) are case sensitive. Ceil(MakeDate(2010, 8, 8), dateUnit: 'Y') → 2011-Jan-1 Ceil(MakeDate(2010, 8, 8), dateUnit: 'Q') → 2010-Oct-1 Ceil(MakeDate(2010, 8, 8), dateUnit: 'M') → 2010-Sep-1 Ceil(MakeDate(2010, 8, 1), dateUnit: 'M') → 2010-Aug-1 Ceil(MakeDate(2010, 8, 1)+MakeTime(0, 0, 1), dateUnit: 'M') → 2010-Sep-1 Notice in the last example that if we are even one second past midnight of Aug 1st, it rounds to the beginning of the next month. ## Number Format The Ceil, Round and Floor functions all cut a number off to the indicated number of decimal places, but note that this is separate from the number format used to display the number. The number format setting controls how many digits are shown when displaying numbers, such as in an edit or result table. For example, after rounding to 4 decimal places, a number may be 2.3456, but if your number format is set to fixed point with 2 digits, this would display as 2.35. To see the full effect of the Ceil function, set the number format to a large number of digits.
2023-03-27 07:16:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38099807500839233, "perplexity": 4324.424256266235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00673.warc.gz"}
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=09-81
09-81 Laszlo Erdos, Jose A. Ramirez, Benjamin Schlein, Horng-Tzer Yau Bulk Universality for Wigner Matrices (81K, latex file) May 26, 09 Abstract , Paper (src), View paper (auto. generated ps), Index of related papers Abstract. We consider $N\times N$ Hermitian Wigner random matrices $H$ where the probability density for each matrix element is given by the density $\nu(x)= e^{- U(x)}$. We prove that the eigenvalue statistics in the bulk is given by Dyson sine kernel provided that $U \in C^6(\RR)$ with at most polynomially growing derivatives and $\nu(x) \le C\, e^{ - C |x|}$ for $x$ large. The proof is based upon an approximate time reversal of the Dyson Brownian motion combined with the convergence of the eigenvalue density to the Wigner semicircle law on short scales.
2018-06-25 15:27:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653795957565308, "perplexity": 962.7894228508245}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868135.87/warc/CC-MAIN-20180625150537-20180625170537-00580.warc.gz"}
http://openstudy.com/updates/51a16ffbe4b04449b2224245
## Christos Group Title What's the derivative of sqrt(sqrt(x)) ? one year ago one year ago 1. Jhannybean $\large \sqrt{\sqrt{x}} = x ^{1/4}$ 2. primeralph |dw:1369534488508:dw| 3. Christos Just so you know I already solved it. I just need to verify my answer 4. Christos Because this is my first time doing this type of sort 5. Christos sqrt** 6. Jhannybean $\large x ^{1/4 }= x ^{n}= \frac{ nx ^{n-1} }{ n-1 }$ 7. Christos is it x^(-3/4)/4 just tell me that 8. Christos I already solved it and got this as a result. 9. Jhannybean $\frac{ 1 }{ 4 }x ^{-3/4}$ = $\large \frac{ 1 }{ 4x ^{3/4}}$ 10. Christos thanks a lot 11. Jhannybean Yep.
2014-10-24 16:30:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5412094593048096, "perplexity": 6718.809166960449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646269.50/warc/CC-MAIN-20141024030046-00279-ip-10-16-133-185.ec2.internal.warc.gz"}
https://aimsciences.org/article/doi/10.3934/jmd.2013.7.99
# American Institute of Mathematical Sciences January  2013, 7(1): 99-117. doi: 10.3934/jmd.2013.7.99 ## Topological characterization of canonical Thurston obstructions 1 Institute for Mathematical Sciences, Stony Brook University, Stony Brook, NY 11794-3660, United States Received  August 2012 Published  May 2013 Let $f$ be an obstructed Thurston map with canonical obstruction $\Gamma_f$. We prove the following generalization of Pilgrim's conjecture: if the first-return map $F$ of a periodic component $C$ of the topological surface obtained from the sphere by pinching the curves of $\Gamma_f$ is a Thurston map then the canonical obstruction of $F$ is empty. Using this result, we give a complete topological characterization of canonical Thurston obstructions. Citation: Nikita Selinger. Topological characterization of canonical Thurston obstructions. Journal of Modern Dynamics, 2013, 7 (1) : 99-117. doi: 10.3934/jmd.2013.7.99 ##### References: [1] S. Bonnot, M. Braverman and M. Yampolsky, Thurston equivalence to a rational map is decidable,, to appear in Moscow Math. J., (2010). [2] A. Chéritat, Tan Lei and Shishikura's example of non-mateable degree 3 polynomials without a Levy cycle,, to appear in Annales de la faculté des sciences de Toulouse, (2012). [3] A. Douady and J. H. Hubbard, A proof of Thurston's topological characterization of rational functions,, Acta Math., 171 (1993), 263. doi: 10.1007/BF02392534. [4] F. R. Gantmacher, "Teoriya Matrits,", Second supplemented edition, (1966). [5] J. H. Hubbard, "Teichmüller Theory and Applications to Geometry, Topology, and Dynamics. Vol. 1. Teichmüller Theory,", With contributions by Adrien Douady, (2006). [6] Y. Imayoshi and M. Taniguchi, "An Introduction to Teichmüller Spaces,", Translated and revised from the Japanese by the authors, (1992). doi: 10.1007/978-4-431-68174-8. [7] O. Lehto and K. I. Virtanen, "Quasiconformal Mappings in the Plane,", Second edition, (1973). [8] C. T. McMullen, "Complex Dynamics and Renormalization,", Annals of Mathematics Studies, 135 (1994). [9] J. Milnor, "Dynamics in One Complex Variable,", Third edition, 160 (2006). [10] _____, On Lattès maps,, in, (2006), 9. [11] K. M. Pilgrim, Canonical Thurston obstructions,, Adv. Math., 158 (2001), 154. doi: 10.1006/aima.2000.1971. [12] _____, "Combinations of Complex Dynamical Systems,", Lecture Notes in Mathematics, 1827 (2003). [13] N. Selinger, "On Thurston's Characterization Theorem for Branched Covers,", Ph.D Thesis, (2011). [14] _____, Thurston's pullback map on the augmented Teichmüller space and applications,, Inventiones Mathematicae, 189 (2012), 111. [15] S. A. Wolpert, Geometry of the Weil-Petersson completion of Teichmüller space,, in, VIII (2003), 357. [16] _____, The Weil-Petersson metric geometry,, in, 13 (2009), 47. show all references ##### References: [1] S. Bonnot, M. Braverman and M. Yampolsky, Thurston equivalence to a rational map is decidable,, to appear in Moscow Math. J., (2010). [2] A. Chéritat, Tan Lei and Shishikura's example of non-mateable degree 3 polynomials without a Levy cycle,, to appear in Annales de la faculté des sciences de Toulouse, (2012). [3] A. Douady and J. H. Hubbard, A proof of Thurston's topological characterization of rational functions,, Acta Math., 171 (1993), 263. doi: 10.1007/BF02392534. [4] F. R. Gantmacher, "Teoriya Matrits,", Second supplemented edition, (1966). [5] J. H. Hubbard, "Teichmüller Theory and Applications to Geometry, Topology, and Dynamics. Vol. 1. Teichmüller Theory,", With contributions by Adrien Douady, (2006). [6] Y. Imayoshi and M. Taniguchi, "An Introduction to Teichmüller Spaces,", Translated and revised from the Japanese by the authors, (1992). doi: 10.1007/978-4-431-68174-8. [7] O. Lehto and K. I. Virtanen, "Quasiconformal Mappings in the Plane,", Second edition, (1973). [8] C. T. McMullen, "Complex Dynamics and Renormalization,", Annals of Mathematics Studies, 135 (1994). [9] J. Milnor, "Dynamics in One Complex Variable,", Third edition, 160 (2006). [10] _____, On Lattès maps,, in, (2006), 9. [11] K. M. Pilgrim, Canonical Thurston obstructions,, Adv. Math., 158 (2001), 154. doi: 10.1006/aima.2000.1971. [12] _____, "Combinations of Complex Dynamical Systems,", Lecture Notes in Mathematics, 1827 (2003). [13] N. Selinger, "On Thurston's Characterization Theorem for Branched Covers,", Ph.D Thesis, (2011). [14] _____, Thurston's pullback map on the augmented Teichmüller space and applications,, Inventiones Mathematicae, 189 (2012), 111. [15] S. A. Wolpert, Geometry of the Weil-Petersson completion of Teichmüller space,, in, VIII (2003), 357. [16] _____, The Weil-Petersson metric geometry,, in, 13 (2009), 47. [1] Mary Wilkerson. Thurston's algorithm and rational maps from quadratic polynomial matings. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 2403-2433. doi: 10.3934/dcdss.2019151 [2] Peter Haïssinsky, Kevin M. Pilgrim. An algebraic characterization of expanding Thurston maps. Journal of Modern Dynamics, 2012, 6 (4) : 451-476. doi: 10.3934/jmd.2012.6.451 [3] Rui Gao, Weixiao Shen. Analytic skew-products of quadratic polynomials over Misiurewicz-Thurston maps. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 2013-2036. doi: 10.3934/dcds.2014.34.2013 [4] Yan Gao, Jinsong Zeng, Suo Zhao. A characterization of Sierpiński carpet rational maps. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 5049-5063. doi: 10.3934/dcds.2017218 [5] Huaibin Li. An equivalent characterization of the summability condition for rational maps. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4567-4578. doi: 10.3934/dcds.2013.33.4567 [6] Pedro A. S. Salomão. The Thurston operator for semi-finite combinatorics. Discrete & Continuous Dynamical Systems - A, 2006, 16 (4) : 883-896. doi: 10.3934/dcds.2006.16.883 [7] Youming Wang, Fei Yang, Song Zhang, Liangwen Liao. Escape quartered theorem and the connectivity of the Julia sets of a family of rational maps. Discrete & Continuous Dynamical Systems - A, 2019, 39 (9) : 5185-5206. doi: 10.3934/dcds.2019211 [8] Guizhen Cui, Yan Gao. Wandering continua for rational maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (3) : 1321-1329. doi: 10.3934/dcds.2016.36.1321 [9] Cezar Joiţa, William O. Nowell, Pantelimon Stănică. Chaotic dynamics of some rational maps. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 363-375. doi: 10.3934/dcds.2005.12.363 [10] Eriko Hironaka, Sarah Koch. A disconnected deformation space of rational maps. Journal of Modern Dynamics, 2017, 11: 409-423. doi: 10.3934/jmd.2017016 [11] Jeffrey Diller, Han Liu, Roland K. W. Roeder. Typical dynamics of plane rational maps with equal degrees. Journal of Modern Dynamics, 2016, 10: 353-377. doi: 10.3934/jmd.2016.10.353 [12] Rabah Amir, Igor V. Evstigneev. On Zermelo's theorem. Journal of Dynamics & Games, 2017, 4 (3) : 191-194. doi: 10.3934/jdg.2017011 [13] John Hubbard, Yulij Ilyashenko. A proof of Kolmogorov's theorem. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2) : 367-385. doi: 10.3934/dcds.2004.10.367 [14] Weiyuan Qiu, Fei Yang, Yongcheng Yin. Quasisymmetric geometry of the Cantor circles as the Julia sets of rational maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (6) : 3375-3416. doi: 10.3934/dcds.2016.36.3375 [15] Aihua Fan, Shilei Fan, Lingmin Liao, Yuefei Wang. Minimality of p-adic rational maps with good reduction. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3161-3182. doi: 10.3934/dcds.2017135 [16] Hahng-Yun Chu, Se-Hyun Ku, Jong-Suh Park. Conley's theorem for dispersive systems. Discrete & Continuous Dynamical Systems - S, 2015, 8 (2) : 313-321. doi: 10.3934/dcdss.2015.8.313 [17] Sergei Ivanov. On Helly's theorem in geodesic spaces. Electronic Research Announcements, 2014, 21: 109-112. doi: 10.3934/era.2014.21.109 [18] Jun Hu, Oleg Muzician, Yingqing Xiao. Dynamics of regularly ramified rational maps: Ⅰ. Julia sets of maps in one-parameter families. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7) : 3189-3221. doi: 10.3934/dcds.2018139 [19] Betseygail Rand, Lorenzo Sadun. An approximation theorem for maps between tiling spaces. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 323-326. doi: 10.3934/dcds.2011.29.323 [20] Eric Babson and Dmitry N. Kozlov. Topological obstructions to graph colorings. Electronic Research Announcements, 2003, 9: 61-68. 2017 Impact Factor: 0.425 ## Metrics • PDF downloads (4) • HTML views (0) • Cited by (1) ## Other articlesby authors • on AIMS • on Google Scholar [Back to Top]
2019-06-20 23:37:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6555650234222412, "perplexity": 6615.698450139411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999291.1/warc/CC-MAIN-20190620230326-20190621012326-00192.warc.gz"}
http://math.stackexchange.com/questions/115133/arc-length-problem/115144
# Arc Length Problem I am currently in the middle of the following problem. Reparametrize the curve $\vec{\gamma } :\Bbb{R} \to \Bbb{R}^{2}$ defined by $\vec{\gamma}(t)=(t^{3}+1,t^{2}-1)$ with respect to arc length measured from $(1,-1)$ in the direction of increasing $t$. By reparametrizing the curve, does this mean I should write the equation in cartesian form? If so, I carry on as follows. $x=t^{3}+1$ and $y=t^{2}-1$ Solving for $t$ $$t=\sqrt[3]{x-1}$$ Thus, $$y=(x-1)^{2/3}-1$$ Letting $y=f(x)$, the arclength can be found using the formula $$s=\int_{a}^{b}\sqrt{1+[f'(x)]^{2}}\cdot dx$$ Finding the derivative yields $$f'(x)=\frac{2}{3\sqrt[3]{x-1}}$$ and $$[f'(x)]^{2}=\frac{4}{9(x-1)^{2/3}}.$$ Putting this into the arclength formula, and using the proper limits of integration (found by using $t=1,-1$ with $x=t^{3}+1$) yields $$s=\int_{0}^{2}\sqrt{1+\frac{4}{9(x-1)^{2/3}}}\cdot dx$$ I am now unable to continue with the integration as it has me stumped. I cannot factor anything etc. Is there some general way to approach problems of this kind? - As given curve is not regular when $t=0$ and your curve parameter runs from $-1$ to $1$, Hence Below is arc lengh parameter of the curve from $0$ to $1$. And same will work for $0$ to $-1$. What is arc length formula, when curve is given parametric form as in your case $$\gamma (t)= (t^3+1, t^2-1)$$ Arc length formula is $$s(t)= \int_{1}^t\|\gamma'(t)\|dt$$ That is we have $$s(t)=\int_{1}^t\|(3t^2, 2t)\|dt$$ $$s(t)=\int_{1}^t t\sqrt{9t^2+4} dt$$ $$s(t)= \left[\frac{(4+9t^2)^\frac{3}{2}}{27}\right]_{1}^t$$ $$s(t)= \frac{(4+9t^2)^\frac{3}{2}-13^\frac{3}{2}}{27}$$ which gives $$t(s)= \left(\frac{(27s+13^\frac{3}{2})^\frac{2}{3}-4}{9}\right)^\frac{1}{2}$$ Putting the value of $t$ in $\gamma(t)$, you will have $\tilde{\gamma}(s)=\gamma(t(s))$ arc length parameterization.. - Reparametrizing the curve in terms of arc length from a base point means rewriting the equation of the curve so that it tells you what point is at distance $s$ from the base point for any given $s$. It’s easier to find the arc length parametrization directly. Let $s(u)$ be the length of the arc from $t=0$ (since that’s the value of $t$ that yields the point $\langle 1,-1\rangle$) to $t=u$; then \begin{align*}s(u)&=\int_0^u\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}dt\\ &=\int_0^u\sqrt{(3t^2)^2+(2t)^2}dt\\ &=\int_0^u\sqrt{t^2(9t^2+4)}dt\\ &=\int_0^ut\sqrt{9t^2+4}dt\\ &=\frac1{27}\left[(9t^2+4)^{3/2}\right]_0^u\\ &=\frac1{27}\left((9u^2+4)^{3/2}-8\right)\;. \end{align*} Replace $u$ by $t$: the length of the arc from $\langle x(0),y(0)\rangle$ to $\langle x(t),y(t)\rangle$ is $$s(t)=\frac1{27}\left((9t^2+4)^{3/2}-8\right)\;,$$ so $$t(s)=\left(\frac19(27s+8)^{2/3}-4\right)^{1/2}\;.$$ This gives you the value of $t$ that specifies the point on the curve that is $s$ units from the initial point $\langle 1,-1\rangle$; to finish the job, you just need to express $x$ and $y$ as functions of $s$, which is a straightforward substitution into the $x(t)$ and $y(t)$ formulas. - Hint: Substitute $(x-1)^{1/3}=t$. Your integral will boil down to $$\int_{-1}^1t\sqrt{4+9t^2}\rm dt$$ Now set $4+9t^2=u$ and note that $\rm du=18t~~\rm dt$ which will complete the computation. (Note that you need to change the limits of integration while integrationg over $u$.) A Longer way: Now integrate by parts with $u=t$ and $\rm d v=\sqrt{4+9t^2}\rm dt$ and to get $v$, you'd like to keep $t=\dfrac{2\tan \theta}{3}$ - you can substitute $4+9t^2= x$ and then proceed.. –  zapkm Mar 1 '12 at 3:54 @PradipMishra Thank You. I don't know why I could not think of this! Thank you for the pointer! –  user21436 Mar 1 '12 at 4:13 Your integration could be done. However, there is a much easier way. Calculate the arc-length using the parametric version of the curve. We have $x=u^3+1$ and $y=u^2-1$. (I changed the names of the parameters because I want to reserve $t$ for the parameter of the endpoint.) Then $\frac{dx}{du}=3u^2$ and $\frac{dy}{du}=2u$. Thus the arclength from $u=0$ to $u=t$ is given by $$\int_0^t \sqrt{\left(\frac{dx}{du}\right)^2 +\left(\frac{dy}{du}\right)^2}\,du.$$ We have used the parametric arclength formula, much easier! The integration starts at $u=0$, since that is the value of the parameter that gives us the point $(1,-1)$. We end up integrating $\sqrt{9u^4+4u^2}$. Since $u\ge 0$, we can replace this by $u\sqrt{9u^2+4}$. Integrate, making the substitution $w=9u^2+4$. We arrive at $$\frac{1}{27}\left((9t^2+4)^{3/2}-8\right).\qquad (\ast)$$ This is the arclength $s$, expressed as a function of $t$. We want to parametrize in terms of $s$. So solve for $t$ in terms of $s$, using $(\ast)$. When you solve, there will be two candidate values of $t$. Take the non-negative one, since we started at $t=0$ and were told that t$is increasing. Finally, in the original parametrization, replace$t$by its value in terms of$s\$. -
2015-04-21 19:17:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9768731594085693, "perplexity": 151.88876871656643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246643088.10/warc/CC-MAIN-20150417045723-00206-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/vector-integral-question.429075/
Vector integral question 1. Sep 14, 2010 Melawrghk 1. The problem statement, all variables and given/known data 3. The attempt at a solution I get parts a and b, but I don't know what to do for part c. I can write a double integral with dx and dz (because y is constant) and substitute y=7 in, but I'm not sure how to integrate ax... Any hints would be appreciated. 2. Sep 14, 2010 diazona First, is ax constant over the integration region? If so, you can pull it out of the integral, just as with a numeric constant. If it's not constant, you'd need to find a way to express it in terms of things (i.e. other unit vectors) that are constant. 3. Sep 14, 2010 Melawrghk Well I guess ax is a unit vector in the x direction, so it must be constant. But how do I figure out the limits of integration for x and z? 4. Sep 14, 2010 diazona Well, what's the region of integration? 5. Sep 14, 2010 Melawrghk I don't know, it just says plane y=7.
2018-03-22 04:58:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408814072608948, "perplexity": 384.6487789973738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647768.45/warc/CC-MAIN-20180322034041-20180322054041-00795.warc.gz"}
https://solvedlib.com/how-do-you-find-the-radius-of-the-circle-x2-y2,324443
How do you find the radius of the circle: x^2 + y^2 + 12x - 2y + 21 = 0? Question: How do you find the radius of the circle: x^2 + y^2 + 12x - 2y + 21 = 0? Similar Solved Questions A company sells bicycles for $100 each. It costs the company$75 to make each bicycle.... A company sells bicycles for $100 each. It costs the company$75 to make each bicycle. The company has overhead costs of $25,000. What is the break point for the company? Question 30 options:$100,000 cannot be calculated with information provided 1,000 bicycles 25 25,000 bi... Problem: Using theequation y=yi+viyt-12gt2 calculate the time offlight for the initial launch angle of 45o.[y=0m ] [ Initial Velocity=12m/s ] [Cannon is firing from 'unevensurface' , 10m height] [Δx(45o )= 21.5m Δx(0o )= 17.13m]Please show work,thanks Problem: Using the equation y=yi+viyt-12gt2 calculate the time of flight for the initial launch angle of 45o. [y=0m ] [ Initial Velocity=12m/s ] [Cannon is firing from 'uneven surface' , 10m height] [ Δx(45o )= 21.5m Δx(0o )= 17.13m] Please show work,thanks... 19: When neutron strikes the nucleus of an atom; the atomic numberincreases by one unit decreases by one unit increases by two units decreases by two units remains the same 19: When neutron strikes the nucleus of an atom; the atomic number increases by one unit decreases by one unit increases by two units decreases by two units remains the same... The patient above develops CKD, His ABGS are: pH 7.32, pCO2 40, HCO3 19, PO2 80.... The patient above develops CKD, His ABGS are: pH 7.32, pCO2 40, HCO3 19, PO2 80. This acid/base disturbance is due to the inability of the kidneys to buffer the respiratory alkalosis. secrete HCO3 reabsorb HCO3. reabsorb urea 8. Sets of ABG the this went be on the GU/renal evam, bt auestions ike thi... B) The structured column of length L is subjected t0 a compressive load as shown in Figure Q1(b): The displacement y ata distance x, is given by the solution of: dy El dxz + Py = 0ColumnFigure Q1(b)Show that y = Acos(kx) + B sin (kx), where k = VEImarks) b) The structured column of length L is subjected t0 a compressive load as shown in Figure Q1(b): The displacement y ata distance x, is given by the solution of: dy El dxz + Py = 0 Column Figure Q1(b) Show that y = Acos(kx) + B sin (kx), where k = VEI marks)... The normal model connot bo Usod if the shape The sample siz0 Inoud: luss than 30. The sample siz0 nueds bo greater than 30.distnbution Wnknowin(b) What Is thu probability that random sample of nchariges resultssample Me anuu luss Ihan 15 mninules?The probabilily approximatuly (Roundl four ducImal placos naudad;)Enter your answer the answer box and then cllick Check Answerpun remainingClear Alll The normal model connot bo Usod if the shape The sample siz0 Inoud: luss than 30. The sample siz0 nueds bo greater than 30. distnbution Wnknowin (b) What Is thu probability that random sample of n chariges results sample Me anuu luss Ihan 15 mninules? The probabilily approximatuly (Roundl four ducIm... HORONTAlPLANFPROFILE PLANEFRONTAL PLANEHORIZONTAL PLANEFRONTAL PLANEPROFILE PLANEESCRIPTIVE GEOMETRYNAME_ HORONTAlPLANF PROFILE PLANE FRONTAL PLANE HORIZONTAL PLANE FRONTAL PLANE PROFILE PLANE ESCRIPTIVE GEOMETRY NAME_... Intangible assets are always carried (appear) on the balance sheet at their current market value. True False Intangible assets are always carried (appear) on the balance sheet at their current market value. True False... OURCES work Exercise 16-09 Baden Company has gathered the following information. 40,300 7,900 Units in beginning... OURCES work Exercise 16-09 Baden Company has gathered the following information. 40,300 7,900 Units in beginning work in process Units started into production Units in ending work in process Percent complete in ending work in process: Conversion costs Materials Costs incurred: Direct materials Direc... Which of the following is FALSE? One of the core principles identified by IOM to engage... Which of the following is FALSE? One of the core principles identified by IOM to engage patients in their own care is to ensure care is customized according to patient needs and values "PHR is believed to improve patient engagement, efficiency and quality of care by empowering of pat... Choose one response to answer the question below_ For the standardized test score values in this sample, 1290 is more extreme than 1080 is, that is, 1290 is farther from the sample mean standardized test score than 1080 is_ How would the 90% prediction interval for the mean grade point average when the standardized test score is 1290 compare to the 90% prediction interval for the mean grade point average when the standardized test score is 1080Choose One The intervals would be identical: The int Choose one response to answer the question below_ For the standardized test score values in this sample, 1290 is more extreme than 1080 is, that is, 1290 is farther from the sample mean standardized test score than 1080 is_ How would the 90% prediction interval for the mean grade point average when ... Test IV:Discussion/ Explanation:1. If you are small ruminant raiser it is important for you to be familiar with the diseases of your animals?if yes, why?2. What is the purpose of hoof trimming?3. What do you mean by animal breeding? Test IV:Discussion/ Explanation: 1. If you are small ruminant raiser it is important for you to be familiar with the diseases of your animals?if yes, why? 2. What is the purpose of hoof trimming? 3. What do you mean by animal breeding?... K2i:10 11 12Refer E periodic table: Give the transition ! symbol metal for an Use element - the in the that is Refe fourth metalF period the Co third period Na non -metal group 64 , main Or 74 Oroup with metal 2 < 40 in the Gth unt perod Fnsot Nont7 8Subma Gu K2i: 10 11 12 Refer E periodic table: Give the transition ! symbol metal for an Use element - the in the that is Refe fourth metalF period the Co third period Na non -metal group 64 , main Or 74 Oroup with metal 2 < 40 in the Gth unt perod Fnsot Nont 7 8 Subma Gu... For each of the salts on the left, match the salts on the right that can be compared directly; using Ksp values, to estimate solubilities_ (If more than one salt on the right can be directly compared, include all the relevant salts by writing your answer as string of characters without punctuation; e-g, ABC )magnesium fluoride 2. silver hydroxideA. BaCO3 B. AgzCrO4 Ca(OH)2 Zn(OH)2Write the expression for K in terms of the solubility; for cach salt;, when dissolved in water: magnesium fluoride si For each of the salts on the left, match the salts on the right that can be compared directly; using Ksp values, to estimate solubilities_ (If more than one salt on the right can be directly compared, include all the relevant salts by writing your answer as string of characters without punctuation; ... A) Differentiate between monopolistic and monopoly market structures. b) The short-run average total cost curve and... a) Differentiate between monopolistic and monopoly market structures. b) The short-run average total cost curve and the long-run average total cost curve are similarly shaped. What are the causes for the short run and long-run average total cost curve to slope down and up? c) What is natural monopol... A certain disease is known to be found in women over 60 with probability 0.05 plood test for the detection of the is known t0 incorrectly give negative result 12% of the time and incorrectly give positive result 3% of the time: If woman over 60 has taken the test and received negative result; what is the probability that she has the disease? A certain disease is known to be found in women over 60 with probability 0.05 plood test for the detection of the is known t0 incorrectly give negative result 12% of the time and incorrectly give positive result 3% of the time: If woman over 60 has taken the test and received negative result; what i... Factor by grouping.$9 x^{2}-13 x y+4 y^{2}$ Factor by grouping. $9 x^{2}-13 x y+4 y^{2}$... How do you solve 40- 13y = 664? How do you solve 40- 13y = 664?... How do you solve the following system?: 4x-y=-1 , 6x+5y=18 How do you solve the following system?: 4x-y=-1 , 6x+5y=18 ...
2022-05-25 00:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5407650470733643, "perplexity": 5441.653588144881}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00219.warc.gz"}
http://math.stackexchange.com/questions/35409/determining-the-balance-equations-for-a-poisson-process/35474
# Determining the balance equations for a Poisson Process I'm trying to do an exercise (not homework) and I fail to understand the solution the reader is giving me. Consider a gas station with one gas pump. Cars arrive at the gas station according to a Poisson process with an arrival rate of 20 cars per hour. An arriving car finding $n$ cars at the station immediately leaves with probability $q_n = \frac{n}{4}$ and joins the queue with probability $1-q_n$, where $n=0,1,2,3,4$. Cars are served in order of arrival. The service time (ie. the time for pumping and paying) is exponential and the mean service time is 3 minutes. Determine the stationary distribution of the number of cars at the gas station. Converting everything to minutes we have arrival rate $\lambda = \frac{1}{3}$ and service rate $\mu = \frac{1}{3}$. Now, the reader I use gives as solution: Solve the global balance equation $\lambda q_n p_n = \mu p_{n+1}, n=0,1,2,3$. Here, $p_n = P(L = n)$ is the probability that there are $n$ people in the system (either in the queue or in service). I fail to see how these balance equations are obtained. If I were to make a guess then I'd say "there are $\lambda$ amount of cars coming to the gas station per minute, of which $1-q_n\lambda$ goes to the gas station queue, which happens with probability $p_n$. The amount of cars leaving is $\mu p_{n+1}$ because a car was added to the queue so there were $p_{n+1}$ cars, so $\lambda(1-q_n)p_n = \mu p_{n+1}$" I'm sure this doesn't make any sense but I'm having a hard time getting a feel for this equation. Any help is appreciated. - I am studying Masters in Applied Statistics and doing the course Stochastic Models and Forecasting, similar topic to what your doing. I have some reference books below: Reference books such as Introduction to probability models by sheldon Ross and probability and Random Processes by Geoffry Grimmett and David Stirzaker, Third Edition, These books are good, but one by Ross is better to read and has explanations, easier to understand – user64079 Feb 26 at 16:04 @user64079: references are always welcome. Thanks! – Stijn Feb 26 at 16:45 Their equations $\lambda q_n p_n = \mu p_{n+1}$ are clearly wrong, and your equations $\lambda(1-q_n)p_n = \mu p_{n+1}$ are correct. They accidentally switched $q_n$ and $1-q_n$. Your chain is a "birth and death" process with birth rates $\lambda_n=\lambda (1-q_n)$ and death rates $\mu_n=\mu$. Solving the detailed balance equations $\lambda(1-q_n)p_n = \mu p_{n+1}$ proves that the process is reversible, and that the $p_n$'s are the stationary probabilities. The intuition for the balance equations is that, in the long run, the rate of transitions from $n$ to $n+1$ must equal the rate of transitions from $n+1$ to $n$. – Byron Schmuland Apr 27 '11 at 18:20
2013-05-18 18:28:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6762636303901672, "perplexity": 142.0169456686552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00051-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1569633/catalan-numbers-number-of-lattice-paths-from-0-0-to-a-b-ab
# Catalan Numbers: Number of Lattice Paths from $(0,0)$ to $(a,b)$, $a>b$ The question is to find number of lattice paths from $(0,0)$ to $(a,b)$, $a>b$, such that for any point $(x,y)$ along the path, we have that $x\geqslant y$ Ive been trying to find some way to split the problem into 2 parts. The closest Ive gotten is: $$\text{Number of valid paths to } (a,b) = \left\{ \text{Number of valid paths from (0,0) to the line} (b,k), 0 \leqslant k \leqslant b \right\} + \left\{ \text{All paths from the line (b,k) to the point} (a,b) \right\}$$ The problem I am experiencing is that I can find no way to formulate this in a way such that over-counting of # of paths is avoided! • Wow sorry and thank you for the fix; I meant $a>b$ and have corrected it – HoopaU Dec 10 '15 at 18:13 • No problem; I just fixed it in the title as well. – Brian M. Scott Dec 10 '15 at 18:13 • Are you allowing only steps up and steps to the right? – Brian M. Scott Dec 10 '15 at 18:14 • Yes only steps right and upwards (monotonic?) – HoopaU Dec 10 '15 at 20:26 • – ckoe Dec 11 '15 at 7:48 This is the same as the number of Dyck paths from $\langle 0,0\rangle$ to $\langle a+b,a-b\rangle$, i.e., lattice paths using $a$ up-steps $\langle 1,1\rangle$ and $b$ down-steps $\langle 1,-1\rangle$ that never drop below the $x$-axis. (A Dyck up-step corresponds to a step to the right in your setting, and a Dyck down-step to a step up.) In what follows path means any lattice path using only these up-steps and down-steps. There are $\binom{a+b}a$ paths from $\langle 0,0\rangle$ to $\langle a+b,a-b\rangle$. If a path drops below the $x$-axis, it hits $\langle k,-1\rangle$ at some least $k>0$; reflect the part of the path to the right of this point in the line $y=-1$. Say that at $\langle k,-1\rangle$ there have been $r$ up-steps and $r+1$ down-steps; then the original path had $a-r$ up-steps and $b-r-1$ down-steps to the right of this point, so the reflected path has $a-r$ down-steps and $b-r-1$ up-steps to the right of this point. Thus, the reflected path has altogether $b-1$ up-steps and $a+1$ down-steps, so it terminates at $\langle a+b,b-a-2\rangle$. Conversely, any path of length $2n$ from $\langle 0,0\rangle$ to $\langle a+b,b-a-2\rangle$ must cross the line $y=-1$, and reflecting the part of it to the right of the first such crossing in the line $y=-1$ yields a path from $\langle 0,0\rangle$ to $\langle a+b,a-b\rangle$ that drops below the $x$-axis. There are $\binom{a+b}{a+1}$ paths of length $2n$ from $\langle 0,0\rangle$ to $\langle a+b,b-a-2\rangle$, so there are \begin{align*} \binom{a+b}a-\binom{a+b}{a+1}&=\binom{a+b}a-\frac{b}{a+1}\binom{a+b}{a+1}\\ &=\frac{a+1-b}{a+1}\binom{a+b}a \end{align*} paths from $\langle 0,0\rangle$ to $\langle a+b,a-b\rangle$ that do not drop below the $x$-axis. As a quick and dirty partial check, note that if $a=b$, this reduce to $\frac1{a+1}\binom{2a}a$, the $a$-th Catalan number, as of course it should, and if $b=0$, it’s $1$, again as it should be.
2019-06-27 00:53:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8289108276367188, "perplexity": 211.9040086971368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000609.90/warc/CC-MAIN-20190626234958-20190627020958-00128.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/5013/how-to-decompose-4-qubits-toffoli-gate-into-two-qubits-cnot-gate
# How to decompose 4 qubits Toffoli-gate into two-qubits CNOT gate? Can I decompose a 4-qubit Toffoli gate into two qubit CNOT gate without ancillary state? Yes, it is possible. A circuit is given e.g. in this answer to a closely related question.
2021-05-08 06:58:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675614237785339, "perplexity": 2829.1424534402317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988850.21/warc/CC-MAIN-20210508061546-20210508091546-00480.warc.gz"}
https://math.stackexchange.com/questions/804338/expected-value-of-a-game-with-a-n-sided-die
# expected value of a game with a n sided die Suppose we have a n-sided die. When we roll it, we can be paid the outcome or we can choose to re-roll by paying $1/n$. What is the best strategy and what is the expected value of this game? As an approximation, I thought that to get the maximum value $n$ we need to roll $n$ times. So the best strategy is to roll until we get the maximum value $n$ and the expected value should be $n-1$. Is it right as an approximation? How can we calculate the exact best strategy and the exact expected value? • @ArthurSkirvin Ahh, but you're already paying for 2 rolls, so you're actually at $3 - \frac{2}{6}$, so expectation says roll again (for a d6, that is). – Zimul8r May 21 '14 at 21:48 • @Zimul8r You're right of course, my mistake. Not sure how I came up with that. I was trying to dispute a now deleted comment that suggested the optimal strategy to be to play until you got 4 or more and I got bungled up somehow. Thanks for the correction. – Arthur Skirvin May 21 '14 at 22:23 I believe that your strategy of waiting until you roll the maximum value is optimal. Let's say that you've rolled a value of $k_i$ on roll $i$ for a total score of $k_i-\frac{i-1}n$. If you can beat your roll of $k_i$ within $n-1$ rolls you'll end up beating your score as well. To demonstrate that let's take the worst-case scenario and say it takes you $n-1$ more rolls to beat $k_i$ and that you only beat it by one so that $k_{i+n-1}=k_i+1$. Your score would then be $$k_{i+n-1}-\frac{i+n-2}n=k_i+1-\frac{i+n-2}n=k_i+\frac{2-i}n\gt k_i-\frac{i-1}n.$$ So then if the probability of beating your roll of $k_i$ within $n-1$ more rolls (thus beating your score) is greater than 0.5 you should go for it. The probability of doing better than $k_i$ on your next roll is $1-\frac{k_i}n$, so the probability of first doing better than $k_i$ on your $m$th subsequent roll is geometrically distributed: $$p_M(m)=\left(\frac{k_i}n\right)^{m-1}\left(1-\frac{k_i}n\right)$$ Which means that the probability of doing better than $k_i$ within $n-1$ more rolls is $$\sum_{j=1}^{n-1}p_M(j)=\left(1-\frac{k_i}n\right)\left(\left(\frac{k_i}n\right)^{0}+\left(\frac{k_i}n\right)^{1}+...+\left(\frac{k_i}n\right)^{n-2}\right)=1-\left(\frac{k_i}n\right)^{n-1}$$ So even if $k_i=n-1$ we have that the probability of improving your score within $n-1$ more rolls is $$1-\left(\frac{n-1}n\right)^{n-1}$$ Which increases as $n$ increases. If $n=2$ this would be $0.5$, so for any $n \gt 2$ the probability of improving your score within $n-1$ more rolls even if you've rolled an $n-1$ is greater than 0.5, so you should do it. Thus if you haven't rolled an $n$ it's always best to keep going until you have. Since the expected number of rolls to get $n$ is $n$, the expected score under this strategy is $n-\frac{n-1}n$ . Note: Turns out this is not optimal. My strategy depends on looking at the probability of beating your current score with the next roll. In his answer, @ArthurSkirvin looks at the probability of beating it on any subsequent roll. As I should have expected, taking the longer view provides a better strategy. (I'm assuming an even number of die sides, so there's an $\frac{n}{2}$ side that's the highest side less than the die's $\frac{n+1}{2}$ average.) Proof: If you rolled $\ \frac{n}{2}$ or less on your $i+1^{st}$ roll and stopped, you have at most $\ \frac{n}{2}-\frac{i}{n}$, whereas the expected value of a reroll (minus the total cost) is $\ \frac{n}{2}+\frac{1}{2}-\frac{i+1}{n}$, which gives you an expected gain of $\frac{1}{2}-\frac{1}{n}$, which is $>0$ for anything bigger than a 2-sided die, so you should re-roll. (But amend the strategy if you're flipping coins.) If you rolled $\ \frac{n}{2}+1$ or higher on your $i+1^{st}$ roll, you have at least $\ \frac{n}{2}+1-\frac{i}{n}$, whereas the expected value of a reroll (minus the total cost) is still $\ \frac{n}{2}+\frac{1}{2}-\frac{i+1}{n}$, which gives you an expected loss of $\frac{1}{2}+\frac{1}{n}$, which is $>0$, so you should stop. To compute the expected value of the strategy, $\frac{1}{2}$ the time you roll higher than average on the first roll and stop. Given that you rolled on the upper half of the die, you'll average $\ \frac{3n+2}{4}$. The other half you re-roll. Half of those times, (now $\frac{1}{4}$ of the total probability), you roll higher than average and stop, this time winning $\ \frac{3n+2}{4}-\frac{1}{n}$ to pay for the re-roll. The next, you're at $\frac{1}{8}$ of the total probability, and you stop with $\ \frac{3n+2}{4}-\frac{2}{n}$, or re-roll, and so on. So the expected outcome looks like $\sum_{i=0}^{\infty} (\frac{3n+2}{4}- \frac{i}{n})(\frac{1}{2})^{i+1}$ For a 6-sided die, that's $4\frac{5}{6}$. • The expected value of the strategy in the original question (playing until you roll the maximum) seems to have an expected value greater than $4\frac56$ for a d6. Perhaps I'm missing something again, though. Could you explain? – Arthur Skirvin May 21 '14 at 23:01 • Nope, looks like you nailed it. My strategy depends on looking at the probability of beating your current score with the next roll. Yours looks at the probability of beating it on any subsequent roll. As I should have expected, taking the longer view provides a better strategy. – Zimul8r May 22 '14 at 13:00
2019-10-21 11:14:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7457349300384521, "perplexity": 357.06377012029424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00078.warc.gz"}
http://tex.stackexchange.com/questions/91224/placing-two-figures-side-by-side
Placing two figures side by side [duplicate] Possible Duplicate: Two figures side by side Subfigures placed horizontally I want to place two figures side by side in my document. I am using the subcaption package and my code is as follows \documentclass[a4paper, 12pt, notitlepage]{report} \usepackage{amsfonts} % if you want blackboard bold symbols e.g. for real numbers \usepackage{graphicx} % if you want to include jpeg or pdf pictures \usepackage{amsmath} \usepackage{chngcntr} \usepackage{wrapfig} \usepackage{caption} \usepackage{subcaption} \begin{document} \begin{figure}[h] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.4\textwidth]{tworoundgameeffortvaryingmujnegative} \caption{J=-1} \label{fig:tworoundvarymujnegative} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.4\textwidth]{tworoundgameeffortvaryingmujpositive} \caption{$J=1$} \label{fig:tworoundvarymujpositive} \end{subfigure} \caption{Effort levels in round one and two with varying $\mu$. $\alpha=1$, $v=0.25$ and $w=0.25$} \label{fig:effortlevelsvarymu} \end{figure} \end{document} For some reason my two pictures get put on top of each other. Does anybody know why this may be the case? -
2014-11-24 23:08:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438430428504944, "perplexity": 1798.1436123551237}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405292026.28/warc/CC-MAIN-20141119135452-00089-ip-10-235-23-156.ec2.internal.warc.gz"}
http://accessmedicine.mhmedical.com/content.aspx?bookid=331&sectionid=40727167
Chapter 360 Wilson's disease is an autosomal recessive disorder caused by mutations in the ATP7Bgene, a membrane-bound, copper-transporting ATPase. Clinical manifestations are caused by copper toxicity and primarily involve the liver and the brain. Because effective treatment is available, it is important to make this diagnosis early. The frequency of Wilson's disease in most populations is about 1 in 30,000–40,000, and the frequency of carriers of ATP7B mutations is ˜1%. Siblings of a diagnosed patient have a 1 in 4 risk of Wilson's disease, whereas children of an affected patient have about a 1 in 200 risk. Because a large number of inactivating mutations have been reported in the ATP7Bgene, mutation screening for diagnosis is not routine, although this may be practical in the future. DNA haplotype analysis can be used to genotype siblings of an affected patient. Pathogenesis ATP7B protein deficiency impairs biliary copper excretion, resulting in positive copper balance, hepatic copper accumulation, and copper toxicity from oxidant damage. Excess hepatic copper is initially bound to metallothionein, but as this storage capacity is exceeded, liver damage begins as early as three years of age. Defective copper incorporation into apoceruloplasmin leads to excess catabolism and low blood levels of ceruloplasmin. Serum copper levels are usually lower than normal because of low blood ceruloplasmin, which normally binds >90% of serum copper. As the disease progresses, nonceruloplasmin serum copper (“free” copper) levels increase, resulting in copper buildup in other parts of the body, such as the brain, leading to neurologic and psychiatric disease. Clinical Presentation Hepatic Wilson's disease may present as hepatitis, cirrhosis, or as hepatic decompensation, typically in the mid to late teenage years in western countries, although the age of presentation is quite broad and extends into the fifth decade of life. An episode of hepatitis may occur, with elevated blood transaminase enzymes, with or without jaundice, and then spontaneously regress. Hepatitis often reoccurs, and most of these patients eventually develop cirrhosis. Hepatic decompensation is associated with elevated serum bilirubin, reduced serum albumin and coagulation factors, ascites, peripheral edema, and hepatic encephalopathy. In severe hepatic failure, hemolytic anemia may occur because large amounts of copper derived from hepatocellular necrosis are released into the bloodstream. The association of hemolysis and liver disease makes Wilson's disease a likely diagnosis. Neurologic The neurologic manifestations of Wilson's disease typically occur in patients in their early twenties, although the age of onset extends into the sixth decade of life. MRI and CT scans reveal damage in the basal ganglia and occasionally in the pons, medulla, thalamus, cerebellum, and subcortical areas. The three main movement disorders include dystonia, incoordination, and tremor. Dysarthria and dysphagia are common. In some patients, the clinical picture closely resembles that of Parkinson's disease. Dystonia can involve any part of the body and ... Sign in to your MyAccess Account while you are actively authenticated on this website via your institution (you will be able to tell by looking in the top right corner of any page – if you see your institution’s name, you are authenticated). You will then be able to access your institute’s content/subscription for 90 days from any location, after which you must repeat this process for continued access. Ok If your institution subscribes to this resource, and you don't have a MyAccess account, please contact your library's reference desk for information on how to gain access to this resource from off-campus. Subscription Options AccessMedicine Full Site: One-Year Subscription Connect to the full suite of AccessMedicine content and resources including more than 250 examination and procedural videos, patient safety modules, an extensive drug database, Q&A, Case Files, and more.
2015-07-04 22:29:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.193566232919693, "perplexity": 7726.331097644878}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096991.38/warc/CC-MAIN-20150627031816-00252-ip-10-179-60-89.ec2.internal.warc.gz"}
http://worksheets.tutorvista.com/limit-worksheets.html
# Limit Worksheets Limit Worksheets • Page 1 1. Evaluate $\underset{x\to 7}{\mathrm{lim}}$ . a. 10 b. 8 c. 9 d. 7 #### Solution: limx7 8x -56x - 7 = limx78(x - 7)x - 7 = limx7 8 = 8. 2. Find the value of $\underset{x\to 7}{\mathrm{lim}}$ . a. - $\frac{1}{5}$ b. - $\frac{1}{7}$ c. does not exist #### Solution: limx7 x - 7x2 - 14x + 49 = limx7 x - 7(x - 7)2 = limx7 1x - 7 = ± ∞ So , the limit does not exist. 3. What is the value of $\underset{x\to 5}{\mathrm{lim}}$ ? a. - 5 b. $\frac{5}{4}$ c. $\frac{5}{6}$ #### Solution: limx5 5(x - 5)x2 - 6x + 5 = limx5 5(x - 5)(x - 1)(x - 5) [Factor x2 - 6 x + 5.] = limx5 5x - 1 = 55 - 1 = 5 / 4 4. Find the value of $\underset{x\to 0}{\mathrm{lim}}$. a. 2 b. 3 c. -1 #### Solution: limx02x - 3x2x = limx0x(2 - 3x)x [Factor 2x - 3x2.] = limx0 (2 - 3x) = 2 - 3(0) = 2. 5. Evaluate $\underset{x\to 3}{\mathrm{lim}}$ . a. 6 b. 3 c. ∞ #### Solution: limx3 x2 - 9x - 3 = limx3 (x - 3)(x + 3)x - 3 [Factor x2 - 9.] = limx3 (x + 3) = (3 + 3) = 6. 6. Evaluate $\underset{x\to 1}{\mathrm{lim}}$. a. $\frac{1}{\sqrt{82}}$ b. does not exist c. ∞ #### Solution: limx1x2 + 81 -82x - 1 = limx1 (x2 + 81-82)(x2 + 81 +82)(x - 1)(x2 + 81 +82) [Rationalize the numerator.] = limx1 x2 - 1(x - 1)(x2 + 81 +82) = limx1 (x +1)(x - 1)(x - 1)(x2 + 81 +82) [Factor (x2 - 1).] = limx1 x + 1(x2 + 81 +82) [Cancel the common factor.] = 2282 = 182. 7. Find the equation of the tangent to the curve $f$ ($x$) = $x$2 such that $f$ (8) = 64. a. $y$ - 16$x$ - 64 = 0 b. $y$ = 16$x$ + 64 c. 16$x$ - $y$ - 64 = 0 d. None of the above #### Solution: f (x) = x2, f (8) = 64. limh0f (8 + h) - f (8)h = limh0(8 + h )2 - 64h = limh0 64 + 16h +h2 - 64h = limh016 + h =16. So, the slope of the tangent line to the curve at the point (8, 64) = 16. The equation of the tangent line to the curve at the point (8, 64) is y - 64 = 16 (x - 8) or y = 16x - 64. [Use slope point form.] 8. Find the equation of the tangent to the curve $f$ ($x$) = 1 - 2$x$ + $x$2 ; $f$ (-4) = 25. a. $y$ - 2$x$ = 0 b. $y$ + $x$ = 0 c. $y$ + 10 $x$ = -15 d. $y$ - 10$x$ = 0 #### Solution: f (x) = 1 - 2x + x2, f (-4) = 25 f (-4 + h) - f (-4)h = 1 - 2(-4 + h) +(-4 + h)2 - 25h = h - 10 limh0 f (-4 + h) - f (-4)h = limh0 ( h - 10) = - 10 So, the slope of the tangent to the curve at the point (-4, 25) = - 10 The equation of the tangent line to the curve at the point (-4, 25) is y - 25 = - 10(x-(-4)) y + 10x = -15. [Use slope point form.] 9. Find the equation of the tangent to the curve $f$ ($x$) = $\sqrt{x}$, $f$ (8) = $\sqrt{8}$. a. $y$ = $x$ + $\frac{\sqrt{8}}{2}$ b. $y$ = $\frac{x}{2\sqrt{8}}$ + $\frac{\sqrt{8}}{2}$ c. $y$ = $x$ - 1 d. $y$ = $x$ - $\frac{1}{2}$ #### Solution: f (x) = x; f (8) = 8 f (8 + h) - f (8)h = 8 + h -8 h = 8 + h -8h . 8 + h +88 + h +8 = hh(8 + h +8) = 18 + h +8 limh0 f (1 + h) - f (1)h = limh0 18 + h +8 = 128 So , the slope of the tangent line to the curve at the point (8,8) = 128 . The equation of the tangent line to the curve at the point (8,8) is y - 8 = 128(x - 8) or y = x28 + 82. [Use slope point form.] 10. Which of the following is the tangent to the curve $f$ ($x$) = $x$5 + 4 such that $f$ (0) = 4? a. $y$ + $x$ = 1 b. $y$ = 0 c. $y$ = $x$ d. $y$ = 4 #### Solution: f (x) = x5 + 4; f (0) = 4 f (0 + h) - f (0)h = (h5 + 4) - (0 + 4)h = h4 limh0 f (0 + h) - f (0)h = limh0 h4 = 0 So, The slope of the tangent line to the curve at the poiunt (0,4) = 0. The equation of the tangent line to the curve at the point (0,4) is y - 4 = 0(x - 0) y = 4 [Use slope point form.]
2017-07-21 00:44:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.676230788230896, "perplexity": 256.8773924945479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423629.36/warc/CC-MAIN-20170721002112-20170721022112-00213.warc.gz"}
http://cms.math.ca/cmb/kw/f?page=4
location:  Publications → journals Search results Search: All articles in the CMB digital archive with keyword f Expand all        Collapse all Results 76 - 100 of 443 76. CMB 2013 (vol 56 pp. 729) Currey, B.; Mayeli, A. The Orthonormal Dilation Property for Abstract Parseval Wavelet Frames In this work we introduce a class of discrete groups containing subgroups of abstract translations and dilations, respectively. A variety of wavelet systems can appear as $\pi(\Gamma)\psi$, where $\pi$ is a unitary representation of a wavelet group and $\Gamma$ is the abstract pseudo-lattice $\Gamma$. We prove a condition in order that a Parseval frame $\pi(\Gamma)\psi$ can be dilated to an orthonormal basis of the form $\tau(\Gamma)\Psi$ where $\tau$ is a super-representation of $\pi$. For a subclass of groups that includes the case where the translation subgroup is Heisenberg, we show that this condition always holds, and we cite familiar examples as applications. Keywords:frame, dilation, wavelet, Baumslag-Solitar group, shearletCategories:43A65, 42C40, 42C15 77. CMB 2013 (vol 56 pp. 449) Akbari, S.; Chavooshi, M.; Ghanbari, M.; Zare, S. The $f$-Chromatic Index of a Graph Whose $f$-Core has Maximum Degree $2$ Let $G$ be a graph. The minimum number of colors needed to color the edges of $G$ is called the chromatic index of $G$ and is denoted by $\chi'(G)$. It is well-known that $\Delta(G) \leq \chi'(G) \leq \Delta(G)+1$, for any graph $G$, where $\Delta(G)$ denotes the maximum degree of $G$. A graph $G$ is said to be Class $1$ if $\chi'(G) = \Delta(G)$ and Class $2$ if $\chi'(G) = \Delta(G) + 1$. Also, $G_\Delta$ is the induced subgraph on all vertices of degree $\Delta(G)$. Let $f:V(G)\rightarrow \mathbb{N}$ be a function. An $f$-coloring of a graph $G$ is a coloring of the edges of $E(G)$ such that each color appears at each vertex $v\in V(G)$ at most $f (v)$ times. The minimum number of colors needed to $f$-color $G$ is called the $f$-chromatic index of $G$ and is denoted by $\chi'_{f}(G)$. It was shown that for every graph $G$, $\Delta_{f}(G)\le \chi'_{f}(G)\le \Delta_{f}(G)+1$, where $\Delta_{f}(G)=\max_{v\in V(G)} \big\lceil \frac{d_G(v)}{f(v)}\big\rceil$. A graph $G$ is said to be $f$-Class $1$ if $\chi'_{f}(G)=\Delta_{f}(G)$, and $f$-Class $2$, otherwise. Also, $G_{\Delta_f}$ is the induced subgraph of $G$ on $\{v\in V(G):\,\frac{d_G(v)}{f(v)}=\Delta_{f}(G)\}$. Hilton and Zhao showed that if $G_{\Delta}$ has maximum degree two and $G$ is Class $2$, then $G$ is critical, $G_{\Delta}$ is a disjoint union of cycles and $\delta(G)=\Delta(G)-1$, where $\delta(G)$ denotes the minimum degree of $G$, respectively. In this paper, we generalize this theorem to $f$-coloring of graphs. Also, we determine the $f$-chromatic index of a connected graph $G$ with $|G_{\Delta_f}|\le 4$. Keywords:$f$-coloring, $f$-Core, $f$-Class $1$Categories:05C15, 05C38 78. CMB Online first Zhang, Jiao; Wang, Qing-Wen An Explicit Formula for the Generalized Cyclic Shuffle Map We provide an explicit formula for the generalized cyclic shuffle map for cylindrical modules. Using this formula we give a combinatorial proof of the generalized cyclic Eilenberg-Zilber theorem. Keywords:generalized Cyclic shuffle map, Cylindrical module, Eilenberg-Zilber theorem, Cyclic homologyCategories:19D55, 05E45 79. CMB 2013 (vol 57 pp. 210) Zhang, Jiao; Wang, Qing-Wen An Explicit Formula for the Generalized Cyclic Shuffle Map We provide an explicit formula for the generalized cyclic shuffle map for cylindrical modules. Using this formula we give a combinatorial proof of the generalized cyclic Eilenberg-Zilber theorem. Keywords:generalized Cyclic shuffle map, Cylindrical module, Eilenberg-Zilber theorem, Cyclic homologyCategories:19D55, 05E45 80. CMB 2013 (vol 56 pp. 745) Fu, Xiaoye; Gabardo, Jean-Pierre Dimension Functions of Self-Affine Scaling Sets In this paper, the dimension function of a self-affine generalized scaling set associated with an $n\times n$ integral expansive dilation $A$ is studied. More specifically, we consider the dimension function of an $A$-dilation generalized scaling set $K$ assuming that $K$ is a self-affine tile satisfying $BK = (K+d_1) \cup (K+d_2)$, where $B=A^t$, $A$ is an $n\times n$ integral expansive matrix with $\lvert \det A\rvert=2$, and $d_1,d_2\in\mathbb{R}^n$. We show that the dimension function of $K$ must be constant if either $n=1$ or $2$ or one of the digits is $0$, and that it is bounded by $2\lvert K\rvert$ for any $n$. Keywords:scaling set, self-affine tile, orthonormal multiwavelet, dimension functionCategory:42C40 81. CMB 2013 (vol 56 pp. 673) Ayadi, K.; Hbaib, M.; Mahjoub, F. Diophantine Approximation for Certain Algebraic Formal Power Series in Positive Characteristic In this paper, we study rational approximations for certain algebraic power series over a finite field. We obtain results for irrational elements of strictly positive degree satisfying an equation of the type $$\alpha=\displaystyle\frac{A\alpha^{q}+B}{C\alpha^{q}}$$ where $(A, B, C)\in (\mathbb{F}_{q}[X])^{2}\times\mathbb{F}_{q}^{\star}[X]$. In particular, we will give, under some conditions on the polynomials $A$, $B$ and $C$, well approximated elements satisfying this equation. Keywords:diophantine approximation, formal power series, continued fractionCategories:11J61, 11J70 82. CMB 2012 (vol 57 pp. 209) Zhao, Wei Erratum to the Paper "A Lower Bound for the Length of Closed Geodesics on a Finsler Manifold" We correct two clerical errors made in the paper "A Lower Bound for the Length of Closed Geodesics on a Finsler Manifold". Keywords:Finsler manifold, closed geodesic, injective radiusCategories:53B40, 53C22 83. CMB 2012 (vol 57 pp. 289) Ghasemi, Mehdi; Marshall, Murray; Wagner, Sven Closure of the Cone of Sums of $2d$-powers in Certain Weighted $\ell_1$-seminorm Topologies In a paper from 1976, Berg, Christensen and Ressel prove that the closure of the cone of sums of squares $\sum \mathbb{R}[\underline{X}]^2$ in the polynomial ring $\mathbb{R}[\underline{X}] := \mathbb{R}[X_1,\dots,X_n]$ in the topology induced by the $\ell_1$-norm is equal to $\operatorname{Pos}([-1,1]^n)$, the cone consisting of all polynomials which are non-negative on the hypercube $[-1,1]^n$. The result is deduced as a corollary of a general result, established in the same paper, which is valid for any commutative semigroup. In later work, Berg and Maserick and Berg, Christensen and Ressel establish an even more general result, for a commutative semigroup with involution, for the closure of the cone of sums of squares of symmetric elements in the weighted $\ell_1$-seminorm topology associated to an absolute value. In the present paper we give a new proof of these results which is based on Jacobi's representation theorem from 2001. At the same time, we use Jacobi's representation theorem to extend these results from sums of squares to sums of $2d$-powers, proving, in particular, that for any integer $d\ge 1$, the closure of the cone of sums of $2d$-powers $\sum \mathbb{R}[\underline{X}]^{2d}$ in $\mathbb{R}[\underline{X}]$ in the topology induced by the $\ell_1$-norm is equal to $\operatorname{Pos}([-1,1]^n)$. Keywords:positive definite, moments, sums of squares, involutive semigroupsCategories:43A35, 44A60, 13J25 84. CMB 2012 (vol 56 pp. 881) Xie, BaoHua; Wang, JieYan; Jiang, YuePing Free Groups Generated by Two Heisenberg Translations In this paper, we will discuss the groups generated by two Heisenberg translations of $\mathbf{PU}(2,1)$ and determine when they are free. Keywords:free group, Heisenberg group, complex triangle groupCategories:30F40, 22E40, 20H10 85. CMB 2012 (vol 57 pp. 326) Ivanov, S. V.; Mikhailov, Roman On Zero-divisors in Group Rings of Groups with Torsion Nontrivial pairs of zero-divisors in group rings are introduced and discussed. A problem on the existence of nontrivial pairs of zero-divisors in group rings of free Burnside groups of odd exponent $n \gg 1$ is solved in the affirmative. Nontrivial pairs of zero-divisors are also found in group rings of free products of groups with torsion. Keywords:Burnside groups, free products of groups, group rings, zero-divisorsCategories:20C07, 20E06, 20F05, , 20F50 86. CMB 2012 (vol 57 pp. 105) Luca, Florian; Shparlinski, Igor E. On the Counting Function of Elliptic Carmichael Numbers We give an upper bound for the number elliptic Carmichael numbers $n \le x$ that have recently been introduced by J. H. Silverman in the case of an elliptic curve without complex multiplication (non CM). We also discuss several possible ways for further improvements. Keywords:elliptic Carmichael numbers, applications of sieve methodsCategories:11Y11, 11N36 87. CMB 2012 (vol 57 pp. 240) Bernardes, Nilson C. Addendum to Limit Sets of Typical Homeomorphisms'' Given an integer $n \geq 3$, a metrizable compact topological $n$-manifold $X$ with boundary, and a finite positive Borel measure $\mu$ on $X$, we prove that for the typical homeomorphism $f : X \to X$, it is true that for $\mu$-almost every point $x$ in $X$ the restriction of $f$ (respectively of $f^{-1}$) to the omega limit set $\omega(f,x)$ (respectively to the alpha limit set $\alpha(f,x)$) is topologically conjugate to the universal odometer. Keywords:topological manifolds, homeomorphisms, measures, Baire category, limit setsCategories:37B20, 54H20, 28C15, 54C35, 54E52 88. CMB 2012 (vol 56 pp. 570) Hoang, Giabao; Ressler, Wendell Conjugacy Classes and Binary Quadratic Forms for the Hecke Groups In this paper we give a lower bound with respect to block length for the trace of non-elliptic conjugacy classes of the Hecke groups. One consequence of our bound is that there are finitely many conjugacy classes of a given trace in any Hecke group. We show that another consequence of our bound is that class numbers are finite for related hyperbolic $$\mathbb{Z}[\lambda]$$-binary quadratic forms. We give canonical class representatives and calculate class numbers for some classes of hyperbolic $$\mathbb{Z}[\lambda]$$-binary quadratic forms. Keywords:Hecke groups, conjugacy class, quadratic formsCategories:11F06, 11E16, 11A55 89. CMB 2012 (vol 57 pp. 194) Zhao, Wei A Lower Bound for the Length of Closed Geodesics on a Finsler Manifold In this paper, we obtain a lower bound for the length of closed geodesics on an arbitrary closed Finsler manifold. Keywords:Finsler manifold, closed geodesic, injective radiusCategories:53B40, 53C22 90. CMB 2012 (vol 57 pp. 61) Geschke, Stefan 2-dimensional Convexity Numbers and $P_4$-free Graphs For $S\subseteq\mathbb R^n$ a set $C\subseteq S$ is an $m$-clique if the convex hull of no $m$-element subset of $C$ is contained in $S$. We show that there is essentially just one way to construct a closed set $S\subseteq\mathbb R^2$ without an uncountable $3$-clique that is not the union of countably many convex sets. In particular, all such sets have the same convexity number; that is, they require the same number of convex subsets to cover them. The main result follows from an analysis of the convex structure of closed sets in $\mathbb R^2$ without uncountable 3-cliques in terms of clopen, $P_4$-free graphs on Polish spaces. Keywords:convex cover, convexity number, continuous coloring, perfect graph, cographCategories:52A10, 03E17, 03E75 91. CMB 2012 (vol 57 pp. 178) Rabier, Patrick J. Quasiconvexity and Density Topology We prove that if $f:\mathbb{R}^{N}\rightarrow \overline{\mathbb{R}}$ is quasiconvex and $U\subset \mathbb{R}^{N}$ is open in the density topology, then $\sup_{U}f=\operatorname{ess\,sup}_{U}f,$ while $\inf_{U}f=\operatorname{ess\,inf}_{U}f$ if and only if the equality holds when $U=\mathbb{R}^{N}.$ The first (second) property is typical of lsc (usc) functions and, even when $U$ is an ordinary open subset, there seems to be no record that they both hold for all quasiconvex functions. This property ensures that the pointwise extrema of $f$ on any nonempty density open subset can be arbitrarily closely approximated by values of $f$ achieved on large'' subsets, which may be of relevance in a variety of issues. To support this claim, we use it to characterize the common points of continuity, or approximate continuity, of two quasiconvex functions that coincide away from a set of measure zero. Keywords:density topology, quasiconvex function, approximate continuity, point of continuityCategories:52A41, 26B05 92. CMB 2012 (vol 56 pp. 683) Nikseresht, A.; Azizi, A. Envelope Dimension of Modules and the Simplified Radical Formula We introduce and investigate the notion of envelope dimension of commutative rings and modules over them. In particular, we show that the envelope dimension of a ring, $R$, is equal to that of the $R$-module $R^{(\mathbb{N})}$. Also we prove that the Krull dimension of a ring is no more than its envelope dimension and characterize Noetherian rings for which these two dimensions are equal. Moreover we generalize and study the concept of simplified radical formula for modules, which we defined in an earlier paper. Keywords:envelope dimension, simplified radical formula, prime submoduleCategories:13A99, 13C99, 13C13, 13E05 93. CMB 2012 (vol 57 pp. 42) Covering the Unit Sphere of Certain Banach Spaces by Sequences of Slices and Balls e prove that, given any covering of any infinite-dimensional Hilbert space $H$ by countably many closed balls, some point exists in $H$ which belongs to infinitely many balls. We do that by characterizing isomorphically polyhedral separable Banach spaces as those whose unit sphere admits a point-finite covering by the union of countably many slices of the unit ball. Keywords:point finite coverings, slices, polyhedral spaces, Hilbert spacesCategories:46B20, 46C05, 52C17 94. CMB 2012 (vol 57 pp. 12) Aribi, Amine; Dragomir, Sorin; El Soufi, Ahmad On the Continuity of the Eigenvalues of a Sublaplacian We study the behavior of the eigenvalues of a sublaplacian $\Delta_b$ on a compact strictly pseudoconvex CR manifold $M$, as functions on the set ${\mathcal P}_+$ of positively oriented contact forms on $M$ by endowing ${\mathcal P}_+$ with a natural metric topology. Keywords:CR manifold, contact form, sublaplacian, Fefferman metricCategories:32V20, 53C56 95. CMB 2012 (vol 56 pp. 477) Hypercyclic Abelian Groups of Affine Maps on $\mathbb{C}^{n}$ We give a characterization of hypercyclic abelian group $\mathcal{G}$ of affine maps on $\mathbb{C}^{n}$. If $\mathcal{G}$ is finitely generated, this characterization is explicit. We prove in particular that no abelian group generated by $n$ affine maps on $\mathbb{C}^{n}$ has a dense orbit. Keywords:affine, hypercyclic, dense, orbit, affine group, abelianCategories:37C85, 47A16 96. CMB 2012 (vol 56 pp. 709) Bartošová, Dana Universal Minimal Flows of Groups of Automorphisms of Uncountable Structures It is a well-known fact, that the greatest ambit for a topological group $G$ is the Samuel compactification of $G$ with respect to the right uniformity on $G.$ We apply the original description by Samuel from 1948 to give a simple computation of the universal minimal flow for groups of automorphisms of uncountable structures using Fraïssé theory and Ramsey theory. This work generalizes some of the known results about countable structures. Keywords:universal minimal flows, ultrafilter flows, Ramsey theoryCategories:37B05, 03E02, 05D10, 22F50, 54H20 97. CMB 2012 (vol 57 pp. 145) Mustafayev, H. S. The Essential Spectrum of the Essentially Isometric Operator Let $T$ be a contraction on a complex, separable, infinite dimensional Hilbert space and let $\sigma \left( T\right)$ (resp. $\sigma _{e}\left( T\right) )$ be its spectrum (resp. essential spectrum). We assume that $T$ is an essentially isometric operator, that is $I_{H}-T^{\ast }T$ is compact. We show that if $D\diagdown \sigma \left( T\right) \neq \emptyset ,$ then for every $f$ from the disc-algebra, \begin{equation*} \sigma _{e}\left( f\left( T\right) \right) =f\left( \sigma _{e}\left( T\right) \right) , \end{equation*} where $D$ is the open unit disc. In addition, if $T$ lies in the class $C_{0\cdot }\cup C_{\cdot 0},$ then \begin{equation*} \sigma _{e}\left( f\left( T\right) \right) =f\left( \sigma \left( T\right) \cap \Gamma \right) , \end{equation*} where $\Gamma$ is the unit circle. Some related problems are also discussed. Keywords:Hilbert space, contraction, essentially isometric operator, (essential) spectrum, functional calculusCategories:47A10, 47A53, 47A60, 47B07 98. CMB 2012 (vol 56 pp. 844) Shparlinski, Igor E. On the Average Number of Square-Free Values of Polynomials We obtain an asymptotic formula for the number of square-free integers in $N$ consecutive values of polynomials on average over integral polynomials of degree at most $k$ and of height at most $H$, where $H \ge N^{k-1+\varepsilon}$ for some fixed $\varepsilon\gt 0$. Individual results of this kind for polynomials of degree $k \gt 3$, due to A. Granville (1998), are only known under the $ABC$-conjecture. Keywords:polynomials, square-free numbersCategory:11N32 99. CMB 2012 (vol 57 pp. 113) A Lower Bound for the End-to-End Distance of Self-Avoiding Walk For an $N$-step self-avoiding walk on the hypercubic lattice ${\bf Z}^d$, we prove that the mean-square end-to-end distance is at least $N^{4/(3d)}$ times a constant. This implies that the associated critical exponent $\nu$ is at least $2/(3d)$, assuming that $\nu$ exists. Keywords:self-avoiding walk, critical exponentCategories:82B41, 60D05, 60K35 Subadditivity Inequalities for Compact Operators Some subadditivity inequalities for matrices and concave functions also hold for Hilbert space operators, but (unfortunately!) with an additional $\varepsilon$ term. It seems not possible to erase this residual term. However, in case of compact operators we show that the $\varepsilon$ term is unnecessary. Further, these inequalities are strict in a certain sense when some natural assumptions are satisfied. The discussion also stresses on matrices and their compressions and several open questions or conjectures are considered, both in the matrix and operator settings. Keywords:concave or convex function, Hilbert space, unitary orbits, compact operators, compressions, matrix inequalitiesCategories:47A63, 15A45
2015-01-31 11:37:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9539268016815186, "perplexity": 488.8259708471288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869647.75/warc/CC-MAIN-20150124161109-00040-ip-10-180-212-252.ec2.internal.warc.gz"}
https://2021.help.altair.com/2021.2/winprop/topics/winprop/user_guide/aman/introduction/bilinear_interpolation_algorithm_winprop.htm
# Bilinear Interpolation (BI) Algorithm Four gain values are determined in the horizontal and vertical patterns depending on the two angles $\vartheta$ and $\phi$ . The gain values are weighted with their angle distances: (1) $G\left(\phi ,\vartheta \right)=\frac{|\phi |\cdot {G}_{v}\left(2\pi -\vartheta \right)+\left(\pi -|\phi |\right)\cdot {G}_{V}\left(\vartheta \right)+\vartheta \cdot {G}_{H}\left(\phi \right)+\left(\frac{\pi }{2}-\vartheta \right)\cdot {G}_{V}\left(0\right)}{|\phi |+\left(\pi -|\phi |\right)+\vartheta +\left(\frac{\pi }{2}-\vartheta \right)}$
2022-10-06 05:20:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9898185133934021, "perplexity": 627.9446765426517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00174.warc.gz"}
https://powersnail.com/2016/drawing-plants/
# Plants ## Turtle Graphics A turtle with a pen and draws a line along the path it moves. It receives commands like “forward”, “turning left”, etc. A tree can be drawn by a turtle with the commands: T = F L T R R T L B. An L-System can be described using this turtle. ## Trend in Graphics Planes: Good! Fractals! : Good! How nature does it? Now we understand it! • Build a simulation, but it turns out to be a bad one • As a result, we are exploring increasingly complex simulations/models of the nature. ### How to model a tree? Woody Plants: • Buds: where growth happens • Tip: One active tip at a time, all further growth occur here. Branching: • Side-Buds grows into branches. #### Simpler Model: Vines They do not branch unless it is snipped. When a vine is snipped, the next available bud will become the new tip, and starts growing there. This is called reiteration. #### Branching Model • Active Bud: Seek up and light • All Bud has a chance to branch • Reiteration: snipping and go to the next available tip ### 2D Model Elements: • branches • old buds (in case of reiteration) • active buds • The paths to each bud This can be easily modelled by a Computer Science Tree. Pruning: Move the pointer of active buds to the next node closer to root after the cut. Side Bugs will have their own grows over time. ### Basic Structure of Plants #### Trunk • Outer Bark • Inner Bark • Live Wood • Core The outer bark are dead tissues, that are cracked when the inner parts grow larger. The thickness, cracking patterns, etc. all have impacts on the exact outcome of the growth of barks. Usually, we photograph the bark, and use the photo as a texture, or even with depth information. Then we patch the photograph all over the trunk. #### Leaves Most leaves have simple structures. They either have parallel veins, or a principal vein with side veins. Leaves are hard to draw, not because of the shape, but because leaves are designed to absorb light. It has to let the light in but little light out. Four things affects its appearance: • Surface • Thickness • Translucency • Layers ##### Surface Some leaves have a tendency to dry, by having either wax or fur. Wax results in high reflectivity. Therefore, there has to be a lot of shine. This is just specularity for modelling. Fur, on the other hand, is complicated. When furs, especially short furs, are struck by light, it is more interesting.Example of furry plants: Peach Fruit It is more fuzzy when the angle is getting perpendicular to our plane, because there are more furs that light has to travel through. When we model it, we have two colors, for surface and for furs independently, mixed according to the angle between the Normal of surface and eye vector. ie. \nabla N \cdot \nabla e\$ ##### Thickness Sub-surface Scattering occurs in leaves, and the thicker the leaves are, the more light bounces inside the leaves. Moreover, there are chlorophylls inside the leaves that absorb different light wavelengths. ##### Translucency This is dependent on the concentration of chlorophylls. The less chlorophyll, the more transparent the leaves are. ##### Layers In some leaves, there are reflective layers below the chlorophyll layer, so light can be reflected and passes chlorophyll layer twice. ## Pixar Short Film: Lifted Tree Scene: Light through trees How to do this? A Big Clutter of fractal structure, plus some surface finishing. Then, just simulate the light travel through the tree. STUDIES graphics notes
2020-04-07 10:00:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381022334098816, "perplexity": 4994.533576242539}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371700247.99/warc/CC-MAIN-20200407085717-20200407120217-00468.warc.gz"}
https://www.5minutefinance.org/concepts/foreign-exchange-arbitrage
SHARE: # Arbitrage in Foreign Exchange Markets ## Arbitrage in Foreign Exchange (FX) Markets In this presentation we’ll cover three arbitrages that are common in FX markets. These are: • Locational Arbitrage • Triangular Arbitrage • Covered Interest Arbitrage ## Importance Understanding these arbitrages is important in understanding how the FX market works. • Arbitrage will ensure that you always get a reasonable price in a liquid market. • So as the manager of a corporation, you can be sure you won’t get a bad cross or forward rate. ## Locational Arbitrage Say we have two banks, East and West. Ignoring bid/ask spreads, East quotes USD 1.50/GBP, and West quotes USD 1.40/GBP. • We can then simultaneously buy GBP at West, and sell at East, and earn USD 0.10 for every GBP traded in the arbitrage. Note that in this presentation we will be using the following common abbreviations: • EUR: Euros • GBP: U.K. Pounds • USD: U.S. Dollars • CHF: Swiss Francs • AUD: Australian Dollars • JPY: Japanese Yen Now consider East quotes USD 1.50/1.55 for GBP, and West quotes USD 1.56/1.58. The notation refers to the bid/ask. Is there an arbitrage opportunity? • Yes, buy 1 GBP from East for USD 1.55, and sell it to West for USD 1.56, earning USD 0.01 per GBP traded. What about if East quotes USD 1.50/1.55 for GBP, and West quotes USD 1.54/1.58? Is there an arbitrage opportunity? • No, you would be buying a GBP at East for USD 1.55 and selling at West for USD 1.54, thereby losing USD 0.01 per GBP traded. ## Currency Cross Rates Before talking about triangular arbitrage, it is helpful to define a ‘cross rate.’ • A currency cross-rate is an exchange rate that does not involve the USD. • For example, EUR/CHF and GBP/AUD are cross rates. CHF/USD is not a cross-rate. ## Calculating Cross-Rates Given direct or indirect quotes (quotes involving the USD) we can calculate the cross-rate. • For example, say it is USD 1.5/GBP and USD 0.8/CHF. Then it is \frac{1.5}{0.8} = \text{CHF}\ 1.875/\text{USD}. • To know that 1.875 is the amount of CHF for a GBP, you can manipulate the units algebraically: \frac{\frac{USD}{GBP}}{\frac{USD}{CHF}} = \frac{USD}{GBP}\frac{CHF}{USD} = \frac{CHF}{GBP} • Or simply note that it must be more than 1 CHF for a GBP, and not vice versa. ## Triangular Arbitrage Triangular arbitrage takes advantage of mispriced cross-rates. For example, if you open your terminal and see the following quotes: • USD 1.2/EUR • USD 1.5/GBP • EUR 1.3/GBP Is there an arbitrage opportunity? ## Triangular Arbitrage Let’s check. The cross-rate implied by the USD/EUR and USD/GBP quotes is EUR 1.25/GBP. However, the quote on our terminal is EUR 1.3/GBP, so yes, there is an arbitrage. • We’ll replicate buying the cross rate at EUR 1.25/GBP by trading through the USD/EUR and USD/GBP. We’ll also sell GBP for the quoted rate of EUR 1.3/GBP. Doing so correctly will earn us EUR 0.05. ## Triangular Arbitrage • Starting in USD, we first have to decide if we buy EUR or GBP. The key is to note that at EUR 1.3/GBP we are given too many EUR for 1 GBP. So we want to sell GBP for EUR here. • This tells us we want to go from USD to GBP, then from GBP to EUR, and finally back to USD. The arbitrage gets its name from the triangular route which we are taking through currencies. ## Triangular Arbitrage • So starting with USD 1.5, we convert it into GBP 1. • We then take the GBP 1 and convert it into EUR 1.3. • Finally we cover the EUR 1.3 into EUR 1.3 * USD 1.2/EUR = USD 1.56. • We return the USD 1.5, and are left with a profit of USD 0.06. Note, USD 0.06 converts into a profit of EUR 0.05 (0.06/1.2). This matches the profit we expected from the beginning: the difference in the cross rates. In the following app, you can put in any values for the exchange rates and see a sequence diagram of the arbitrage. You can also choose to see a triangle diagram (scroll down to see the profit). ## Covered Interest Arbitrage Given spot FX rates and interest rates, covered interest arbitrage will tell us what the forward/futures rate must be. • Covered interest arbitrage exploits interest rate differentials using forward/futures contracts to mitigate FX risk. • It ensures that you get a reasonable futures price for currency if you are trading in a liquid market. ## A Simple Example Say both the spot and one-year forward rate of the GBP is USD 1.5/GBP. Let the one-year interest rate in the US and UK be 2% and 5% respectively. • An arbitrage exists. Borrow USD 1.5 at 2% and convert it into GBP 1 and lend it at 4%. Also enter into a forward to sell GBP 1.04 one year forward at USD 1.5/GBP. • At the end of 1 year, you receive your GBP 1.04, convert it to USD 1.56, and repay the USD 1.53 you owe from your loan, leaving you with a USD 0.03 arbitrage profit. ## Calculator The following app will calculate covered interest arbitrage profits given a set of inputs. • The solid lines are transactions made immediately. The dotted lines are transactions which were arranged immediately, but do not take place until the expiration of the forward contract. • The interest rates must match the term of the forward contract. For example, if the forward expires in 6 months, then the interest rates are 6 month (not annualized) rates. ## ‘Uncovered’ Interest Arbitrage If you don’t sell the currency forward, then you are engaging in uncovered interest arbitrage, meaning you are attempting to exploit an interest rate differential without using forward/futures contracts. • Uncovered interest arbitrage is a inaccurate name, though, because the activity it describes is not an arbitrage. The trade is uncovered, and so there is exposure – sometimes significant – to FX risk. • A better title – and one that is often used – is the ‘Carry Trade.’ ## Does Someone Actually Earn These Arbitrages, and Can I? Yes, large banks earn these arbitrages every day. The process is completely automated – algorithms will do the trading without human intervention. • On each arbitrage however, they earn very small amounts of money. So transaction costs become very important. The lower your transaction costs, the smaller the arbitrage you can profitably take advantage of. • Because an individual could never get their transaction costs as low as a large bank, they couldn’t profitably take advantage of the small arbitrages which exist. ## Credits and Collaboration Click the following links to see the codeline-by-line contributions to this presentation, and all the collaborators who have contributed to 5-Minute Finance via GitHub.
2019-02-22 10:19:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19262658059597015, "perplexity": 5969.410587883573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247515149.92/warc/CC-MAIN-20190222094419-20190222120419-00411.warc.gz"}
http://math.stackexchange.com/questions/98936/on-atomic-and-atomless-subsets?answertab=oldest
# On atomic and atomless subsets In a measure space, let's call a measurable subset atomless wrt the measure, if it does not have an atomic subset. In particular a measurable subset with zero measure is atomless. There may be measurable subsets that are neither atomic nor atomless, for instance, the union of atomic subset(s) and atomless subset(s) with positive measure(s). 1. I was wondering if the converse is true, i.e. if a measurable subset is neither atomic nor atomless, then must it be the union of atomic subset(s) and atomless subset(s) with positive measure(s)? 2. I think the previous question is equivalent to whether any measurable subset can be partitioned into atomic subset(s) and atomless subset(s)? Thanks and regards! - See math.stackexchange.com/questions/56327/…. An answer hasn't yet been posted, but it may be helpful. –  Jonas Meyer Jan 14 '12 at 4:57 1. It is at least true if the measure is $\sigma$-finite. This guarantees that the measure space has at most countably many atomic subsets, so the union of them is measurable, and the complement of that union has positive measure and is atomless. (I don't see a reason why the union of all of the atomic subsets would have to be measurable in the non-$\sigma$-finite case, nor do I know of a counterexample.) 2. In the $\sigma$-finite case at least, you can apply the same reasoning. Restricting the measure to the $\sigma$-algebra you get by intersecting each of the sets in the original $\sigma$-algebra with some particular measurable set gives you another $\sigma$-finite measure space. Then you can use a method similar to the above to get the indicated partition. Or, just intersect with the partition of the whole space. (1) Here is an outline: First consider the finite case. Show that distinct atoms are disjoint. Show that if there are uncountably many atoms, then there is a positive integer $n$ and infinitely many atoms with measure greater than $1/n$. This is impossible in the finite case, so there are only countably many atoms. In the general $\sigma$-finite case, the space is a countable union of sets of finite measure, and each of those only has countably many atoms, and this implies that the whole space has only countably many atoms. –  Jonas Meyer Jan 14 '12 at 9:02
2015-10-10 00:22:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9035744071006775, "perplexity": 239.44067907474363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737936627.82/warc/CC-MAIN-20151001221856-00048-ip-10-137-6-227.ec2.internal.warc.gz"}
https://bcdudek.net/anova/beginning-to-explore-the-emmeans-package-for-post-hoc-tests-and-contrasts.html
# Chapter 6 Beginning to Explore the emmeans package for post hoc tests and contrasts The emmeans package is one of several alternatives to facilitate post hoc methods application and contrast analysis. It is a relatively recent replacement for the lsmeans package that some R users may be familiar with. It is intended for use with a wide variety of ANOVA models, including repeated measures and nested designs where the initial modeling would employ ‘aov,’ ‘lm’ ‘ez’ or ‘lme4’ (mixed models). ## 6.1 Using emmeans for pairwise post hoc multiple comparisons. Initially, a minimal illustration is presented. First is a “pairwise” approach to followup comparisons, with a p-value adjustment equivalent to the Tukey test. The emmeans function requires a model object to be passed as the first argument. We could use either fit1 (the aov object) or fit2 (the lm object) originally created in the base Anova section of this document. #library(emmeans) # reminder: fit.2 <- lm(dv~factora, data=hays) fit2.emm.a <- emmeans(fit.2, "factora", data=hays) pairs(fit2.emm.a, adjust="tukey") ## contrast estimate SE df t.ratio p.value ## control - fast 6.2 1.99 27 3.113 0.0117 ## control - slow 5.6 1.99 27 2.811 0.0239 ## fast - slow -0.6 1.99 27 -0.301 0.9513 ## ## P value adjustment: tukey method for comparing a family of 3 estimates plot(fit2.emm.a, comparisons = TRUE) #pairs(fit2.emm.a, adjust="none") The blue bars on the plot are the confidence intervals. The red arrowed lines represent a scheme to determine homogeneous groups. If the red lines overlap for two groups, then they are not signficantly different using the method chosen. The ‘adjust’ argument can take one of several useful methods. ‘tukey’ is default, but others including ‘sidak,’ ‘bonferroni,’ etc can be specified. Specifying ‘none’ produces unadjusted p-values. See help with ‘?emmeans::summary.emmGrid’ for details. Here is an example using the ‘holm’ method of adjustment. library(emmeans) fit2.emm.b <- emmeans(fit.2, "factora", data=hays) pairs(fit2.emm.b, adjust="holm") ## contrast estimate SE df t.ratio p.value ## control - fast 6.2 1.99 27 3.113 0.0131 ## control - slow 5.6 1.99 27 2.811 0.0181 ## fast - slow -0.6 1.99 27 -0.301 0.7655 ## ## P value adjustment: holm method for 3 tests plot(fit2.emm.b, comparisons = TRUE) #pairs(fit2.emm.a, adjust="none") ## 6.2 Analytical Contrasts Next, we will create linear contrasts and test them. Notice that in “testing” the contrast, no alpha-rate control adustments are made. This produces t-values that are the square root of the F’s we found above for the ‘split’ approach on ‘aov’ or the regression coefficient t values from ‘lm’ objects with ‘summary.’ It is also possible to obtain confidence intervals on the contrasts, and I show how an adjustment can be done (but it wouldn’t make sense to adjust on the CIs as I did here and not on the tests; both should be the same). Interpreting the CIs is problematic. They are in the scale of 2, -1, -1 and 0, 1, -1 (thus the 11.8 and -.6 psi values we have seen several times previously for this data set. It is all well and good if the only thing we are using the CIs for is to evaluate whether they overlap zero (as a proxy for the hypothesis test). But the actual range of values is arbitrarily dependent on the values of the Coefficients (thus their variance). One strategy might be to implement what we saw in SPSS UNIANOVA. We could constrain the largest coefficient value to be a “1,” and use fractions for the remainder, when necessary. So a redo of the earlier approach to contrasts above could look like this using test function on the contrasts fit to the emm model grid of means: lincombs <- contrast(fit2.emm.a, list(ac1=c(1,-.5,-.5), ac2=c(0,1,-1))) # second one not changed test(lincombs, adjust="none") ## contrast estimate SE df t.ratio p.value ## ac1 5.9 1.72 27 3.420 0.0020 ## ac2 -0.6 1.99 27 -0.301 0.7655 confint(lincombs, adjust="sidak") ## contrast estimate SE df lower.CL upper.CL ## ac1 5.9 1.72 27 1.82 9.98 ## ac2 -0.6 1.99 27 -5.32 4.12 ## ## Confidence level used: 0.95 ## Conf-level adjustment: sidak method for 2 estimates Now the psi value for the contrasts are directly related to how we speak about what the contrasts are evaluating. The mean of “control” is 5.9 units above the average of the “fast” and “slow” conditions. And for the second contrast, “fast” is .6 units below the mean of “slow.” In this scaling, the CIs are more directly interpretable at their edges. But make sure to note that the t values and p values do not change with this scaling change. ## 6.3 Concluding comments on emmeans The emmeans package is a very powerful tool. But it is almost overkill for a one-way design. Its utility will become impressive for factorial between-groups designs, for repeated measures designs, and for linear mixed effect models. The goal is to revisit it with the first two of those three applications.
2023-02-03 06:11:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5251157283782959, "perplexity": 3402.660273438282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00200.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/given-electric-field-region-e-2xi-find-net-electric-flux-through-cube-charge-enclosed-it-electric-flux_4396
# Given the Electric Field in the Region E=2xi,Find the Net Electric Flux Through the Cube and the Charge Enclosed by It. - Physics Given the electric field in the region vecE=2xhati, find the net electric flux through the cube and the charge enclosed by it. #### Solution Since the electric field has only x component, for faces normal to x direction, the angle between E and S is ±π/2. Therefore, the flux is separately zero for each face of the cube except the two shaded ones. The magnitude of the electric field at the left face is EL = 0      (As x = 0 at the left face) The magnitude of the electric field at the right face is ER = 2a    (As x = a at the right face) The corresponding fluxes are phi_L=vecE.DeltavecS=0 phi_R=vecE_R.DeltavecS=E_RDeltaScostheta=E_RDeltaS " "(.:theta=0^@) ϕR= ERa2 Net flux (ϕ) through the cube = ϕL+ϕR=0+ERa2=ERa2 ϕ=2a(a)2=2a3 We can use Gauss’s law to find the total charge q inside the cube. phi=q/(epsilon_0) q=ϕε0=2a3ε0 Concept: Electric Flux Is there an error in this question or solution?
2021-10-20 22:21:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.72200608253479, "perplexity": 586.2427983155376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585353.52/warc/CC-MAIN-20211020214358-20211021004358-00618.warc.gz"}
https://socratic.org/questions/58fe60a7b72cff2ea32d20e1
# Question d20e1 Sep 3, 2017 WARNING! Long answer! The limiting reactant is ${\text{O}}_{2}$; theoretical yield = 5,97 g; percent yield = 600 %. #### Explanation: We know that we will need a balanced equation with masses, moles, and molar masses of the compounds involved. 1. Gather all the information in one place with molar masses above the formulas and everything else below them. ${M}_{\textrm{r}} : \textcolor{w h i t e}{m m m} \text{28,02"color(white)(mll)"32,00"color(white)(mml)"44,01}$ $\textcolor{w h i t e}{m m m m m l} {\text{2CO" +color(white)(m) "O"_2 → color(white)(m)2"CO}}_{2}$ $\text{Mass/g:"color(white)(mm)25color(white)(mmll)"2,17} \textcolor{w h i t e}{m m m l} 36$ $\text{Moles:" color(white)(mmml)"0,892"color(white)(ll)"0,067 81} \textcolor{w h i t e}{m l l}$ $\text{Divide by:} \textcolor{w h i t e}{m l} 2 \textcolor{w h i t e}{m m m} 1$ $\text{Moles rxn:"color(white)(ml)"0,446"color(white)(m)"0,067 81}$ (a) Calculate the moles of $\text{CO}$ $\text{Moles of CO" = 25 color(red)(cancel(color(black)("g CO"))) × ("1 mol CO")/("28,02" color(red)(cancel(color(black)("g CO")))) = "1,12 mol CO}$ (b) Calculate moles of ${\text{O}}_{2}$ ${\text{2,17"color(red)(cancel(color(black)("g O"_2))) × ("1 mol O"_2)/("32,00" color(red)(cancel(color(black)("g O"_2)))) = "0,067 81 mol O}}_{2}$ 2. Identify the limiting reactant An easy way to identify the limiting reactant is to calculate the "moles of reaction" each will give: You divide the moles of each reactant by its corresponding coefficient in the balanced equation. I did that for you in the table above. ${\text{O}}_{2}$ is the limiting reactant because it gives the fewer moles of reaction. 3. Calculate the theoretical yield of ${\text{CO}}_{2}$. ${\text{Theoretical yield" = "0,067 81" color(red)(cancel(color(black)("mol O"_2))) × (2 color(red)(cancel(color(black)("mol CO"_2))))/(1 color(red)(cancel(color(black)("mol O"_2)))) × "44,01 g CO"_2/(1 color(red)(cancel(color(black)("mol CO"_2)))) = "5,97 g CO}}_{2}$ The theoretical yield of ${\text{CO}}_{2}$ is 5,97 g. 4. Calculate the percent yield of ${\text{CO}}_{2}$ The formula for percentage yield is color(blue)(bar(ul(|color(white)(a/a)"% yield" = "actual yield"/"theoretical yield" × 100 %color(white)(a/a)|)))" " "Percent yield" = (36 color(red)(cancel(color(black)("g"))))/("5,97" color(red)(cancel(color(black)("g")))) × 100 % = 600 %# The percent yield is 600 %. This is an impossible result. Are you sure you gave us the correct data?
2019-11-22 09:31:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8963416218757629, "perplexity": 4262.626420876505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671249.37/warc/CC-MAIN-20191122092537-20191122120537-00156.warc.gz"}
https://tex.stackexchange.com/questions/338699/macro-with-random-number-not-working
# Macro with Random Number Not Working I am attempting to create math fact worksheets for kids to practice, say, 2+3=5, but in every form of that equation: 2+3=_ _ _ 2+_ _ _=5 _ _ _+3=5 _ _ _=2+3 5=_ _ _+3 5=2+_ _ _ I would like random numbers to select the addends as well as the structure. Why does this code not work? \RandomType is probably where the bug lies, but I don't see the issue... \documentclass{article} \usepackage{ifthen} \usepackage{pgf} \usepackage{pgffor} \setlength{\parindent}{0pt} \pgfmathsetseed{\number\pdfrandomseed} \newcommand{\InitVariables} { \pgfmathsetmacro{\a}{int(random(0,10))} \pgfmathsetmacro{\b}{int(random(0,10))} \pgfmathsetmacro{\c}{int(\a+\b)} \pgfmathsetmacro{\r}{int(random(1,6))} % r will select one of the six types below. } \newcommand{\blank}{\_\_\_\_\_} \newcommand{\TypeOne} { \large \InitVariables \huge$$\a+\b=$$\blank \hspace{3cm} $$\a+\b=\c$$ \vspace{0.8cm} } % Types two through six you can probably skip reading. Start reading code again at the next comment. \newcommand{\TypeTwo} { \large \InitVariables \huge$$\a+\blank=\c$$ \hspace{3cm} $$\a+\b=\c$$ \vspace{0.8cm} } \newcommand{\TypeThree} { \large \InitVariables \huge$$\blank+\b=\c$$ \hspace{3cm} $$\a+\b=\c$$ \vspace{0.8cm} } \newcommand{\TypeFour} { \large \InitVariables \huge$$\blank=\a+\b$$ \hspace{3cm} $$\c=\a+\b$$ \vspace{0.8cm} } \newcommand{\TypeFive} { \large \InitVariables \huge$$\c=\blank+\b$$ \hspace{3cm} $$\c=\a+\b$$ \vspace{0.8cm} } \newcommand{\TypeSix} { \large \InitVariables \huge$$\c=\a+\blank$$ \hspace{3cm} $$\c=\a+\b$$ \vspace{0.8cm} } % start reading here again. :) \newcommand{\RandomType} { \ifcase\r\relax% \or \TypeOne \or \TypeTwo \or \TypeThree \or \TypeFour \or \TypeFive \or \TypeSix \fi } \begin{document} \RandomType % if I only write "\TypeSix" and \TypeFour" in the body, it compiles. % but it does not compile as given \end{document} \RandomType tests \r which is initialized in \InitVariables. Since \TypeOne, ..., \TypeSix do not use \r, you must move \pgfmathsetmacro{\r}{int(random(1,6))} to \RandomType.
2020-01-19 00:25:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5550190806388855, "perplexity": 12462.73841682431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593994.14/warc/CC-MAIN-20200118221909-20200119005909-00510.warc.gz"}
https://www.math.purdue.edu/~chen/papers/aabcw-abs.html
#### Comparisons between the BBM equation and a Boussinesq system This project aims to cast light on a Boussinesq system of equations modelling two-way propagation of surface waves. Included in the study are existence results, comparisons between the Boussinesq equations and other wave models, and several numerical simulations. The existence theory is in fact a local well-posedness result that becomes global when the solution satisfies a practically reasonable constraint. The comparison result is concerned with initial velocities and wave profiles that correspond to unidirectional propagation. In this circumstance, it is shown that the solution of the Boussinesq system is very well approximated by an associated solution of the KdV or BBM equation over a long time scale of order ${1 \over \epsilon}$, where $\epsilon$ is the ratio of the maximum wave amplitude to the undisturbed depth of the liquid. This result confirms earlier numerical simulations and suggests further numerical experiments which are reported here. Min Chen (chen@math.purdue.edu)
2018-01-20 21:11:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6773821115493774, "perplexity": 404.9888046324568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00354.warc.gz"}
http://openstudy.com/updates/502e4ed1e4b09495313d6ffa
4meisu 3 years ago Given that 2sin^2 θ + sinθ - 1 = 0, find the two values for sin θ 1. ashna 2sin² θ + sin θ - 1 = 0 (2sin θ - 1)(sin θ + 1) = 0 2. lgbasallote imagine $$\sin \theta = x$$ so this thingy will become 2x^2 + x - 1 = 0 do you know how to solve for x there? 3. ashna so much can you understand ? 4. ashna @4meisu ? 5. 4meisu No I do not know how to solve for x.. What's next? 6. ashna Now we can split that into two equations 2sin θ - 1 = 0 and sin θ + 1 = 0 this much understood ? 7. 4meisu yes 8. 4meisu sinθ = 1/2 and sinθ = -1 ? 9. ashna Solving the first one: 2sin θ - 1 = 0 sin θ = 1/2 θ = 30°, 150° 10. ashna Second: sin θ + 1 = 0 sin θ = -1 θ = 270° 11. 4meisu Okay thank you 12. ashna :) 13. 4meisu The question is asking for only two values for theta though, which one would we choose?
2016-02-09 16:08:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7266452312469482, "perplexity": 1916.517752564422}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157262.85/warc/CC-MAIN-20160205193917-00134-ip-10-236-182-209.ec2.internal.warc.gz"}
https://itprospt.com/num/13959781/determine-whether-each-situation-involves-a-permutation
5 # Determine whether each situation involves a permutation or a combination. Then find the number of possibilities.How many ways can a hand of five cards consisting of... ## Question ###### Determine whether each situation involves a permutation or a combination. Then find the number of possibilities.How many ways can a hand of five cards consisting of four cards from one suit and one card from another suit be drawn from a standard deck of cards? Determine whether each situation involves a permutation or a combination. Then find the number of possibilities. How many ways can a hand of five cards consisting of four cards from one suit and one card from another suit be drawn from a standard deck of cards? #### Similar Solved Questions ##### Processes are always spontaneous whenKHand S refer to the system) Processes are always spontaneous when KHand S refer to the system)... ##### 13,) Here is an- electron = secondary density - structure predominates _ tesDsurrounding an X-ray crystal structure on the leftSide of the picture? model, Which kind ofAlpha helix Paralle bela sheet Antiparallel beta ~sheet Beta turns 13,) Here is an- electron = secondary density - structure predominates _ tesDsurrounding an X-ray crystal structure on the leftSide of the picture? model, Which kind of Alpha helix Paralle bela sheet Antiparallel beta ~sheet Beta turns... ##### Answer all parts: Evaluate the prefix expression: 26 13 2 X 3 3 10 b) Represent (~p - q) V ((p - ~r) Ar) using an ordered rooted tree: For the following tree, list the vertices visited using an inorder traversal. Answer all parts: Evaluate the prefix expression: 26 13 2 X 3 3 10 b) Represent (~p - q) V ((p - ~r) Ar) using an ordered rooted tree: For the following tree, list the vertices visited using an inorder traversal.... ##### 2[The probability of having a defective phone in a1 office is 45%6. If a sample of 11 phones are selected, find the probability that: (a) Less than 4 of them are defective (b) At least 8 of them are defective. (c) Compute the expected number and standard deviation of defective phone 2[The probability of having a defective phone in a1 office is 45%6. If a sample of 11 phones are selected, find the probability that: (a) Less than 4 of them are defective (b) At least 8 of them are defective. (c) Compute the expected number and standard deviation of defective phone... ##### An electron located on the x-axis at x= 3.55 proton is located on the V-axis at y 5.00x 10 - m: What is the electric potential at the origin? 1.01*10-v -2.68 x 10 6.94* 10 -V -1.18 x 10 3.05x 10 ~7.28* 10 -V 2.34*10-v -4,89 x 10 Zero Im not getting one of the answers listed above_ but /'m sure /'m working correctly: Please grade the work AND ANSWER that is neatlv oresented in mv Blue Book. An electron located on the x-axis at x= 3.55 proton is located on the V-axis at y 5.00x 10 - m: What is the electric potential at the origin? 1.01*10-v -2.68 x 10 6.94* 10 -V -1.18 x 10 3.05x 10 ~7.28* 10 -V 2.34*10-v -4,89 x 10 Zero Im not getting one of the answers listed above_ but /'m sure ... ##### What volume is occupied by an ideal gas at absolute zero? What volume is occupied by an ideal gas at absolute zero?... ##### Non-uniform but spherically symmetric distribution of charge has charge density p(r) given by p(r) po(1 -r/R) for r < R and p(r) = 0 for r > R_ Here the positive constant /o is 30/(7 R*)_ For full credit , show the Gaussian surfaces used and all steps_a) What is the magnitdue and direction of the electric field for r < R ?(b) What is the magnitude and direction of the electric field for r > R ?(c) How does the result for locatedl at the origincompare to the field of a point charge non-uniform but spherically symmetric distribution of charge has charge density p(r) given by p(r) po(1 -r/R) for r < R and p(r) = 0 for r > R_ Here the positive constant /o is 30/(7 R*)_ For full credit , show the Gaussian surfaces used and all steps_ a) What is the magnitdue and direction of... ##### 2.Solve the initial value problem; y dy _ 2+,)-1 dx 2.Solve the initial value problem; y dy _ 2+,)-1 dx... ##### A nerve impulse is propagated across a synapse when the neurotransmitter acetylcholine is released from the pre-synaptic membrane and diffuses across the 20 -nm-wide synaptic cleft to the post-synaptic membrane. a. How long does it take a nerve signal to cross a synapse? You can assume that the synaptic fluid is essentially water. b. What is the speed of transmission, in $\mathrm{m} / \mathrm{s} ?$ A nerve impulse is propagated across a synapse when the neurotransmitter acetylcholine is released from the pre-synaptic membrane and diffuses across the 20 -nm-wide synaptic cleft to the post-synaptic membrane. a. How long does it take a nerve signal to cross a synapse? You can assume that the syna... ##### Wter polnt chuBet #te (cd J2 shori Wter polnt chuBet #te (cd J2 shori... ##### Determine the truth value of each of the following logical statements:Let P(T.y. :) denote the statement "1 + ry = " where €.y. < € RVaVy_iP(T,y.:)Let P(. !. :) denote the statement "1 + ry =< where €.y. < €N:JyV:JlP(c.y, 2)Select one valid conclusion from the following list:Both and 2 are true.1 is true and 2 is false. Both and 2 arc falsc.1is false and 2 is true. Determine the truth value of each of the following logical statements: Let P(T.y. :) denote the statement "1 + ry = " where €.y. < € R VaVy_iP(T,y.:) Let P(. !. :) denote the statement "1 + ry =< where €.y. < €N: JyV:JlP(c.y, 2) Select one valid conclu... ##### 012) In & certain election 38% of the voters will choose candidate A, 12% will choose candidate B, and the remaining 50% will not elect After ' the clection day: you randomly take sample of 10 voters What is the probability that voters have chosen candidate A, has chosen candidate B, and the remaining did not elect at all?0.0990.0920.0840.074 012) In & certain election 38% of the voters will choose candidate A, 12% will choose candidate B, and the remaining 50% will not elect After ' the clection day: you randomly take sample of 10 voters What is the probability that voters have chosen candidate A, has chosen candidate B, and th... ##### Write possible equation for the fcllowing graph using transformations. Use coefficient of -1 or +1 Write possible equation for the fcllowing graph using transformations. Use coefficient of -1 or +1... ##### The polynomial of degree 4, P(c) - has root of multiplicity 2 at € 3 and roots of multiplicity I = 0 and € 4. It goes through the point (5,36)_Find formula for P(c)_Plz) = The polynomial of degree 4, P(c) - has root of multiplicity 2 at € 3 and roots of multiplicity I = 0 and € 4. It goes through the point (5,36)_ Find formula for P(c)_ Plz) =... ##### Problem 7.business determines that the lim 5(t) 27300 wnere S(t) models the total cost of new item in dollars_ montns after its introduction. This means that;By the end of the year the total cost will exceed 27300 dollars:The marginal cost will eventually reach 27300 dollars/day:The total cost will approach 27300 dollars over long enough time;The total cost will eventually reach 27300 dollars.none of the above Problem 7. business determines that the lim 5(t) 27300 wnere S(t) models the total cost of new item in dollars_ montns after its introduction. This means that; By the end of the year the total cost will exceed 27300 dollars: The marginal cost will eventually reach 27300 dollars/day: The total cost w... ##### Use the graph of the function f to determine the following limits, if they exist: (If an answer does not exist; enter DNE: ) lim f () 140lim f (c) 14-0f() = R+T+X 1 2-3 - - Use the graph of the function f to determine the following limits, if they exist: (If an answer does not exist; enter DNE: ) lim f () 140 lim f (c) 14-0 f() = R+T +X 1 2 -3 - -...
2022-08-11 06:35:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.742067277431488, "perplexity": 2558.645391537012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00466.warc.gz"}
http://mathhelpforum.com/advanced-algebra/190382-eigenvalues-2.html
# Math Help - Eigenvalues 1. ## Re: Eigenvalues we used row-operations and column operations to evaluate det(A - λI). the matrix we wound up with is NOT A-λI, it just has the same determinant. it is clear that with your original matrix A(e1) = (1,1,1,.....,1) ≠ n(e1) = (n,0,0,.....,0). 2. ## Re: Eigenvalues Originally Posted by Deveno we used row-operations and column operations to evaluate det(A - λI). the matrix we wound up with is NOT A-λI, it just has the same determinant. it is clear that with your original matrix A(e1) = (1,1,1,.....,1) ≠ n(e1) = (n,0,0,.....,0). What? 3. ## Re: Eigenvalues you're trying to find the eigenvectors of A = (a_ij) ,where every entry is 1, right? if v is an eigenvector, then Av = λv, so (A-λI)v = 0. when λ = n, this is: (A-nI)v = 0, a homogeneous linear system of equations. e1 is not a solution to that system. 4. ## Re: Eigenvalues Originally Posted by Deveno you're trying to find the eigenvectors of A = (a_ij) ,where every entry is 1, right? if v is an eigenvector, then Av = λv, so (A-λI)v = 0. when λ = n, this is: (A-nI)v = 0, a homogeneous linear system of equations. e1 is not a solution to that system. Is this not we obtain when we plug in n? $\begin{bmatrix}0&1&\cdots & 1\\0&-n&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &-n \end{bmatrix}\Rightarrow\begin{bmatrix}0&0&\cdots & 0\\0&1&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &1 \end{bmatrix}$ 5. ## Re: Eigenvalues Originally Posted by dwsmith Is this not we obtain when we plug in n? $\begin{vmatrix}0&1&\cdots & 1\\0&-n&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &-n \end{vmatrix}\Rightarrow\begin{vmatrix}0&0&\cdots & 0\\0&1&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &1 \end{vmatrix}$ is: $\begin{bmatrix}0&1&\cdots & 1\\0&-n&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &-n \end{bmatrix}$ the original matrix we are finding eigenvalues/eigenvectors for? no, it is not. that matrix is: $\begin{bmatrix}1&1&\cdots & 1\\1&1&\cdots &1\\ \vdots & 1& \ddots \\ 1&\cdots & &1 \end{bmatrix}$ 6. ## Re: Eigenvalues Originally Posted by Deveno is: $\begin{bmatrix}0&1&\cdots & 1\\0&-n&\cdots &0\\ \vdots & 0& \ddots \\ 0&\cdots & &-n \end{bmatrix}$ the original matrix we are finding eigenvalues/eigenvectors for? no, it is not. that matrix is: $\begin{bmatrix}1&1&\cdots & 1\\1&1&\cdots &1\\ \vdots & 1& \ddots \\ 1&\cdots & &1 \end{bmatrix}$ $\begin{bmatrix}1-n&1&\cdots & 1\\1&1-n&\cdots &1\\ \vdots & 1& \ddots \\ 1&\cdots & &1-n \end{bmatrix}$ From this matrix, I can obtain the previous one I posted by elementary row operations. This matrix will revert to the matrix in post 5 and from there it can be further simplified to the one I just posted but a 2 in the a_{11} position not a zero. 7. ## Re: Eigenvalues the matrix in post 5 was not obtained by elementary row operations, but by elementary row AND column operations. column operations DO NOT preserve the solution space. 8. ## Re: Eigenvalues Originally Posted by Deveno the matrix in post 5 was not obtained by elementary row operations, but by elementary row AND column operations. column operations DO NOT preserve the solution space. From simplifying I obtain, $(2-n)x_1=-x_3-\cdots -x_n$ and $x_1=x_2$ with the rest free variables. What can I do now if this is correct? 9. ## Re: Eigenvalues is it not the case that: $\begin{bmatrix}1&1&\cdots&1\\1&1&\cdots&1\\ \vdots&\vdots&\ddots&\vdots\\1&1&\cdots&1 \end{bmatrix}\begin{bmatrix}1\\1\\ \vdots\\1 \end{bmatrix} = \begin{bmatrix}n\\n\\ \vdots\\n \end{bmatrix} = n\begin{bmatrix}1\\1\\ \vdots\\1 \end{bmatrix}$ doesn't that make (1,1,1,...,1) an eigenvector corresponding to the eigenvalue n? 10. ## Re: Eigenvalues Dealing with the same matrix but the diagonals now all zeros, the characteristic equation would be $(n-1-\lambda)(-\lambda-1)^{n-1}$ Page 2 of 2 First 12
2015-11-28 00:39:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959099650382996, "perplexity": 1875.3697627039242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450715.63/warc/CC-MAIN-20151124205410-00109-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/12698-trigonometric-identities.html
# Math Help - Trigonometric Identities 1. ## Trigonometric Identities My teacher was sick the day we learned the trig identities in class and I can't seem to find them in my text book - can someone post the oh say ten or so basic ones I should for sure know? Double angle functions, pythag ones, etc.? 2. Originally Posted by Pajkaj My teacher was sick the day we learned the trig identities in class and I can't seem to find them in my text book - can someone post the oh say ten or so basic ones I should for sure know? Double angle functions, pythag ones, etc.? sin^2(x) + cos^2(x) = 1 tan(x) = sin(x)/cos(x) sec^2(x) = 1 + tan(x) csc^2(x) = 1 + cot^2(x) sec(x) = 1/cos(x) csc(x) = 1/sin(x) cot(x) = 1/tan(x) = cos(x)/sin(x) sin(A + B) = sinAcosB + sinBcosA sin(A - B) = sinAcosB - sinBcosA cos(A + B) = cosAcosB - sinAsinB cos(A - B) = cosAcosB + sinAsinB tan(A + B) = (tanA + tanB)/(1 - tanAtanB) tan(A - B) = (tanA - tanB)/(1 + tanAtanB) the tan formulas are not as important to memorize, you can use the identity tan(x) = sin(x)/cos(x) to find them if necessary Double angle formulas: for these, you just apply the addtion formulas for two angles being the same. sin(2A) = sin(A + A) = sinAcosA + sinAcosA = 2sinAcosA cos(2A) = cos(A + A) = cosAcosA - sinAsinA = cos^2(A) - sin^2(A) remember, sin^2(A) + cos^2(A) = 1 so sin^2(A) = 1 - cos^2(A) and, cos^2(A) = 1 - sin^2(A) so cos(2A) = 1 - sin^2(A) - sin^2(A) = 1 - 2sin^2(A) or cos(2A) = cos^2(A) - (1 - cos^2(A)) = 2cos^2(A) - 1 tan(2A) = 2tanA/(1 - tan^2(A)) 3. Originally Posted by Pajkaj My teacher was sick the day we learned the trig identities in class and I can't seem to find them in my text book - can someone post the oh say ten or so basic ones I should for sure know? Double angle functions, pythag ones, etc.? for pythogorian identities, see diagram. sin(x) = opp/hyp
2015-04-25 15:47:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8759705424308777, "perplexity": 3979.409554950277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246649738.26/warc/CC-MAIN-20150417045729-00002-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/151470-axiom-regularity-problem.html
# Thread: Axiom of Regularity Problem 1. ## Axiom of Regularity Problem In conjunction with my other post on checking the regularity of a database, I am interested in solving Problem 1 on page 56 of Suppes' Axiomatic Set Theory: Prove that for all sets $A$, $B$, and $C$ it is not the case that $A\in B\land B\in C\land C\in A.$ Now, the proof of Theorem 2.106 on page 54 is bound to be similar. Here's the theorem statement: $\neg(A\in B\land B\in A).$ Proof: (Quoting directly from Suppes) Suppose that $A\in B\land B\in A.$ Then $(1)\quad A\in\{A,B\}\cap B\quad\text{and}\quad B\in\{A,B\}\cap A.$ By the axiom of regularity there is an $x$ in $\{A,B\}$ such that $\{A,B\}\cap x=0$ and by Theorem 2.43 [Ackbeet: on page 31. It states that $z\in\{x,y\}\iff z=x\vee z=y.$] $x=A\quad\text{or}\quad x=B.$ Hence $\{A,B\}\cap A=0\quad\text{or}\quad\{A,B\}\cap B=0,$ So here's my attempt at a proof of the non-existence of the 3-cycle: Suppose the assertion were false. Then the following hold: $(1)\quad A\in\{A,B,C\}\cap B,$ $(2)\quad B\in\{A,B,C\}\cap C,$ $(3)\quad C\in\{A,B,C\}\cap A.$ By the axiom of regularity there is an $x\in\{A,B,C\}$ such that $\{A,B,C\}\cap x=0.$ By the obvious extension of Theorem 43, $x=A\quad\text{or}\quad x=B\quad\text{or}\quad x=C.$ Hence, at least one of $\{A,B,C\}\cap A$ or $\{A,B,C\}\cap B$ or $\{A,B,C\}\cap C$ is zero, contradicting (1), (2), or (3). QED. Is this a valid proof? 2. Originally Posted by Ackbeet Is this a valid proof? Took me a little while to decipher, but I say yes, it is valid.
2016-08-26 14:40:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191077947616577, "perplexity": 293.68304704434127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295854.33/warc/CC-MAIN-20160823195815-00245-ip-10-153-172-175.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/401418/still-overfitting-svm-with-cross-validation-and-grid-search
Still Overfitting SVM with Cross-Validation and Grid Search I am relatively new to machine learning and am trying to implement an SVM for the first time on a project, but I'm running into some overfitting-related issues. Basically, I created a function called optimize() to optimize the hyperparameters C and gamma (using an rbf kernel) based on the average cross-validation accuracy obtained with each combination of gamma and C within a grid search (10^-3 to 10^3). I tested optimize() on the Iris dataset, using a 75%-25% training-testing split. I ran optimize() on the training set, and trained the model on this set using the hyperparameters that gave the best accuracy. I end up with a really high accuracy on the training set (~97%), but when I apply it to the test set I end up with really low accuracy (~32%). I know my problems most likely have to do with overfitting on the training set, but that confuses me since I thought that using cross-validation to tune the hyperparameters would avoid that. Any suggestions would be appreciated, thanks :) As suggested, I've added the function in question, written in Python, below. NOTE: normalize(x_train, x_test) normalizes the train and test data by subtracting the mean and dividing by the standard deviation of each feature in the train set. def optimize(X, Y, fold=3): acc_list = [] print("") print("Initiating Grid Search ...") print("") for i in range(-3, 4): print("C = {}".format(10**i)) sub_acc_list = [] for j in range(-3, 4): print(" Gamma = {}".format(10**j)) start = 0 stop = 0 validate_list = [] for i in range(fold): width = int(0.25*len(X)) start = random.randint(0, (len(X)-1)-width) stop = start + width train_features = X[:start] + X[stop:] test_features = X[start:stop] train_features, test_features = normalize(train_features, test_features) train_labels = Y[:start] + Y[stop:] test_labels = Y[start:stop] model = svm.SVC(kernel="rbf", gamma=10**j, C=10**i, decision_function_shape="ovo") model.fit(train_features, train_labels) output = model.predict(test_features) num_correct = 0 for k in range(len(output)): if output[k] == test_labels[k]: num_correct += 1 # print(num_correct) validate_list += [(num_correct/float(len(output)))*100, ] # print(" Validate List = {}".format(validate_list)) sub_acc_list += [round(average(validate_list), 0), ] acc_list += [sub_acc_list, ] max_acc = 0 max_parameters = () for i in range(len(acc_list)): for j in range(len(acc_list[i])): if acc_list[i][j] > max_acc: max_parameters = (10**i, 10**j) max_acc = acc_list[i][j] print("") print("Best Accuracy: {}%".format(max_acc)) print("") model = svm.SVC(C=max_parameters[0], gamma=max_parameters[1], decision_function_shape="ovo") model.fit(X, Y) return model • One possibility is that you have a non-random train-test split. Did you shuffle your data before the split? You can check which target values ended up in the train set and which ones ended up in the test set and see if the patterns are similar. – AlexK Apr 5 '19 at 21:20 • @AlexK Thanks a bunch for the suggestion. I did make sure to shuffle everything beforehand, and my train-test split is done by taking a random chunk of 40 or so examples from the shuffled dataset. After some more exploring I found that it seems to be predicting the same label for ALL the test examples, which makes sense why the accuracy is about 30% since there are 3 labels to pick from. – AlexP Apr 6 '19 at 16:34 • The accuracy should definitely not be that low. It seems like something is off in your training pipeline, so I recommend you add your code to the question. – AlexK Apr 6 '19 at 20:06 • Thanks, I've added the optimize() function to the post. – AlexP Apr 6 '19 at 22:32 • Don't appear to be any major issues with this code, other than you have two i counters in the nested for loops and that seems like an odd type of cross-validation where you allow the same samples to appear in different training iterations. You should add the remaining code, and probably post it on Stack Overflow. The answer to your original question is yes, cross-validation, at least in one of its standard forms like k-fold and stratified k-fold, is meant to make your training results generalizable to unseen data. – AlexK Apr 7 '19 at 4:35
2020-07-10 03:11:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47292405366897583, "perplexity": 1583.6373997377489}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00000.warc.gz"}
https://en.wikipedia.org/wiki/Pseudorandom_generator
Pseudorandom generator This page is about the formal concept in theoretical computer science. For the common meaning of this term, see Pseudorandom number generator In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed is typically a short binary string drawn from the uniform distribution. Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size. It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory. Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions. Definition Let ${\displaystyle {\mathcal {A}}=\{A:\{0,1\}^{n}\to \{0,1\}^{*}\}}$ be a class of functions. These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms. Sometimes the statistical tests are also called adversaries. A function ${\displaystyle G:\{0,1\}^{\ell }\to \{0,1\}^{n}}$ with ${\displaystyle \ell \leq n}$ is a pseudorandom generator against ${\displaystyle {\mathcal {A}}}$ with bias ${\displaystyle \epsilon }$ if, for every ${\displaystyle A}$ in ${\displaystyle {\mathcal {A}}}$, the statistical distance between the distributions ${\displaystyle A(G(U_{\ell }))}$ and ${\displaystyle A(U_{n})}$ is at most ${\displaystyle \epsilon }$, where ${\displaystyle U_{k}}$ is the uniform distribution on ${\displaystyle \{0,1\}^{k}}$. The quantity ${\displaystyle \ell }$ is called the seed length and the quantity ${\displaystyle n-\ell }$ is called the stretch of the pseudorandom generator. A pseudorandom generator against a family of adversaries ${\displaystyle ({\mathcal {A}}_{n})_{n\in \mathbb {N} }}$ with bias ${\displaystyle \epsilon (n)}$ is a family of pseudorandom generators ${\displaystyle (G_{n})_{n\in \mathbb {N} }}$, where ${\displaystyle G_{n}:\{0,1\}^{\ell (n)}\to \{0,1\}^{n}}$ is a pseudorandom generator against ${\displaystyle {\mathcal {A}}_{n}}$ with bias ${\displaystyle \epsilon (n)}$ and seed length ${\displaystyle \ell (n)}$. In most applications, the family ${\displaystyle {\mathcal {A}}}$ represents some model of computation or some set of algorithms, and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same sort of algorithm. Applications Cryptography In cryptography, the class ${\displaystyle {\mathcal {A}}}$ usually consists of all circuits of size polynomial in the input and with a single bit output, and one is interested in designing pseudorandom generators that are computable by a polynomial-time algorithm and whose bias is negligible in the circuit size. These pseudorandom generators are sometimes called cryptographically secure pseudorandom generators (CSPRGs). It is not known if cryptographically secure pseudorandom generators exist. Proving that they exist is difficult since their existence implies P ≠ NP, which is widely believed but a famously open problem. The existence of cryptographically secure pseudorandom generators is widely believed as well[citation needed] and they are necessary for many applications in cryptography. The pseudorandom generator theorem shows that cryptographically secure pseudorandom generators exist if and only if one-way functions exist. Uses Pseudorandom generators have numerous applications in cryptography. For instance, pseudorandom generators provide an efficient analog of one-time pads. It is well known that in order to encrypt a message m in a way that the cipher text provides no information on the plaintext, the key k used must be random over strings of length |m|. Perfectly secure encryption is very costly in terms of key length. Key length can be significantly reduced using a pseudorandom generator if perfect security is replaced by semantic security. Common constructions of stream ciphers are based on pseudorandom generators. Pseudorandom generators may also be used to construct symmetric key cryptosystems, where a large number of messages can be safely encrypted under the same key. Such a construction can be based on a pseudorandom function family, which generalizes the notion of a pseudorandom generator. Pseudorandom generators and derandomization A main application of pseudorandom generators lies in the derandomization of computation that relies on randomness, without corrupting the result of the computation. Physical computers are deterministic machines, and obtaining true randomness can be a challenge. Pseudorandom generators can be used to efficiently simulate randomized algorithms with using little or no randomness. In such applications, the class ${\displaystyle {\mathcal {A}}}$ describes the randomized algorithm or class of randomized algorithms that one wants to simulate, and the goal is to design an "efficiently computable" pseudorandom generator against ${\displaystyle {\mathcal {A}}}$ whose seed length is as short as possible. If a full derandomization is desired, a completely deterministic simulation proceeds by replacing the random input to the randomized algorithm with the pseudorandom string produced by the pseudorandom generator. The simulation does this for all possible seeds and averages the output of the various runs of the randomized algorithm in a suitable way. Constructions Pseudorandom generators for polynomial time A fundamental question in computational complexity theory is whether all polynomial time randomized algorithms for decision problems can be deterministically simulated in polynomial time. The existence of such a simulation would imply that BPP = P. To perform such a simulation, it is sufficient to construct pseudorandom generators against the family F of all circuits of size s(n) whose inputs have length n and output a single bit, where s(n) is an arbitrary polynomial, the seed length of the pseudorandom generator is O(log n) and its bias is ⅓. In 1991, Noam Nisan and Avi Wigderson provided a candidate pseudorandom generator with these properties. In 1997 Russell Impagliazzo and Avi Wigderson proved that the construction of Nisan and Wigderson is a pseudorandom generator assuming that there exists a decision problem that can be computed in time 2O(n) on inputs of length n but requires circuits of size 2Ω(n). Pseudorandom generators for logarithmic space While unproven assumption about circuit complexity are needed to prove that the Nisan–Wigderson generator works for time-bounded machines, it is natural to restrict the class of statistical tests further such that we need not rely on such unproven assumptions. One class for which this has been done is the class of machines whose work space is bounded by ${\displaystyle O(\log n)}$. Using a repeated squaring trick known as Savitch's theorem, it is easy to show that every probabilistic log-space computation can be simulated in space ${\displaystyle O(\log ^{2}n)}$. Noam Nisan (1992) showed that this derandomization can actually be achieved with a pseudorandom generator of seed length ${\displaystyle O(\log ^{2}n)}$ that fools all ${\displaystyle O(\log n)}$-space machines. Nisan's generator has been used by Saks and Zhou (1999) to show that probabilistic log-space computation can be simulated deterministically in space ${\displaystyle O(\log ^{1.5}n)}$. This result is still the best known derandomization result for general log-space machines in 2012. Pseudorandom generators for linear functions When the statistical tests consist of all multivariate linear functions over some finite field ${\displaystyle \mathbb {F} }$, one speaks of epsilon-biased generators. The construction of Naor & Naor (1990) achieves a seed length of ${\displaystyle \ell =\log n+O(\log(\epsilon ^{-1}))}$, which is optimal up to constant factors. Pseudorandom generators for linear functions often serve as a building block for more complicated pseudorandom generators. Pseudorandom generators for polynomials Viola (2008) proves that taking the sum of ${\displaystyle d}$ small-bias generators fools polynomials of degree ${\displaystyle d}$. The seed length is ${\displaystyle \ell =d\cdot \log n+O(2^{d}\cdot \log(\epsilon ^{-1}))}$. Pseudorandom generators for constant-depth circuits Constant depth circuits that produce a single output bit.[citation needed] Pseudorandom generators testing NIST announced SP800-22 Randomness tests to test whether a pseudorandom generator produces high quality random bits. Yongge Wang showed that NIST testing is not enough to detect weak pseudorandom generators and developed statistical distance based testing technique LILtest.[1] Limitations on the probability of pseudorandom generators The pseudorandom generators used in cryptography and universal algorithmic derandomization have not been proven to exist, although their existence is widely believed. Proofs for their existence would imply proofs of lower bounds on the circuit complexity of certain explicit functions. Such circuit lower bounds cannot be proved in the framework of natural proofs assuming the existence of stronger variants of cryptographic pseudorandom generators.
2016-10-26 19:26:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 35, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8494179844856262, "perplexity": 381.3547411517082}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00080-ip-10-171-6-4.ec2.internal.warc.gz"}
https://labs-org.ru/c-sharp17-eng/
# Lesson # 17. Abstract Data Types Дата изменения: 3 июня 2021 Programming on c sharp in Microsoft Visual Studio. Abstract Data Types in C# Содержание: ## Theory ### Stack • Stack represents a simple last-in-first-out (LIFO) non-generic collection of objects. • Stack interface: class Stack<T> { public int Count { get; } public void Push(T item); public T Pop(); public T Peek(); } • If Count is less than the capacity of the stack, Push is an O(1) operation. If the capacity needs to be increased to accommodate the new element, Push becomes an O(n) operation, where n is Count. Pop is an O(1) operation. • Stack accepts null as a valid value and allows duplicate elements. • Examples of working with a Stack: • Push method Inserts an object at the top of the Stack • // namespace to work with a Stack class using System.Collections; static void Main(string[] args) { // Creates and initializes a new Stack. Stack myStack = new Stack(); myStack.Push("Hello"); myStack.Push("World"); myStack.Push("!");   // Displays the properties and values of the Stack. Console.WriteLine("myStack"); Console.WriteLine($"Count: {myStack.Count}"); // Count: 3 Console.Write("Values: "); foreach (Object obj in myStack) Console.Write($"{obj} "); // Values: ! World Hello } • Pop method removes and returns the object at the top of the Stack • var s = new Stack<int>(); s.Push(1); s.Push(2); s.Push(3); while (s.Count > 0) Console.WriteLine(s.Pop()); // 3 2 1 Console.WriteLine($"s count: {s.Count}"); // 0 var s = new Stack<int>(); s.Push(1); s.Push(2); s.Push(3); var s1 = new Stack<int>(); while (s.Count > 0) s1.Push(s.Pop()); Console.WriteLine($"s1 count: {s1.Count}"); // 3 while (s1.Count > 0) Console.WriteLine(s1.Pop()); // 1 2 3 • Peek method returns the object at the top of the Stack without removing it ### Queue • Queue is a collection of data organized by the FIFO principle: First In – First Out • Queue interface: class Queue<T> { public int Count { get; } public void Enqueue(T item); public T Dequeue(); public T Peek(); } Examples of working with a Queue: var q = new Queue<int>(); // Initializing of the new instance of the Queue<T> class q.Enqueue(3); // Adds an object to the end of the Queue q.Enqueue(2); q.Enqueue(5); // 3 2 5   var q1 = new Queue<int>(); while (q.Count > 0) q1.Enqueue(q.Dequeue()); while (q1.Count>0) WriteLine(q1.Dequeue()); Stack Lab 1. Working with a Stack class To do: There is a string containing three types of brackets: round (), square [], and curly {}, and any other characters. Check whether the brackets are correctly placed in it. For example, in the line ([]{})[] brackets are placed correctly, but not in the line ([]]. Note: it is better to use the next algorithm: 1. Create an empty stack. 2. Iterate over the characters of the string using a loop. 3. If the current character is an opening bracket, then push it at the top of the stack. 4. If the current character is a closing bracket, then check the top element of the stack: the corresponding opening bracket must be there. 5. In the end, the stack must become empty, otherwise, the string had incorrect brackets. The resulting example: The string: ([a]{b}c)[] The brackets are placed correctly! The string: (a[]] The brackets are placed INcorrectly! The string: ()) The brackets are placed INcorrectly! [Solution and Project name: Lesson_17Lab1, file name L17Lab1.cs] ✍ How to do: • Create a new project with the name and file name as it is specified in the task. • Create a method to check the characters of the string. Only one parameter is needed — the string itself. The method will return a boolean type: true — if the brackets are placed correctly inside the string, and false — if the brackets are placed incorrectly inside the string. The signature of the method: • public static bool CheckBrackets(string s) { ... } 1. Create an empty stack. • After, create an instance of the Stack class to push opening brackets into it. Since we’re going to work with characters it has to be of char type. Place the declaration inside the method: • Stack<char> st = new Stack<char>(); 2. Iterate over the characters of the string using a loop. • To iterate over the string characters we’re going to use foreach loop: • foreach (char c in s) { ... } 3. If the current character is an opening bracket, then push it at the top of the stack. • We’re going to store all opening brackets in our stack. Create an if statement to check the current string character to see if it is an opening bracket, if so, push this bracket into the stack. Place the code into the foreach loop, instead of ...: • if (c == '[' || c == '(' || c == '{') st.Push(c); Push(Object) method inserts an object at the top of the Stack. 4. If the current character is a closing bracket, then check the top element of the stack: the corresponding opening bracket must be there. • Let’s imagine that our stack is no longer empty, and we have to check whether the opening bracket at the top of the stack (that is, it was pushed last) corresponds to the closing bracket — the value of the current character. Add the If statement to check these conditions: • if (st.Count > 0) if ((st.Peek() == '[' && c == ']') || (st.Peek() == '(' && c == ')') || (st.Peek() == '{' && c == '}')) { ... } Peek() method returns the object at the top of the Stack without removing it. • At the end of the check, our stack should become empty. So we need to remove the corresponding bracket from the stack. Place the statement inside the if: • // ... st.Pop(); Pop() removes and returns the object at the top of the Stack. • One more thing we have to do is to count all the brackets inside the string. If the number of the brackets is odd, so there is an extra bracket or a lack of bracket inside the string. Define the counter before the foreach loop: • int countBrackets = 0; string brackets = "[]{}()"; • Inside the loop, we need to increase the counter in the case if the current character is a bracket: • if (brackets.IndexOf(c) >= 0) countBrackets++; 5. In the end, the stack must become empty, otherwise, the string had incorrect brackets. • The method returns true if the stack is empty or the countBrackets is an even number, and false otherwise. Add the code at the end of the method code: • if (countBrackets % 2 > 0) // if the number of brackets is odd return false; else return st.Count == 0; • Within the main function initialize a string variable. Call the CheckBrackets method to check the string. Print out the proper messages: • Write("The string: "); string s = "([a]{b}c)[]"; WriteLine(s); if (CheckBrackets(s)) WriteLine("The brackets are placed correctly!"); else WriteLine("The brackets are placed INcorrectly!"); • Run the application again and check the output. • Add some more values of the string and check them too. • Write("The string: "); s = "(a[]]"; WriteLine(s); if (CheckBrackets(s)) WriteLine("The brackets are placed correctly!"); else WriteLine("The brackets are placed INcorrectly!"); • Run the application again and check the output. • Save and upload the file into the moodle system. Queue Lab 2. Working with a Queue To do: Output the first n natural numbers in ascending order, whose Prime factorization includes only the numbers 2, 3, 5. For example, for n = 11 the result must be 1 2 3 4 5 6 8 9 10 12 15. Because there are no multiples of 7, 11 and 13; for 14 — the multiples are 2 and 7, and 7 can’t be used. Note: it is better to use the next algorithm: 1. Use three queues that store numbers that are 2 (3, 5) times larger than those already printed, but they are not printed. 2. Each time the smallest value located at the top of one of the queues is selected and printed, and corresponding multiples of it are added to the tails of the queues. 3. The process starts with printing the number 1. The resulting example: Please, enter n: 20 The result is: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 [Solution and Project name: Lesson_17Lab2, file name L17Lab2.cs] ✍ How to do: • Create a new project with the name and file name as it is specified in the task. • Ask user to input n and read the inputted number, storing it in the n variable: • Console.WriteLine("Please, enter n:"); int n = int.Parse(Console.ReadLine()); • Don’t forget to check to see if the entered number is larger than 0: • System.Diagnostics.Debug.Assert(n > 0); 1. Use three queues that store numbers that are 2 (3, 5) times larger than those already printed, but they are not printed. • Initialize three empty instances of the Queue class. Give them the names (e.g. Two, Three, Five): • Queue<int> Two = new Queue<int>(); Queue<int> Three = new Queue<int>(); Queue<int> Five = new Queue<int>(); • It is said in the task that «the process starts with printing the number 1«. Let’s print the 1: • int min = 1; Console.Write(min); Console.Write(" "); 2. Each time the smallest value located at the top of one of the queues is selected and printed, and corresponding multiples of it are added to the tails of the queues. • After, you have to add to Two queue the number that is 2 times larger than that already printed (than min = 1), to add to Tree queue the number that is 3 times larger than that already printed, and to Five queue the number that is 5 times larger than that already printed: • Two.Enqueue(min * 2); Three.Enqueue(min * 3); Five.Enqueue(min * 5); Enqueue(object obj) adds an object to the end of the Queue. • To print out the next value, it should be 2, you have to find the minimum number among the top numbers of the queues and print it out to the console window. This process will be repeated n times, so you’ll need a for loop: • for (int i = 1; i < n; ++i) { min = Math.Min(Two.Peek(), Math.Min(Three.Peek(), Five.Peek())); Console.Write(min); ... } Peek() method returns the object at the beginning of the Queue without removing it. • Inside the same loop you have to add the next values which are 2, 3 or 5 times larger than that already printed (min) (in the task we have «corresponding multiples of it are added to the tails of the queues»): • Console.Write(" "); Two.Enqueue(min*2); Three.Enqueue(min*3); Five.Enqueue(min*5); • At the same time you have to remove the printed value from the queue: • if (Two.Peek() == min) Two.Dequeue(); if (Three.Peek() == min) Three.Dequeue(); if (Five.Peek() == min) Five.Dequeue(); Dequeue() method removes and returns the object at the beginning of the Queue. • Run the program to see the output. Lists Lab 3. Working with a LinkedList Linkedlists are usually used in situations where you need to perform a lot of inserts and deletions in the middle of some sequence. To do: 1. A long list of random integers (100,000 items) must be generated. 2. Iterate through the list and remove all numbers from the list that are divisible by the first element of it. 3. The number 0 must be inserted between any two elements of the same parity (e.g. 2 and 8 are both even, so we must output 2 0 8; 3 and 9 are both odd, so we must output 3 0 9). The resulting example: The beginning of the resulting output: Before: 9 9 2 6 6 8 7 1 1 7 6 7 8 5 1 3 6 6 5 8 3 2 8 6 8 4 4 4 7 9 4 5 9 2 6 5 4 3 8 6 5 8 1 6 4 5 7 3 9 8 4 2 2 4 6 8 1 5 7 3 4 4 3 2 3 1 8 6 8 5 9 3 7 9 4 5 2 After: 9 2 0 6 0 6 0 8 7 0 1 0 1 0 7 6 7 8 5 0 1 0 3 6 0 6 5 8 3 2 0 8 0 6 0 8 0 4 0 4 0 4 7 4 5 2 [Solution and Project name: Lesson_17Lab3, file name L17Lab3.cs] ✍ How to do: • Create a new project with the name and file name as it is specified in the task. • Class LinkedList represents a doubly linked list. We’re going to create an instance of an object of that class. Let’s call it l: • LinkedList<int> l = new LinkedList<int>(); • The list of 100 000 numbers must be generated randomly. So we need to use the instance of class Random object, number generator: • Random r = new Random() • To create a list of generated number with the boundaries [1;10] we’re going to use the for loop: • for (int i = 0; i < 100000; ++i) { l.AddLast(r.Next(1, 10)); } AddLast(T value) method adds a new node containing the specified value at the end of the LinkedList. • After, we can print the elements of the list out to the console window: • Console.WriteLine("before"); foreach (int i in l) { Console.Write($"{i} "); } • Run the program to see the output. • 2. Iterate through the list and remove all numbers from the list that are divisible by the first element of it. • Now, we’re going to use the variable to store the value of the first element. And after, we’ll check the elements one by one to see if they are divisible by the first element without a remainder. • var head = l.First; int fst = l.First.Value; The property First returns the first node of the LinkedList. The property Value is used to get value of the node. • Let’s check the rest elements (nodes) of the list to remove those of them which are devisable by fst (the value of the first element). We’re going to use the while loop: • head = head.Next; while (head != null) { var t = head; // the current node of the list head = head.Next; // the next node of the list if (t.Value % fst == 0) { l.Remove(t); // removes the node of the list } } 3. The number 0 must be inserted between any two elements of the same parity. • Now we’re going to use the while loop to iterate through the list’s nodes to check the neighboring nodes to see if they are of the same parity. We’ll have if statement with double conditions — for odd and even cases: • head = l.First; // the current node var headp = head.Next; // the next node while (headp != null) { if ((head.Value % 2 == 0 && headp.Value % 2 == 0) || // even nodes (head.Value % 2 != 0 && headp.Value % 2 != 0)) // odd nodes { l.AddAfter(head, 0); } head = headp; // moves to the next pair of the nodes headp = headp.Next; } The AddAfter method adds the specified new node after the specified existing node in the LinkedList. • After, we can output the resulting LinkedList: • Console.WriteLine(); Console.WriteLine("after"); foreach (int i in l) { Console.Write($"{i} "); } • Run the program again to see the output. Вставить формулу как Дополнительные настройки Цвет формулы Используйте LaTeX для набора формулы Предпросмотр $${}$$ Формула не набрана Вставить
2022-01-24 20:32:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1838030368089676, "perplexity": 1182.7733827680283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00515.warc.gz"}
https://brilliant.org/discussions/thread/please-help-32/
× $\large \sin^{-1}\dfrac{1}{\sqrt{2}} + \sin^{-1}\dfrac{\sqrt{2} - 1}{\sqrt{6}} + \sin^{-1}\dfrac{\sqrt{3} - \sqrt{2}}{\sqrt{12}} + \cdots= \ ?$ I am not able to generalise the series.. This is a MCQ type question and options are : • 0 • 1 • $$\pi / 2$$ • 2 • None of these Note by Akhil Bansal 12 months ago Sort by: $\sum_{n=1}^{\infty}sin^{-1}(\frac{\sqrt{n}-\sqrt{n-1}}{\sqrt{n(n+1)}})$ The other side in a right angled Triangle with other sides $$\sqrt{n(n+1)},\sqrt{n}-\sqrt{n-1}$$ is $$\sqrt{n^{2}-n}+1$$ Therefore $\sum_{n=1}^{\infty}sin^{-1}(\frac{\sqrt{n}-\sqrt{n-1}}{\sqrt{n(n+1)}})= \sum_{n=1}^{\infty} tan^{-1}(\frac{\sqrt{n}-\sqrt{n-1}}{1+\sqrt{n(n-1)}})$ $= \sum_{n=1}^{\infty} tan^{-1}(\sqrt{n})-tan^{-1}(\sqrt{n-1})$ $= tan^{-1}(\infty)-tan^{-1}(0)$ $=\frac{\pi}{2}$. · 12 months ago nice one. dont give $$\infty$$ in an expression like that. arithmetic for $$\infty$$ is not defined. write as $\lim_{n\rightarrow \infty} (\tan^{-1}(n)-\tan^{-1}(0))$ · 11 months, 4 weeks ago Notist that sin(arcsin(1/√2)+arcsin((√2-1)/√6)= 1/√2(√2-1)/√6+1/√2(√2+1)/√6= 2√2/√12=√(2/3) So sum first two terms is arcsin(√(2/3)) Now sin(arcsin(√(2/3)+arcsin((√3-√2)/√12)=√(2/3)(√6+1)/√12+ +√(1/3)(√3-√2)/√12= 1/√36*(√12+√2+√3-√2)=3√3/6= √(3/4) so using mathematical induction you can prove that sum of first n terms is arccos(√(n/(n+1))= arccos(√(1-1/(n+1)) So when you put lim n->infinify You'll get arcsin(1)=π/2 · 12 months ago Comment deleted 12 months ago What is the general term to given expression? · 12 months ago @Akhil Bansal The general term is $$\sin^{-1} \left( \dfrac{\sqrt n -\sqrt{n-1}}{\sqrt{n(n+1)}} \right)$$. · 12 months ago
2016-12-07 08:32:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8926684856414795, "perplexity": 3554.7282562522355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542009.32/warc/CC-MAIN-20161202170902-00472-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-4-polynomials-4-3-polynomials-4-3-exercise-set-page-251/36
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) The answers are: The standard form of this polynomial is written as $\color{red}{4n^2 + 6n + 1}$. \begin{align} &\text{a) The degree of the terms are: }\color{red}{2, 1, 0} \\ &\text{b) The leading term and coefficient is: }\color{red}{4n^2} \\ &\text{c) The degree of the polynomial is: }\color{red}2 \end{align} To identify the degree and the leading coefficient of the polynomial, we first write it in the standard form and look at the first term. The standard form is when the terms' exponents are arranged in descending order from left to right. The exponent of the first term is the degree of the polynomial and the coefficient of the first term is the leading coefficient. The standard form of this polynomial is written as $\color{red}{4n^2 + 6n + 1}$. \begin{align} &\text{a) The degree of the terms are: }\color{red}{2, 1, 0} \\ &\text{b) The leading term and coefficient is: }\color{red}{4n^2} \\ &\text{c) The degree of the polynomial is: }\color{red}2 \end{align}
2018-08-14 11:49:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999837875366211, "perplexity": 447.76372288512147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209021.21/warc/CC-MAIN-20180814101420-20180814121420-00330.warc.gz"}
https://labs.tib.eu/arxiv/?author=C.%20C.%20Thone
• ### The X-shooter GRB afterglow legacy sample (XS-GRB)(1802.07727) Feb. 21, 2018 astro-ph.HE In this work we present spectra of all $\gamma$-ray burst (GRB) afterglows that have been promptly observed with the X-shooter spectrograph until 31-03-2017. In total, we obtained spectroscopic observations of 103 individual GRBs observed within 48 hours of the GRB trigger. Redshifts have been measured for 97 per cent of these, covering a redshift range from 0.059 to 7.84. Based on a set of observational selection criteria that minimize biases with regards to intrinsic properties of the GRBs, the follow-up effort has been focused on producing a homogeneous sample of 93 afterglow spectra for GRBs discovered by the Swift satellite. We here provide a public release of all the reduced spectra, including continuum estimates and telluric absorption corrections. For completeness, we also provide reductions for the 18 late-time observations of the underlying host galaxies. We provide an assessment of the degree of completeness with respect to the parent GRB population, in terms of the X-ray properties of the bursts in the sample and find that the sample presented here is representative of the full Swift sample. We constrain the fraction of dark bursts to be < 28 per cent and we confirm previous results that higher optical darkness is correlated with increased X-ray absorption. For the 42 bursts for which it is possible, we provide a measurement of the neutral hydrogen column density, increasing the total number of published HI column density measurements by $\sim$ 33 per cent. This dataset provides a unique resource to study the ISM across cosmic time, from the local progenitor surroundings to the intervening universe. • ### The optical afterglow of the short gamma-ray burst associated with GW170817(1801.02669) Jan. 19, 2018 astro-ph.HE The binary neutron star merger GW170817 was the first multi-messenger event observed in both gravitational and electromagnetic waves. The electromagnetic signal began ~ 2 seconds after the merger with a weak, short burst of gamma-rays, which was followed over the course of the next hours and days by the ultraviolet, optical and near-infrared emission from a radioactively-powered kilonova. The low luminosity of the gamma-rays and the rising radio and X-ray flux from the source at late times could indicate that we are viewing this event outside the opening angle of the beamed relativistic jet launched during the merger. Alternatively, the emission could be arising from a cocoon of material formed from the interaction between a (possibly choked) jet and the merger ejecta. Here we present late-time optical detections and deep near-infrared limits on the emission from GW170817 at 110 days after the merger. Our new observations are at odds with expectations of late-time emission from kilonova models, being too bright and blue. Instead, this late-time optical emission arises from the optical afterglow of GRB 170817A, associated with GW170817. This emission matches the expectations of a structured relativistic jet, that would have launched a high luminosity short GRB to an aligned observer. The distinct predictions for the future optical behaviour in the structured-jet and cocoon models will directly distinguish the origin of the emission. • ### The luminous, massive and solar metallicity galaxy hosting the Swift gamma-ray burst, GRB 160804A at z = 0.737(1711.02706) Nov. 7, 2017 astro-ph.GA We here present the spectroscopic follow-up observations with VLT/X-shooter of the Swift long-duration gamma-ray burst GRB 160804A at z = 0.737. Typically, GRBs are found in low-mass, metal-poor galaxies which constitute the sub-luminous population of star-forming galaxies. For the host galaxy of the GRB presented here we derive a stellar mass of $\log(M_*/M_{\odot}) = 9.80\pm 0.07$, a roughly solar metallicity (12+log(O/H) = $8.74\pm 0.12$) based on emission line diagnostics, and an infrared luminosity of $M_{3.6/(1+z)} = -21.94$ mag, but find it to be dust-poor ($E(B-V) < 0.05$ mag). This establishes the galaxy hosting GRB 160804A as one of the most luminous, massive and metal-rich GRB hosts at z < 1.5. Furthermore, the gas-phase metallicity is found to be representative of the physical conditions of the gas close to the explosion site of the burst. The high metallicity of the host galaxy is also observed in absorption, where we detect several strong FeII transitions as well as MgII and MgI. While host galaxy absorption features are common in GRB afterglow spectra, we detect absorption from strong metal lines directly in the host continuum (at a time when the afterglow was contributing to < 15%). Finally, we discuss the possibility that the geometry and state of the absorbing and emitting gas is indicative of a galactic scale outflow expelled at the final stage of two merging galaxies. • ### Cosmic evolution and metal aversion in super-luminous supernova host galaxies(1612.05978) Nov. 3, 2017 astro-ph.GA The SUperluminous Supernova Host galaxIES (SUSHIES) survey aims to provide strong new constraints on the progenitors of superluminous supernovae (SLSNe) by understanding the relationship to their host galaxies. We present the photometric properties of 53 H-poor and 16 H-rich SLSN host galaxies out to $z\sim4$. We model their spectral energy distributions to derive physical properties, which we compare with other galaxy populations. At low redshift, H-poor SLSNe are preferentially found in very blue, low-mass galaxies with high average specific star-formation rates. As redshift increases, the host population follows the general evolution of star-forming galaxies towards more luminous galaxies. After accounting for secular evolution, we find evidence for differential evolution in galaxy mass, but not in the $B$-band and the far UV luminosity ($3\sigma$ confidence). Most remarkable is the scarcity of hosts with stellar masses above $10^{10}~M_\odot$ for both classes of SLSNe. In the case of H-poor SLSNe, we attribute this to a stifled production efficiency above $\sim0.4$ solar metallicity. However, we argue that, in addition to low metallicity, a short-lived stellar population is also required to regulate the SLSN production. H-rich SLSNe are found in a very diverse population of star-forming galaxies. Still, the scarcity of massive hosts suggests a stifled production efficiency above $\sim0.8$ solar metallicity. The large dispersion of the H-rich SLSNe host properties is in stark contrast to those of gamma-ray burst, regular core-collapse SN, and H-poor SLSNe host galaxies. We propose that multiple progenitor channels give rise to this sub-class. • ### The host of the Type I SLSN 2017egm: A young, sub-solar metallicity environment in a massive spiral galaxy(1708.03856) Oct. 2, 2017 astro-ph.GA, astro-ph.HE Here we present an integral-field study of the massive, high-metallicity spiral NGC 3191, the host of SN 2017egm, the closest SLSN Type I to date. We use data from PMAS/CAHA and the public MaNGA survey to shed light on the properties of the SLSN site and the origin of star-formation in this non-starburst spiral galaxy. We map the physical properties different \ion{H}{II} regions throughout the galaxy and characterize their stellar populations using the STARLIGHT fitting code. Kinematical information allows to study a possible interaction with its neighbouring galaxy as the origin of recent star formation activity which could have caused the SLSN. NGC 3191 shows intense star-formation in the western part with three large SF regions of low metallicity. The central regions of the host have a higher metallicity, lower specific star-formation rate and lower ionization. Modeling the stellar populations gives a different picture: The SLSN region has two dominant stellar populations with different ages, the youngest one with an age of 2-10 Myr and lower metallicity, likely the population from which the SN progenitor originated. Emission line kinematics of NGC 3191 show indications of interaction with its neighbour MCG+08-19-017 at $\sim$45 kpc, which might be responsible for the recent starburst. In fact, this galaxy pair has in total hosted 4 SNe, 1988B (Type Ia), SN 2003ds (Type Ic in MCG+08-19-017), PTF10bgl (SLSN-Type II) and 2017egm, underlying the enhanced SF in both galaxies due to interaction. Our study shows that one has to be careful interpreting global host and even gas properties without looking at the stellar population history of the region. SLSNe seem to still be consistent with massive stars ($>$ 20 M$_\odot$) requiring low ($< 0.6Z_{\odot}$) metallicity and those environments can also occur in massive, late-type galaxies but not necessarily starbursts. • ### Solving the conundrum of intervening strong MgII absorbers towards GRBs and quasars(1709.01084) Sept. 4, 2017 astro-ph.GA Previous studies have shown that the incidence rate of intervening strong MgII absorbers towards GRBs were a factor of 2 - 4 higher than towards quasars. Exploring the similar sized and uniformly selected legacy data sets XQ-100 and XSGRB, each consisting of 100 quasar and 81 GRB afterglow spectra obtained with a single instrument (VLT/X-shooter), we demonstrate that there is no disagreement in the number density of strong MgII absorbers with rest-frame equivalent widths $W_r^{2796} >$ 1 {\AA} towards GRBs and quasars in the redshift range 0.1 < z < 5. With large and similar sample sizes, and path length coverages of $\Delta$z = 57.8 and 254.4 for GRBs and quasars, respectively, the incidences of intervening absorbers are consistent within 1 sigma uncertainty levels at all redshifts. For absorbers at z < 2.3 the incidence towards GRBs is a factor of 1.5$\pm$0.4 higher than the expected number of strong MgII absorbers in SDSS quasar spectra, while for quasar absorbers observed with X-shooter we find an excess factor of 1.4$\pm$0.2 relative to SDSS quasars. Conversely, the incidence rates agree at all redshifts with reported high spectral resolution quasar data, and no excess is found. The only remaining discrepancy in incidences is between SDSS MgII catalogues and high spectral resolution studies. The rest-frame equivalent width distribution also agrees to within 1 sigma uncertainty levels between the GRB and quasar samples. Intervening strong MgII absorbers towards GRBs are therefore neither unusually frequent, nor unusually strong. • ### The MUSE view of the host galaxy of GRB 100316D(1704.05509) Aug. 29, 2017 astro-ph.GA The low distance, $z=0.0591$, of GRB 100316D and its association with SN 2010bh represent two important motivations for studying this host galaxy and the GRB's immediate environment with the Integral-Field Spectrographs like VLT/MUSE. Its large field-of-view allows us to create 2D maps of gas metallicity, ionization level, and the star-formation rate distribution maps, as well as to investigate the presence of possible host companions. The host is a late-type dwarf irregular galaxy with multiple star-forming regions and an extended central region with signatures of on-going shock interactions. The GRB site is characterized by the lowest metallicity, the highest star-formation rate and the youngest ($\sim$ 20-30 Myr) stellar population in the galaxy, which suggest a GRB progenitor stellar population with masses up to 20 -- 40 $M_{\odot}$. We note that the GRB site has an offset of $\sim$660pc from the most luminous SF region in the host. The observed SF activity in this galaxy may have been triggered by a relatively recent gravitational encounter between the host and a small undetected ($L_{H\alpha} \leq 10^{36}$ erg/s) companion. • ### The host galaxy of the short GRB111117A at $z = 2.211$: impact on the short GRB redshift distribution and progenitor channels(1707.01452) July 3, 2017 astro-ph.HE It is notoriously difficult to localize short $\gamma$-ray bursts (sGRBs) and their hosts to measure their redshifts. These measurements, however, are critical to constrain the nature of sGRB progenitors, their redshift distribution and the $r$-process element enrichment history of the universe. Here, we present spectroscopy of the host galaxy of GRB111117A and measure its redshift to be $z = 2.211$. This makes GRB111117A the most distant high-confidence short duration GRB detected to date. Our spectroscopic redshift supersedes a lower, previously estimated photometric redshift value for this burst. We use the spectroscopic redshift, as well as new imaging data to constrain the nature of the host galaxy and the physical parameters of the GRB. The rest-frame X-ray derived hydrogen column density, for example, is the highest compared to a complete sample of sGRBs and seems to follow the evolution with redshift as traced by the hosts of long GRBs (lGRBs). The host lies in the brighter end of the expected sGRB host brightness distribution at $z = 2.211$, and is actively forming stars. Using the host as a benchmark for redshift determination, we find that between 43 and 71 per cent of all sGRB redshifts should be missed due to host faintness for hosts at $z\sim2$. The high redshift of GRB111117A is evidence against a lognormal delay-time model for sGRBs through the predicted redshift distribution of sGRBs, which is very sensitive to high-$z$ sGRBs. From the age of the universe at the time of GRB explosion, an initial neutron star (NS) separation of $a_0 < 3.2~R_\odot$ is required in the case where the progenitor system is a circular pair of inspiralling NSs. This constraint excludes some of the longest sGRB formation channels for this burst. • ### SN 2015bh: NGC 2770's 4th supernova or a luminous blue variable on its way to a Wolf-Rayet star?(1606.09025) June 29, 2016 astro-ph.SR, astro-ph.HE Very massive stars in the final phases of their lives often show unpredictable outbursts that can mimic supernovae, so-called, "SN impostors", but the distinction is not always straigthforward. Here we present observations of a luminous blue variable (LBV) in NGC 2770 in outburst over more than 20 years that experienced a possible terminal explosion as type IIn SN in 2015, named SN 2015bh. This possible SN or "main event" was preceded by a precursor peaking $\sim$ 40 days before maximum. The total energy release of the main event is $\sim$1.8$\times$10$^{49}$ erg, which can be modeled by a $<$ 0.5 M$_\odot$ shell plunging into a dense CSM. All emission lines show a single narrow P-Cygni profile during the LBV phase and a double P-Cygni profile post maximum suggesting an association of this second component with the possible SN. Since 1994 the star has been redder than during a typical S-Dor like outburst. SN 2015bh lies within a spiral arm of NGC 2770 next to a number of small star-forming regions with a metallicity of $\sim$ 0.5 solar and a stellar population age of 7-10 Myr. SN 2015bh shares many similarities with SN 2009ip, which, together with other examples may form a new class of objects that exhibit outbursts a few decades prior to "hyper-eruption" or final core-collapse. If the star survives this event it is undoubtedly altered, and we suggest that these "zombie stars" may evolve from an LBV to a Wolf Rayet star over a very short timescale of only a few years. The final fate of these types of massive stars can only be determined with observations years after the possible SN. • ### The Swift Gamma-Ray Burst Host Galaxy Legacy Survey - I. Sample Selection and Redshift Distribution(1504.02482) Jan. 20, 2016 astro-ph.GA, astro-ph.HE We introduce the Swift Gamma-Ray Burst Host Galaxy Legacy Survey ("SHOALS"), a multi-observatory high-redshift galaxy survey targeting the largest unbiased sample of long-duration gamma-ray burst hosts yet assembled (119 in total). We describe the motivations of the survey and the development of our selection criteria, including an assessment of the impact of various observability metrics on the success rate of afterglow-based redshift measurement. We briefly outline our host-galaxy observational program, consisting of deep Spitzer/IRAC imaging of every field supplemented by similarly-deep, multi-color optical/NIR photometry, plus spectroscopy of events without pre-existing redshifts. Our optimized selection cuts combined with host-galaxy follow-up have so far enabled redshift measurements for 110 targets (92%) and placed upper limits on all but one of the remainder. About 20% of GRBs in the sample are heavily dust-obscured, and at most 2% originate from z>5.5. Using this sample we estimate the redshift-dependent GRB rate density, showing it to peak at z~2.5 and fall by about an order of magnitude towards low (z=0) redshift, while declining more gradually towards high (z~7) redshift. This behavior is consistent with a progenitor whose formation efficiency varies modestly over cosmic history. Our survey will permit the most detailed examination to date of the connection between the GRB host population and general star-forming galaxies, directly measure evolution in the host population over cosmic time and discern its causes, and provide new constraints on the fraction of cosmic star-formation occurring in undetectable galaxies at all redshifts. • ### GRB 140606B / iPTF14bfu: Detection of shock-breakout emission from a cosmological gamma-ray burst?(1505.03522) June 10, 2015 astro-ph.SR, astro-ph.HE We present optical and near-infrared photometry of GRB~140606B ($z=0.384$), and optical photometry and spectroscopy of its associated supernova (SN). The results of our modelling indicate that the bolometric properties of the SN ($M_{\rm Ni} = 0.4\pm0.2$~M$_{\odot}$, $M_{\rm ej} = 5\pm2$~M$_{\odot}$, and $E_{\rm K} = 2\pm1 \times 10^{52}$ erg) are fully consistent with the statistical averages determined for other GRB-SNe. However, in terms of its $\gamma$-ray emission, GRB~140606B is an outlier of the Amati relation, and occupies the same region as low-luminosity ($ll$) and short GRBs. The $\gamma$-ray emission in $ll$GRBs is thought to arise in some or all events from a shock-breakout (SBO), rather than from a jet. The measured peak photon energy ($E_{\rm p}\approx800$ keV) is close to that expected for $\gamma$-rays created by a SBO ($\gtrsim1$ MeV). Moreover, based on its position in the $M_{V,\rm p}$--$L_{\rm iso,\gamma}$~plane and the $E_{\rm K}$--$\Gamma\beta$~plane, GRB~140606B has properties similar to both SBO-GRBs and jetted-GRBs. Additionally, we searched for correlations between the isotropic $\gamma$-ray emission and the bolometric properties of a sample of GRB-SNe, finding that no statistically significant correlation is present. The average kinetic energy of the sample is $\bar{E}_{\rm K} = 2.1\times10^{52}$ erg. All of the GRB-SNe in our sample, with the exception of SN 2006aj, are within this range, which has implications for the total energy budget available to power both the relativistic and non-relativistic components in a GRB-SN event. • ### The Needle in the 100 deg2 Haystack: Uncovering Afterglows of Fermi GRBs with the Palomar Transient Factory(1501.00495) The Fermi Gamma-ray Space Telescope has greatly expanded the number and energy window of observations of gamma-ray bursts (GRBs). However, the coarse localizations of tens to a hundred square degrees provided by the Fermi GRB Monitor instrument have posed a formidable obstacle to locating the bursts' host galaxies, measuring their redshifts, and tracking their panchromatic afterglows. We have built a target-of-opportunity mode for the intermediate Palomar Transient Factory in order to perform targeted searches for Fermi afterglows. Here, we present the results of one year of this program: 8 afterglow discoveries out of 35 searches. Two of the bursts with detected afterglows (GRBs 130702A and 140606B) were at low redshift (z=0.145 and 0.384 respectively) and had spectroscopically confirmed broad-line Type Ic supernovae. We present our broadband follow-up including spectroscopy as well as X-ray, UV, optical, millimeter, and radio observations. We study possible selection effects in the context of the total Fermi and Swift GRB samples. We identify one new outlier on the Amati relation. We find that two bursts are consistent with a mildly relativistic shock breaking out from the progenitor star, rather than the ultra-relativistic internal shock mechanism that powers standard cosmological bursts. Finally, in the context of the Zwicky Transient Facility, we discuss how we will continue to expand this effort to find optical counterparts of binary neutron star mergers that may soon be detected by Advanced LIGO and Virgo. • ### The warm, the excited, and the molecular gas: GRB 121024A shining through its star-forming galaxy(1409.6315) May 10, 2015 astro-ph.GA We present the first reported case of the simultaneous metallicity determination of a gamma-ray burst (GRB) host galaxy, from both afterglow absorption lines as well as strong emission-line diagnostics. Using spectroscopic and imaging observations of the afterglow and host of the long-duration Swift GRB121024A at z = 2.30, we give one of the most complete views of a GRB host/environment to date. We observe a strong damped Ly-alpha absorber (DLA) with a hydrogen column density of log N(HI) = 21.88 +/- 0.10, H2 absorption in the Lyman-Werner bands (molecular fraction of log(f)~ -1.4; fourth solid detection of molecular hydrogen in a GRB-DLA), the nebular emission lines H-alpha, H-beta, [O II], [O III] and [N II], as well as metal absorption lines. We find a GRB host galaxy that is highly star-forming (SFR ~ 40 solar masses/yr ), with a dust-corrected metallicity along the line of sight of [Zn/H]corr = -0.6 +/- 0.2 ([O/H] ~ -0.3 from emission lines), and a depletion factor [Zn/Fe] = 0.85 +/- 0.04. The molecular gas is separated by 400 km/s (and 1-3 kpc) from the gas that is photoexcited by the GRB. This implies a fairly massive host, in agreement with the derived stellar mass of log(M/M_solar ) = 9.9+/- 0.2. We dissect the host galaxy by characterising its molecular component, the excited gas, and the line-emitting star-forming regions. The extinction curve for the line of sight is found to be unusually flat (Rv ~15). We discuss the possibility of an anomalous grain size distributions. We furthermore discuss the different metallicity determinations from both absorption and emission lines, which gives consistent results for the line of sight to GRB 121024A. • ### A young stellar environment for the superluminous supernova PTF12dam(1411.1104) Nov. 4, 2014 astro-ph.GA The progenitors of super luminous supernovae (SLSNe) are still a mystery. Hydrogen-poor SLSN hosts are often highly star-forming dwarf galaxies and the majority belongs to the class of extreme emission line galaxies hosting young and highly star-forming stellar populations. Here we present a resolved long-slit study of the host of the hydrogen-poor SLSN PTF12dam probing the kpc environment of the SN site to determine the age of the progenitor. The galaxy is a "tadpole" with uniform properties and the SN occurred in a star-forming region in the head of the tadpole. The galaxy experienced a recent star-burst superimposed on an underlying old stellar population. We measure a very young stellar population at the SN site with an age of ~3 Myr and a metallicity of 12+log(O/H)=8.0 at the SN site but do not observe any WR features. The progenitor of PTF12dam must have been a massive star of at least 60 M_solar and one of the first stars exploding as a SN in this extremely young starburst. • ### The mysterious optical afterglow spectrum of GRB140506A at z=0.889(1409.4975) Sept. 18, 2014 astro-ph.GA, astro-ph.HE Context. Gamma-ray burst (GRBs) afterglows probe sightlines to star-forming regions in distant star-forming galaxies. Here we present a study of the peculiar afterglow spectrum of the z = 0.889 Swift GRB 140506A. Aims. Our aim is to understand the origin of the very unusual properties of the absorption along the line-of-sight. Methods. We analyse spectroscopic observations obtained with the X-shooter spectrograph mounted on the ESO/VLT at two epochs 8.8 h and 33 h after the burst as well as imaging from the GROND instrument. We also present imaging and spectroscopy of the host galaxy obtained with the Magellan telescope. Results. The underlying afterglow appears to be a typical afterglow of a long-duration GRB. However, the material along the line-of- sight has imprinted very unusual features on the spectrum. Firstly, there is a very broad and strong flux drop below 8000 AA (4000 AA in the rest frame), which seems to be variable between the two spectroscopic epochs. We can reproduce the flux-drops both as a giant 2175 AA extinction bump and as an effect of multiple scattering on dust grains in a dense environment. Secondly, we detect absorption lines from excited H i and He i. We also detect molecular absorption from CH+ . Conclusions. We interpret the unusual properties of these spectra as reflecting the presence of three distinct regions along the line-of-sight: the excited He i absorption originates from an H ii-region, whereas the Balmer absorption must originate from an associated photodissociation region. The strong metal line and molecular absorption and the dust extinction must originate from a third, cooler region along the line-of-sight. The presence of (at least) three separate regions is reflected in the fact that the different absorption components have different velocities relative to the systemic redshift of the host galaxy. • ### A Trio of GRB-SNe: GRB 120729A, GRB 130215A / SN 2013ez and GRB 130831A / SN 2013fu(1405.3114) May 13, 2014 astro-ph.HE We present optical and near-infrared (NIR) photometry for three gamma-ray burst supernovae (GRB-SNe): GRB 120729A, GRB 130215A / SN 2013ez and GRB 130831A / SN 2013fu. In the case of GRB 130215A / SN 2013ez, we also present optical spectroscopy at t-t0=16.1 d, which covers rest-frame 3000-6250 Angstroms. Based on Fe II (5169) and Si (II) (6355), our spectrum indicates an unusually low expansion velocity of 4000-6350 km/s, the lowest ever measured for a GRB-SN. Additionally, we determined the brightness and shape of each accompanying SN relative to a template supernova (SN 1998bw), which were used to estimate the amount of nickel produced via nucleosynthesis during each explosion. We find that our derived nickel masses are typical of other GRB-SNe, and greater than those of SNe Ibc that are not associated with GRBs. For GRB 130831A / SN 2013fu, we use our well-sampled R-band light curve (LC) to estimate the amount of ejecta mass and the kinetic energy of the SN, finding that these too are similar to other GRB-SNe. For GRB 130215A, we take advantage of contemporaneous optical/NIR observations to construct an optical/NIR bolometric LC of the afterglow. We fit the bolometric LC with the millisecond magnetar model of Zhang & Meszaros (2001), which considers dipole radiation as a source of energy injection to the forward shock powering the optical/NIR afterglow. Using this model we derive an initial spin period of P=12 ms and a magnetic field of B=1.1 x 10^15 G, which are commensurate with those found for proposed magnetar central engines of other long-duration GRBs. • ### GRB 120422A/SN 2012bz: Bridging the Gap between Low- And High-Luminosity GRBs(1401.3774) Jan. 15, 2014 astro-ph.CO, astro-ph.HE At low redshift, a handful of gamma-ray bursts (GRBs) have been discovered with peak luminosities ($L_{\rm iso} < 10^{48.5}~\rm{erg\,s}^{-1}$) substantially lower than the average of the more distant ones ($L_{\rm iso} > 10^{49.5}~\rm{erg\,s}^{-1}$). The properties of several low-luminosity (low-$L$) GRBs indicate that they can be due to shock break-out, as opposed to the emission from ultrarelativistic jets. Owing to this, it is highly debated how both populations are connected, and whether there is a continuum between them. The burst at redshift $z=0.283$ from 2012 April 22 is one of the very few examples of intermediate-$L$ GRBs with a $\gamma$-ray luminosity of $L\sim10^{48.9}~\rm{erg\,s}^{-1}$ that have been detected up to now. Together with the robust detection of its accompanying supernova SN 2012bz, it has the potential to answer important questions on the origin of low- and high-$L$ GRBs and the GRB-SN connection. We carried out a spectroscopy campaign using medium- and low-resolution spectrographs at 6--10-m class telescopes, covering the time span of 37.3 days, and a multi-wavelength imaging campaign from radio to X-ray energies over a duration of $\sim270$ days. Furthermore, we used a tuneable filter centred at H$\alpha$ to map star formation in the host galaxy and the surrounding galaxies. We used these data to extract and model the properties of different radiation components and incorporate spectral-energy-distribution fitting techniques to extract the properties of the host galaxy. Modelling the light curve and spectral energy distribution from the radio to the X-rays revealed the blast-wave to expand with an initial Lorentz factor of $\Gamma_0\sim60$, low for a high-$L$ GRB, and that the afterglow had an exceptional low peak luminosity-density of $\lesssim2\times10^{30}~\rm{erg\,s}^{-1}\,\rm{Hz}^{-1}$ in the sub-mm. [Abridged] • ### The low-extinction afterglow in the solar-metallicity host galaxy of GRB 110918A(1308.5520) Aug. 26, 2013 astro-ph.CO, astro-ph.HE Galaxies selected through long gamma-ray bursts (GRBs) could be of fundamental importance when mapping the star formation history out to the highest redshifts. Before using them as efficient tools in the early Universe, however, the environmental factors that govern the formation of GRBs need to be understood. Metallicity is theoretically thought to be a fundamental driver in GRB explosions and energetics, but is still, even after more than a decade of extensive studies, not fully understood. This is largely related to two phenomena: a dust-extinction bias, that prevented high-mass and thus likely high-metallicity GRB hosts to be detected in the first place, and a lack of efficient instrumentation, that limited spectroscopic studies including metallicity measurements to the low-redshift end of the GRB host population. The subject of this work is the very energetic GRB 110918A, for which we measure one of the largest host-integrated metallicities, ever, and the highest stellar mass for z<1.9. This presents one of the very few robust metallicity measurements of GRB hosts at z~1, and establishes that GRB hosts at z~1 can also be very metal rich. It conclusively rules out a metallicity cut-off in GRB host galaxies and argues against an anti-correlation between metallicity and energy release in GRBs. • ### The low-extinction afterglow in the solar-metallicity host galaxy of gamma-ray burst 110918A(1306.0892) June 4, 2013 astro-ph.CO, astro-ph.HE Metallicity is theoretically thought to be a fundamental driver in gamma-ray burst (GRB) explosions and energetics, but is still, even after more than a decade of extensive studies, not fully understood. This is largely related to two phenomena: a dust-extinction bias, that prevented high-mass and thus likely high-metallicity GRB hosts to be detected in the first place, and a lack of efficient instrumentation, that limited spectroscopic studies including metallicity measurements to the low-redshift end of the GRB host population. The subject of this work is the very energetic GRB 110918A, for which we measure a redshift of z=0.984. GRB 110918A gave rise to a luminous afterglow with an intrinsic spectral slope of b=0.70, which probed a sight-line with little extinction (A_V=0.16 mag) typical of the established distributions of afterglow properties. Photometric and spectroscopic follow-up observations of the galaxy hosting GRB 110918A, including optical/NIR photometry with GROND and spectroscopy with VLT/X-shooter, however, reveal an all but average GRB host in comparison to the z~1 galaxies selected through similar afterglows to date. It has a large spatial extent with a half-light radius of ~10 kpc, the highest stellar mass for z<1.9 (log(M_*/M_sol) = 10.68+-0.16), and an Halpha-based star formation rate of 41 M_sol/yr. We measure a gas-phase extinction of ~1.8 mag through the Balmer decrement and one of the largest host-integrated metallicities ever of around solar (12 + log(O/H) = 8.93+/-0.13). This presents one of the very few robust metallicity measurements of GRB hosts at z~1, and establishes that GRB hosts at z~1 can also be very metal rich. It conclusively rules out a metallicity cut-off in GRB host galaxies and argues against an anti-correlation between metallicity and energy release in GRBs. • ### Flux and color variations of the doubly imaged quasar UM673(1302.0766) May 6, 2013 astro-ph.CO With the aim of characterizing the flux and color variations of the multiple components of the gravitationally lensed quasar UM673 as a function of time, we have performed multi-epoch and multi-band photometric observations with the Danish 1.54m telescope at the La Silla Observatory. The observations were carried out in the VRi spectral bands during four seasons (2008--2011). We reduced the data using the PSF (Point Spread Function) photometric technique as well as aperture photometry. Our results show for the brightest lensed component some significant decrease in flux between the first two seasons (+0.09/+0.11/+0.05 mag) and a subsequent increase during the following ones (-0.11/-0.11/-0.10 mag) in the V/R/i spectral bands, respectively. Comparing our results with previous studies, we find smaller color variations between these seasons as compared with previous ones. We also separate the contribution of the lensing galaxy from that of the fainter and close lensed component. • ### The optical counterpart of the bright X-ray transient Swift J1745-26(1303.6317) March 25, 2013 astro-ph.HE We present a 30-day monitoring campaign of the optical counterpart of the bright X-ray transient Swift J1745-26, starting only 19 minutes after the discovery of the source. We observe the system peaking at i' ~17.6 on day 6 (MJD 56192) to then decay at a rate of ~0.04 mag/day. We show that the optical peak occurs at least 3 days later than the hard X-ray (15-50 keV) flux peak. Our measurements result in an outburst amplitude greater than 4.3 magnitudes, which favours an orbital period < 21 h and a companion star with a spectral type later than ~ A0. Spectroscopic observations taken with the GTC-10.4 m telescope reveal a broad (FWHM ~ 1100 km/s), double-peaked H_alpha emission line from which we constrain the radial velocity semi-amplitude of the donor to be K_2 > 250 km/s. The breadth of the line and the observed optical and X-ray fluxes suggest that Swift J1745-26 is a new black hole candidate located closer than ~7 kpc. • ### Identifying the Location in the Host Galaxy of the Short GRB 111117A with the Chandra Sub-arcsecond Position(1205.6774) Jan. 7, 2013 astro-ph.HE We present our successful Chandra program designed to identify, with sub-arcsecond accuracy, the X-ray afterglow of the short GRB 111117A, which was discovered by Swift and Fermi. Thanks to our rapid target of opportunity request, Chandra clearly detected the X-ray afterglow, though no optical afterglow was found in deep optical observations. The host galaxy was clearly detected in the optical and near-infrared band, with the best photometric redshift of z=1.31_{-0.23}^{+0.46} (90% confidence), making it one of the highest known short GRB redshifts. Furthermore, we see an offset of 1.0 +- 0.2 arcseconds, which corresponds to 8.4 +- 1.7 kpc, between the host and the afterglow position. We discuss the importance of using Chandra for obtaining sub-arcsecond X-ray localizations of short GRB afterglows to study GRB environments. • ### Pre-ALMA observations of GRBs in the mm/submm range(1108.1797) Dec. 14, 2011 astro-ph.CO GRBs generate an afterglow emission that can be detected from radio to X-rays during days, or even weeks after the initial explosion. The peak of this emission crosses the mm/submm range during the first hours to days, making their study in this range crucial for constraining the models. Observations have been limited until now due to the low sensitivity of the observatories in this range. We present observations of 10 GRB afterglows obtained from APEX and SMA, as well as the first detection of a GRB with ALMA, and put them into context with all the observations that have been published until now in the spectral range that will be covered by ALMA. The catalogue of mm/submm observations collected here is the largest to date and is composed of 102 GRBs, of which 88 had afterglow observations, whereas the rest are host galaxy searches. With our programmes, we contributed with data of 11 GRBs and the discovery of 2 submm counterparts. In total, the full sample, including data from the literature, has 22 afterglow detections with redshift ranging from 0.168 to 8.2. GRBs have been detected in mm/submm wavelengths with peak luminosities spanning 2.5 orders of magnitude, the most luminous reaching 10^33erg s^-1 Hz^-1. We observe a correlation between the X-ray brightness at 0.5 days and the mm/submm peak brightness. Finally we give a rough estimate of the distribution of peak flux densities of GRB afterglows, based on the current mm/submm sample. Observations in the mm/submm bands have been shown to be crucial for our understanding of the physics of GRBs, but have until now been limited by the sensitivity of the observatories. With the start of the operations at ALMA, the sensitivity will be increased by more than an order of magnitude. Our estimates predict that, once completed, ALMA will detect up to 98% of the afterglows if observed during the passage of the peak synchrotron emission. • ### An unusual stellar death on Christmas Day(1105.3015) Oct. 3, 2011 astro-ph.HE Long Gamma-Ray Bursts (GRBs) are the most dramatic examples of massive stellar deaths, usually as- sociated with supernovae (Woosley et al. 2006). They release ultra-relativistic jets producing non-thermal emission through synchrotron radiation as they interact with the surrounding medium (Zhang et al. 2004). Here we report observations of the peculiar GRB 101225A (the "Christmas burst"). Its gamma-ray emission was exceptionally long and followed by a bright X-ray transient with a hot thermal component and an unusual optical counterpart. During the first 10 days, the optical emission evolved as an expanding, cooling blackbody after which an additional component, consistent with a faint supernova, emerged. We determine its distance to 1.6 Gpc by fitting the spectral-energy distribution and light curve of the optical emission with a GRB-supernova template. Deep optical observations may have revealed a faint, unresolved host galaxy. Our proposed progenitor is a helium star-neutron star merger that underwent a common envelope phase expelling its hydrogen envelope. The resulting explosion created a GRB-like jet which gets thermalized by interacting with the dense, previously ejected material and thus creating the observed black-body, until finally the emission from the supernova dominated. An alternative explanation is a minor body falling onto a neutron star in the Galaxy (Campana et al. 2011). • ### I. Flux and color variations of the quadruply imaged quasar HE 0435-1223(1101.3664) Feb. 16, 2011 astro-ph.CO aims: We present VRi photometric observations of the quadruply imaged quasar HE 0435-1223, carried out with the Danish 1.54m telescope at the La Silla Observatory. Our aim was to monitor and study the magnitudes and colors of each lensed component as a function of time. methods: We monitored the object during two seasons (2008 and 2009) in the VRi spectral bands, and reduced the data with two independent techniques: difference imaging and PSF (Point Spread Function) fitting.results: Between these two seasons, our results show an evident decrease in flux by ~0.2-0.4 magnitudes of the four lensed components in the three filters. We also found a significant increase (~0.05-0.015) in their V-R and R-i color indices. conclusions: These flux and color variations are very likely caused by intrinsic variations of the quasar between the observed epochs. Microlensing effects probably also affect the brightest "A" lensed component.
2021-03-07 22:09:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6233478784561157, "perplexity": 3297.004677901332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00382.warc.gz"}
https://nccumc.org/learning/mod/page/view.php?id=1769
## Prayer & Scriptures - The Economic Community ##### Opening Prayer "A Worker's Prayer" My Lord, Jesus Christ I offer You, this day, all my work, my hopes and struggles, my joys and sorrows. Grant me, and all my fellow workers, the grace to think like You, to work with You, and to live in You. Help me to love You with all my heart, and to serve You with all my strength. Lord, Jesus Christ, Carpenter of Nazareth, You were a worker as I am. Give me, and all workers, the privilege to work as You did, so that everything we do will be to the greater glory of God., and the benefit of our fellow men. May Your kingdom come into our offices, our factories and shops, into our homes and into our streets. Give us this day a living wage, so that we may be able to keep Your law. May we earn it without envy or injustice. To us who labor and are heavily burdened, send speedily the refreshment of Your love. May we never sin against You. Show us Your way to work, and when our work here is done, may we, with all our fellow workers, rest in peace. Amen. Gilbert Hay, The Father Gilbert Prayer Book, 1st ed. (Silver Spring, MD: Trinity Missions, 1965). ##### Scriptures Deuteronomy 24:10-17 (NIV) When you make a loan of any kind to your neighbor, do not go into their house to get what is offered to you as a pledge. Stay outside and let the neighbor to whom you are making the loan bring the pledge out to you. If the neighbor is poor, do not go to sleep with their pledge in your possession. Return their cloak by sunset so that your neighbor may sleep in it. Then they will thank you, and it will be regarded as a righteous act in the sight of the Lord your God. Do not take advantage of a hired worker who is poor and needy, whether that worker is a fellow Israelite or a foreigner residing in one of your towns. Pay them their wages each day before sunset, because they are poor and are counting on it. Otherwise they may cry to the Lord against you, and you will be guilty of sin. Parents are not to be put to death for their children, nor children put to death for their parents; each will die for their own sin. Do not deprive the foreigner or the fatherless of justice, or take the cloak of the widow as a pledge. Luke 19:45-47 (NIV) When Jesus entered the temple courts, he began to drive out those who were selling. “It is written,” he said to them, “‘My house will be a house of prayer’; but you have made it ‘a den of robbers.’” Every day he was teaching at the temple. But the chief priests, the teachers of the law and the leaders among the people were trying to kill him. Acts 4:32-37 (NRSV) Now the whole group of those who believed were of one heart and soul, and no one claimed private ownership of any possessions, but everything they owned was held in common. With great power the apostles gave their testimony to the resurrection of the Lord Jesus, and great grace was upon them all. There was not a needy person among them, for as many as owned lands or houses sold them and brought the proceeds of what was sold. They laid it at the apostles’ feet, and it was distributed to each as any had need. There was a Levite, a native of Cyprus, Joseph, to whom the apostles gave the name Barnabas (which means “son of encouragement”). He sold a field that belonged to him, then brought the money, and laid it at the apostles’ feet.
2020-01-25 15:05:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17951343953609467, "perplexity": 5688.115751415918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672537.90/warc/CC-MAIN-20200125131641-20200125160641-00233.warc.gz"}
https://www.gamedev.net/forums/topic/417155-unfamiliar-notation-that-looks-like-a-vector-but-isnt/
# Unfamiliar notation that looks like a vector but isnt This topic is 4311 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I'm trying to get my head around a certain equation (well, actually a few), but they have this notation that I initially thought was a vector with 2 elements, but when I worked through it I found that didnt make sense... Luckily in the paper I was reading it gives the parameters and the answer, but I dont get how they have got from one to the other e.g. (assume the little brackets are big ones) 126 = (9) (5) 126 = (9) (4) 84 = (9) (3) 36 = (9) (7) Wierd stuff. Anyone any clues here? Cheers, Dave. ##### Share on other sites that's the "choose" function, used in many fields but originating in probability. as an example, lets say you had 9 types of topping for a hot dog, and you wanted to find out how many different combinations you could make by picking 5 of those toppings, 9 choose 5 is your answer, or 126. Mathworld explanation ##### Share on other sites Excellent! Cheers for that, you have no idea how stumped that had me. I know a little about combinatorics, but only ever seen nCk style before, never this damned ambiguous notation! Thanks again, 1. 1 Rutin 25 2. 2 3. 3 JoeJ 19 4. 4 5. 5 • 10 • 11 • 9 • 9 • 10 • ### Forum Statistics • Total Topics 631753 • Total Posts 3002098 × ## Important Information We are the game development community. Whether you are an indie, hobbyist, AAA developer, or just trying to learn, GameDev.net is the place for you to learn, share, and connect with the games industry. Learn more About Us or sign up! Sign me up!
2018-07-19 16:04:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33092156052589417, "perplexity": 1986.8238876530474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00581.warc.gz"}
https://tug.org/pipermail/xetex/2007-December/008138.html
# [XeTeX] Active characters John Was john.was at ntlworld.com Sun Dec 23 17:16:37 CET 2007 Hello Michael Trausch recently asked about accessing a separate font for just a few characters missing from the main font that he wanted to use. The answer that JK gave was to make the character active so that it could become a single-character macro. I seem to have misunderstood the syntax, so would be grateful for some help. I want to make < and > go to another font and access the obtuse-angled tall brackets (Unicode values 3008 and 3009) rather than the mathematical less-than and greater-than signs (values 003C and 003E), but I find that having made < and > active and defining them as e.g. {{\altfont \char"3008}} and {{\altfont \char"3009}} (the two pairs of {} being necessary, of course, so that subsequent text is not given in the wrong font) I have to give {\<} and {\>} rather than just < and > to produce the desired result. This isn't a great burden and can easily be taken care of by global search-and-replace, but I'd like to know what the trick is that I'm missing. JK's answer (referring to a different character) was: \catcode"2191 = \active \def^^^^2191{{\arrowfont\char"2191}} But I find that XeTeX halts at \def^^^^, which it can't understand. Should there be some preceding code that makes \def^^^^ comprehensible to XeTeX?
2023-03-28 21:56:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654332756996155, "perplexity": 1933.0367546048499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00333.warc.gz"}
https://mathematica.stackexchange.com/questions/135021/minimization-with-integration-over-region
# Minimization with integration over region I'm having some difficulties implementing a minimization problem were the objective function involves a numeric integration. This simplified problem highlights one of the issues I'm currently having: how to find the location of the minimum average value of a function. For this problem assume the function can be described by exactfun[x_?NumericQ, y_?NumericQ] := (x - .5)^4 + (y - .5)^4 I want to minimize the average value of the function over a disk region centered at $\left(a,b\right)$ with radius $r$. The exact value of $f_{avg} \left(a,b,r\right)$ can be determined explicitly for this particular function, though in general (and in my application), it is not possible. Here, $f_{avg} \left(a,b,r\right)$ is FullSimplify[ Integrate[(x - 1/2)^4 + (y - 1/2)^4, {x, y} \[Element] Disk[{a, b}, r], Assumptions -> {{a, b} \[Element] Reals && r > 0}]/ Area@Disk[{a, b}, r]] (*1/8 (1 + 4 (-1 + b) b (1 + 2 (-1 + b) b) + 6 r^2 + 2 (2 (-1 + a) a (1 + 2 (-1 + a) a) + 6 ((-1 + a) a + (-1 + b) b) r^2 + r^4))*) Minimizing with respect to the disk center yields Solve[{D[1/ 8 (1 + 4 (-1 + b) b (1 + 2 (-1 + b) b) + 6 r^2 + 2 (2 (-1 + a) a (1 + 2 (-1 + a) a) + 6 ((-1 + a) a + (-1 + b) b) r^2 + r^4)), a] == 0, D[1/8 (1 + 4 (-1 + b) b (1 + 2 (-1 + b) b) + 6 r^2 + 2 (2 (-1 + a) a (1 + 2 (-1 + a) a) + 6 ((-1 + a) a + (-1 + b) b) r^2 + r^4)), b] == 0}, {a, b}, Reals] (*{{a -> 1/2, b -> 1/2}}*) I'm interested in doing this same problem numerically. However, in the evaluation of NMinimize there is an error regarding integration limits that I can't sort out. I tried constructing the optimization function two ways, and each failed. numericint[a_?NumericQ, b_?NumericQ, r_?NumericQ] := NIntegrate[exactfun[x, y], {x, y} \[Element] Disk[{a, b}, r]]/ Area@Disk[{a, b}, r] numericint2[a_?NumericQ, b_?NumericQ, r_?NumericQ] := Module[{disk = Disk[{a, b}, r]}, NIntegrate[exactfun[x, y], {x, y} \[Element] disk, Method -> {Automatic, "SymbolicProcessing" -> False}]/Area@disk] Running the optimization for a particular $r$ and in a certain region fails NMinimize[ numericint[x, y, .05], {x, y} \[Element] Rectangle[{0, 0}, {1, 1}]] (*NIntegrate::ilim: Invalid integration variable or limit(s) in True.*) NMinimize[ numericint2[x, y, .05], {x, y} \[Element] Rectangle[{0, 0}, {1, 1}]] (*NIntegrate::ilim: Invalid integration variable or limit(s) in True.*) Each of the functions appear to evaluate fine outside of the NMinimize environment, but are crashing and burning during the minimization. For reference: numericint[.25, .25, .05] (*0.00828281*) numericint2[.25, .25, .05] (*0.00828281*) 1/8 (1 + 4 (-1 + b) b (1 + 2 (-1 + b) b) + 6 r^2 + 2 (2 (-1 + a) a (1 + 2 (-1 + a) a) + 6 ((-1 + a) a + (-1 + b) b) r^2 + r^4)) /. {a -> .25, b -> .25, r -> .05} (*0.00828281*) The issue seems to be with the specification of the region of integration in the form {x, y} ∈ Disk[{a, b}, r], which is evaluating to True when being evaluated inside the NMinimize command. The problem persists when using the FindMinimum command as well. It can be worked around, however, by specifying the region of integration in terms of explicit bounds on x and y. For a circle of radius $r$ centered at $(a,b)$, we have \begin{align} a - r \leq &\: x \leq a + r, \\ b - \sqrt{ r^2 - (x - a)^2} \leq &\: y \leq b + \sqrt{ r^2 - (x - a)^2} \end{align} and we can program these bounds in explicitly: numericint3[a_?NumericQ, b_?NumericQ, r_?NumericQ] := NIntegrate[ exactfun[x, y], {x, a - r, a + r}, {y, b - Sqrt[r^2 - (x - a)^2], b + Sqrt[r^2 - (x - a)^2]}]/(π r^2) FindMinimum[{numericint3[x, y, .05], 0 <= x <= 1, 0 <= y <= 1}, {x, y}] (* {1.5625*10^-6, {x -> 0.5, y -> 0.5}} *) Using FindMinimum takes about 6 seconds to evaluate on my machine. On the other hand, my attempt to use NMinimize to do the same thing is still running after about 5–10 minutes. I'll update this answer when & if it finishes. UPDATE: NMinimize finished after 13m05s on my machine, yielding the same result: NMinimize[ numericint3[x, y, .05], {x, y} ∈ Rectangle[{0, 0}, {1, 1}]] (* {1.5625*10^-6, {x -> 0.5, y -> 0.5}} *) • To be honest, this behavior seems buggy to me. It might be worth reporting this as a bug to WRI & seeing what they say. My version is MM 10.4 on Mac OS; it's possible this issue was fixed in a later version. – Michael Seifert Jan 9 '17 at 17:50 • Thanks for the suggestion. The problem persists on my edition (W10, v11.0). Explicit definitions appear to work, though I'd really love to apply this for general integration domains. I'll look into following up on the issue with Wolfram. – Marchi Jan 9 '17 at 19:46
2019-11-14 06:16:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4112720191478729, "perplexity": 1606.9381934305236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668004.65/warc/CC-MAIN-20191114053752-20191114081752-00262.warc.gz"}
https://www.sparrho.com/item/conserved-energies-for-the-cubic-nls-in-1-d/8fa6aa/
# Conserved energies for the cubic NLS in 1-d Research paper by Herbert Koch, Daniel Tataru Indexed on: 08 Jul '16Published on: 08 Jul '16Published in: Mathematics - Analysis of PDEs #### Abstract We consider the cubic Nonlinear Schr\"odinger Equation (NLS) as well as the modified Korteweg-de Vries (mKdV) equation in one space dimension. We prove that for each $s>-\frac12$ there exists a conserved energy which is equivalent to the $H^s$ norm of the solution. For the Korteweg-de Vries (KdV) equation there is a similar conserved energy for every $s\ge -1$.
2021-06-20 15:31:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.57452791929245, "perplexity": 414.90008114751146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00421.warc.gz"}
https://search.r-project.org/CRAN/refmans/dplyr/html/band_members.html
band_members {dplyr} R Documentation Band membership Description These data sets describe band members of the Beatles and Rolling Stones. They are toy data sets that can be displayed in their entirety on a slide (e.g. to demonstrate a join). Usage band_members band_instruments band_instruments2 Format Each is a tibble with two variables and three observations Details band_instruments and band_instruments2 contain the same data but use different column names for the first column of the data set. band_instruments uses name, which matches the name of the key column of band_members; band_instruments2 uses artist, which does not. Examples band_members band_instruments band_instruments2 [Package dplyr version 1.0.10 Index]
2022-10-07 01:55:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2671826481819153, "perplexity": 4280.324964200868}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00705.warc.gz"}
https://dspace.kaist.ac.kr/handle/10203/31757
#### Development of nanowire based surface-enhanced raman scattering sensor: optical properties and biomedical applications = 나노선 기반 표면 증강 라만 산란 센서 개발: 광학적 특성 및 생의학 응용 Cited 0 time in Cited 0 time in • Hit : 462 In 1977, Jeanmaire and Van Duyne and Albrecht and Creighton independently observed that adsorption of pyridine to electrochemically roughened silver surfaces could increase the Raman signals of pyridine by a factor of ~$10^6$.$^{1,2}$ This striking discovery was denoted Surface-enhanced Raman Scattering (SERS) effect and immediately became the subject of intensive study.$^3$ After its discovery more than 30 years ago, SERS was expected to have major impact as a sensitive analytical technique and tool for fundamental studies of surface molecules as well as biomedical applications. Unfortunately, the lack of reliable and reproducible SERS signals limited its applicability. In recent years, the synthesis of well-defined nanostructures, the discovery of single molecule SERS, and the design of SERS active platform that maximize the electromagnetic enhancement are driving the resurgence of this field in both fundamentals and applications. In this dissertation, we present the works on SERS using single crystalline noble metal nanowires (NWs). The Au and Ag NWs synthesized in vapor phase have atomically smooth surfaces and thus they can be used as well-defined and well-characterized SERS active platforms. Initially, the optical properties of noble metal NWs were discussed in conjunction with theoretical calculations. This allows us to find simple and reproducible SERS active nanostructures. Secondly, the NW based SERS active platforms were used for molecular detection. These nanostructures enable to detect various analytes such as avidin and various metal ions. Last, the biomedical applications of NW based SERS sensors were presented. The multiplex pathogen DNAs sensing and single nucleotide polymorphism (SNP) detection were possible using these SERS sensors. The abstracts for each chapter are as follows. $\emph{Chapter 1. Optical Properties of Well-defined Surface-enhanced Raman Scattering Active Nanostructures}$ Well-defined SERS active systems were fabricated by ... Kim, Bong-Sooresearcher김봉수researcher Description 한국과학기술원 : 화학과, Publisher 한국과학기술원 Issue Date 2010 Identifier 455476/325007  / 020047014 Language eng Description 학위논문(박사) - 한국과학기술원 : 화학과, 2010.08, [ xxi, 149 p. ] Keywords 표면 증강 라만 산란; 나노선; 디엔에이; 나노입자; 센서; Nanowire; Surface-enhanced Raman Scattering; Sensor; DNA; Nanoparticle URI http://hdl.handle.net/10203/31757
2020-11-25 03:16:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5552995800971985, "perplexity": 6809.335782804208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00045.warc.gz"}
https://techwhiff.com/learn/00-a-software-development-company-offers/123983
# 00. A software development company offers different products that can be purchased as stand- alone applications... 00. A software development company offers different products that can be purchased as stand- alone applications or that can be bundled together. The Icon Editor sells for $19.95 and the Icon Manager sells for$35.50. The 2 products are available as a bundle for $39.95. What is the revenue allocated to the Icon Manager if the company uses the individual selling prices and the stand-alone method to allocate revenue between bundled products? 1)$19.55 2) $19.95 3)$20.00 4) $25.58 ## Answers #### Similar Solved Questions 1 answer ##### A vehicle has initial velocity of V。= 31 m/s and driver sees a fallen tree 81... A vehicle has initial velocity of V。= 31 m/s and driver sees a fallen tree 81 m in front of the car. The car has an acceleration a = 6.1 m/s2 and the perception-reaction time of the driver is 0.75 s.Calculate the velocity of the vehicle when it hits the tree.... 1 answer ##### D) What is the relation between the highest pressure and temperature of the fuel air cycle... d) What is the relation between the highest pressure and temperature of the fuel air cycle with the compression ratio of the cycle?... 1 answer ##### Need help with question 1 and 3. please explain the correct answers. PIULHILL Questions Based on... Need help with question 1 and 3. please explain the correct answers. PIULHILL Questions Based on your knowledge of carbon's atomic structure, which of the following orbital diagrams below represents the hybridization of the indicated carbon atom in the following molecule? 11- 11 1 11 lab 1111... 1 answer ##### Probability a)You roll a dice with six sides. What is the probability of rolling a 3... probability a)You roll a dice with six sides. What is the probability of rolling a 3 or a 4? b)You roll one dice, twice. What is the probability of rolling an even number the first time and a 1 the second time? c) There are ten marbles in a bag, three yellow, three blue and four red. You draw one ma... 1 answer ##### After reading Chapter 10 (Essentials of Economics) and Chapters 15/16 (Economics in One Lesson), In one... After reading Chapter 10 (Essentials of Economics) and Chapters 15/16 (Economics in One Lesson), In one to two single spaced pages, choose either the Anti-growth view of economics or the Economic growth view and discuss Whether this is desirable and sustainable. I understand if you did not read thos... 1 answer ##### Cam you please hand write. please i need the answers 13) A loan of$5,600 with... Cam you please hand write. please i need the answers 13) A loan of $5,600 with an interest rate of 14% compounded annually can be repaid in 3 equal annual payments of$2,412.10. Complete the amortization schedule for the loan Period Payment Interest Balance Unpaid reduction balance XXXXXXXXX 5600.0... ##### Hammer regularly took his car to be serviced at his local garage, Vanilla Ice Limited. On... Hammer regularly took his car to be serviced at his local garage, Vanilla Ice Limited. On several occasions before handing his car over to the garage, Vanilla Ice always required Hammer to read and sign a contractual document which contained the following statement in big bold red type; “Vanil... ##### Lab Instructor: Lab Section: R03 Use the information in Figure 2 (your graph) and Table 1... Lab Instructor: Lab Section: R03 Use the information in Figure 2 (your graph) and Table 1 to answer the following questions. 1. Describe the function of the cuticle and stomata in terrestrial plants. (2 marks) The function of a cuticle in terrestrial plants is to control and minimize cellular transp... ##### You are considering an investment in Justus Corporation's stock, which is expected to pay a dividend... You are considering an investment in Justus Corporation's stock, which is expected to pay a dividend of $2.75 a share at the end of the year (D1 =$2.75) and has a beta of 0.9. The risk-free rate is 5.4%, and the market risk premium is 6%. Justus currently sells for \$37.00 a share, and its divid... ##### Problem 8 Let P4 be the space of polynomials of degree less than 4 with real... Problem 8 Let P4 be the space of polynomials of degree less than 4 with real coefficients. Define L: PA + P4 by L(p(x)) = 5xp" (x) – (3x + 2)p" (x) + 7p'(x) a) (5 pts) Find the matrix representing L with respect to the standard basis S = = {1, 2, 22, 23} of P4. Explain how this can... ##### (1 point) A tank initially contains 25 liters of salt water solution with 10 grams of... (1 point) A tank initially contains 25 liters of salt water solution with 10 grams of salt dissolved in it. Salt water containing one gram of salt per liter pours in at the rate of one liter per minute and the well-stired mixture drains out at the rate of 2 liters per minute. How many grams of salt ... ##### Given 2x+y = 6, what is (2,____) (0,____) (1,____) ? Given 2x+y = 6, what is (2,____) (0,____) (1,____) ?... ##### From the below output, identify and report the median. tab childs number of children Freq. Percent... From the below output, identify and report the median. tab childs number of children Freq. Percent Cum. @ 1 2 3 4 5 6 7 eight or more 663 349 623 383 166 74 47 23 16 28.28 14.89 26.58 16.34 7.08 3.16 2.01 0.98 0.68 28.28 43.17 69.75 86.09 93.17 96.33 98.34 99.32 100.00 Total 2,344 100.00... ##### 4. The random variables X and Y have joint probability density function fx,y(x, ) given by: fx,y(... 4. The random variables X and Y have joint probability density function fx,y(x, ) given by: fx,y(x, y) 0, else (a) Find c. (b) Find fx(x) and fy (), the marginal probability density functions of X and Y, respectively (c) Find fxjy (xly), the conditional probability density function of X given Y. For... ##### 1) Scientists are working on a synthetic vaccine​ (antigen) for a particular parasite. The success of... 1) Scientists are working on a synthetic vaccine​ (antigen) for a particular parasite. The success of the vaccine hinges on the characteristics of DNA in peptide​ (protein) produced by the antigen. In one​ study, scientists tested alleles of​ antigen-produced protein for lev...
2023-03-30 04:46:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38633379340171814, "perplexity": 1901.8940231717506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00310.warc.gz"}
https://socratic.org/questions/the-sum-of-3-consecutive-integral-numbers-is-117-what-are-the-numbers
# The sum of 3 consecutive integral numbers is 117. What are the numbers? May 24, 2016 $38 , 39 , 40$ #### Explanation: If the second of the three numbers is $n$, then the first and third are $n - 1$ and $n + 1$, so we find: $117 = \left(n - 1\right) + n + \left(n + 1\right) = 3 n$ Dividing both ends by $3$ we find: $n = \frac{117}{3} = 39$ So the three numbers are: $38 , 39 , 40$
2021-12-04 17:11:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791741967201233, "perplexity": 379.7166642933313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362999.66/warc/CC-MAIN-20211204154554-20211204184554-00195.warc.gz"}
https://repository.asu.edu/collections/7?sub=Physics&cont=Chamberlin%2C+Ralph+V&cont=Mauskopf%2C+Philip
## ASU Electronic Theses and Dissertations This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, supporting data or media. In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog. Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection contact or visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu. Contributor Date Range 2018 2018 ## Recent Submissions Fluctuations with a power spectral density depending on frequency as $1/f^\alpha$ ($0<\alpha<2$) are found in a wide class of systems. The number of systems exhibiting $1/f$ noise means it has far-reaching practical implications; it also suggests a possibly universal explanation, or at least a set of shared properties. Given this diversity, there are numerous models of $1/f$ noise. In this dissertation, I summarize my research into models based on linking the characteristic times of fluctuations of a quantity to its multiplicity of states. With this condition satisfied, I show that a quantity will undergo $1/f$ fluctuations and exhibit associated properties, … Contributors Davis, Bryce, Chamberlin, Ralph V, Mauskopf, Philip, et al. Created Date 2018
2019-10-24 05:30:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22620756924152374, "perplexity": 3407.0259969692765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987841291.79/warc/CC-MAIN-20191024040131-20191024063631-00347.warc.gz"}
https://www.gamedev.net/forums/topic/343130-formal-definition-of-the-integral/
Formal definition of the integral? This topic is 4544 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. Recommended Posts I've just realized that often, especially is physics, integrals are used merely as syntactic sugar representing an infinite sum of the form: lim(dx->0) sum(f(x)*dx) where dx is delta-x. Take for example the simplified equation for center of mass: cm = I(p dm)/tm where I is the integral symbol, p is the position of a point mass relative to the centroid of the body, dm is the mass of the individual point masses (and so p dm is the first moment), and tm is the total mass. The integral in that equation, as far I know, isn't really calculating an area, volume, etc. It's merely a fancy way of writing an infinite sum of the above form. What I'm wondering is how the integral is formally defined. I've looked it up on the internet and found it to always be described as the area under the curve, volume, curl, etc. Is it correct to say that, in a very broad sense, the integral is just fancy syntax for infinite sums of the above form? Share on other sites I'm not a math expert but this is the way I look at it. The indefinite integral is the infinite sum: lim(dx->0) sum(f(x)*dx) = F(x). The definite integeral is F(b)-F(a) which also "happens" to represents the area under f(x) from x=a to x=b. I think it's important to not forget that the integral is an infinite sum. I've seen many students distribute roots inside the integral. If they would have known the definition they would have known that this is a no-no. Share on other sites Well, the integral is the anti-derivative. That formula is the Riemann sum. I think a mathematician would most likely be annoyed by making very broad generalizations. Share on other sites Quote: Original post by nilknI've just realized that often, especially is physics, integrals are used merely as syntactic sugar representing an infinite sum of the form:lim(dx->0) sum(f(x)*dx)where dx is delta-x. Take for example the simplified equation for center of mass:cm = I(p dm)/tmwhere I is the integral symbol, p is the position of a point mass relative to the centroid of the body, dm is the mass of the individual point masses (and so p dm is the first moment), and tm is the total mass.The integral in that equation, as far I know, isn't really calculating an area, volume, etc. It's merely a fancy way of writing an infinite sum of the above form.What I'm wondering is how the integral is formally defined. I've looked it up on the internet and found it to always be described as the area under the curve, volume, curl, etc.Is it correct to say that, in a very broad sense, the integral is just fancy syntax for infinite sums of the above form? Pretty much. What you are describing is called the Reimann integral. There are other definitions of integrals and that leads to measure theory. But integration is all about adding stuff up. Perhaps try the definition in wikipedia -Josh Share on other sites Quote: Original post by Name_UnknownI think a mathematician would most likely be annoyed by making very broad generalizations. 20th Century mathematics was mostly about generalization [smile] -Josh Share on other sites Thanks for clearing that up. jjd: Of all the many places I looked, wikipedia just wasn't among them! The definition provided there satisifed my curiosity. Thanks for the link. Now that this great burden has been lifted I can continue with my life. [grin] Share on other sites Quote: Original post by jjd Quote: Original post by Name_UnknownI think a mathematician would most likely be annoyed by making very broad generalizations. 20th Century mathematics was mostly about generalization [smile] -Josh Ok, I think my Differential Equations professor would be very annoyed by broad generalizations ;-) Share on other sites Quote: Original post by Name_UnknownOk, I think my Differential Equations professor would be very annoyed by broad generalizations ;-) Haha! [smile] I don't doubt... although I'm sure he finds the Sturm-Louiville theorem most satisfying [wink][wink] -Josh Share on other sites Quote: Original post by nilknThanks for clearing that up.jjd: Of all the many places I looked, wikipedia just wasn't among them! The definition provided there satisifed my curiosity. Thanks for the link.Now that this great burden has been lifted I can continue with my life. [grin] You're welcome! Calculus is actually a pet peeve of mine. I really don't like the way it is taught in most universities. One thing that I would love to see in a calculus course is historical material used to introduce and motivate the topics covered. The fundamental theorem of calculus is often presented after teaching techniques of integration and I think its a real anticlimax for the student. I mean, it's the FUNDAMENTAL THEOREM!!! But the fact that the area under a curve and the slope of the curve are so tightly related is truly weird! I mean, why would you expect this?! OK, I'll hold my rant in... [wink] -Josh Share on other sites I think it would be strange if the shape of the curve had nothing to do with the area under it. Share on other sites Quote: Original post by LilBudyWizerI think it would be strange if the shape of the curve had nothing to do with the area under it. Sorry, I think you're missing the point [smile] -Josh Share on other sites Perhaps you are missing mine which is that the slope defines the shape. I think that is a concept that could be introduced pretty much as soon as the slope-intercept form of the equation for a line is introduced. Share on other sites Have a look at this: http://mathworld.wolfram.com/RiemannIntegral.html I think the moderator should put a link to mathworld from this forum. Share on other sites Quote: Original post by LilBudyWizerI think it would be strange if the shape of the curve had nothing to do with the area under it. Quote: Original post by LilBudyWizerPerhaps you are missing mine which is that the slope defines the shape. I think that is a concept that could be introduced pretty much as soon as the slope-intercept form of the equation for a line is introduced. Well, actually you're just proving my original point. It is easy to be told that this is the way things are and believe them. That was not the case for Newton or Leibnitz. The first person known to have integrated a function was Abu Ali al-Hasan ibn al-Haytham (also know as Alhazen) some time around 970 BC. It would be remiss to not point out that Archimedes (287 - 212 BC) integrated all sorts of spirals and curves, and that he also showed how to calculate the slopes of tangents to spirals. Calculating derivatives and integrals were not new to Newton or Leibnitz. The tools you are talking about have been around for a very long time. So to suggest that the concept of a slope, and how it relates to a curves shape, should easily lead to calculus, is pretty naive to say the least. However, the calculations preceeding Newton and Leibnitz only applied to special cases and limited classes of functions. What Newton and Leibnitz showed was a way to make the same calculations for almost any function conceivable at the time using a single method, which was actually far simpler than some of the other, more limited, techniques. That is why their calculus is hailed as such a phenomenal achievement and why you do not learn the "method of exhaustion" that Archimedes used. And that is why I say that the fact that slope and area of a curve are so tightly, and perhaps I should also say "simply", related is incredible! Furthermore, I did not suggest that the slope of a curve (or shape if you prefer) would not be related to the area under the curve, but that the relationship can expressed so succinctly (tightly) and with such an enormous amount of generality is, IMO, mind-blowing. -Josh Share on other sites Quote: Original post by jjd Quote: Original post by LilBudyWizerI think it would be strange if the shape of the curve had nothing to do with the area under it. Quote: Original post by LilBudyWizerPerhaps you are missing mine which is that the slope defines the shape. I think that is a concept that could be introduced pretty much as soon as the slope-intercept form of the equation for a line is introduced. Well, actually you're just proving my original point. It is easy to be told that this is the way things are and believe them. That was not the case for Newton or Leibnitz. The first person known to have integrated a function was Abu Ali al-Hasan ibn al-Haytham (also know as Alhazen) some time around 970 BC. It would be remiss to not point out that Archimedes (287 - 212 BC) integrated all sorts of spirals and curves, and that he also showed how to calculate the slopes of tangents to spirals. Calculating derivatives and integrals were not new to Newton or Leibnitz. The tools you are talking about have been around for a very long time. So to suggest that the concept of a slope, and how it relates to a curves shape, should easily lead to calculus, is pretty naive to say the least. However, the calculations preceeding Newton and Leibnitz only applied to special cases and limited classes of functions. What Newton and Leibnitz showed was a way to make the same calculations for almost any function conceivable at the time using a single method, which was actually far simpler than some of the other, more limited, techniques. That is why their calculus is hailed as such a phenomenal achievement and why you do not learn the "method of exhaustion" that Archimedes used. And that is why I say that the fact that slope and area of a curve are so tightly, and perhaps I should also say "simply", related is incredible! Furthermore, I did not suggest that the slope of a curve (or shape if you prefer) would not be related to the area under the curve, but that the relationship can expressed so succinctly (tightly) and with such an enormous amount of generality is, IMO, mind-blowing. -Josh Perhaps I have a basic missunderstanding, but I dont understand why you think it is so important to relate the slope to the integral. The definition of the rieman integral does not include the slope or derivative in it, and I think for a reason. You can have functions, not only they do not have a slope, but they also are not continuos. Which you can calculate their rieman integral. I believe there are also such functions with infinity amount of points(Countable, and in a finity fragment) with no continuity, which you can calculate their rieman integral. -josh Share on other sites "Perhaps I have a basic missunderstanding, but I dont understand why you think it is so important to relate the slope to the integral." Yup, you have an incredibly complete and basic misunderstanding. The briefest look at the equation d = r * t makes the misunderstanding clear. Rate of change and area are obviously and necessarily intimately related. Share on other sites Quote: Original post by The C modest godPerhaps I have a basic missunderstanding, but I dont understand why you think it is so important to relate the slope to the integral.The definition of the rieman integral does not include the slope or derivative in it, and I think for a reason.You can have functions, not only they do not have a slope, but they also are not continuos. Which you can calculate their rieman integral. Correct. But the Riemann Integral was invented in the 19th century, while the Fundamental Theorem of Calculus was first observed by Newton and Leibnitz in the 17th century, and the proof of it relied on an informal geometrical argument which did not consider the pathological curves which you mention here. Still, the Fundamental Theorem of Calculus is enormously important. Many of the functions we are interested in have antiderivatives on some interval, and for these curves the FTC connects the problem of finding areas to the problem of finding slopes. The FTC is also part of the reason why Calculus is called what it is. The FTC showed that there is a systematic method for solving many area problems. The Method of Exhaustion which jjd mentions above does nothing of the sort: It required that you had already solved the area problem before you started, and just gave you a technique for proving that your solution was correct (though the logic behind the Method of Exhaustion was virtually watertight; Newton and Leibnitz' calculus was sloppy in comparison). Quote: I believe there are also such functions with infinity amount of points(Countable, and in a finity fragment) with no continuity, which you can calculate their rieman integral. The definition of a Riemann Integral requires a function to be defined at all points on an interval. Share on other sites I think it's important to note here that even though the finite integral of a function happens to coincide with the area under the graph of the function, integration has nothing to do with geometry in general. Geometry is merely the field where we first stumbled upon integraion and differentiation from the beginning. Integration and differentiation are mathematical operations that exists on their own outside specific fields in mathematics, much like multiplication and division. In fact it is very useful to view integration and differentiation as continous analogues to multiplication and division. The fundamental theorem of calculus is remarkable indeed, but not more remarkable than the fact that multiplication and division are each other's inverse operation, or addition and subtraction. Share on other sites "The definition of a Riemann Integral requires a function to be defined at all points on an interval." False. A Riemann-integrable function's domain does not have to include an interval. Consider the function f, defined only at 0, such that f(0)=0. This is Riemann-integrable (everywhere continuous and differentiable even). But its domain does not include any interval. Share on other sites Quote: Quote: I believe there are also such functions with infinity amount of points(Countable, and in a finity fragment) with no continuity, which you can calculate their rieman integral. The definition of a Riemann Integral requires a function to be defined at all points on an interval. Sigh. It is the case that Riemann-integrable functions may be discontinuous at an infinite number of points. A correct statement is: A Riemann-integrable function is continuous except possibly on a set of measure zero. Intelligent discussion of the limits of Riemann integration requires an understanding of the Lesbeque and the Stieltjes integrals (they "solve" different "problems" with Riemann's version). Share on other sites Quote: Original post by sherifffruitflyFalse. A Riemann-integrable function's domain does not have to include an interval. Consider the function f, defined only at 0, such that f(0)=0. This is Riemann-integrable (everywhere continuous and differentiable even). But its domain does not include any interval. It's a degenerate case, where definitions often disagree. Still, I think most definitions of Riemann-integral will entail that f in this case is not Riemann-integrable. The theorem you are thinking to justify otherwise will state that any function continuous on a closed interval is integrable on that interval, not that a function which is everywhere continuous is everywhere Riemann-integrable. Share on other sites Quote: Original post by NotAnAnonymousPoster Quote: Original post by sherifffruitflyFalse. A Riemann-integrable function's domain does not have to include an interval. Consider the function f, defined only at 0, such that f(0)=0. This is Riemann-integrable (everywhere continuous and differentiable even). But its domain does not include any interval. It's a degenerate case, where definitions often disagree. Still, I think most definitions of Riemann-integral will entail that f in this case is not Riemann-integrable. The theorem you are thinking to justify otherwise will state that any function continuous on a closed interval is integrable on that interval, not that a function which is everywhere continuous is everywhere Riemann-integrable. You can say that all you want - it's just false. Definitions do not vary on this to any substantial degree (and hence neither to the theorems). ROFL - you think there's substantial variance in exactly *what* the Riemann integral is? LMAO. And there's nothing "degenerate" about my example. It's a perfectly fine continuous, differentiable, and Riemann-integrable function. Share on other sites Quote: Original post by sherifffruitflyYou can say that all you want - it's just false. Definitions do not vary on this to any substantial degree (and hence neither to the theorems). ROFL - you think there's substantial variance in exactly *what* the Riemann integral is? LMAO. Not for the Riemann integral, no. Which is why I said: Quote: Original post by MeI think most definitions of Riemann-integral will entail that f in this case is not Riemann-integrable. Quote: Original post by sherifffruitflyAnd there's nothing "degenerate" about my example. It's a perfectly fine continuous, differentiable, and Riemann-integrable function. I wasn't clear there. I meant that your function f is defined on the degenerate interval [0,0]. But we won't get anywhere here without actually supplying some definitions. Try this one at PlanetMath: Riemann Integral The definition requires that the function in question be defined on a non-degenerate interval. Share on other sites Quote: Original post by NotAnAnonymousPoster Quote: Original post by sherifffruitflyFalse. A Riemann-integrable function's domain does not have to include an interval. Consider the function f, defined only at 0, such that f(0)=0. This is Riemann-integrable (everywhere continuous and differentiable even). But its domain does not include any interval. It's a degenerate case, where definitions often disagree. Still, I think most definitions of Riemann-integral will entail that f in this case is not Riemann-integrable. The theorem you are thinking to justify otherwise will state that any function continuous on a closed interval is integrable on that interval, not that a function which is everywhere continuous is everywhere Riemann-integrable. Moreover, the theorem I was thinking of was not the one you tried to putin my mouth. The one I was after was the one I actually said - that the Riemann-integrable functions are those which are continuous except on a set of measure zero.
2018-02-20 05:45:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8587896823883057, "perplexity": 541.4807848807358}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00165.warc.gz"}
https://nullset.xyz/2017/01/12/spring-pathvariable-dots-failed-exception-handling/
12 Jan 2017 Spring, @PathVariable, dots, and failed exception handling If you use Spring Boot to set up an endpoint using @RequestMapping with a @PathVariable at the end of the request URI, you might run into several quirks if this path variable happens to include a dot. For example, consider the following mapping: @RequestMapping(value = "/api/resource/{id}", method = RequestMethod.GET) @ResponseBody public Resource getResource(@PathVariable("id") final String id) { final Resource resource = resourceService.findById(id); if (resource == null) { throw new ResourceNotFoundException(); } return resource; } This mapping returns a Resource object in the response body, which is serialized to JSON using Jackson2. Let’s also assume that we handle the ResourceNotFoundException using an @ExceptionHandler. @ExceptionHandler(ResourceNotFoundException.class) @ResponseStatus(HttpStatus.NOT_FOUND) @ResponseBody public ErrorResponse handleResourceNotFoundException(final ResourceNotFoundException exception) { return new ErrorResponse(); } The ErrorResponse object is similarly serialized to JSON. If you perform a GET request to /api/resource/this.is.my.id you may notice several things. For one, the id parameter is equal to "this.is.my". The last dot and everything after were truncated! Secondly, if the resource this.is.my does not exist, a 500 is returned, not the 404 with the expected JSON object. What’s going on? A quick search on stack overflow will reveal a fix for the first problem: a regex pattern on the path variable. @RequestMapping(value = "/api/resource/{id:.+}", method = RequestMethod.GET) This hack works, but it is far from ideal, as we would have to use it on every endpoint. How can we fix this globally instead? Spring thinks the characters after the last dot are a file extension, and removes them for you. Since 4.0.1, we can configure this ourselves using the PathMatchConfigurer. We can turn off suffix pattern matching entirely, or only allow it for extensions that we explicitly register ourselves. Let’s turn it off entirely: @Configuration public static class WebConfig extends WebMvcConfigurerAdapter { @Override public void configurePathMatch(final PathMatcherConfigurer configurer) { configurer.setUseSuffixPatternMatch(false); } } That’s much better. However, keep in mind that a method mapped to /users will no longer match to /users.*. I prefer that behaviour, but your mileage may vary. We’re still stuck with the second problem; requesting script.js returns 500. A bit of digging reveals the handleResourceNotFoundException is being called as it should be, and it returns an ErrorResponse object. Afterwards, a HttpMediaTypeNotAcceptableException is thrown somewhere in the Spring framework. This indicates Spring is unable to generate a response that is acceptable by the client. However, the client didn’t specify the response format. What is going on? It turns out that Spring is overriding the response format based on the file format specified in the path. In this case, given script.js at the end of the path, Spring assumes it needs to return a javascript file. It cannot convert the ErrorResponse object to the application/javascript content type, so it throws an exception. We can fix this using the ContentNegotiatorConfigurer. In the same WebConfig class, we implement another override to make sure Spring no longer uses the path extension to determine the content type of the response. @Configuration public static class WebConfig extends WebMvcConfigurerAdapter { @Override public void configurePathMatch(final PathMatcherConfigurer configurer) { configurer.setUseSuffixPatternMatch(false); } @Override public void configureContentNegotiation(final ContentNegotiationConfigurer configurer) { configurer.favorPathExtension(false); } } Voila, by setting two configuration options the mapping now works as desired.
2019-10-16 03:04:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23659682273864746, "perplexity": 4035.314697938414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00081.warc.gz"}
http://en.wikipedia.org/wiki/Deep_sequencing
# Deep sequencing Depth (coverage) in DNA sequencing refers to the number of times a nucleotide is read during the sequencing process. Deep sequencing indicates that the total number of reads is many times larger than the length of the sequence under study. Coverage is the average number of reads representing a given nucleotide in the reconstructed sequence. Depth can be calculated from the length of the original genome (G), the number of reads(N), and the average read length(L) as $N\times L/G$. For example, a hypothetical genome with 2,000 base pairs reconstructed from 8 reads with an average length of 500 nucleotides will have 2x redundancy. This parameter also enables one to estimate other quantities, such as the percentage of the genome covered by reads (sometimes also called coverage). A high coverage in shotgun sequencing is desired because it can overcome errors in base calling and assembly. The subject of DNA sequencing theory addresses the relationships of such quantities. Sometimes a distinction is made between sequence coverage and physical coverage. Sequence coverage is the average number of times a base is read (as described above). Physical coverage is the average number of times a base is read or spanned by mate paired reads.[1] The term "deep" has been used for a wide range of depths (>7×),[citation needed] and the newer term "ultra-deep" has appeared in the scientific literature to refer to even higher coverage (>100×).[2] Even though the sequencing accuracy for each individual nucleotide is very high, the very large number of nucleotides in the genome means that if an individual genome is only sequenced once, there will be a significant number of sequencing errors. Furthermore rare single-nucleotide polymorphisms (SNPs) are common. Hence to distinguish between sequencing errors and true SNPs, it is necessary to increase the sequencing accuracy even further by sequencing individual genomes a large number of times. ## Deep sequencing of transcriptome or RNA Deep sequencing of transcriptome, also known as RNA-Seq, provides both the sequence and frequency of RNA molecules that are present at any particular time in a specific cell type, tissue or organ. Counting the number of mRNAs that are encoded by individual genes provides an indicator of protein-coding potential, a major contributor to phenotype.[3] ## References 1. ^ Meyerson, M.; Gabriel, S.; Getz, G. (2010). "Advances in understanding cancer genomes through second-generation sequencing". Nature Reviews Genetics 11 (10): 685–696. doi:10.1038/nrg2841. PMID 20847746. edit 2. ^ Ajay SS, Parker SC, Abaan HO, Fajardo KV, Margulies EH (September 2011). "Accurate and comprehensive sequencing of personal genomes". Genome Res. 21 (9): 1498–505. doi:10.1101/gr.123638.111. PMC 3166834. PMID 21771779. 3. ^ Hampton M, Melvin RG, Kendall AH, Kirkpatrick BR, Peterson N, Andrews MT (2011). "Deep sequencing the transcriptome reveals seasonal adaptive mechanisms in a hibernating mammal". PLoS One. 6 (10). doi:10.1371/journal.pone.0027021. PMC 3203946. PMID 22046435.
2014-07-11 06:58:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7352613806724548, "perplexity": 3332.2126720263745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776425666.11/warc/CC-MAIN-20140707234025-00032-ip-10-180-212-248.ec2.internal.warc.gz"}
http://lambda-the-ultimate.org/archive/2012/09/25
## Proof formats During the development of a language which allows formal verification I have experimented a lot with proof formats. First I have started with a proof language with tactics similar to Isabelle/HOL and Coq. But then I realized that such proofs are fairly difficult to read. In order to understand a proof one often writes down the state of the proof engine at intermediate positions of the tactics. Then I have introduced proof procedures stimulated by some examples in the dafny language. Up to now I think that tactics are not appropriate to write down and document proofs. In the posts Boolean lattices, Predicates as sets and Complete lattices and closure systems I have used a proof format which I think is good to write down and document proofs. Any opinions, remarks etc. are welcome.
2019-11-22 02:31:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.80949467420578, "perplexity": 663.1246152335702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00141.warc.gz"}
https://nbviewer.jupyter.org/github/YData123/sds123-sp20/blob/master/hw/hw02/hw02.ipynb
# Homework 2: Arrays and Tables¶ Please complete this notebook by filling in the cells provided. Before you begin, execute the following cell to load some necessary information for the assignment. Homework 2 is due Thursday, 1/30 at 11:59pm. Start early so that you can come to office hours if you're stuck. Check the website for the office hours schedule. Late work will not be accepted as per the policies of this course. In [ ]: # Don't change this cell; just run it. import numpy as np from datascience import * ## 1. Creating Arrays¶ Question 1.1. Make an array called weird_numbers containing the following numbers (in the given order): 1. -2 2. the sine of 1.2 3. 3 4. 5 to the power of the cosine of 1.2 Hint: sin and cos are functions in the math module. In [ ]: # Our solution involved one extra line of code before creating # weird_numbers. ... weird_numbers = ... weird_numbers Question 1.2. Make an array called book_title_words containing the following three strings: "Eats", "Shoots", and "and Leaves". In [ ]: book_title_words = ... book_title_words Strings have a method called join. join takes one argument, an array of strings. It returns a single string. Specifically, the value of a_string.join(an_array) is a single string that's the concatenation ("putting together") of all the strings in an_array, except a_string is inserted in between each string. Question 1.3. Use the array book_title_words and the method join to make two strings: 1. "Eats, Shoots, and Leaves" (call this one with_commas) 2. "Eats Shoots and Leaves" (call this one without_commas) Hint: If you're not sure what join does, first try just calling, for example, "foo".join(book_title_words) . In [ ]: with_commas = ... without_commas = ... print('with_commas:', with_commas) print('without_commas:', without_commas) ## 2. Indexing Arrays¶ These exercises give you practice accessing individual elements of arrays. In Python (and in many programming languages), elements are accessed by index, so the first element is the element at index 0. Question 2.1. The cell below creates an array of some numbers. Set third_element to the third element of some_numbers. In [ ]: some_numbers = make_array(-1, -3, -6, -10, -15) third_element = ... third_element Question 2.2. The next cell creates a table that displays some information about the elements of some_numbers and their order. Run the cell to see the partially-completed table, then fill in the missing information in the cell (the strings that are currently "???") to complete the table. In [ ]: elements_of_some_numbers = Table().with_columns( "English name for position", make_array("first", "second", "???", "???", "fifth"), "Index", make_array("???", "1", "2", "???", "4"), "Element", some_numbers) elements_of_some_numbers Question 2.3. You'll sometimes want to find the last element of an array. Suppose an array has 142 elements. What is the index of its last element? In [ ]: index_of_last_element = ... More often, you don't know the number of elements in an array, its length. (For example, it might be a large dataset you found on the Internet.) The function len takes a single argument, an array, and returns the length of that array (an integer). Question 2.4. The cell below loads an array called president_birth_years. The last element in that array is the most recent birth year of any deceased president as of 2017. Assign that year to most_recent_birth_year. In [ ]: president_birth_years = Table.read_table("president_births.csv").column('Birth Year') most_recent_birth_year = ... most_recent_birth_year Question 2.5. Finally, assign sum_of_birth_years to the sum of the first, tenth, and last birth year in president_birth_years In [ ]: sum_of_birth_years = ... ## 3. Basic Array Arithmetic¶ Question 3.1. Multiply the numbers 42, 4224, 42422424, and -250 by 157. For this question, don't use arrays. In [ ]: first_product = ... second_product = ... third_product = ... fourth_product = ... print(first_product, second_product, third_product, fourth_product) Question 3.2. Now, do the same calculation, but using an array called numbers and only a single multiplication (*) operator. Store the 4 results in an array named products. In [ ]: numbers = ... products = ... products Question 3.3. Oops, we made a typo! Instead of 157, we wanted to multiply each number by 1577. Compute the fixed products in the cell below using array arithmetic. Notice that your job is really easy if you previously defined an array containing the 4 numbers. In [ ]: fixed_products = ... fixed_products Question 3.4. We've loaded an array of temperatures in the next cell. Each number is the highest temperature observed on a day at a climate observation station, mostly from the US. Since they're from the US government agency NOAA, all the temperatures are in Fahrenheit. Convert them all to Celsius by first subtracting 32 from them, then multiplying the results by $\frac{5}{9}$. Make sure to ROUND each result to the nearest integer using the np.round function. In [ ]: max_temperatures = Table.read_table("temperatures.csv").column("Daily Max Temperature") celsius_max_temperatures = ... celsius_max_temperatures Question 3.5. The cell below loads all the lowest temperatures from each day (in Fahrenheit). Compute the size of the daily temperature range for each day. That is, compute the difference between each daily maximum temperature and the corresponding daily minimum temperature. Give your answer in Celsius! Make sure NOT to round your answer for this question! In [ ]: min_temperatures = Table.read_table("temperatures.csv").column("Daily Min Temperature") celsius_temperature_ranges = ... celsius_temperature_ranges ## 4. World Population¶ The cell below loads a table of estimates of the world population for different years, starting in 1950. The estimates come from the US Census Bureau website. In [ ]: world = Table.read_table("world_population.csv").select('Year', 'Population') world.show(4) The name population is assigned to an array of population estimates. In [ ]: population = world.column(1) population In this question, you will apply some built-in Numpy functions to this array. The difference function np.diff subtracts each element in an array by the element that preceeds it. As a result, the length of the array np.diff returns will always be one less than the length of the input array. The cumulative sum function np.cumsum outputs an array of partial sums. For example, the third element in the output array corresponds to the sum of the first, second, and third elements. Question 4.1. Very often in data science, we are interested understanding how values change with time. Use np.diff and np.max (or just max) to calculate the largest annual change in population between any two consecutive years. In [ ]: largest_population_change = ... largest_population_change Question 4.2. Describe in words the result of the following expression. What do the values in the resulting array represent (choose one)? In [ ]: np.cumsum(np.diff(population)) 1) The total population change between consecutive years, starting at 1951. 2) The total population change between 1950 and each later year, starting at 1951. 3) The total population change between 1950 and each later year, starting inclusively at 1950 (with a total change of 0). In [ ]: # Assign cumulative_sum_answer to 1, 2, or 3 ## 5. Old Faithful¶ Old Faithful is a geyser in Yellowstone that erupts every 44 to 125 minutes (according to Wikipedia). People are often told that the geyser erupts every hour, but in fact the waiting time between eruptions is more variable. Let's take a look. Question 5.1. The first line below assigns waiting_times to an array of 272 consecutive waiting times between eruptions, taken from a classic 1938 dataset. Assign the names shortest, longest, and average so that the print statement is correct. In [ ]: waiting_times = Table.read_table('old_faithful.csv').column('waiting') shortest = ... longest = ... average = ... print("Old Faithful erupts every", shortest, "to", longest, "minutes and every", average, "minutes on average.") Question 5.2. Assign biggest_decrease to the biggest decrease in waiting time between two consecutive eruptions. For example, the third eruption occurred after 74 minutes and the fourth after 62 minutes, so the decrease in waiting time was 74 - 62 = 12 minutes. Hint: You'll need an array arithmetic function mentioned in the textbook. Hint 2: The function you use may report positive or negative values. You will have to determine if the biggest decrease corresponds to the highest or lowest value. Ultimately, we want to return the absolute value of the biggest decrease so if it is a negative number, make it positive. In [ ]: biggest_decrease = ... biggest_decrease Question 5.3. If you expected Old Faithful to erupt every hour, you would expect to wait a total of 60 * k minutes to see k eruptions. Set difference_from_expected to an array with 272 elements, where the element at index i is the absolute difference between the expected and actual total amount of waiting time to see the first i+1 eruptions. Hint: You'll need to compare a cumulative sum to a range. For example, since the first three waiting times are 79, 54, and 74, the total waiting time for 3 eruptions is 79 + 54 + 74 = 207. The expected waiting time for 3 eruptions is 60 * 3 = 180. Therefore, difference_from_expected.item(2) should be $|207 - 180| = 27$. In [ ]: difference_from_expected = ... difference_from_expected Question 5.4. If instead you guess that each waiting time will be the same as the previous waiting time, how many minutes would your guess differ from the actual time, averaging over every wait time except the first one. For example, since the first three waiting times are 79, 54, and 74, the average difference between your guess and the actual time for just the second and third eruption would be $\frac{|79-54|+ |54-74|}{2} = 22.5$. In [ ]: average_error = ... average_error ## 6. Tables¶ Question 6.1. Suppose you have 4 apples, 3 oranges, and 3 pineapples. (Perhaps you're using Python to solve a high school Algebra problem.) Create a table that contains this information. It should have two columns: "fruit name" and "count". Give it the name fruits. Note: Use lower-case and singular words for the name of each fruit, like "apple". In [ ]: # Our solution uses 1 statement split over 3 lines. fruits = ... ... ... fruits Question 6.2. The file inventory.csv contains information about the inventory at a fruit stand. Each row represents the contents of one box of fruit. Load it as a table named inventory. In [ ]: inventory = ... inventory Question 6.3. Does each box at the fruit stand contain a different fruit? In [ ]: # Set all_different to "Yes" if each box contains a different fruit or # to "No" if multiple boxes contain the same fruit all_different = ... all_different Question 6.4. The file sales.csv contains the number of fruit sold from each box last Saturday. It has an extra column called "price per fruit (\\$)" that's the price per item of fruit for fruit in that box. The rows are in the same order as the inventory table. Load these data into a table called sales. In [ ]: sales = ... sales Question 6.5. How many fruits did the store sell in total on that day? In [ ]: total_fruits_sold = ... total_fruits_sold Question 6.6. What was the store's total revenue (the total price of all fruits sold) on that day? Hint: If you're stuck, think first about how you would compute the total revenue from just the grape sales. In [ ]: total_revenue = ... total_revenue Question 6.7. Make a new table called remaining_inventory. It should have the same rows and columns as inventory, except that the amount of fruit sold from each box should be subtracted from that box's count, so that the "count" is the amount of fruit remaining after Saturday. In [ ]: remaining_inventory = ... ... ... ... remaining_inventory ## 7. Submission¶ Once you're finished, submit your assignment as a .ipynb (Jupyter Notebook) and .pdf (download as .html, then print to save as a .pdf) on the class Canvas site.
2021-03-01 01:13:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32387876510620117, "perplexity": 1424.7771336611934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00401.warc.gz"}
http://visionmax.com/docs/html-manager-howto.html
# Apache Tomcat 7 Version 7.0.86, Apr 9 2018 User Guide Reference Apache Tomcat Development # Tomcat Web Application Manager How To Introduction In many production environments it is very useful to have the capability to manage your web applications without having to shut down and restart Tomcat. This document is for the HTML web interface to the web application manager. The interface is divided into six sections: Message - Displays success and failure messages. Manager - General manager operations like list and help. Applications - List of web applications and commands. Deploy - Deploying web applications. Diagnostics - Identifying potential problems. Server Information - Information about the Tomcat server. Message Displays information about the success or failure of the last web application manager command you performed. If it succeeded OK is displayed and may be followed by a success message. If it failed FAIL is displayed followed by an error message. Common failure messages are documented below for each command. The complete list of failure messages for each command can be found in the manager web application documentation. Manager The Manager section has three links: List Applications - Redisplay a list of web applications. HTML Manager Help - A link to this document. Manager Help - A link to the comprehensive Manager App HOW TO. Applications The Applications section lists information about all the installed web applications and provides links for managing them. For each web application the following is displayed: • Path - The web application context path. • Display Name - The display name for the web application if it has one configured in its "web.xml" file. • Running - Whether the web application is running and available (true), or not running and unavailable (false). • Sessions - The number of active sessions for remote users of this web application. The number of sessions is a link which when submitted displays more details about session usage by the web application in the Message box. • Commands - Lists all commands which can be performed on the web application. Only those commands which can be performed will be listed as a link which can be submitted. No commands can be performed on the manager web application itself. The following commands can be performed: • Start - Start a web application which had been stopped. • Stop - Stop a web application which is currently running and make it unavailable. • Reload - Reload the web application so that new ".jar" files in /WEB-INF/lib/ or new classes in /WEB-INF/classes/ can be used. • Undeploy - Stop and then remove this web application from the server. Start Signal a stopped application to restart, and make itself available again. Stopping and starting is useful, for example, if the database required by your application becomes temporarily unavailable. It is usually better to stop the web application that relies on this database rather than letting users continuously encounter database exceptions. If this command succeeds, you will see a Message like this: OK - Started application at context path /examples Otherwise, the Message will start with FAIL and include an error message. Possible causes for problems include: Encountered exception An exception was encountered trying to start the web application. Check the Tomcat logs for the details. Invalid context path was specified The context path must start with a slash character, unless you are referencing the ROOT web application -- in which case the context path must be a zero-length string. No context exists for path /foo There is no deployed application on the context path that you specified. No context path was specified The path parameter is required. Stop Signal an existing application to make itself unavailable, but leave it deployed. Any request that comes in while an application is stopped will see an HTTP error 404, and this application will show as "stopped" on a list applications command. If this command succeeds, you will see a Message like this: OK - Stopped application at context path /examples Otherwise, the Message will start with FAIL and include an error message. Possible causes for problems include: Encountered exception An exception was encountered trying to stop the web application. Check the Tomcat logs for the details. Invalid context path was specified The context path must start with a slash character, unless you are referencing the ROOT web application -- in which case the context path must be a zero-length string. No context exists for path /foo There is no deployed application on the context path that you specified. No context path was specified The path parameter is required. Reload Signal an existing application to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not checked on a reload; the previous web.xml configuration is used. If you have made changes to your web.xml file you must stop then start the web application. If this command succeeds, you will see a Message like this: OK - Reloaded application at context path /examples Otherwise, the Message will start with FAIL and include an error message. Possible causes for problems include: Encountered exception An exception was encountered trying to restart the web application. Check the Tomcat logs for the details. Invalid context path was specified The context path must start with a slash character, unless you are referencing the ROOT web application -- in which case the context path must be a zero-length string. No context exists for path /foo There is no deployed application on the context path that you specified. No context path was specified The path parameter is required. Reload not supported on WAR deployed at path /foo Currently, application reloading (to pick up changes to the classes or web.xml file) is not supported when a web application is installed directly from a WAR file, which happens when the host is configured to not unpack WAR files. As it only works when the web application is installed from an unpacked directory, if you are using a WAR file, you should undeploy and then deploy the application again to pick up your changes. Undeploy WARNING - This command will delete the contents of the web application directory and/or ".war" file if it exists within the appBase directory (typically "webapps") for this virtual host . The web application temporary work directory is also deleted. If you simply want to take an application out of service, you should use the /stop command instead. Signal an existing application to gracefully shut itself down, and then remove it from Tomcat (which also makes this context path available for reuse later). This command is the logical opposite of the /deploy Ant command, and the related deploy features available in the HTML manager. If this command succeeds, you will see a Message like this: OK - Undeployed application at context path /examples Otherwise, the Message will start with FAIL and include an error message. Possible causes for problems include: Encountered exception An exception was encountered trying to undeploy the web application. Check the Tomcat logs for the details. Invalid context path was specified The context path must start with a slash character, unless you are referencing the ROOT web application -- in which case the context path must be a zero-length string. No context exists for path /foo There is no deployed application on the context path that you specified. No context path was specified The path parameter is required. Deploy Web applications can be deployed using files or directories located on the Tomcat server or you can upload a web application archive (WAR) file to the server. To install an application, fill in the appropriate fields for the type of install you want to do and then submit it using the Install button. Deploy directory or WAR file located on server Deploy and start a new web application, attached to the specified Context Path: (which must not be in use by any other web application). This command is the logical opposite of the Undeploy command. There are a number of different ways the deploy command can be used. Deploy a Directory or WAR by URL Install a web application directory or ".war" file located on the Tomcat server. If no Context Path is specified, the directory name or the war file name without the ".war" extension is used as the path. The WAR or Directory URL specifies a URL (including the file: scheme) for either a directory or a web application archive (WAR) file. The supported syntax for a URL referring to a WAR file is described on the Javadocs page for the java.net.JarURLConnection class. Use only URLs that refer to the entire WAR file. In this example the web application located in the directory C:\path\to\foo on the Tomcat server (running on Windows) is deployed as the web application context named /footoo. Context Path: /footoo WAR or Directory URL: file:C:/path/to/foo In this example the ".war" file /path/to/bar.war on the Tomcat server (running on Unix) is deployed as the web application context named /bar. Notice that there is no path parameter so the context path defaults to the name of the web application archive file without the ".war" extension. WAR or Directory URL: jar:file:/path/to/bar.war!/ Deploy a Directory or War from the Host appBase Install a web application directory or ".war" file located in your Host appBase directory. If no Context Path is specified the directory name or the war file name without the ".war" extension is used as the path. In this example the web application located in a subdirectory named foo in the Host appBase directory of the Tomcat server is deployed as the web application context named /foo. Notice that there is no path parameter so the context path defaults to the name of the web application directory. WAR or Directory URL: foo In this example the ".war" file bar.war located in your Host appBase directory on the Tomcat server is deployed as the web application context named /bartoo. Context Path: /bartoo WAR or Directory URL: bar.war Deploy using a Context configuration ".xml" file If the Host deployXML flag is set to true, you can install a web application using a Context configuration ".xml" file and an optional ".war" file or web application directory. The Context Path is not used when installing a web application using a context ".xml" configuration file. A Context configuration ".xml" file can contain valid XML for a web application Context just as if it were configured in your Tomcat server.xml configuration file. Here is an example for Tomcat running on Windows: Use of the WAR or Directory URL is optional. When used to select a web application ".war" file or directory it overrides any docBase configured in the context configuration ".xml" file. Here is an example of installing an application using a Context configuration ".xml" file for Tomcat running on Windows. XML Configuration file URL: file:C:/path/to/context.xml Here is an example of installing an application using a Context configuration ".xml" file and a web application ".war" file located on the server (Tomcat running on Unix). XML Configuration file URL: file:/path/to/context.xml WAR or Directory URL: jar:file:/path/to/bar.war!/ Upload a WAR file to install Upload a WAR file from your local system and install it into the appBase for your Host. The name of the WAR file without the ".war" extension is used as the context path name. Use the Browse button to select a WAR file to upload to the server from your local desktop system. The .WAR file may include Tomcat specific deployment configuration, by including a Context configuration XML file in /META-INF/context.xml. Upload of a WAR file could fail for the following reasons: File uploaded must be a .war The upload install will only accept files which have the filename extension of ".war". War file already exists on server If a war file of the same name already exists in your Host's appBase the upload will fail. Either undeploy the existing war file from your Host's appBase or upload the new war file using a different name. File upload failed, no file The file upload failed, no file was received by the server. Install Upload Failed, Exception: The war file upload or install failed with a Java Exception. The exception message will be listed. Deployment Notes If the Host is configured with unpackWARs=true and you install a war file, the war will be unpacked into a directory in your Host appBase directory. If the application war or directory is deployed in your Host appBase directory and either the Host is configured with autoDeploy=true the Context path must match the directory name or war file name without the ".war" extension. For security when untrusted users can manage web applications, the Host deployXML flag can be set to false. This prevents untrusted users from installing web applications using a configuration XML file and also prevents them from installing application directories or ".war" files located outside of their Host appBase. Deploy Message If deployment and startup is successful, you will receive a Message like this: OK - Deployed application at context path /foo Otherwise, the Message will start with FAIL and include an error message. Possible causes for problems include: Application already exists at path /foo The context paths for all currently running web applications must be unique. Therefore, you must either undeploy the existing web application using this context path, or choose a different context path for the new one. Document base does not exist or is not a readable directory The URL specified by the WAR or Directory URL: field must identify a directory on this server that contains the "unpacked" version of a web application, or the absolute URL of a web application archive (WAR) file that contains this application. Correct the value entered for the WAR or Directory URL: field. Encountered exception An exception was encountered trying to start the new web application. Check the Tomcat logs for the details, but likely explanations include problems parsing your /WEB-INF/web.xml file, or missing classes encountered when initializing application event listeners and filters. Invalid application URL was specified The URL for the WAR or Directory URL: field that you specified was not valid. Such URLs must start with file:, and URLs for a WAR file must end in ".war". Invalid context path was specified The context path must start with a slash character, unless you are referencing the ROOT web application -- in which case the context path must be a "/" string. Context path must match the directory or WAR file name: If the application war or directory is deployed in your Host appBase directory and either the Host is configured with autoDeploy=true the Context path must match the directory name or war file name without the ".war" extension. Only web applications in the Host web application directory can be deployed If the Host deployXML flag is set to false this error will happen if an attempt is made to install a web application directory or ".war" file outside of the Host appBase directory. Diagnostics Finding memory leaks The find leaks diagnostic triggers a full garbage collection. It should be used with extreme caution on production systems. The find leaks diagnostic attempts to identify web applications that have caused memory leaks when they were stopped, reloaded or undeployed. Results should always be confirmed with a profiler. The diagnostic uses additional functionality provided by the StandardHost implementation. It will not work if a custom host is used that does not extend StandardHost. This diagnostic will list context paths for the web applications that were stopped, reloaded or undeployed, but which classes from the previous runs are still present in memory, thus being a memory leak. If an application has been reloaded several times, it may be listed several times. Explicitly triggering a full garbage collection from Java code is documented to be unreliable. Furthermore, depending on the JVM used, there are options to disable explicit GC triggering, like -XX:+DisableExplicitGC. If you want to make sure, that the diagnostics were successfully running a full GC, you will need to check using tools like GC logging, JConsole or similar. Server Information This section displays information about Tomcat, the operating system of the server Tomcat is hosted on, the Java Virtual Machine Tomcat is running in, the primary host name of the server (may not be the host name used to access Tomcat) and the primary IP address of the server (may not be the IP address used to access Tomcat). Comments Notice: This comments section collects your suggestions on improving documentation for Apache Tomcat. If you have trouble and need help, read Find Help page and ask your question on the tomcat-users mailing list. Do not ask such questions here. This is not a Q&A section. The Apache Comments System is explained here. Comments may be removed by our moderators if they are either implemented or considered invalid/off-topic.
2018-05-27 08:02:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28959575295448303, "perplexity": 3250.043286189448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868132.80/warc/CC-MAIN-20180527072151-20180527092151-00461.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=GCSHCI_2013_v38Bn11_861
Analysis on Operation of Anti-Virus Systems with Real-Time Scan and Batch Scan Title & Authors Analysis on Operation of Anti-Virus Systems with Real-Time Scan and Batch Scan Yang, Won Seok; Kim, Tae-Sung; Abstract We consider an information system where viruses arrive according to a Poisson process with rate $\small{{\lambda}}$. The information system has two types of anti-virus operation policies including 'real-time scan' and 'batch scan.' In the real-time scan policy, a virus is assumed to be scanned immediately after its arrival. Consequently, the real-time scan policy assumes infinite number of anti-viruses. We assume that the time for scanning and curing a virus follows a general distribution. In the batch scan policy, a system manager operates an anti-virus every deterministic time interval and scan and cure all the viruses remaining in the system simultaneously. In this paper we suggest a probability model for the operation of anti-virus software. We derive a condition under which the operating policy is achieved. Some numerical examples with various cost structure are given to illustrate the results. Keywords anti-virus system;real-time scan;batch scan;economic analysis;probability model; Language English Cited by References 1. Computer Security Institute, Computer Crime and Security Survey, Jun. 2011. 2. L. A. Gordon and M. P. Loeb, "The economics of information security investment," ACM Trans. Inform. Syst. Security, vol. 5, no. 4, pp. 438-457, Nov. 2002. 3. W. S. Yang, T. S. Kim, and H. M. Park, "Probabilistic modeling for evaluation of information security investment portfolios," J. Korean Operations Research Management Sci. Soc., vol. 34, no. 3, pp. 155-163, Sep. 2009. 4. W. S. Yang, T. S. Kim, and H. M. Park, "Considering system throughput to evaluate information security investment portfolios," J. Korea Inst. Inform. Security Cryptology, vol. 20, no. 2, pp. 109-116, Apr. 2010. 5. H. Cavusoglu, B. Mishra, and S. Raghunathan, "The value of intrusion detection systems in information technology security architecture," Inform. Syst. Research, vol. 16, no. 1, pp. 28-46, Mar. 2005. 6. H. Cavusoglu, B. Mishra, and S. Raghunathan, "A model for evaluating IT security investments," Commun. ACM, vol. 47, no. 7, pp. 87-92, July 2004. 7. L. D. Bodin, L. A. Gordon, and M. P. Loeb, "Evaluating information security investments using the analytic hierarchy process," Commun. ACM, vol. 48, no. 2, pp. 79-83, Feb. 2005. 8. H. K. Kong, T. S. Kim, and J. Kim, "An analysis on effects of information security investments: a BSC perspective," J. Intell. Manufacturing, vol. 23, no. 4, pp. 941-953, Aug. 2012. 9. Korea Communication Commission (KCC) and Korea Internet & Security Agency (KISA), Information Security Survey-Businesses, Mar. 2012. 10. W. S. Yang, J. D. Kim, and K. C. Chae, "Analysis of M/G/1 stochastic clearing systems," Stochastic Anal. Applicat., vol. 20, no. 5, pp. 1083-1100, Oct. 2002. 11. G. Jain and K. Sigman, "A Pollaczek-Khintchine formula for M/G/1 queues with disasters," J. Applied Probability, vol. 33, no. 4, pp. 1191-1200, Dec. 1996. 12. I. Atencia and P. Moreno, "The discrete-time Geo/Geo/1 queue with negative customers and disasters," Comput. Operations Research, vol. 31, no. 9, pp. 1537-1548, Aug. 2004. 13. A. Gomez-Corral, "On a finite-buffer bulk-service queue with disasters," Math. Methods Operations Research, vol. 61, no. 1, pp. 57-84, Mar. 2005. 14. F. Jolai, S. M. Asadzadeh, and M. R. Taghizadeh, "Performance estimation of an Email contact center by a finite source discrete time Geo/Geo/1 queue with disasters," Comput. Ind. Eng., vol. 55, no. 3, pp. 543-556, Oct. 2008. 15. X. W. Yi, J. D. Kim, D. W. Choi, and K. C. Chae, "The Geo/G/1 queue with disasters and multiple working vacations," Stochastic Models, vol. 23, no. 4, pp. 21-31, Nov. 2007. 16. H. M. Park, W. S. Yang, and K .C. Chae, "Analysis of the GI/Geo/1 queue with disasters," Stochastic Anal. Applicat., vol. 28, no. 1, pp. 44-53, Jan. 2010. 17. D. H. Lee, W. S. Yang, and H. M. Park, "Geo/G/1 queues with disasters and general repair times," Applied Math. Modelling, vol. 35, no. 4, pp. 1561-1570, Apr. 2011. 18. A. Chen and E. Renshaw, "The M/M/1 queue with mass exodus and mass arrivals when empty," J. Applied Probability, vol. 34, no. 1, pp. 192-207, Mar. 1997. 19. D. Towsley and S. K. Tripathi, "A single server priority queue with server failures and queue flushing," Operations Research Lett., vol. 10, no. 6, pp. 353-362, Aug. 1991. 20. E. G. Kyriakidis and A. Abakuks, "Optimal pest control through catastrophes," J. Applied Probability, vol. 27, no. 4, pp. 873-879, Dec. 1989. 21. X. Chao, "A queueing network model with catastrophes and product form solution," Operations Research Lett., vol. 18, no. 2, pp. 75-79, Sep. 1995. 22. J. R. Artalejo and A. Gomez-Corral, "Analysis of a stochastic clearing system with repeated attempts," Stochastic Models, vol. 14, no. 3, pp. 623-645, Jun. 1998. 23. D. Gross and G. M. Harris, Fundamentals of Queueing Theory, John Wiley & Sons, 1974.
2018-06-23 08:27:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.480643630027771, "perplexity": 5687.746580985646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864953.36/warc/CC-MAIN-20180623074142-20180623094142-00100.warc.gz"}
https://plainmath.net/4590/rational-number-wha-is-0-02-as-a-rational-number
# Rational number: wha is 0.02 as a rational number glamrockqueen7 2021-03-04 Answered Rational number: wha is 0.02 as a rational number You can still ask an expert for help Expert Community at Your Service • Live experts 24/7 • Questions are typically answered in as fast as 30 minutes • Personalized clear answers Solve your problem for the price of one coffee • Available 24/7 • Math expert for every subject • Pay only if we can solve it ## Expert Answer diskusje5 Answered 2021-03-05 Author has 82 answers we need to write 0.02 as a rational number we have two numbers after decimal point If we have two numbers after the decimal point we multiply by 100 $\frac{0.02}{1}$ multiply both top and bottom by 100 $\frac{0.02\ast 100}{1\ast 100}=\frac{2}{100}$ Divide both top and bottom by 2 $\frac{1}{50}$ write 0.02 as a rational number Answer $\left(\frac{1}{50}\right)$ ###### Not exactly what you’re looking for? Expert Community at Your Service • Live experts 24/7 • Questions are typically answered in as fast as 30 minutes • Personalized clear answers Solve your problem for the price of one coffee • Available 24/7 • Math expert for every subject • Pay only if we can solve it
2022-06-28 11:47:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41599777340888977, "perplexity": 4780.698062632909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00391.warc.gz"}
https://math.stackexchange.com/questions/1204885/determine-the-values-that-series-converges
# determine the values that series converges Determine for what values of $x \in \Bbb R$ the series $$\sum_{n = 1}^\infty \frac{(-1)^n}{2n+1}\left(\frac{1-x}{1+x}\right)^n$$ coverges. I have tried the alternating series test but I don't think I am doing it correctly because I keep getting infinity. Does that mean it converges for all values? Thank you. • would the root test apply to this situation? could I apply the root test – user226131 Mar 24 '15 at 19:39 • That's a good idea, but I think it'll be easier for you if you apply the ratio test. – kobe Mar 24 '15 at 19:41 • Yes you can apply the root test. – science Mar 24 '15 at 19:57 • $\displaystyle\sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1}~=~\arctan x$. – Lucian Mar 24 '15 at 20:33 Since $(1 - x)/(1 + x)$ is defined only for $x \neq -1$, we can rule out $x = -1$. Let $a_n(x)$ be the $n$th term of the series. Then $$\lim_{n\to \infty} \left|\frac{a_{n+1}(x)}{a_n(x)}\right| = \lim_{n\to \infty} \frac{2n+1}{2n+3}\left|\frac{1-x}{1+x}\right| = \left|\frac{1 - x}{1 + x}\right|.$$ Now $|(1 - x)/(1 + x)| < 1 \iff|1 - x| < |1 + x| \iff|1 - x|^2 < |1 + x|^2$. By expanding both sides of the equality, we get the equivalent inequality $x > 0$. By the ratio test, the series $\sum a_n(x)$ converges when $x > 0$ and diverges when $x < 0$. When $x = 0$, the alternating series test shows $\sum a_n(x)$ converges. Thus $\sum a_n(x)$ converges if and only if $x \ge 0$. Applying the ratio test, one finds the limiting ratio: $$\left|\frac{1-x}{1+x}\right|.$$ The series therefore converges if that ratio is $<1$ and diverges if it is $>1$. What happens when it is equal to $1$ must be looked at separately; the ratio test doesn't help there. So we have $$-1<\frac{1-x}{1+x}<1.$$ We cannot multiply all three members by $1+x$ because that is sometimes positive and sometimes negative, depending on $x$. So use a common denominator: $$\frac{-1-x}{1+x} < \frac{1-x}{1+x} < \frac{1+x}{1+x}$$ The first inequality becomes $$0 < \frac{2}{1+x}$$ and the second becomes $$\frac{2x}{1+x}>0.$$ The first is satisfied if $x>-1$ and the second if either $x>0$ or $x<-1$. You need both, so the solution is $x>0$.
2019-12-09 08:09:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938382089138031, "perplexity": 128.3751086522585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518337.65/warc/CC-MAIN-20191209065626-20191209093626-00023.warc.gz"}
https://tetrisconcept.net/threads/tgm-tap-the-mfm005s-challenge.251/page-4
# Challenge: [TGM][TAP] The mfm005's Challenge Thread in 'Competition' started by Amnesia, 7 Jan 2008. 1. ### rain 597. still feels very improvable. my 20g stacking for combos is so bad FeV and Qlex like this. 2. ### rain Turns out the combo formula on the wiki is wrong, and thus the combo formula in FB. So scratch those scores for me. Edit: Also I would like to point out it was wrong because I interpreted it incorrectly. It should be correct now. Last edited: 25 Feb 2017 4. ### cyberguile Salaud ! Need to play again grrrr Qlex likes this. 5. ### JBroms s9 @ 754 This was mostly just done as a score attack attempt, but I messed up the last two sections. No (intentional) level stop abuse. Qlex and Xaphiosis like this. 6. ### JBroms EnchantressOfNumbers likes this.
2017-07-23 08:45:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015572428703308, "perplexity": 12467.818168118572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424296.90/warc/CC-MAIN-20170723082652-20170723102652-00189.warc.gz"}
https://math.stackexchange.com/questions/1519452/can-any-real-number-be-typed-in-a-computer/1519472?noredirect=1
# Can any Real number be typed in a computer? Suppose that we have a computer program, my question is whether a human can type in any real number - say in $[0,10]$ - that she would like to type in a finite amount of time? Suppose that the program allows typing in any basic expressions like sum, multiplication, square-root, limits etc. in a more formal way: Choose one real number $x\in [0,10]$, is it possible to express $x$ in a way that a computer can understand*? *can understand = if $y$ is another such number, computer can tell whether $x>y, y>x$ or $x=y$. EDIT: For example, square-root 2 can be typed as $\sqrt{2}$ and that is okay. As asked in comments, computer does not have an infinite memory, What is imporant is that the program can distinguish any two numbers that are typed in. For example, even though it is not possible to represent $\sqrt{2}$ in a computer, it can understand that $\sqrt{3}$ is bigger than $\sqrt{2}$. • Nope, they have infinite decimals. – Zelos Malum Nov 8 '15 at 19:08 • You mean like 0.88888888 in definitely? – Nonlinear Nov 8 '15 at 19:10 • that, squareroot of 2, any real number has infinitely many decimals so they cannot be written out by computers ever. – Zelos Malum Nov 8 '15 at 19:11 • @Zelos Malum: Yes but the square-root of $2$ can be typed as the square-root of $2$. – nombre Nov 8 '15 at 19:12 • "That she would like to type" is a possible loophole. The human has a finite brain, and therefore can't think of more than finitely many numbers. – Robert Israel Nov 8 '15 at 19:35 When you have finite or countable many symbols: No, when you only allow symbols from a finite (or countable) set of possible symbols like $\lim$, $\sqrt{}$, digits and so on, the set of all possible terms (with finte length) someone can type is countable. But there are uncountable many reals in $[0,10]$. Thus there are reals which are "untypable"... This is related to definable numbers: There are only countable many expressions to define a number in first order logic but there are uncountable many reals -> There are undefinable numbers. See also the answers and comments to the question Is there an example for an undefinable number? for examples of undefinable/uncomputable numbers. When you have uncountable many symbols: Then you can assign to each real number a different symbol which stands for it (like the symbol "$\pi$" stands for the number $\pi$) and voila: Every real can be typed by using its symbol ;-) (Let's assume that the continuum hypothesis is true as a axiom for this answer: There is no uncountable set with cardinality less then the cardinality of $\mathbb R$) Update: There are computable numbers $x$ for which you cannot decide whether $x=0$ or $x > 0$. To cite a comment of Robert Israel to this answer: Consider a predicate $S(n)$ that can be computed for each natural number $n$. Define $x=\sum_{n=1}^\infty 2^{−n}d(n)$, where $d(n)=1$ if $S(n)$ is true and 0 otherwise. Since each $S(n)$ can be computed, $x$ is computable in the sense that arbitrarily good rational approximations of $x$ can be computed. Now it might be that all $S(n)$ happen to be false, but there is no proof (in your favourite consistent formal system) of this fact. Then it is impossible to decide (in that system) whether $x=0$ [or $x > 0$]. • and if an uncountable set of characters is allowed, you can type any real number using single character – z100 Nov 8 '15 at 19:21 • @z100: You are right! I add your comment to my answer... – Stephan Kulla Nov 8 '15 at 19:24 • @z100: But I guess we need the continuum hypothesis as an axiom so that there is no uncountable set of symbols with cardinality less then the cardinality of $\mathbb R$ ;-) – Stephan Kulla Nov 8 '15 at 19:31 • @tampis I did not mean to write every real number down, then you are right there are uncountable of them. I am wondering if there is a reference / paper about the issue you raise, best. – Nonlinear Nov 8 '15 at 19:52 • Consider a predicate $S(n)$ that can be computed for each natural number $n$. Define $x = \sum_{n=1}^\infty 2^{-n} d(n)$, where $d(n) = 1$ if $S(n)$ is true and $0$ otherwise. Since each $S(n)$ can be computed, $x$ is computable in the sense that arbitrarily good rational approximations of $x$ can be computed. Now it might be that all $S(n)$ happen to be false, but there is no proof (in your favourite consistent formal system) of this fact. Then it is impossible to decide (in that system) whether $x = 0$. – Robert Israel Nov 8 '15 at 21:54 Any real number $\xi$ you, or a more experienced mathematician, can think of can be defined and described in terms of text and universally accepted formulas, whence: be expressed in TeX-code on less than two ASCI-pages or so. Any computer will accept this as input. Such an input determines $\xi$ as an element of ${\mathbb R}$ once and for all, in other words: to any desired number of decimal places. Of course it may be the case that your pocket calculator will not be sufficient to compute these digits one after the other. • is there a reference that you know about this result? – Nonlinear Nov 8 '15 at 19:34 • I think this answer is wrong. See the wikipedia article "Computable number": There are numbers a computer can never calculate (see the article computability for more details). There are also undefinable numbers... There are only countable many possible TeX-articles. How do you want to define with them each of the uncountable many real numbers?! ;-) – Stephan Kulla Nov 8 '15 at 19:41 • Can you "think of" an undefinable number? – Robert Israel Nov 8 '15 at 19:47 • @RobertIsrael -- I,too, think this answer is wrong, because of its claim about what can be defined in "two ASCII-pages or so". Many reals in $[0,10]$ can of course be defined by explicitly listing their binary digits, and a trillion (say) independent random bits is practically certain to define a number that cannot be expressed in "two ASCII-pages or so". (This supposes that after generating these bits, I would say that I have "thought of" the number.) – r.e.s. Nov 8 '15 at 20:04 • Is this a joke? – suriv Feb 2 '17 at 23:26
2020-05-29 21:16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8133026957511902, "perplexity": 431.23228562881167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00492.warc.gz"}
https://hsm.stackexchange.com/questions/11215/einstein-praising-sophus-lie/11229
# Einstein praising Sophus Lie p. 153 of quotes (but does not cite) Einstein saying that without the discoveries of Lie, the Theory of Relativity would probably never have been born. Did Einstein actually say this (or something similar)? I can only find a few references to Sophus Lie in the Einstein Papers, and they are only in letters addressed to Einstein.
2020-09-29 19:21:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931795060634613, "perplexity": 3256.1460687115823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402088830.87/warc/CC-MAIN-20200929190110-20200929220110-00517.warc.gz"}
https://socratic.org/questions/58c3181711ef6b0b254183f7
# Question #183f7 ##### 1 Answer Mar 10, 2017 $12$ boys #### Explanation: We know that there is a ratio of $3 : 2$ of girls:boys We know that there are $18$ girls so we can see that the $3$ in the ratio refers to $18$ people. Know that we know that $3$ means $18$ people, we can figure out that $1$ would mean $6$ people. As there are $2$ boys in the ratio, there are $6 \times 2$ boys in the class.
2020-08-11 07:21:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7280884385108948, "perplexity": 350.9325177095041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738735.44/warc/CC-MAIN-20200811055449-20200811085449-00502.warc.gz"}
https://codegolf.stackexchange.com/questions/15546/even-or-odd-three-player
# Even or odd: three player It's a three players game, play with one hand. At same time, each player show his hand with 0 to 5 fingers extended. If all player show same kind of (even or odd) number, there is no winner. But else, the player showing different kind the two other win. P l a y e r s A B C Winner Even Even Even No winner Odd Odd Odd No winner Even Odd Odd Player A Odd Even Even Player A Even Odd Even Player B Odd Even Odd Player B Odd Odd Even Player C Even Even Odd Player C The requested tool could use arguments (3 arguments as numeric between 0 and 5) or STDIN (3 values by line, separated by spaces). There is no need to check input: Irregular input could produce unexpected output. Arguments or values on line is given from left to right, from player A to player C. Output must only contain A, B or C (capitalized) or the string no one (in lower case, with a regular space). Tool may work only once or as a filter on all input lines. Shortest code win. • Might be more interesting as a [king-of-the-hill]. Play the game. – dmckee --- ex-moderator kitten Dec 1 '13 at 1:49 • I wonder how the fact we have 5 fingers (so there are 3 odd and 2 even numbers of fingers possible) affects the winning strategy... – Olivier Dulac Dec 2 '13 at 11:16 • @OlivierDulac, 0 is also an even number. – Peter Taylor Dec 2 '13 at 12:25 • In this game rule yes, this let same chances for even than for odd numbers (0 2 4 vs 1 3 5) – F. Hauri Dec 2 '13 at 12:38 • @PeterTaylor: thanks, I misread the question (and I didn't think it would count). – Olivier Dulac Dec 2 '13 at 13:04 ## APL (34 30) (1⍳⍨+/∘.=⍨2|⎕)⊃'ABC',⊂'no one' Explanation: • 2|⎕: read a line of input, take the mod-2 of each number (giving a list, i.e. 1 0 1) • ∘.=⍨: compare each element in the vector to each element in the vector, giving a matrix • +/: sum the rows of the matrix, giving for each element how many elements it was equal to. If there were two the same and one different, we now have a vector like 2 1 2 where the 1 denotes who was different. If they were all the same, we get 3 3 3. • 1⍳⍨: find the position of the 1. If there is no 1, this returns one more than the length of the vector, in this case 4. • ⊃'ABC',⊂'no one': display the string at the given index. • Nice, good use of array-oriented programming to compute the index. – FireFly Nov 30 '13 at 23:54 # Mathematica, 45 43 42 41 chars f="no one"[A,B,C]〚Mod[Tr@#-#,2].{2,1,0}〛& ### Example: f[{0 ,0, 0}] no one f[{1, 3, 5}] no one f[{2, 3, 5}] A f[{2, 3, 4}] B ### Another solution with 43 42 chars: f=Mod[Tr@#-#-1,2].{A,B,C}/._+__->"no one"& ## Powershell, 65 param($a,$b,$c)"@ABCCBA@"[$a%2+$b%2*2+$c%2*4]-replace"@","no one" ## Befunge-98, 61 50 45 characters &&&:11p+2%2*\11g+2%+:"@"+#@\#,_0"eno on">:#,_ Uses Fors' clever expression to shave off yet a few more characters. Now single-line (i.e. Unefunge-compatible)! Reads until a game is won; add @ at the end for a one-shot program. Treats input mod 2 as a binary number, as with my JS answer, then relies on lookup for A-C and falls back to 'no one' if out-of-bounds (by testing if the character is ≥'A', which allows me to use nearby code as data :D). Variation that reads a line of input, produces output, reads a new line of input, etc until a game is decided (i.e. not 'no one'): &2%4*&2%2*&2%++1g:" "#@-#,_0"eno on">:#,_ CBAABC • I ported this to fish for my answer. I noted you. +1 btw – Cruncher Dec 2 '13 at 16:59 ## APL, 30 (1+2=/2|⎕)⊃'BA'('C',⊂'no one') If I am allowed to change system variables by configuration, 2 chars can be shaved off. (Specifically, changing index origin ⎕IO to 0) The crucial bit If we represent all odds the same way and all evens the same way, then a pair-wise equality operation can distinguish all 4 cases: 0 0 for B wins, 0 1 for A wins, etc. Explanation 2|⎕ Takes input and mod 2 2=/ Pair-wise equality 1+ Add 1 for indexing (APL arrays are 1-based by default) 'BA'('C',⊂'no one') Nested array ⊃ Picks out the correct element from the nested array # C: 88 characters Unfortunately C, as always, requires quite a lot of unnecessary junk. But still, in which other language can one write =**++b+**(++ and it actually means something? Quite simply sublime. main(int a,char**b){(a=**++b+**(++b+1)&1|2*(**b+**++b&1))?putchar(a+64):puts("no one");} Simply pass three numbers as arguments, and voilà! • Is the precise order of those preincrements specified? I thought it wasn't... although, you could replace them with *b[1] etc for no size difference (though some loss in elegance.. :( ) – FireFly Dec 1 '13 at 18:18 • In Ruby: s = "=**++b+**(++" :P in all seriousness, wow, how does that... how does that even work? :O – Doorknob Dec 1 '13 at 18:23 • @Doorknob it's very clever, but if you substitute the dereferencing and preincrements with indexing b instead, and pretty-print the condition some, you should be able to figure it out. :D (pen and paper also helps, for the resulting truth table) – FireFly Dec 1 '13 at 19:01 ~]{1&}%.$1=!?)'no one A B C'n/= Very simple logic: reduce the input modulo 2, then sort a copy. The middle item of the sorted array is in the majority, so look for an index which is different (and hence in the minority). # Ruby (function body), 42 chars Assuming 3 numerical arguments a, b, and c: ['zCBAABCz'[a%2*4|b%2*2|c%2],'no one'].min # Ruby (command line tool), 61 chars Version 1 comes in at 62 chars: $><<["zCBAABCz"[$*.reduce(0){|i,n|i*2|n.to_i%2}],'no one'].min But, by piggybacking off of Darren Stone's answer, Version 2 gets down to 61 chars: i=0;$*.map{|n|i+=i+n.to_i%2};$><<['zCBAABCz'[i],'no one'].min ## Ruby, 61 chars w=0$*.map{|p|w+=w+p.to_i%2} $><<%w(no\ one C B A)[w>3?w^7:w] • ['no one',?C,?B,?A] == %w(no\ one C B A) (2 chars saved). – daniero Jan 4 '14 at 0:54 • Nice. Applied that. Thx! – Darren Stone Jan 4 '14 at 7:56 ### JavaScript (node), 87 characters p=process.argv;console.log("ABC"[3-Math.min(x=p[2]%2*4+p[3]%2*2+p[4]%2,7-x)]||"no one") To get the ball rolling... expects input as three extra arguments. Makes use of the following pattern for input/output (/ represents "no-one"): A B C res # ─────────────── 0 0 0 / 0 0 0 1 C 1 0 1 0 B 2 0 1 1 A 3 1 0 0 A 4 1 0 1 B 5 1 1 0 C 6 1 1 1 / 7 ### GolfScript, 3635 33 characters ~]0\{1&\.++}/'no one C B A'n/.$+= Takes input as described from STDIN. You can also test the code online. Perl, 84 characters. $x=oct"0b".join"",map{$_%2}<>=~/(\d)/g;print"",('no one','C','B','A')[$x>=4?7-$x:$x] • <>=~/(\d)/g parses the input line into distinct digits • map{$_%2 takes this list and computes value mod 2 (even or odd) • oct"0b".join"", takes this list of mod values, joins them into a string, appends an octal specifier, and converts the string to a number. Basically what I did was to create a truth table, and then carefully reordered it so I had an inversion operation around $x == 4. So if $x >=4, we did the inversion [$x>=4?7-$x:$x] which we used to index into the array ('no one','C','B','A') Its not the shortest possible code, but its actually not line noise ... which is remarkable in and of itself. Perl: 74 characters + 3 flags = 77, run with perl -anE '(code)' s/(\d)\s*/$1%2/eg;$x=oct"0b".$_;say"",("no one","C","B","A")[$x>3?7-$x:$x] This is an improvement, by leveraging autosplit (the -a), say (the -E), and finally figuring out what was wrong with the comparison. • Why >=4 instead of simply >3? +1 for the tips '0b'. I didn't know before – F. Hauri Dec 1 '13 at 7:46 • I tried both in the debugger ( >3 and >=4), and I am not sure why, but the >=4 worked, but >3 did not. I can't explain it (possibly borked debugger?) to my own satisfaction either – Joe Dec 1 '13 at 14:23 • you seem to have had an extra char in both counts, which I fixed. Also, flags count as characters. – Doorknob Dec 1 '13 at 23:27 # Common Lisp, 114 106 70 chars From the three values create a pair representing difference in parity between adjacent elements. Treat that as a binary number to index into result list. (defun f(a b c)(elt'(no_one c a b)(+(mod(- b c)2)(*(mod(- a b)2)2))))) Older algorithm: (defun f(h)(let*((o(mapcar #'oddp h))(p(position 1(mapcar(lambda(x)(count x o))o))))(if p(elt'(a b c)p)"no one"))) # Python 2, 54 f=lambda a,b,c:[['no one','C'],'BA'][(a^b)&1][(a^c)&1] # Mathematica 100 94 89 f=If[(t=Tally[b=Boole@OddQ@#][[-1,2]])==1,{"A","B","C"}[[Position[b,t][[-1,1]]]],"no one"]& Testing f[{5, 3, 1}] f[{2, 0, 4}] f[{0, 1, 2}] f[{0, 1, 3}] f[{1, 3, 0}] "no one" "no one" "B" "A" "C" ## Haskell, 97 main=interact$(["no one","A","B","C"]!!).(\x->min x$7-x).foldr(\y x->x*2+mod y 2)0.map read.words ## R 67 z="no one";c("A","B","C",z,z)[match(2-sum(x<-scan()%%2),c(x,2,-1))] • Interesting! How could I test this? What do I have to run (maybe a shebang?) – F. Hauri Dec 2 '13 at 1:53 • You'll need to start an interactive R session (e.g. /usr/bin/R) then enter the code above. scan() is what will prompt you for the input: an example would be to type 1[space]3[space]5[space][enter][enter] and you will see the corresponding output (here, no one) printed to the screen. You can download R here: cran.r-project.org/mirrors.html – flodel Dec 2 '13 at 3:42 # C, 85 chars main(int j,char**p){puts("C\0A\0B\0no one"+((**++p&1)*2+(**++p&1)^(**++p&1?0:3))*2);} Not as short as my Ruby answer but I'm happy with it, considering the main cruft. Fish - 41 :&+2%2*$&+2%+:"@"+!;$!o :?#"eno on"!;ooo< Stole FireFly's befunge answer and ported it to fish because using registers in fish allows us to shave off some characters. Lost a few characters on not having the horizontal if operator though. This takes parameters in through arguments. python fish.py evenodd.fish -v 2 2 2 no one python fish.py evenodd.fish -v 2 3 2 B python fish.py evenodd.fish -v 2 3 3 A python fish.py evenodd.fish -v 3 3 4 C • Ooooooooooo nice! – F. Hauri Dec 2 '13 at 16:45 • Hm, fish huh. Do you have trampoline? if so, maybe you could use (the equivalent of) #@...< at the end to save a char. Oh, and your current code looks like 42 chars to me, so decrement that char-count of yours. :) – FireFly Dec 2 '13 at 18:18 • @FireFly thanks! That did save a char, and you're right, I was over counted by one. My text editor said "col 43" at the end. But of course, cursor on an empty line says "col 1". – Cruncher Dec 2 '13 at 18:21 # Smalltalk, 128 characters [:c|o:=c collect:[:e|e odd].k:=o collect:[:e|o occurrencesOf:e].u:=k indexOf:1.^(u>0)ifTrue:[#($a $b$c)at:u]ifFalse:['no one']] send value: with a collection ## JavaScript (ES6) / CoffeeScript, 50 bytes Uses the truth table as per Firefly's answer but takes a more direct approach in character access: f=(a,b,c)=>'CBAABC'[(a%2*4+b%2*2+c%2)-1]||'no one' // JavaScript f=(a,b,c)->'CBAABC'[(a%2*4+b%2*2+c%2)-1]||'no one' # CoffeeScript ### Demo // Translated into ES5 for browser compatibility f=function(a,b,c){return'CBAABC'[(a%2*4+b%2*2+c%2)-1]||'no one'} //f=(a,b,c)=>'CBAABC'[(a%2*4+b%2*2+c%2)-1]||'no one' for(i=6;i--;) for(j=6;j--;) for(k=6;k--;) O.innerHTML += i + ', ' + j + ', ' + k + ' => ' + f(i,j,k) + "\n" <pre id=O></pre> # Python 3, 115 l=[int(x)%2for x in input().split()];0if[print("ABC"[i])for i,x in enumerate(l)if l.count(x)%2]else print("no one") # Python 3, 114 r=[0,3,2,1,1,2,3,0][int("".join(map(str,(int(x)%2for x in input().split()))),2)];print("ABC"[r-1]if r else"no one") ## Two different method in two different languages + variations -> 6 answers There are essentially 2 way for this operation: • array based: Built a binary number of 3 digit, than take answer from an array • count based: Count even and odd, and look if there is a count == 1 ### Perl 71 (array based + variation) s/(.)\s*/$1&1/eg;$==oct"0b".$_;$==$=>3?7-$=:$=;say$=?chr 68-$=:"no one" One of my shortest perl: • s/(.)\s*/$1&1/eg; transform a string like 1 2 3 into 101 • $==oct"0b".$_; transform binary to oct (same as dec, under 8) • $==$=>3?7-$=:$=; if > 3 oper 7-. (From there no one == 0) • say$=?chr 68-$=:"no one" if not 0, print char from value, else print no one. ### Perl 71 (array based) s/(.)\s*/$1&1/eg;$==oct"0b".$_;say@{["no one",qw{A B C}]}[$=>3?7-$=:$=] slightly different in the print step: The output is based on an 'array'. ### Perl 81 (count based) $c=A;map{$a[$_%2]=$c++;$b[$_%2]++}split;say$b[0]==1?$a[0]:$b[0]==2?$a[1]:"no one" Different meaning: • $c=A Initialise a counter c with A. • map{$a[$_%2]=$c++;$b[$_%2]++}split Counter b count even and odd, a only store which one • say$b[0]==1?$a[0]: if even counter == 1 ? also print even player. • $b[0]==2?$a[1]: if even counter == 2 ? also print odd player. • :"no one" else print no one. ### Bash 85 (array based) c=$((($1%2)<<2|($2&1)*2+($3%2)));((c>3))&&c=$((7-c));o=("no one" C B A);echo${o[c]} This is based on my second perl version: • c=$((($1%2)<<2|($2&1)*2+($3%2))) make the index pos. • $1%2 transform first arg in binary by using mod •$2&1 transform second arg in binary by using and • <<2 shift-to-the-left is same than *4 • *2 multiply by 2 is same than <<1. • ((c>3))&&c=$((7-c)) if c >= 4 then c = 7-c. • o=() declare an array • echo${o[c]} based on array ### Bash 133 (count based) a[$1&1]=A;a[$2&1]=B;a[$3&1]=C;((b[$1&1]++));((b[$2&1]++));((b[$3&1]++)) case $b in 1)echo${a[0]};;2)echo ${a[1]};;*)echo no one;esac • a[$1&1]=A;a[$2&1]=B;a[$3&1]=C store player into variable a[even] and a[odd] • ((b[$1&1]++));((b[$2&1]++));((b[$3&1]++)) count even/odd into b. • case$b in 1) echo ${a[0]} in case even counter==1 print even player • 2)echo${a[1]};; case even counter==2 print odd player • *)echo no one;esac else print no one. ### Bash 133 (count based) a[$1&1]=A;a[$2&1]=B;a[$3&1]=C;((b[$1&1]++));((b[$2&1]++));((b[$3&1]++)) ((b==1))&&echo ${a[0]}||(((b==2))&&echo${a[1]}||echo no one) Same version, using bash's condition and command group instead of case ... esac ## Game Maker Language, 116 My new answer relies heavily on FireFly's formula: a[1]="no one"a[2]='A'a[3]='B'a[4]='C'return a[(3-min(b=argument0 mod 2*4+argument1 mod 2*2+argument2 mod 2,7-b)||1)] The old code compiled with uninitialized variables as 0, 183 characters: a=argument0 mod 2b=argument1 mod 2c=argument2 mod 2if((a&&b&&c)||(!a&&!b&&!c))return "no one" else{if((a&&b)||(!a&&!b))return "C" else{if((a&&c)||(!a&&!c))return "B" else return "A"}} Edit #1 - Whole different code • Interesting!? I didn't know this language before! But as this language does permit the use of array, this code seem not to be the smallest possible for this job. – F. Hauri Nov 30 '13 at 22:19 • @F.Hauri Yes, I'm trying to use arrays to get it shorter. – Timtech Nov 30 '13 at 22:22 # Clojure, 116 (fn[& h](let[a(map #(bit-and % 1)h)](["no one"\A\B\C](+(.indexOf(map(fn[x](reduce +(map #(if(= x %)1 0)a)))a)1)1)))) ## vba, 116 Function z(q,r,s) a="A":b="B":c="C":d="no one" z=Array(d,c,b,a,a,b,c,d)((q And 1)*4+(r And 1)*2+(s And 1)) End Function call with ?z(1,2,3) or assign to a variable with q=z(1,2,3), or even use as a UDF within excel, and use =z(1,2,3) in your excel formula # Python 3, 80 chars r=int(input().replace(' ',''),2) print(['no one','C','B','A'][7-r if r>3 else r]) Note: input must be '1' [odd] or '0' [even]. For example: > 1 1 0 C > 1 1 1 no one • The challenge requires the input to be "separated by spaces". Perhaps there is a way to effectively golf a split and join so you can still use your (very smart) int(...input...) idea. ? – Darren Stone Dec 2 '13 at 19:52 • You could remove the '0b'+ I think, at least it looks redundant to me, and just int(input(),2) seems to work in a REPL. – FireFly Dec 3 '13 at 6:58 • @DarrenStone I used a string replace instead, see edit – Dhara Dec 3 '13 at 8:31 • @FireFly Thanks, you're right, I edited my answer. I first tested the code with Python 2, where the '0b' was needed. – Dhara Dec 3 '13 at 8:32 • The challenge also requires the input to accept numbers from 0 to 5. – Peter Taylor Dec 3 '13 at 10:17 Java, 226 chars void func(int a, int b, int c){ a = a%2; b = b%2; c = c%2; String out = "B"; if((a+b+c)==0 || (a+b+c)==3) out = "no one"; else if(a==b) out = "C"; else if(b==c) out = "A"; System.out.println(out); } # C 101 96 C, probably the most trivial example (some ternary operations): main(int v,char** c){(v=c[1]==c[2]?c[1]==c[3]?0:3:c[1]==c[3]?2:1)?puts("no one"):putchar(v+64);}
2020-07-04 06:27:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3883632719516754, "perplexity": 7638.901214030988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00547.warc.gz"}
http://math.stackexchange.com/questions/68364/integral-of-the-derivative-of-a-function-of-bounded-variation
# Integral of the derivative of a function of bounded variation Let $f\colon [a,b] \to \mathbb R$ be of bounded variation. Must it be the case that $|\int_a ^b f' (x) |\leq |TV(f)|$, where $TV(f)$ is the total variation of $f$ over $[a,b]$? If so, how can one prove this? In the standard proof of the monotone differentiation theorem, it is shown tat this holds for increasing functions: if $f$ is increasing, then $\int_a ^b f'(x) \leq f(b) - f(a) = TV(f)$. I am trying to generalize this to functions of bounded variation. - Just use the fact that a finite variation function $f$ can be decomposed as the difference of two nonnegative increasing functions, whose sum is bounded by the variation of $f$. –  George Lowther Sep 29 '11 at 0:19 It's easy to show that if $f = f_1 - f_2$ for increasing functions $f_1$ and $f_2$, then $TV(f) \leq TV(f_1) + TV(f_2)$. However, this places an UPPER bound on $TV(f)$. I am trying to show that $TV(f)$ is greater than another quantity, so this doesn't really help here. Are you claiming that in fact $TV(f) \geq TV(f_1) + TV(f_2)$ (and therefore $TV(f) = TV(f_1) + TV(f_2)$? If so, how would I show this? –  user15464 Sep 29 '11 at 1:08 Yes, you have $TV(f)=TV(f_1)+TV(f_2)$ in the case where $f_1,f_2$ are the minimal nonnegative increasing functions with $f-f(a)=f_1-f_2$. –  George Lowther Sep 29 '11 at 1:11
2015-07-29 13:53:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9610756039619446, "perplexity": 96.12911599943759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986444.39/warc/CC-MAIN-20150728002306-00260-ip-10-236-191-2.ec2.internal.warc.gz"}
http://quant.stackexchange.com/questions/3437/using-volatility-cycles-to-switch-between-trend-following-range-bound-trading?answertab=votes
# Using volatility cycles to switch between trend following & range bound trading? [closed] "...a low volatility environment is usually a good environment for trend following strategies; see Jez Liberty’s state of trend following report here..." http://quantumfinancier.wordpress.com/2010/08/27/regime-switching-system-using-volatility-forecast/ (By the way the volatility is defined as the "std. deviation of the price rate of change" ) I am not a professional quant with education on related subjects. Therefore I am not capable of testing the idea above thoroughly. Would you approve that trend following strategies(TFS) perform better under lower vol.? If yes then what would be a suitable method to exploit this idea? ==> switching to TF strategies when the volatility is below a trigger value (e.g. mov. average)? ==> switching to TF strategies when the volatility is turning down from an extreme reading (e.g. going back under the upper bollinger band)? ==> none? - -1 As always, this depends on the specific trend following system. Do your own testing. There is no substitute. –  Tal Fishman May 4 '12 at 13:43
2013-12-11 16:44:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164868593215942, "perplexity": 3753.0884868361213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164039245/warc/CC-MAIN-20131204133359-00006-ip-10-33-133-15.ec2.internal.warc.gz"}
http://nrassignmentxplz.du-opfer.info/how-to-write-a-batch-file.html
# How to write a batch file For example, say you wanted to create a batch file that produces a directory listing of a folder, and puts that listing into a file named dirlisttxt. You can write a batch file that runs one or more report scripts, and includes operating system commands see your operating system instructions to learn the . For example, netmailbot is invoked directly on the command line (c:\) according to the batch files are special files, often called scripts, that allow you to run. You can write all your r commands in a file such as rscriptr, then how can i load packages from an r script using r cmd batch via the command prompt. Introduction in this article, i'll show you how to write a simple batch file you'll learn the basics of what batch files can do and how to write them. Write this batch file in the following exercises remember to: •use comments at the start of the batch according to the set standard for header comments. When a batch file is run, cmd reads the file and executes its commands to open your example: you know the drill, copy into notepad and save as a bat file. To write a bash script , start the file with a shebang or #/bin/bash #/bin/bash batch file equivalent in linux is shell script (sh) you can use. That batch file works great if you want the same applications to always open at once for example, you have a book report due and you need to research the. How to write a batch file this wikihow teaches you how to write and save a basic batch file on a windows computer a batch file contains a. Before we can create a batch file, we have to be familiar with writing cmd commands we need to be familiar of some basic windows cmd. A when you call a batch file, you can enter data after the command that the batch file refers to as %1, %2, etc for example, in the batch file hellobat, the. Windows batch file programming/book and disk write a customer review get by without the cd, but realize you won't be able to do some of his examples. In windows, the batch file is a file that stores commands in a serial order command line interpreter takes the file as an input and executes in the same order. A batch file contains a series of dos commands, and is commonly written once you have finished writing you batch file, the command should. Here's an example of a simple batch file that runs the two scripts above to make this batch file, you could put the text below inside an empty notepad file and. How can you create a batch file download of batch file examples a batch file is a program which contains ms-dos commands each command used in the. I do hope that this is the appropriate forum for this question i have a few batch files i used in microsoft windows that helped me to do ordinary. Do you know how to use the command prompt if you do, you can write a batch file in its simplest form, a batch file (or batch script) is a list of. ## How to write a batch file Then, in this tutorial, i'll show you the steps to create a batch file to run your python script before we dive into an example, here is the batch file. Batch files in windows 10 can help to ease your efforts example, i'm using a random code as an example, and executing it with my batch. I was able to write a batch file to shutdown this is the code in the batch file c:\ windows\system32\shutdown -s -f -t 000 i've done a lot of google. We can turn on or turn off echo at any point in a batch file for example, you may want echo to be on for certain commands in the batch file, and then you may. How to make a batch file in ms-dos, windows command line, and in windows with information on what to write in the batch file and how to. You might have done it many times, there are batch files in which you need to make a selection to continue execution for example, it may ask you yes or no in . In batch mode you can supply a file with a list of commands that you want mothur to run this would for example the batch file could look like. For example, to copy c:\myfiles\xyztxt to c:\temp, type xcopy c:\myfiles\xyztxt c:\temp notepad will save the batch file to your desktop. In this article, i'll show you how to write a simple batch file you'll learn the basics of what batch files can do and how to write them yourself i'll also provide you. Bat has some weaknesses: for example, it will not alert you if one of the commands failed for some reason the sections below show how to make bat files. How to write a batch file Rated 5/5 based on 42 review 2018.
2018-10-19 05:11:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055953979492188, "perplexity": 1217.9911373522493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512323.79/warc/CC-MAIN-20181019041222-20181019062722-00234.warc.gz"}
https://physics.stackexchange.com/questions/19954/the-sun-can-make-stuff-hotter-than-itself
# The Sun Can Make Stuff Hotter Than Itself How is it possible that the area around the sun is about 200 times hotter than its surface? This question is #6 from http://www.cracked.com/article_19668_6-scientific-discoveries-that-laugh-in-face-physics.html: We intuitively understand the direction that energy travels -- from the thing with energy to the thing with less energy. That's why the second law of thermodynamics is among the first things you learn in science class that makes you say, "Well, I could have told you that." If you're too hot, you move away from the campfire, not toward it. You don't need science to tell you that heat energy travels from the hot thing to the less-hot thing. Well, everywhere in the universe except the sun. There's a discrepancy between what science says should happen and what the sun actually does, and it's known as the sun's coronal heating problem. Essentially, when heat leaves the sun, the laws of thermodynamics just totally break down for a few hundred miles, and nobody can quite figure out why. The facts are pretty straightforward; the sun's surface sits comfortably at a blazing temperature of roughly 5,500 degrees Celsius. No problem there. However, as the heat travels from the sun's surface to the layer a few hundred miles away from its surface (known as the sun's corona), it rises to a temperature of 1,000,000 degrees Celsius. Which is 995,000 degrees Celsius, or 1,791,000 degrees Fahrenheit, or 1 billion gigawatts per 1/4 gigabyte jiggawatt hour (metric) hotter than it has any right to be. He's a loose cannon! The heat source (the giant ball of nuclear explosions and plasma) should be the hottest thing, not the empty vacuum of space around it. This is the only instance in the known universe where the thing doing the heating is actually cooler than the thing it's heating. And it's been plaguing solar physicists worldwide since they discovered the little disagreement reality has with our universe in 1939. How is it possible that the area around the sun is about 200 times hotter than its surface? • The only instance in the known universe? How about a microwave oven? – hdhondt Apr 29 '18 at 10:22 • @hdhondt Those don't exist. – immibis Apr 30 '18 at 0:23 • "the only instance in the known universe where the thing doing the heating is actually cooler than the thing it's heating" doesn't make any sense. If I rub cold sandpaper on cold wood, they both heat up. Or a million other examples. – DaveInCaz Sep 27 '18 at 11:42 The "heating things hotter than itself" only applies to thermal radiation. I'm not an expert, but the corona could be heated by absorption of high energy particles from within the sun, from it's magnetic field, or from a variety of non-thermal sources. edit: Wikipedia has a description of the current theories http://en.wikipedia.org/wiki/Corona#Physics_of_the_corona There is a nice article about this on the Scientific American web site. The simple answer is that we don't know why the corona is so hot though there are a few well established and plausible suggestions. Some introduction: The coronal temperature is not just a bit higher than the Sun's surface, it's getting on for a thousand times hotter - a few million kelvin as opposed to around $5,700$ K at the surface of the Sun. But what we mean by this is the atoms/ions in the corona have very high velocities. The corona is pretty tenuous. The pressure in the corona is around a million times lower than the pressure at the Earth's surface, so the corona is pretty close to a vacuum. Rather than thinking of the corona as a hot gas we should think of it as a region in which a low density of atoms/ions have been accelerated to high velocities. The question is then what is doing the accelerating? An obvious suggestion is that the Sun's magnetic field is responsible for transferring energy to the corona. The magnetic field is subject to various forms of turbulence and since the ions in the corona interact strongly with the magnetic field it is quite plausible that turbulence in the field is accelerating the ions. The other suggestion is that turbulence at the surface generates sound waves, in effect the sort of shock waves created by explosions on Earth, and it's these shock waves that transfer energy to the particles in the corona. But right now we don't have the experimental data to tell which, if either, of these mechanisms is correct. Perhaps the outer layers of sun are transparent to thermal x-rays generated in the very hot core of the sun. • I don't think this holds together...the bulk of the star is a plasma, so it should be pretty opaque in all bands (due to Compton scattering off of the free electrons). – dmckee Jan 25 '12 at 0:15 I know this can be silly but is it not the same in case of a simple candle, where we can pass our finger through the bottom or center portion of the candle but not the upper part of the flame which is hotter. Can it not be the case with the sun. Could be that the ejected plasma combined with intense magnetic field creates higher plasma velocities, pehaps momentarily greater than lightspeed, creating "lightless" sunspots while violently altering the electro-magnetic frequencies/pulses, resulting in a superheated corona. -AMFM-
2019-06-26 16:50:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5175550580024719, "perplexity": 573.2064667890455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00239.warc.gz"}
https://tex.stackexchange.com/questions/416490/balancing-opacities-between-fill-and-draw
# Balancing opacities between fill and draw [duplicate] Probably my Title is not so informative. My problem is that I try to "draw" hyperedges on some vertices with the following code: \usepackage{tikz} \tikzstyle{vertex} = [fill, shape=circle, opacity=1, node distance=80pt] \tikzstyle{hyperedge} = [fill, opacity=1, cap=round, join=round, line width=60pt] \tikzstyle{elabel} = [fill, shape=circle, node distance=30pt] \pgfdeclarelayer{background} \pgfsetlayers{background,main} \begin{document} \begin{tikzpicture} \node[vertex,label=above left:$v_1$] (v1) {}; \node[vertex,right of=v1,label=above right:$v_2$] (v2) {}; \node[vertex,below of=v1,label=below left:$v_3$] (v3) {}; \node[vertex,right of=v3,label=below right:$v_4$] (v4) {}; \begin{pgfonlayer}{background} \draw[hyperedge, color=yellow] (v1.center)--(v2.center)--(v3.center)--cycle; \draw[hyperedge, color=pink, line width=45pt] (v2.center)--(v3.center)--(v4.center)--cycle; \end{pgfonlayer} \node[elabel,color=yellow,label=right:$C_1$] (e1) at (-3,0) {}; \node[elabel,below of=e1,color=pink,label=right:$C_2$] (e2) {}; \end{tikzpicture} \end{document} This produces the following: It is not bad, but the intersection is not so visible. If I set opacity of the hyperedges to 0.5: \tikzstyle{hyperedge} = [fill, opacity=0.5, cap=round, join=round, line width=60pt] I get: The intersection is more visible here, but since the "fill" and the "lines" of the cycles also intersect, the affected parts become more opaque(?). Is there a workaround for this situation? Maybe to draw the lines only on one side of the cycle avoiding intersection with the fill, but is it possible? ## marked as duplicate by user36296, Phelype Oleinik, dexteritas, Circumscribe, TroyJan 11 at 14:42 • Welcome to the site. Look at the blend modes in the PGF/TikZ manual. – percusse Feb 21 '18 at 16:59 • Thank you @percusse! There are many blend modes as I see. I'm not sure which one would fit the color1+color1=color1 criteria with the same opacity, but I will try to find one. – Mathiassa Feb 21 '18 at 17:09 Dirty hack: If the lines of your particular shape are a tiny bit broader, you don't need to worry about any fill colour as the lines fill the whole shape: \documentclass{standalone} \usepackage{tikz} \tikzstyle{vertex} = [fill, shape=circle, opacity=1, node distance=80pt] \tikzstyle{hyperedgeline} = [opacity=0.5, cap=round, join=round,line width=60pt] \tikzstyle{elabel} = [fill, shape=circle, node distance=30pt] \pgfdeclarelayer{background} \pgfsetlayers{background,main} \begin{document} \begin{tikzpicture} \node[vertex,label=above left:$v_1$] (v1) {}; \node[vertex,right of=v1,label=above right:$v_2$] (v2) {}; \node[vertex,below of=v1,label=below left:$v_3$] (v3) {}; \node[vertex,right of=v3,label=below right:$v_4$] (v4) {}; \begin{pgfonlayer}{background} \draw[hyperedgeline, color=yellow] (v1.center)--(v2.center)--(v3.center)--cycle; \draw[hyperedgeline, color=pink, line width=47pt] (v2.center)--(v3.center)--(v4.center)--cycle; \end{pgfonlayer} \node[elabel,color=yellow,label=right:$C_1$] (e1) at (-3,0) {}; \node[elabel,below of=e1,color=pink,label=right:$C_2$] (e2) {}; \end{tikzpicture} \end{document} • @Mathiassa And I love your clean and correct answer :) – user36296 Feb 21 '18 at 20:57 I think to proper solution is this. I found it in the manual: Remove opacity from the "hyperedge" style: \tikzstyle{hyperedge} = [fill, cap=round, join=round, line width=60pt] Put every edge in its own transparency group: \begin{scope}[transparency group, opacity=0.5] \draw[hyperedge, color=yellow] (v1.center)--(v2.center)--(v3.center)--cycle; \end{scope} \begin{scope}[transparency group, opacity=0.5] \draw[hyperedge, color=pink, line width=45pt] (v2.center)--(v3.center)--(v4.center)--cycle; \end{scope} • We are the ones who should thank you! Thanks for sharing your answer with the community. Together we are strong! – Paulo Cereda Feb 21 '18 at 18:55 Use fill opacity. \documentclass{article} \usepackage{tikz} \tikzstyle{vertex} = [fill, shape=circle, opacity=1, node distance=80pt] \tikzstyle{hyperedge} = [opacity=0.5,fill opacity=1, cap=round, join=round, line width=60pt] \tikzstyle{elabel} = [fill, shape=circle, node distance=30pt] \pgfdeclarelayer{background} \pgfsetlayers{background,main} \begin{document} \begin{tikzpicture} \node[vertex,label=above left:$v_1$] (v1) {}; \node[vertex,right of=v1,label=above right:$v_2$] (v2) {}; \node[vertex,below of=v1,label=below left:$v_3$] (v3) {}; \node[vertex,right of=v3,label=below right:$v_4$] (v4) {}; \begin{pgfonlayer}{background} \draw[hyperedge, color=yellow] (v1.center)--(v2.center)--(v3.center)--cycle; \draw[hyperedge, color=pink, line width=45pt] (v2.center)--(v3.center)--(v4.center)--cycle; \end{pgfonlayer} \node[elabel,color=yellow,label=right:$C_1$] (e1) at (-3,0) {}; \node[elabel,below of=e1,color=pink,label=right:$C_2$] (e2) {}; \end{tikzpicture} \end{document} • +1, however there is a tiny yellow triangle in the middle of the pink shape :) – user36296 Feb 21 '18 at 17:22 • @samcarter Thanks. I'll (try to) fix it later... (need to run now) – marmot Feb 21 '18 at 17:27 • @samcarter I am actually really surprised that it is there and impressed by the OP. I am also surprised that TikZ builds the shapes like this, meaning that the transparency group thing should not be necessary. But thanks to the OP I learned something. ;-) – marmot Feb 21 '18 at 20:59 • @marmot I guess the tiny triangle is there because fill is no longer in the hyperedge style, so you only draw the lines, but you are not filling the shape (just like me :) and the triangle is the small area which is not covered by the line. (I totally agree that this behaviour of tikz is strange) – user36296 Feb 21 '18 at 21:12 • @samcarter Yes, I agree with everything you wrote. It's just really strange that TikZ sort of knows what it is supposed to do when one uses these transparency groups, and otherwise not. Why isn't that the default behavior? (Or is it a triangular duck? It has the same color, after all... ;-) – marmot Feb 21 '18 at 21:35
2019-06-26 18:25:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833023190498352, "perplexity": 7189.936019219423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00500.warc.gz"}
https://robert-williams.org/posts/page/2/
From epistemic to metaphysical charity I’ll start by recapping a little about epistemic charity. The picture was that we can get some knowledge of other minds from reliable criterion-based rules. We become aware of the behaviour-and-circumstances B of an agent, and form the belief that they are in S, in virtue of a B-to-S rule we have acquired through nature or nuture. But this leaves a lot of what we think we ordinarily know about other minds unexplained (mental states that aren’t plausibly associated with specific criteria). Epistemic charity is a topic-specific rule (a holistic one) which takes us from the evidence acquired e.g. through criterion-based rules like the above, to belief and desire ascriptions. The case for some topic-specific rule will have to be made by pointing to problems with topic-neutral rules that might be thought to do the job (like IBE). Once that negative case is made we can haggle about the character of the subject-specific rule in question. If we want to make the case that belief-attributions are warranted in the Plantingan sense, the central question will be whether (in worlds like our own, in application to the usual targets, and in normal circumstances) the rule of interpreting others via the charitable instruction to “maximize rationality” is a reliable one. That’s surely a contingent matter, but it might be true. But we shouldn’t assume that just because a rule like this is reliable in application to humans, that we can similarly extend it to other entities—animals and organizations and future general AI. There’s also the option of defending epistemic charity as the way we ought to interpret others, without saying it leads to beliefs that are warranted in Plantinga’s sense. One way of doing that would be to emphasize and build on some of the pro-social aspects of charity. The idea is that we maximize our personal and collective interests by cooperating, and defaulting to charitable interpretation promotes cooperation. One could imagine charity being not very truth-conducive, and these points about its pragmatic benefits obtaining—especially if we each take advantage of others’ tendancy to charitably interpret us by hiding our flaws as best we can. Now, if we let this override clear evidence of stupidity or malignity, then the beneficial pro-social effects might be outweighed by constant disappointment as people fail to meet our confident expectations. So this may work best as a tie-breaking mechanism, where we maximize individual and collective interest by being as pro-social as possible under constraints of respecting clear evidence. I think the strongest normative defence of epistemic charity will have to mix and match a bit. It maybe that some aspects of charitable interpretation (e.g. restricting the search space to “theories” of other minds of a certain style, e.g. broadly structurally rational) look tempting targets to defend as reliable, in application to creatures like us. But as we give the principles of interpretation-selection greater and greater optimism bias, they get harder to defend as reliable, and it’s more tempting to reach for a pragmatic defence. All this was about epistemic charity, and is discussed in the context of flesh and blood creatures forming beliefs about other minds. There’s a different context in which principles of charity get discussed, and that’s in the metaphysics of belief and desire. The job in that case is to take a certain range of ground-floor facts about how an agent is disposed to act and the perceptual information available to them (and perhaps their feelings and emotions too) and then selecting the most reason-responsive interpretation of all those base-level facts. The following is then proposed as a real definition of what it is for an agent to believe that p or desire that q: it is for that belief or desire to be part of the selected interpretation. Metaphysical charity says what it is for someone to believe or desire something in the first place, doesn’t make reference to any flesh and blood interpreter, and a fortiori doesn’t have its base facts confined to those to which flesh and blood interpreters have access. But the notable thing is that (at this level of abstract definition) it looks like principles of epistemic and metaphysical charity can be paired. Epistemic charity describes, inter alia, a function from a bunch of information about acts/intentions and perceivings to overall interpretations (or sets of interpretations, or credence distributions over sets of interpretations). It looks like you can generate a paired principle of metaphysical charity out of this by applying that function to a particular rich starting set: the totality of (actual and counterfactual) base truths about the intentions/perceivings of the target. (We’ll come back to slippage between the two on the way). It’s no surprise, then, that advocates of metaphysical charity have often framed the theory in terms of what an “ideal interpreter” would judge. We imagine a super-human agent whose “evidence base” were the totality of base facts about our target, and ask what interpretation (or set of interpretations, or credences over sets of interpretations) they would come up with. An ideal interpeter implementing a maximize-rationality priciple of epistemic charity would pick out the interpretation which maximizes rationality with respect to the total base facts, which is exactly what metaphysical charity selected as the belief-and-desire fixing theory. (What happens if the ideal interpreter would deliver a set of interpretations, rather than a single? That’d correspond to a tweak on metaphysical charity where agreement among all selected interpretations suffices for determinate truth. What if it delivers a credence distribution over such a set? That’d correspond to a second tweak, where the degree of truth is fixed by the ideal interpreters’ credence). You could derive metaphysical charity from epistemic charity by adding (some refinement of) an ideal-interpreter bridge principle: saying that what it is for an agent to believe that p/desire that q is for it to be the case that an ideal interpreter, with awareness of all and only a certain range of base facts, would attribute those attitudes to them. Granted this, and also the constraint that they any interpreter ought to conform to epistemic charity, anything we say about epistemic charity will induce a corresponding metaphysical charity. The reverse does not hold. It is perfectly consistent to endorse metaphysical charity, but think that epistemic charity is all wrong. But with this ideal-interpreter bridge set up, whatever we say about epistemic charity will carry direct implications for the metaphysics of mental content. Now metaphysical charity relates to the reliability of epistemic charity in one very limited respect. Given metaphysical charity, epistemic charity is bound to be reliable in one very restricted range of cases: a hypothetical case where a flesh and blood interpreter has total relevant information about the base facts, and so exactly replicates the ideal interpreter counterfactuals about whom fixes the relevant facts. Now, these cases are pure fiction–they do not arise in the actual world. And they cannot be straightforwardly used as the basis for a more general reliability principle. Here’s a recipe that illustrates this, that I owe to Ed Elliott. Suppose that our total information about x is Z, which leaves open the two total patterns of perceivings/intendings A and B. Ideal interpretation applied to A delivers interpretation 1, the same applied to B delivers interpretation 2. 1 is much more favourable than 2. Epistemic charity applied to limited information Z tells us to attribute 1. But there’s nothing in the ideal interpreter/metaphysical charity picture that tells us A/1 is more likely to come about than B/2. On the other hand, consider the search-space restrictions—say to interpretations that make a creature rational, or rational-enough. If we have restricted the search space in this way for any interpreter, then we have an ex ante guarantee that whatever the ideal interpreter comes up with, it’ll be an interpretation within their search space, i.e. one that makes the target rational, or rational-enough. So constraints on the interpretive process will be self-vindicating, if we add metaphysical charity/ideal interpeter bridges to the package, though as we saw, maximizing aspects of the methodology will not be. I think it’s very tempting for fans of epistemic charity to endorse metaphysical charity. It’s not at all clear to me whether fans of metaphysical charity should taken on the burden of defending epistemic charity. If they do, then the key question will be the normative status of any maximizing principles they embrace as part of the characterization of charity. Let me just finish by emphasizing both the flexibility and the limits to this package deal. The flexibility comes because you can understand “maximize reasonableness within search-space X” or indeed “maximize G-ness within search-space X” in all sorts of ways, and the bulk of the above discussion will go through. That means we can approach epistemic charity by fine-tuning for the maximization principle that allows us the best chance of normative success. On the other hand, there are some approaches that are very difficult to square with metaphysical charity or ideal interpreters. I mentioned in the previous post a “projection” or “maximize similarity to one’s own psychology” principle, which has considerable prima facie attraction—after all, the idea that humans have quite similar psychologies looks like a decent potential starting point. It’ll be complex translating that into a principle of metaphysical charity. What psychology would the ideal interpreter have, similarity of which must be maximized? Well, perhaps we can make this work: perhaps the ideal interpreter, being ideal, would be omnsicient and saintly? If so, perhaps this form of epistemic charity would predict a kind of knowledge-and-morality-maximization principle in the metaphysical limit. So this is a phenomenon worth noting: metaphysical knowledge-and-morality maximization could potentially be derived either from epistemic similarity-maximization or epistemic knowledge-and-morality maximization. The normative defences these epistemologies of other minds call for would be very different. Epistemic charity as proper function. Our beliefs about the specific beliefs and desires of others are not formed directly on the basis of manifest behaviour or circumstances, simply because in general individual beliefs and desires are not paired up in a one-to-one fashion with specific behaviour/circumstances (that is what I took away from the circularity objection to behaviourism). And with Plantinga, let’s set aside the suggestion we base such attributions in an inference by IBE. As discussed in the last post, the Plantingan complaint is that IBE is only somewhat reliable, and (on a Plantingan theory) this means it could only warrant a rather tenuous, unfirm belief that the explanation is right. (Probably I should come back to that criticism—it seems important to Plantinga’s case that he thinks there would be close competitors to the other-minds hypothesis, if we were to construe attributions as the result of IBE, as the case for the comparative lack of reliability of IBE is very much stronger when we’re considering picking one out of a bunch of close competitor theories, than when e.g. there’s one candidate explanation that stands out a mile from the field, particularly when we remember we are interested only in reliability in normal circumstances. But surely there are some scientific beliefs that we initially form tentatively by an IBE which we end up believing very firmly, when the explanation they are a part of has survived a long process of testing and confirmation. So this definitely could do with more examination, to see if Plantinga’s charge stands up. It seems to me that Wright’s notion of wide vs. narrow cognitive roles here might be helpful—the thought being that physicalistic explanatory hypothesis we might arrive at by IBE tend to have multiple manifestations and so admit of testing and confirmation in ways that are not just “more of the same” (think: Brownian motion vs statistical mechanical phenomenon as distinct manifestations of an atomic theory of matter.) What I’m now going to examine is a candidate solution to the second problem of other minds that can sit within a broadly Plantingan framework. Just as with criterion-based inferential rules that on the Plantingan account underpin ascriptions of pain, intentions, perceivings, and the like, the idea will be that we have special purpose belief forming mechanisms that generate (relatively firm) ascriptions of belief and desire. Unlike the IBE model, we’re not trying to subsume the belief formations within some general purpose topic-neutral belief forming mechanism, so it won’t be vulnerable in the way IBE was. What is the special purpose belief forming mechanism? It’s a famous one: charitable interpretation. The rough idea is that one attributes the most favourable among the available overall interpretations that fits with the data you have about that person. In this case, the “data” may be all those specific criterion-based ascriptions—so stuff like what the person sees, how they are intentionally acting, what they feel, and so on. In a more full-blown version, we would have to factor in other factors (e.g. the beliefs they express through language and other symbolic acts; the influence of inductive generalizations made on the basis of previous interpretations, etc). What is it for an interpretation to be “more favourable” than another? And what is it for a belief-desire interpretation to fit with a set of perceivings, intentions, feelings etc? For concreteness, I’ll take the latter to be fleshed out in terms of rational coherence between perceptual input and belief change and means-end coherence of beliefs and desires with intentions, and the like—structural rationality constraints playing the role that in IBE, formal consistency might play. And I’ll take favourability to be cashed out as the subject being represented as favourably as is possible—believing as they ought, acting on good reasons, etc. Now, if this were to fit within the Plantingan project, it has to be the case that there is a component of our cognitive system that goes for charitable interpretation and issues in (relatively firm) ascriptions of mental states to others. Is that even initially plausible? We all have experience of being interpreted uncharitably, and complaining about it. We all know, if we’re honest, that we are inclined to regard some people as stupid or malign, including in cases where there’s no very good direct evidence for that. I want to make two initial points here. The first is that we need to factor in some of the factors mentioned earlier in order to fairly evaluate the hypothesis here. Particularly relevant will be inductive generalizations from previous experience. If your experience is that everyone you’ve met from class 22B is a bully who wants to cause you pain, you might reasonably not be that charitable to the next person you meet from class 22B, even if the evidence about that person directly is thin on the ground. I’d expect the full-dress version of charity to instruct us to form the most favourable attributions consistent with those inductive generalizations we reasonably hold onto (clearly, there’ll be some nuance in spelling this out, since we will want to allow that sufficient acquaintance with a person allows us to start thinking of them as a counterexample to generalizations we have previously held). For similar reasons, an instruction to be as charitable as possible won’t tell you to assume that every stranger you meet is saintly and omnisicient, and merely behaving in ways that do not manifest this out of a concern not to embarrass you (or some such reason). For starters, it’s somewhat hard to think of decent ideas why omniscient saints would act as everyday people do (just ask those grappling with the problem of evil how easy this is), and for seconds, applied to those people with whom we have most interaction, such hypotheses wouldn’t stand much scrutiny. We have decent inductive grounds for thinking, generically people’s motives and information lie within the typical human band. What charity tells us to do is pick the most favourable interpretation consistent with this kind of evidence. (Notice that even if these inductive generalizations eventually take most of the strain in giving a default interpretation of another, charity is still epistemically involved insofar as (i) charity was involved in the interpretations which form the base from which the inductive generalization was formed; and (ii) insofar as are called on-the-fly to modify our inductively-grounded attributions when someone does something that doesn’t fit with them). Further, the hypothesis that we have a belief-attributing disposition with charity as its centrepiece is quite consistent with this being defeasible, and quite often defeated. For example, here’s one way human psychology might be. We are inclined by default to be charitable in interpreting others, but we are also set up to be sensitive to potential threats from people we don’t know. Human psychology incorporates this threats-detection system by giving us a propensity to form negative stereotypes of outgroups on the basis of beliefs about bad behaviour or attitudes of salient members of those outgroups. So when these negative stereotypes are triggered, this overrides our underlying charitable disposition with some uncharitable default assumptions encoded in the stereotype. (In Plantingan terms, negative stereotype formation would not be a part of our cognitive structure aimed at truth, but rather one aimed at pragmatic virtues, such as threat-avoidance). Only where the negative stereotypes are absence would we then expect to find the underlying signal of charitable interpretation. So again: is it even initially plausible that we actually engage in charitable interpretation? The points above suggest we should certainly not test this against our practice in relation to members of outgroups that may be negatively stereotyped. So we might think about this in application to friends and family. As well as being in-groups rather than out-groups, these are also cases where we have a lot of direct (criterion-based) evidence about their perceivings, intendings, feelings over time, so cases where we would expect to be less reliant on inductive generalizations and the like. I think in those cases charity is at least an initially plausible candidate as a principle constraining our interpretative practice. As some independent evidence of this, we might note Sarah Stroud’s account of the normative commitments constitutive of being a friend, which includes an epistemic bias towards charitable interpretation. Now, her theory of this says that it is the special normatively significant relation of friendship that places an obligation of charity upon us, and that is not my conjecture. But insofar as she is right about the phenomenology of friendship as including an inclination to charity, then I think this supports the idea that the idea that charitable interpretation is at least one of our modes of belief attribution. It’s not the cleanest case—because the very presence of the friendship relation is a potential confound—but I think it’s enough to motivate exploring the hypothesis. So suppose that human psychology does work roughly along the lines just sketched, with charitable-ascription the default, albeit defeasible and overridable. If this is to issue in warranted ascriptions within a Plantigian epistemology, then not only does charitable interpretation have to be a properly-functioning part of our cognitive system, but it would have to be a part that’s aimed at truth, and which reliably issues in true beliefs. Furthermore, it’d have to very reliably issue in true beliefs, if it is, by Plantingan lights, to warrant our firm beliefs about the mental lives of others. Both aspects might raise eyebrows. There are lots of things one could say in praise of charitable interpretation that are fundamentally pragmatic in character. Assuming the best of others is a pro-social thing to do. Everyone is the hero in their own story, and they like to learn that they are heroes in other people’s stories too. So expressing charitable interpretations of others is likely to strengthen relationships, enable cooperation, and prompt reciprocal charity. All that is good stuff! It might be built up into an ecological rationale for building charitable interpretation into one’s dealing with in-group members (more generally, positive stereotypes), just as threat-avoidance might motivate building cynical interpretation into one’s dealing with out-group members (more generally, negative stereotypes). But if we emphasize this kind of benefit of charitable interpretation, we are building a case for a belief forming mechanism that aims at sociability, not one aimed at truth. (We’re also undercutting the idea that charity is a default that is overridden by e.g. negative stereotypes–it suggests instead different stances in interpretation are tied to the different relationships). It’s easiest to make the case that an interpretative disposition that is charitable is aimed at truth if we can make the case that it is reliable (in normal circumstances). What do we make of that? Again, we shouldn’t overstate what it takes for charity to be reliable. We don’t have to defend the view that it’s reliable to assume that strangers are saints, since charity doesn’t tell us to do that (it wouldn’t get make it to the starting blocks of plausibility if it did). The key question will be whether charitable interpretation will be a reliable way of interpreting those with whom we have long and detailed acquaintance (so that the data that dominates is local to them, rather than inductive generalizations). The question is something like the following: are humans generally such that, among the various candidate interpretations that are structurally rationally compatible with their actions, perceptions, feelings (of the kind that friends and family would be aware of) the most favourable is the truest? Posed that way, that’s surely a contingent issue—and something to which empirical work would be relevant. I’m not going to answer it here! But what I want to say is that if this is a reliable procedure in the constrained circumstances envisaged, then the prospects start to look good for accommodating charity within a Plantingan setup. Now, even if charity is reliable, there remains the threat it won’t be reliable enough to vindicate the firmness of the confidence I have that family and strangers on the street believe that the sun will rise tomorrow, and so forth. (This is to avoid the analogue of the problem Plantinga poses for inference to the best explanation). This will guide the formulation of exactly how we characterize charity—it better not just say that we endorse the most charitable interpretation that fits the relevant data, with the firmness of that belief unspecified, but also says something about the firmness of such beliefs. For example, it could be that charity tells us to distribute our credence over interpretations in a way that respects how well they rationalize the evidence available so far. In that case, we’d predict that beliefs and desires common to almost all favourable candidates are ascribed much more firmly than beliefs and desires which are part of the very best interpretation, but not on nearby candidates. And we’d make the case that e.g. a belief that the sun will rise tomorrow is going to be part of almost all such candidates. (If we make this move, we need to allow the friend of topic-neutral IBE to make a similar one. Plantinga would presumably say that many of the candidates to be “best explanations” of data, when judged on topic neutral grounds, are essentially sceptical scenarios with respect to other minds. So I think we can see how this response could work here, but not in the topic-neutral IBE setting). Three notes before I finish. The first is that even if charity as I categorized it (as a kind of justification-and-reason maximizing principle) isn’t vindicated as a special purpose interpretive principle, it illustrates the way that interpretive principles with very substantial content could play an epistemological role in solving the other problem of other minds. For example, a mirror-image principle would be to pick the most cynical interpretation. Among a creatures who are naturally malign dissemblers, that may reliable, and so a principle of cynicism vindicated on exactly parallel lines. And if in fact all humans are pretty similar in their final desires and general beliefs, then a principle of projection, where one by default assumes that other creatures have the beliefs and desires that you, the interpretor, have yourself, might be reliable in the same way. And so that too could be given a backing (Note that this would not count as a topic-neutral inference by analogy. It would be to a topic-specific inference concerned with psychological attribution alone, and so which could in principle issue in much firmer beliefs than a general purpose mechanism which has to avoid false positives in other areas). Second, the role for charity I have set out above is very different from the way that it’s handled by e.g. Davidson and the Davidsonians (in those moments where they are using it as a epistemological principle, rather than something confined to the metaphysics of meaning). This kind of principle is contingent, and though we could insist that it is somehow built into the very concept of “belief”, that would just be to make the concept of belief somewhat parochial, in ways that Davidsonians would not like. The third thing I want to point out is that if we think of epistemic charity as grounded in the kind of considerations given above, we should be very wary about analogical extensions of interpretative practices to creatures other than humans. For it could be that epistemic charity is reliable when restricted to people, but utterly unreliable when applied—for example–to Klingons. And if that’s so, then extending our usual interpretative practice to a “new normal” involving Klingons won’t give us warranted beliefs at all. More realistically, there’s often a temptation to extend belief and desire attrributions to non-human agents such as organizations, and perhaps, increasingly, AI systems. But if the reliance on charity is warranted only because of something about the nature of the original and paradigmatic targets of interpretation (humans mainly, and maybe some other naturally occurring entities such as animals and naturally formed groups) that makes it reliable, then it’ll continue to be warranted in application to these new entities if they have a nature which also makes it reliable. It’s perfectly possible that the incentive structures of actually existing complex organizations are just not such that we should “assume the best” of them, as we perhaps should of real people. I don’t take a stand on this—but I do flag it up as something that needs seperate evaluation. Plantinga on the original problem of other minds and IBE The other problem of other minds was the following. Grant that we have justification for ascribing various “manifested” mental states to others. Specifically, we have a story about how we are justified in ascribing at least the following: feelings like pain, emotions like joy or fear, perceivings, intendings. Many of these have intentional contents, and we suppose that our story shows how we can be justified (in the right circumstances) in ascribing states of these types for a decent range of contents, though perhaps not all. But such a story, we are assuming, is not yet an epistemic vindication of the total mental states we ascribe to others. Specifically, we ascribe to each other general beliefs about matters beyond the here and now, final desires for rather abstractly described states of affairs (though these two examples are presumably just the tip of the iceberg). So the other problem of other minds is that we need to explain how our justification for ascribing feelings, perceivings, intendings, emotions, extends to justification for all these other states. The epistemic puzzle is characterized negatively: they are the mental states for which a solution to the original problem of other minds does not apply. And in approaching the other problem of other minds, ascriptions of mental states that are covered by whatever solution we have to the original problem of other minds will be a resource for us to wield. So before going on the second problem, I want to fill in one solution to the first problem so we can see its scope and limits. In Warrant and Proper Function, Plantinga addresses the epistemic problem of other minds. In the first section of chapter 4, he casts the net widely, as a problem of accounting for the “warrant” of beliefs ascribing everything from “being appeared to redly” to “believing that Moscow, Idaho, is samller than its Russian namesake”. So the official remit covers both the original problem and the other problem of other minds, in my terms. But by the time he gets to section D, where his own view is presented, the goalposts have been shifted (mostly in the course of discussing Wittgensteinian “criteria”. By that point, the point is made in terms of a pair of a mental state S and a description of associated criteria, “behaviour-and-circumstances” B that constitute good but defeasible evidence for the mental states in question. After discussing this, Plantinga comments “So far [the Wittgensteinians] seem to be quite correct; there are criteria or something like them”. And so the question that Plantinga sets himself is to explain how an inference from B to S can leave us warranted in ascribing S, given that he has argued against backing it up with epistemologies based on analogy, abduction, or whatever the Wittgensteinians said. Plantinga’s account is the following. First, “a human being whose appropriate faculties are functioning properly and who is aware of B will find herself making the S ascription (in the absence of defeaters)… it is part of the human design-plan to make these ascriptions in these circumstances… with very considerable firmness”. And so “if the part of the design plan governing these processes is successfully aimed at truth, then ascriptions of mental states to others will often have high warrant for us; if they are also true, they will constitute knowledge”. Here Plantinga is simply applying his distinctive “proper function” reliabilism. In short: for a belief to be warranted (=such that if its content is true, then it is known) is for it to be produced/sustained by a properly functioning part of a belief-forming system which has the aim of producing true beliefs, and which (across its designed-for circumstances) reliably succeeds in that aim. Plantinga’s claims that our beliefs about other minds are warranted rely on various contingencies obtaining (on this he is very explicit). It will have to be that the others we encounter are on occasion in mental states like S. It will have to be that B is reliably correlated with S. It will have to be that human minds exhibit certain functions, that they are functioning properly, and that we are in the circumstances they are designed for. The teleology of the inferential disposition involved will have to be right, and the inferential disposition (and its defeating conditions) will have to be set up so as to extract reliably true belief formation out of the reliable correlations between B and S. Any of that can go wrong; but Plantinga invites us to accept that in actual, ordinary cases it is all in place. The specific cases Plantinga discusses when defending the applicability of this proper function epistemology to the problem of other minds are those where our cognitive structure includes a defeasible inferential disposition taking us from awareness of behaviour-and-circumstances B to ascribing mental state S. That particular account has no application to any ascriptions of mental states S* that do not fit this bill: where there there is no correlation with specific behaviour and circumstances B or no inferential disposition reflecting that correlation (after all, to apply the Plantigan story, we need some “part” of our mental functioning which we can feed into the rest of the Plantingan story, e.g. evaluate whether that part of the overall system is aimed at truth). We can plausibly apply Plantinga’s account to pain-behaviour (pain); to someone tracking a red round object in their field (seeing a red round object); to someone whose arm goes up in a relaxed manner (raising their arm/intending to raise their arm). It applies, in other words, to the kind of “manifestable” mental states that in the last post I took to be in the scope of the other problem of other minds. But, again as mentioned there, it’s hard to fit general beliefs and final desires (not to mention long-term plans and highly specific emotions) into this model. If you tried to force them into the model, you’d have to identify specific behaviour-and-circumstantial “criteria” for attributing e.g. the belief that the sun will rise tomorrow to a person. But (setting aside linguistic behaviour, of which more in future posts) I say: there are no such criteria. Now, one might try to argue against me at this point, attempting to construct some highly conditional and complex disjunctive criteria of the circumstances in which it’d be appropriate to ascribe a belief that the sun will rise tomorrow, thinking through all the possible ways in which one might ascribe total belief-and-desire states which inter alia include this belief. But then I’ll point out that it would seem wild to assume that an inference with conditional and complex disjunctive antecedents will be in the relevant sense a “part” of our mental design. A criterial model is just the wrong model of belief and desire ascription, and I see little point in attempting to paper over that fact. (Let me note as an aside the following: there may well be behavioural criteria which leads us to classify a person as a believer or desirer, someone who possesses general beliefs and final desires which inform her actions. That is quite different from positing behavioural criteria for specific general beliefs and final desires. It’s the latter I’m doubtful of.) On the other hand, the Plantingan approach to the problem of other minds doesn’t have to be tied to the B-to-S inferences. Indeed, Plantinga says “Precisely how this works—just what our inborn belief-forming mechanisms here are like, precisely how they are modified by maturation and by experience and leanrign, precisely what role is played by nature as oppposed to nuture–these matters are not (fortunately enough) the objects of this study”. So he’s clearly open to generalizing the account beyond the criterion-based inferential model. But to leave things open at this point is more or less to simply assert that the other problem of other minds has a solution, without saying what that solution is. For example, you might think at this point that what’s going on is that we form beliefs about the manifest states of others on the basis of behavioural criteria, understood in the Plantigan way, and then engage in something like an inference to the best explanation in embedding these mentalistic “data” within a simple, strong overall theory of what the minds of others are like. One would then give a Plantigan “proper function” defence of inferring to (what is in fact) the best explanation of one’s data as a defeasible belief-forming method producing warranted beliefs. It would have to be a belief-forming method that was the proper functioning of a part of our cognitive systems, a part aimed at truth, a part that reliably secures truth in the designed-for circumstances, etc. As it happens, Plantinga himself argues that inference to the best explanation will be unsuccesful in solving the the problem of other minds. Let’s take a look at them. The main claim is that “A child’s belief, with respect to his mother, that she has thoughts and feelings, is no more a scientific hypothesis, for him, than the belief that he himself has arms or legs; in each case we come to the belief in question in the basic way, not by way of a tenuous inference to the best explanation or as a sort of clever abductive conjecture. A much more plausible view is that we are constructed… in such a way that these beliefs naturally arise upon the sort of stimuli … to which a child is normally exposed.” Now, Plantinga offers to his opponents a fallback position, whereby they can claim that the child’s beliefs are warranted by the availability of an IBE inference that they do not actually perform (I guess that Plantinga himself ties questions of warrant more closely to the actual genesis of beliefs, but he’s live to the possibility that others might not do so). But he thinks this won’t work, because what we need to explain is the very strong warrant (strong enough for knowledge) that we have in ascriptions of mental states to others, and he thinks that the warrant extractable from an IBE won’t be nearly so strong. He thinks that there are “plenty of other explanatory hypothesis [i.e. other than the hypothesis that other persons have beliefs, desires, hopes, fears, etc] that are equally simple or simpler”. The example given is the explanatory hypothesis that I am the only embodied mind, and that a Cartesian demon gives me strong inclination to believe in the existence of other bodies have minds. I think the best way of construing Plantinga’s argument here is that he’s saying that even if the Cartesian demon hypothesis is not as good as the other-minds hypothesis, if our only reason for dismissing it is the theoretical virtues of the latter hypothesis beyond simplicity and fitting-with-the-data, we’d be irreponsible unless we were live to new evidence coming in that’d turn the tables. So while we might have some kind of warrant in some kind of belief by IBE (that’s to be argued over by a comparison of the relative theoretical virtues of the explanatory hypothesis), we can already see we would be warranted only in “tenuous” and not very firm belief that others have minds, comparable to the kind of nuanced and open-to-contrary-evidence kinds of beliefs we properly take to scientific theories. Let’s assume that this is a good criticism (I think it’s at least an interesting one). Does it extend to the second problem of other minds? Per Plantinga, we assume a range of criteria-based inferences to firm specific beliefs in a range of perceivings, intendings, emotions, and feelings, as well as to general classifications of them as believers and desirers. That leaves us with the challenge of spelling out how we get to specific general beliefs and final desires, and similar kind of states. Could we see these as a kind of tenuous belief, like a scientific hypothesis? The thesis would not now be that a child would go wrong in the firmness of his beliefs that his mother has feelings, is a thinker, etc–for those are criterion-backed judgements. But he would go wrong if he were comparably firm in his ascription of general beliefs and final desires to her. I take it that while some of our (and a child’s) ascriptions of general beliefs and desires to others will be tenuously held, others, and particularly negative ascriptions, are as firm as any other. I’m as firmly convinced that my partner harbours no secret final desire to corrupt my soul, and that she believes that the sun will rise tomorrow, as I do in her being a believer at all, or someone who feels pain, emotions, and sees the things around her. So I think if there’s merit to Plantinga’s criticism of the IBE model as a response to the original problem of other minds, it extends to using it as a response to the other problem of other minds. The nice thing about appealing to a topic-neutral belief forming method like inference to the best explanation would be that we’d know exactly what we’d need to do to show that the ascriptions we arrive at are warranted, by Plantigan lights (the one sketched a couple of paras earlier). But the Plantigan worry about IBE is that it does not vindicate the kind of ascriptions that we in fact indulge in. This shows, I think, why Plantingans cannot ignore the problem of saying something more about the structures that underpin ascriptions of general beliefs, final desires and the like. The need there to be some part of our cognitive system issuing in the relevant mental states ascriptions which (like IBE) is reliable in normal circumstances but which (unlike IBE) reliably issues in the very ascriptions we find ourselves with. It’s not at all obvious what will fit the bill, and without that, we don’t have a general Plantingan answer to the problem of other minds. Postscript: A question that arises, about the way I’m construing Plantinga’s criticism of IBE: suppose that we devised a method of belief formation IBE*, which is just like IBE but issues in *firmer* beliefs. What would go wrong? I tihnk the Plantingan answer must be that IBE* isn’t a reliable enough method to count as producing warranted beliefs, if set up this way. In the intro to Warrant and Proper Function, Plantinga says: “The module of the design plan governing the production of that belief must be such that the statistical or objective probability of a belief’s being true, given that it has been produced in accord with that module in a congenial cognitive environment, is high. How high, precisely? Here we encounter vagueness again; there is no precise answer. It is part of the presumption, however, that the degree of reliability varies as a function of degree of belief. The things we are most sure of—simple logical and arithmetical truths, such beliefs as that I now have a mild ache in my knee (that indeed I have knees) obvious perceptual truths–these are the sorts of beliefs we hold most firmly, perhaps with the maximum degree of firmness, and the ones such that we associate a very high degree of reliability with the modules of the design plan governing their production”. And so the underlying criticism here of an IBE approach to the problem of other minds is that IBE isn’t reliable enough in our kind of environment to count as warranting very firm degrees of belief in anything. And so when we find very firm beliefs, we must look for some other warranting mechanism. The other epistemic problem of other minds The classic epistemic problem of other minds goes something like this. I encounter a person in the street, writhing on the floor, exhibiting paradigmatic pain-behaviour. Now, you might run up to help. But for me, whose mind naturally turns to higher things, it poses a question. Sure, I know that the person writhing on the ground is moving their limbs in a certain distinctive way. And I find myself forming the belief on that basis that they are in pain. But with what right do I form the belief? What justifies the leap from pain-behaviour to pain? You might try to answer by pointing to past experience: on previous occasions I’ve seen someone exhibiting pain-behaviour, they’ve turned out to be in pain. So I’ve got good inductive grounds for thinking that pain-behaviour signals pain. That sounds reasonable, except—what justified me on those earlier occasions in thinking the pain-behaviours were accompanied by pain? I didn’t directly feel the pain myself, after all (like I might have checked to see if smoke was generated by fire). If I’m to be justified in believing that all Fs are G on the basis of a belief that all past observed Fs were G, I better have been justified on those past occasions in thinking that the observed F was a G. So this line of thought just generalizes the question: how am I ever justified in moving from the direct observations (pain behaviour) to pain. There’s one particular response to this question I’m going to mention and set aside entirely, for now. That is that I was justified in the past in thinking that someone was in pain on the basis of first person testimony—the person telling me that they are in pain. (First person testimony seems more interesting than second person testimony—someone else telling me the person is in pain—for that would just pushes the question back to how they knew). If first-person testimony can (without circularity) play this kind of foundational role in grounding our knowledge of the state of mind of another, that’ll be super-significant. But a competing line of thought, which I’ll be running with for now, is that we can in principle have knowledge that people are in pain, without this being based on their use of language. This is a very natural picture. It is one on which, for example, we learn the meaning of the word “pain” by noting that it’s applied to people who, we know, are in pain. Now comes the standard framing move of this first epistemic problem of other minds. We spot that there is one case in which we have knowledge that a thing is in pain (and that this is accompanied by pain-behaviour) where our belief that it’s in pain isn’t based on its observable pain behaviour. That happens when the thing in pain is ourselves. Our knowledge that we ourselves are in pain is introspective, rather than observational. This looks like it helps! It gives us access to a set of cases where pain is correlated with pain behaviour. Whenever, then, we are in a position to directly observe whether or not someone is in pain, we see that indeed, normally, pain behaviour is accompanied by pain. But the framing was a trap. At this point the other-minds sceptic can point to inadequacies in generalizing from pain-behaviour/pain links in the case of a single individual, to a general correlation. Suppose you can only extract balls from a single urn. You notice all the blue balls are heavy, and all the red balls are light. Are you then justified in concluding that all blue balls in any urn are heavy? It seems not: you have no reason for thinking that you’ve taken a fair sampling of the blue balls overall; you have randomly sampled only a certain restricted population: the balls in this urn. At a minimum, we’d need to supplement your egocentric pain/pain-behaviour information with some explanation of why you should take your own case to be representative. But what would that explanation be? The challenge might be met. After all, on the traditional inductive model of justifying generalizations, we move from local patterns occurring in the region of space-time we inhabit to global generalizations even though we do not “randomly sample” what happens in space and time. Somehow, induction (or something like it) takes us beyond the interpolations of patterns holding through the population randomly sampled, to the unrestricted extrapolation of certain patterns. Whatever secret sauce makes extrapolation beyond the sampled population work in everyday inductions, maybe it is also present in the case of pain and pain behaviour, allowing extrapolation for my case to all cases. But pending some specific account of how the challenge could be met, I think it’s reasonable to look for alternatives. So that’s the first problem of other minds. It’s a problem of how we even get started in justifiedly attributing (=having justified beliefs about) the mental states of others. And though I’ve run through this for the case of pain, the usual stock example, you could run through the same challenge for many other mental states. Here are some candidates: that x sees a rock, or x sees a red round thing, or sees that the red round thing is on the floor. That x intends to hail a taxi, or x intends to raise her arm. That x is afraid, that x is afraid of that snake. In each case, there’s a characteristic kind of behaviour or relation to the environment that we could in principle describe in non-mentalistic terms, and which would be a basis for justifiedly attributing the various feelings/perceiving/intendings/emotions to the other. What’s the other problem of other minds then? Well, it’s the problem of how we are justified in the rest of what we believe about the minds of others. The examples I’ve mentioned are quite different from each other (as Bill Wringe recently emphasized, some of centrally involve intentional content, which may pose particular issues), but they are well-represented by pain in the following sense: they are all states which are “specific and manifestable” in a certain sense. Pain is tied to pain-behaviour. Fear of a sanke tied to fear-behaviour targetting the snake. An intention to raise one’s arm is tied to one’s arm going up in a distinctive fashion. Seeing a red round thing is tied to having an unobstructed view of the thing (while awake etc). Those “ties” to the manifest circumstances of a person may be defeasible and contingent, but they’re clearly going to be central to the epistemological story. But there are plenty of mental states that are not like that. The two cases that occupy me the most are: general beliefs or beliefs about things beyond the here-and-now (x’s belief that she is not a brain in a vat; her belief that the sun will rise tomorrow) and final desires (a desire for security, or for justice). There are plenty of “lines to take” on the first problem of other minds that won’t generalize to these cases. Perhaps we can make sense of simply “perceiving” what others feel, or see, or intend, when they instanatiate the manifestations associated with those (maybe I point you to a story about mirror neurons, or the like which could give you the empirical underpinnings of such a process). Maybe, following Plantinga, we think of the belief formation involved as a defeasible but reliable form of inference—the accurate executation of a belief-forming system successfully aimed at truth, producing in this instance a belief about another’s mind triggered by seeing the manifestation. But general and relatively abstract beliefs have no direct characteristic manifestations (at least setting aside first-person testimony, as we have done), and the same goes for final desires. If argument against characteristic manifestations is needed, I’d point to the famous circularity objections to behaviouristic analyses of individual belief and desire states. Essentially: pair a general and abstract belief up with screwy enough desires, and it fits with almost any behaviour; pair a final desire up with screwy enough belief, and the desire fits with almost any behaviour. So if anything is manifested in behaviour, it would seemingly have to be belief-desire states as a whole. But even then, there are many belief-desire states that would fit with any given piece of behaviour. The idea of direct manifestations in behaviour (or relation to the environments) just seems the wrong model to apply to these states. If this is an epistemic problem of other minds, then it’s a different problem from the first. But is it a problem? Here’s what I’m imagining. Imagine that we’d solved the first epistemic problem of other minds to our own satisfaction. We have satisfied ourselves, at last, that we’re justified in believing that the person writhing on the floor is indeed in pain—and indeed, that he sees us, and is attracting attention by raising his arm, etc. All of the various manifestation-to-mental state ties discussed earlier, we’ll assume, produced justified beliefs (for specificity, suppose the Plantigian story is correct). But now, given all this as a basis, what justifies us in thinking that he wants help, that he believes that we are able to help him, and so on? Of course, we would naturally attribute all this to a person in those circumstances. We think people in pain would like help, as a general rule. But we need to spell out what justifies this second layer of description of others. At this point, we start to parallel what went before. So: we might point to past experience with people who are in pain. In the past, people in pain wanted help, and so…. but once again, that pushes the question back to how we knew in those historical cases that help was wanted. We might have been told by others that those historical cases wanted help; but how did our informants know? Because there’s no general tie between abstract and non-immediate beliefs/desires and anything immediately manifested, we can’t credibly say we simply perceive these states of the other, nor that we defeasibly infer them from some observable basis. So the problem is: what to do? In future posts, I want to say something about how answering this other problem of other minds might go. Essentially, I want to explore a model on which the epistemology of other minds involves an contingent and topic-specific rules for belief formation about the belief-desires of others (“epistemic charity”), whose epistemic standing will have to be assessed and defended. Other than other contingent/topic-specific rules, the main alternatives that I’ll be considering are topic-neutral rules of belief formation (e.g. inference to the best explanation) and also, if I find something useful to say about it, an epistemology which gives language the starring role as a direct manifestation of otherwise hidden beliefs and desires. We’ll see how far I get! Iteration vs. Entrenchment I’m going to have one more run at a form of the Lewisian derivation that justifies the strong conclusions (e.g. that the reason for believing A would be a reason for believing each of the iterated B-claims. I’ll be using strong-indication again, though since this is the only indication relation I’ll use in this discussion, I’ll drop the superscript disambiguation: • $p\Rightarrow_x q =_{def} \exists rR_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))$ Remember that R is the relation of something being sufficient reason to believe, *relative to background beliefs and epistemic standards*. Let’s introduce a new operator $E_x$, which will say that the embedded proposition is a background belief or epistemic standard for x—or as I’ll say for short, is entrenched for x. We have the first three premises on a strong reading of indication again. But I’ll now change the fourth premise from an indication principle to one about E: 1. $A \supset B_u(A))$ 2. $A\Rightarrow_u \forall yB_y(A))$ 3. $A \Rightarrow_u q$ 4. $E_u \forall y [u\sim y]$ A linked change is that we abandon IITERATION for a principle that says that propositions about what indicates what to a person is part of their epistemic standards/background beliefs: • ENTRENCHMENT $\forall c \forall x ([A \Rightarrow_x c]\supset E_x[A\Rightarrow_x c]$ The core derivation I have in mind goes like this: 1. $A\Rightarrow_u \forall y B_y A$. Premise 2. 2. $E_u(A\Rightarrow_u \forall yB_y A)$. From 1 via ENTRENCHMENT. 3. $E_u \forall y [u\sim y]$. Premise 4. 4. $E_u \forall z(A\Rightarrow_z \forall yB_y A)$. From 2,3 by NEWSYMMETRY+. 5. $A\Rightarrow_u\forall z B_z \forall yB_y A$. From 1,4 by NEWCLOSURE+. What then are these new principles of NEWSYMMETRY+ and NEWCLOSURE+ and how should we think about them? NEWSYMMETRY+ is another perspectival form based on the validity of strong symmetry: • SYMMETRY-S $\forall c \forall x ([A \Rightarrow_x c]\wedge \forall y [x\sim y]\supset \forall y[A\Rightarrow_y c])$ NEWSYMMETRY+ is then an instance of a principle that propositions that are entrenched for an individual are closed under valid arguments, with SYMMETRY-S providing the relevant valid argument: • NEWSYMMETRY+ $\forall c \forall x\forall z[E_z[A \Rightarrow_x c]]\wedge [E_z\forall y[x\sim y]]\supset [E_z \forall y[A\Rightarrow_y c]]]$ NEWCLOSURE+ is based again validity of closure for the B-operator under strong indication, which is again something that really just reduces to modus ponens for the counterfactual condition hidden inside the indication relation: • CLOSURE-S $\forall a,c (\forall x B_x (a)\wedge \forall x[a \Rightarrow_x c]\supset \forall x B_x(c)))$ But the principle we use isn’t just the idea that some operator or other is closed under closure. The thought is instead a principle about reason-transmission that goes as follows. Suppose two propositions entail a third, and r is sufficient reason (given one’s background beliefs and standards) to believe the first proposition. Then, if the second proposition is entrenched (part of those background beliefs and standards), r is a also sufficient reason (given one’s background beliefs and standards) to believe the third proposition. The underlying valid argument relevant to this is CLOSURE-S, which makes this, in symbols: • NEWCLOSURE+ $\forall a,b,c\forall x ([a \Rightarrow_x \forall y B_y(b)]\wedge [E_x(\forall y[b \Rightarrow_y c])]\supset [a\Rightarrow_x \forall yB_y(c)])$ NEWCLOSURE+ seems to me pretty well motivated. NEWSYMMETRY+ just as good as anything we’ve worked to so far. STANDARDS now replaces ITERATION. Unlike ITERATION, there’s no chance of deriving it from principles about counterfactuals and the transparency of whatever B stands for. Instead, it simply represents it’s own transparency assumption: that true propositions about the epistemic standards and background beliefs of an agent are themselves part of an agent’s epistemic background. It is weaker than a transparency assumption about beliefs or reasons to believe used in motivating ITERATION since it has a more restricted domain of application. It is stronger than earlier transparency assumptions insofar as it requires that the propositions to which it applies are not merely believed (or things we have reason to believe) but have the stronger status of being entrenched. NEWCLOSURE+ is quite close in form to Cubitt and Sugden’s A6, except their principle used (what I notate as) the B operator throughout, where at a crucial point I have an instance of the E operator. An advantage that this gives me is that the E-operator doesn’t feature in the conclusion of the argument, so we are free to reinterpret it however we like to get the premises to come out true—trying to do reinterpret B would change the meaning of the conclusions we are deriving. So, for example, I complained against theirs that crucial principles seemed bad because some of your beliefs or reasons for beliefs might not be resilient under learning new information. But we are free to simply build into E that it applies only to propositions that are resiliently part of one’s background beliefs/standards (or maybe being resilient in that way is part of what it is for something to be treated as a standard/be background). Having walked through this, let me illustrate the fuller form of the derivation, using all the premises. 1. $A\Rightarrow_u \forall y B_y A$. Premise 2. 2. $A\Rightarrow_u q$. Premise 3. 3. $E_u(A\Rightarrow_u q)$. From line 2 via ENTRENCHMENT. 4. $E_u \forall y [u\sim y]$. Premise 4. 5. $E_u \forall z(A\Rightarrow_z q)$. From lines 3,4 by NEWSYMMETRY+. 6. $A\Rightarrow_u\forall z B_z q$. From 1,5 by NEWCLOSURE+. 7. $E_u(A\Rightarrow_u\forall z B_z q)$. From line 6 via ENTRENCHMENT. 8. $E_u \forall y(A\Rightarrow_y \forall z B_z q)$. From lines 4,7 by NEWSYMMETRY+. 9. $A\Rightarrow_u\forall y B_y \forall z B_z q$. From 1,8 by NEWCLOSURE+. 10. …. The pattern of the last few lines loops to get that A indicates each of the iterations of B-operator applied to q. And we can then appeal to Premise 1, A and CLOSURE to “detach” the consequents of lines 6,9, etc. But for our purposes here and now, the more significant thing is lines 6 and 9 (and 12, 15 etc) prior to detachment. For these tell us that a sufficient reason for believing A is itself a sufficient reason for believing each of these iterated B propositions. So to sum up: if we are content to work with weak indication relations, we can get away with the premises I used in other posts, including ITERATION and previous versions of SYMMETRY+ and CLOSURE+. If we want to work with strong indication, and get information about what is a reason for what, then we need to make changes, and the above is my best shot (especially in the light of the utter mess we got into in the last post!). Interestingly, while NEWSYMMETRY+ and NEWCLOSURE+ it seems to me are more or less equally plausible with the older analogues, the replacement for ITERATION (the principle I’m here calling ENTRENCHMENT) isn’t directly comparable to the earlier, though it’s still broadly a principle of transparency. There is a delicate dialetical interplay between ENTRENCHMENT and the analysis of the indication relation. The stronger and more demanding indication is, the more plausible ENTRENCHMENT becomes, since the fewer instances fall under it. If we read indication as weak indication throughout, then ENTRENCHMENT would say that every counterfactual relating reasons for belief to reasons for other beliefs is part of the background beliefs/epistemic standards. That’s wildly strong! It’s pretty strong in strong indication version too. It becomes much more plausible if this were restricted to, for example, epistemic connections between propositions that are obvious to the agent. In the settings I have considered in the previous posts, the counterfactual analysis earned its keep in part because ITERATION (which is here replaced by ENTRENCHMENT) could be treated as an iterated counterfactual. That’s no longer a consideration. The other advantage of having the counterfactual analysis is that it made CLOSURE an instance of modus ponens. But that’s not a reason for accepting the analysis of indication as a counterfactual—it’s just a reason for accepting that indication entails the counterfactual. The final reason for offering the counterfactual analysis is simply that it allows a reduction in the number of primitive notions around: in the original setting, it allows a reduction to just the B operator. That’s a consideration, but in the current context we’re having to work with E’s as well as B’s, so ideological purity is lost. Once we need ENTRENCHMENT, it seems to me that it would be easier to defend the package presented here if we abandoned the counterfactual analysis of indication, and used it as a primitive notion, while adding as a premise the validity of the following principle which links a now-primitive indication relation to what we were previously calling strong indication: • $p\Rightarrow^s_x q \supset \exists rR_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))$ The soundness of the overall argument now turns on whether there exists a triple: of reason-relation, indication relation and entrenchment relation that makes true all the premises. As a final note: the link between the counterfactual and primitive indication has two roles. One is simply a matter of reading off the significance of the final results. The other is to make CLOSURE valid. But it only makes CLOSURE valid if the B-operator is defined in the Lewisian way as having-reason-to-believe. As per that earlier post, a different counterfactual–concerning commitments to believe–matters for CLOSURE in that setting. So one would add that entailment as an extra premise about the now-primitive indication relation. Strong and weak indication relations [warning: it’s proving hard to avoid typos in the formulas here. I’ve caught as many as I can, but please exercise charity in reading the various subscripts]. In the Lewisian setting I’ve been examining in the last series of posts, I’ve been using the following definition of indicates-to-x (I use the same notation as in previous posts, but add a w-subscript to distinguish it from an alternative I will shortly introduce): • $p\Rightarrow^w_x q =_{def} B_x p\rightarrow B_x q$ The arrow on the right is the counterfactual conditional, and the intended interpretation of the B-operator is “has a reason to believe”. This fitted Lewis’s informal gloss “if x had reason to believe p, then x would thereby have reason to believe q”, except for one thing: the word thereby. Let’s call the reading above weak indication. Weak indication, I submit, gives an interesting version of the Lewisian derivation of iterated reason-to-believe from premises that are at least plausibly true in many paradigmatic situations of common belief. But there is a cost. Lewis’s original gloss, combined with the results he derives, entail that each group member’s reasons for believing A obtains (say: the perceptual experience they undergo) are at the same time reasons for them to believe all the higher order iterations of reason-to-believe. That is a pretty explanatory and informative epistemology–we can point to the very things that (given the premises) justify us in all these apparently recherche comments. If we derive the same formal results on a weak reading of indication, we leave this open. We might suspect that the reasons for believing A are the reasons for believing this other stuff. But we haven’t yet pinned down anything that tells us this is the case. I want to revisit this issue of the proper understanding of indication. I use $R_x(r, p)$ to formalize the claim that r is a sufficient reason for x to believe that p (relative to x’s epistemic standards and background beliefs).  With this understood, $B_x(p)$ can be defined as $\exists r B(r,p)$.  Here is an alternative notion of indication—my best attempt to capture Lewis’s original gloss: • $p\Rightarrow^s_x q =_{def} \exists rR_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))$ In words: p strongly indicates q to x iff were x to have a sufficient reason for believing p, then all the sufficient reasons x has for believing p are sufficient reasons for x to believe q. (My thinking: in Lewis’s original the “thereby” introduces a kind of anaphoric dependence in the consequent of the conditional on the reason that is introduced by existential quantification in the antecedent. Since this sort of scoping isn’t possible given standard formation rules, what I’ve given is a fudged version of this). Notice that the antecedent of the counterfactual here is identical to that used in the weak reading of indication. So we’re talking about the same “closest worlds where we have reason to believe p”. The differences only arise in what the consequent tells us. And it’s easy to see that, at the relevant closest worlds, the consequent of weak indication is entailed by the consequent of strong indication. So overall, strong indication entails weak indication. If all the premises of my Lewis-style derivation were true under the strong reading, then the strong reading of the conclusion would follow. But some of the tweaks that I introduced in fixing up the argument seem to me implausible on the strong reading—more carefully, it is implausible that they are true on this reading in all the paradigms of common knowledge. Consider, for example, the premise: • $A\Rightarrow_x \forall y (x\sim y)$ In some cases the reason one has for believing A would be reason for believing that x and y are relevantly similar (as the conclusion states). I gave an example, I think, of a situation where the relevant manifest event A reveals to us both that we are members of the same conspiratorial sect. But this is not the general case. In the general case, we have independent reasons for thinking we are similar, and all that we need to secure is that learning A, or coming to have reason to believe A, wouldn’t undercut these reasons. (It was the possibility of undercutting in this way that was the source of my worry about the Cubitt-Sugden official reconstruction of Lewis, which doesn’t have the above premise, but rather than premise that x has reason to believe that x is similar to all the others). So now we are in a delicate situation, if we want to derive the conclusions of Lewis’s argument on a strong reading of indication. We will need to run the argument with a mix of weak and strong indication, and hope that the mixed principles that are required will turn out to be true. Here’s how I think it goes. First, the first three premises are true on the strong reading, and the final premise on the weak reading. 1. $A \supset B_u(A))$ 2. $A\Rightarrow^s_u \forall yB_y(A))$ 3. $A \Rightarrow^s_u q$ 4. $A\Rightarrow^w_u \forall y [u\sim y]$ Of the additional principles, we appeal to strong forms of symmetry and closure: • SYMMETRY-S $\forall c \forall x ([A \Rightarrow^s_x c]\wedge \forall y [x\sim y]\supset \forall y[A\Rightarrow^s_y c])$ • CLOSURE-S $\forall a,c (\forall x B_x (a)\wedge \forall x[a \Rightarrow^s_x c]\supset \forall x B_x(c)))$ In the case of closure, strong indication features only in the antecedent of the material conditional, so this is in fact weaker than closure on the original version I presented. These are no less plausible than the originals. As with those, the assumption is really not just that these are true—it is that they are valid (and so correspond to valid inference patterns). That is used in motivating the truth of principles that piggyback upon them are that are also used. The “perspectival” closure principle can be used in a strong form: • CLOSURE+-S $\forall a,b,c\forall x ([a \Rightarrow^s_x \forall y B_y(b)]\wedge [a \Rightarrow^s_x(\forall y[b \Rightarrow^s_y c])]\supset [a\Rightarrow^s_x \forall yB_y(c)])$ The action in my vierw comes with the remaining principles, and in particular, the “perspectival” symmetry principle. Here it is in mixed form: • SYMMETRY+-M $\forall a \forall c \forall x\forall z[a\Rightarrow^s_z[A \Rightarrow^s_x c]]\wedge [a \Rightarrow^w_z\forall y[x\sim y]]\supset [a\Rightarrow^s_z \forall y[A\Rightarrow^s_y c]$ The underlying thought behind this perspectival principles (as with closure) is that when you have a valid argument, then if you have reason to believe the premises (in a given counterfactual situation), then you have reason to believe the conclusion. That’s sufficient for the weak reading we used in the previous posts. In a version where all the outer indication relations are strong, as with the strong CLOSURE+ above, it relies more specifically on the assumption that where r is a sufficient reason to believe each of the premises of a valid argument, it is sufficient reason to believe the conclusion. We need a mixed version of symmetry because we only have a weak version of premise (4) to work with, and yet we want to get out a strong version of the conclusion. Justifying a mixed version of symmetry is more delicate than justifying either a purely strong or purely weak version. Abstractly, the mixed version says that if r is sufficient reason to believe one of the premises of a certain valid argument, and there is some reason or other to believe the second premise of that valid argument, then r is sufficient reason to believe the conclusion. This can’t be a correct general principle about all valid arguments. Suppose the reason to believe the second premise is s. Then why think that r alone is sufficient reason to believe the conclusion? Isn’t the most we get that r and s together are sufficient for the conclusion? So we shouldn’t defend the mixed principle here on general grounds. Instead, the idea will have to be that with the specific valid argument in question (an instance of symmetry), assumptions about who I’m epistemically similar to (in epistemic standards and background beliefs) itself counts as a “background belief”. If that is the case, then we can argue that the reason for believing the first premise of the valid argument (in a counterfactual situation) is indeed sufficient relative to the background beliefs to entail the conclusion. One of the prerequisites of this understanding will be that either we assume that other agents will believe propositions about who they’re epistemically sensitive to in counterfactual situations where they have reason to believe those propositions; or else that talk of “background beliefs” is loose talk for background propositions that we have reason to believe. I think we could go either way. In order to complete this, we will need iteration, and in the following, strong version: • ITERATION-S $\forall c \forall x ([A \Rightarrow^s_x c]\supset [A \Rightarrow^s_x [A\Rightarrow^s_x c]]$ I’ll come back to this. Let me exhibit how the utmost core of a Lewisian argument looks in this version. I’ll compress some steps for readability: 1. $A\Rightarrow_u^s \forall y B_y A$. Premise 2. 2. $A\Rightarrow^s_u(A\Rightarrow_u^s \forall yB_y A)$. From 1 via ITERATION-S. 3. $A\Rightarrow^w_u \forall y [u\sim y]$. Premise 4. 4. $A\Rightarrow^s_u \forall z(A\Rightarrow_z^s \forall yB_y A)$. From 2,3 by SYMMETRY+-M. 5. $A\Rightarrow^s_u\forall z B_z \forall yB_y A$. From 1,4 by CLOSURE+-S. This style of argument—which can then be looped—is the basic core of a Lewis-style derivation. You can add in premise 3 and use CLOSURE+, and get something similar with q as the object of iterated B-operators, to get the original. And of course you can appeal to premise 1 and CLOSURE to “discharge” the antecedents of interim conclusions like 5 (this works with strong indication relations because it works for weak indication, and strong indication entails weak). There’s an alternative way of mixing strong and weak indication relations. On this version we use a mixed form of ITERATION, the original weak SYMMETRY+, and then a mixed form of CLOSURE+ • ITERATION-M $\forall c \forall x ([A \Rightarrow^s_x c]\supset [A \Rightarrow^w_x [A\Rightarrow^s_x c]]$ • SYMMETRY+-W $\forall a \forall c \forall x\forall z[a\Rightarrow^w_z[A \Rightarrow^s_x c]]\wedge [a \Rightarrow^w_z\forall y[x\sim y]]\supset [a\Rightarrow^w_z \forall y[A\Rightarrow^s_y c]$ • CLOSURE+-M $\forall a,b,c\forall x ([a \Rightarrow^s_x \forall y B_y(b)]\wedge [a \Rightarrow^w_x(\forall y[b \Rightarrow^s_y c])]\supset [a\Rightarrow^s_x \forall yB_y(c)])$ 1. $A\Rightarrow_u^s \forall y B_y A$. Premise 2. 2. $A\Rightarrow^w_u(A\Rightarrow_u^s \forall yB_y A)$. From 1 via ITERATION-M. 3. $A\Rightarrow^w_u \forall y [u\sim y]$. Premise 4. 4. $A\Rightarrow^w_u \forall z(A\Rightarrow_z^s \forall yB_y A)$. From 2,3 by SYMMETRY+-W. 5. $A\Rightarrow^s_u\forall z B_z \forall yB_y A$. From 1,4 by CLOSURE+-M. The main advantage of this version of the argument would be that the version of ITERATION it requires is weaker. Otherwise, we are simply moving the bump in the rug from mixed SYMMETRY+ to mixed CLOSURE+. And that seems to me a damaging shift. We use mixed SYMMETRY+ many times, but the only belief we have ever to assume is “background” to justify the principle is the belief that all are similar to me. In the revised form, to run the same style of defence, we would have to assume that belief about indication relations of more and more complex contents are backgrounded. And that simply seems less plausible. So I think we should stick with the original if we can. (On the other hand, the principle we would need here is close to the sort of “mixed” principle that Cubitt and Sugden use, and they are officially reading “indication” in a strong way. So maybe this should be acceptable). So what about the ITERATION-S, the principle that the argument now turns on? As a warm up, let me revisit the motivation for the original, ITERATION-W, which fully spelled out would be: • $[\exists r R_u(r, A)\rightarrow \exists r R_u(r,c))]$ $\supset[\exists s R_u(s,A)\rightarrow$ $\exists s R_u(s,[\exists r R_u(r, A)\rightarrow \exists r R_u(r,c)])]$ Assume the first line is the case. Then we know that at the worlds relevant for evaluating the second and third lines, we have both $\exists r R_u(r,c)$ and $\exists r R_u(r, A)$. By an iteration principle for reason-to-believe, $\exists s_1R_u(s_1,\exists r R_u(r,c))$ and $\exists s_2 R_u(s_2,\exists r R_u(r, A))$. And by a principle of conjoining reasons (which implicitly makes a rather strong consistency assumption about reasons for belief) $\exists s R_u(s,\exists r R_u(r,A)\wedge \exists r R_u(r, c))$. But a conjunction entails the corresponding counterfactual in counterfactual logics for strong centering, and so plausibly the reason to believe the conjunction is a reason to believe the counterfactual: $\exists s R_u(s,\exists r R_u(r,A)\rightarrow \exists r R_u(r, c))$. That is the rationale for the original iteration principle. Unfortunately, I don’t think there’s a similar rationale for the strong iteration principle. The main obstacle is the following: point: one particular sufficient reason for believing A to be the case (call it s) is unlikely to be one’s reason for believing a counterfactual generalization that covers all reasons to believe that A is the case. In the original version of iteration, this wasn’t at issue at all. But the rationale I offered uses a strategy of finding a reason to believe a counterfactual by exhibiting a reason to believe the corresponding conjunction, which entails the counterfactual. In order to find a reason to believe the conjunction of the relevant counterfactual below (the one appearing in the third line) But an essential part of that strategy was arguing that a certain thing was a reason to believe a counterWhen you write down what strong iteration means in detail, you see (in the third line below) that this is going to have to be argued for. I can’t see a strategy for arguing for this, and I the principle itself seems likely to be false to me, as stated. • $[\exists r R_u(r, A)\rightarrow \forall r (R_u(r, A)\supset R_u(r,c))]$ $\supset[\exists s R_u(s,A)\rightarrow$ $\forall s( R_u(s,A)\supset R_u(s,[\exists r R_u(r, A)\rightarrow \forall r (R_u(r, A)\supset R_u(r,c))]]$ That’s bad news. Without this principle, the first mixed version of the argument I presented above doesn’t go through. I think there’s a much better chance of mixed iteration being argued for, which is what was needed for the second version of the argument. But that was the version of the argument that required the dodgy mixed closure principle. Perhaps we should revisit that version? I’m closing this out with one last thought. The universal quantifier in the consequent of the indication counterfactual is the source of the trouble for strong ITERATION. But that was introduced as a kind of fudge for the anaphor in the informal description of the indication relation. One alternative is use a definite description in the conclusion of the conditional—which on Russell’s theory introduces the assumption that there is only one sufficient reason (given background knowledge and standards) for believing the propositions in question. This would give us: • $p\Rightarrow^d_x q =_{def}$ $\exists rR_x(r, p)\rightarrow \exists r(R_x(r,p)\wedge \forall s (R_x(s, p)\supset r=s) \wedge R_x(r,q))$ Much of the discussion above can be rerun with this in place of strong indication. And I think the analogue of the strong ITERATION has a good chance of being argued for here, provided that we have a suitable iteration priciple for reason-to-believe. For weak iteration, we needed only to assume that when there is reason to believe p, there is reason to believe that there is reason to believe p. In the rationale for a new stronger version of ITERATION that I have in mind we will need that when s is a reason to believe that p, then s is a reason to believe that s is a reason to believe that p. Whether this will fly, however, turns both on being able to justify that strong iteration principle and on whether indication in the d-version, with its uniqueness assumption, finds application in the paradigmatic cases. For now, my conclusion is that the complexities involved here justifies the decision to run the argument in the first instance with weak indication throughout. We should only dip our toes into these murky waters if we have very good reason to do so. Identifying the subjects of common knowledge Suppose that it’s public information/common belief/common ground among a group G that the government has fallen (p). What does this require about what members of G know about each other? Here are three possible situations: 1. Each knows who each of the other group members is, attributing to (de re) to each whatever beliefs (etc) are required for it to be public information that p. 2. Each has a conception corresponding to each member of the group. One attributes, under that conception, whatever beliefs (etc) are required for it to be public information that p. 3. Each has a concept of the group as a whole. Each generalizes about the members of the group, to the effect that every one of them has the beliefs (etc) required for it to be public information that p. Standard formal models of common belief suggest the a type 1 situation (though, as with all formal models, they can be reinterpreted in many ways). The models index accessibility relations by group members. One advantage of this is that once we fix which world is actual, we’re in a position to unambiguously read off the model what the beliefs of any given group member is—one looks at the set of worlds accessible according to their accessibility relation. What it takes in these models for A to believe that B believes that p is for all the A-accessible worlds to be such that all worlds B-accessible from them are ones where p is true. So also: once we as theorists have picked our person (A), it’s determined what B believes about A’s beliefs—there’s no further room in the model for further qualifications or caveats about the “mode of presentation” under which B thinks of A. Stalnaker argues persuasively this is not general enough, pointing to cases of type 2 in our classification. There are all sorts of situations in which the mode of presentation under which a group member attributes belief to other group members is central. For example (drawing on Richard’s phone booth case) I might be talking to one and the same individual by phone that I also see out the window, without realizing they are the same person. I might attribute one set of beliefs to that person qua person-seen, and a different set of beliefs to them qua person-heard. That’s tricky in the standard formal models, since there will be just one accessibility relation associated with the person, where we need at least two. Stalnaker proposes to handle this by indexing the accessibility relations not to an individual but to an individual concept—a function from worlds to individuals—which will draw the relevant distinctions. This comes at a cost. Fix a world as actual, and in principle one and the same individual might fall under many individual concepts at that world, and those individual concepts will determine different belief sets. So this change needs to be handled with care, and more assumptions brought in. Indeed, Stalnaker adapts the formal model in various ways (e.g. he ultimately ends up working primarily with centred worlds). These details needn’t delay us, since my concern here isn’t with the formal model directly.  Rather, I want to point to the  desiderata that it answers to: that we make our theory of common belief sensitive to the ways in which we think about other individual group-members. It illustrates that the move to type 2 cases is a formally (and philosophically) significant step. The same goes for common belief of type 3, where the subjects sharing in the common belief are characterized not individually but as members of a certain group. Here is an example of a type-3 case (loosely adapted from a situation Margaret Gilbert discusses in Political Obligation). We are standing in the public square, and the candidate to be emperor appears on the dais. A roar of acclaim goes up from the cloud—including you and I. It is public information among the crowd that the emperor has been elected by acclimation. But the crowd is vast—I don’t have any de re method of identifying each crowd member, nor do I have an individualized conception of each one. This situation is challenging to model in either the standard or Stalnakerian ways. But it seems (to me) a paradigm of common belief. Though it is challenging to model in the multi-modal logic formal setting, other parts of the standard toolkit for analyzing common belief cover it smoothly. Analyses of common belief/knowledge like Lewis’s approach from Convention (and related proposals, such as Gilbert’s) can take it in their stride. Let me present it using the assumptions that I’ve been exploring in the last few posts. I’ll make a couple of tweaks: I’ll consider instances of the assumptions as they pertain to a specific member of the crowd (you, labeling u). I’ll make explicit the restriction to members of the crowd, C. The first four premises are then: 1. $A \supset B_u(A))$ 2. $(A\Rightarrow_u [\forall y: Cy] B_y(A))$ 3. $(A \Rightarrow_u q)$ 4. $([A\Rightarrow_u [\forall y: Cy](x\sim y)]$ For “A”, we input a neutral description of the state of affairs of the emperor receiving acclaim on the dais in full view of everyone in the crowd. q is the proposition that the emperor has been elected by acclimation. The first premise says that it’s not the case that the following holds: the emperor has received acclaim on the dais in full view of the crowd (which includes you) but you have no reason to believe this to be the case. In situations where you are moderately attentive this will be true. The second assumption says that you would also have reason to believe that everyone in the crowd has reason to believe that the emperor has received acclaim on the dais in full view of the crowd, if you have reason to believe that the emperor has received such acclaim in the first place. That also seems correct. The third says if you had reason to believe this situation had occurred, you would have reason to believe that the emperor had been elected by acclimation. Given modest background knowledge of political customs of your society (and modest anti-sceptical assumptions) this will be true too. And the final assumption says that you’d have reason to believe that everyone in the crowd had relevantly similar epistemic standards and background knowledge (e.g. anti-sceptical, modestly attentive to what their ears and eyes tell them, aware of the relevant political customs), if/even if you have reason to believe that this state of affairs obtained. All of these seem very reasonable: and notice, they are perfectly consistent with utter anonymity of the crowd. There are a couple of caveats here, about the assumption that all members of the crowd are knowledgable or attentive in the way that the premises presuppose. I come back to that later Together with five other principles I set out previously (which I won’t go through here: the modifications are obvious and don’t raise new issues) these deliver the following results (adapted to the notation above): • $A \Rightarrow_u q$ • $A\Rightarrow_u [\forall y: Cy] B_y(q)$ • $A\Rightarrow_u [\forall z : Cz] B_z([\forall y: Cy] B_y(q))$ • $\ldots$ And each of these with a couple more of the premises entails: • $B_u q$ • $B_u [\forall y : Cy] B_y(q)$ • $B_u [\forall z : Cz] B_z([\forall y: Cy] B_y(q))$ • $\ldots$ It’s only at this last stage that we then need to generalize on the “u” position, reading the premises as holding not just for you, but schematically for all members of the crowd. We then get: • $[\forall x : Cx] B_x q$ • $[\forall x : Cx] B_x [\forall y :Cy] B_y(q)$ • $[\forall x : Cx] B_x [\forall z : Cz] B_z([\forall y\in C] B_y(q))$ • $\ldots$ If this last infinite list of iterated crowd-reasons-to-believe is taken to characterize common crowd-belief, then we’ve just derived this from the Lewisian assumptions. And nowhere in here is any assumption about identifying crowd members one by one. It is perfectly appropriate for situations of anonymity. (A side point: one might explore ways of using rather odd and artificial individual concepts to apply Stalnaker’s modelling to this case. Suppose, for example, there is some arbitrary total ordering of people, R. Then there are the following individual concepts: the R-least member of the crowd, the next-to-R-least member of the crowd, etc. And if one knows that all crowd members are F, then in particular one knows that the R-least crowd member is F. So perhaps one can extend the Stalnakerian treatment to the case of anonymity through these means. However: a crucial question will be how to handle cases where we are ignorant of the size of the crowd, so ignorant about whether “the n-th crowd member in the crowd” fails to refer. I don’t have thoughts to offer on this puzzle right now, and it’s worth remembering that nobody’s under any obligation to extend this style of formal modelling to the case of anonymous common belief.) Type-3 cases allow for anonymity among the subjects of common belief. But remember  that it needs to be assumed that all members of the crowd are knowledgable and attentive. In small group settings, where we can monitor the activities of each other group member, each can be sensitive to whether others have the relevant properties.  But this seems in principle impossible in situations of anonymity. On general grounds, we might expect most of the crowd members to have various characteristics, but as the numbers mount up, the idea that the characteristics are universally possessed would be absurd. We would be epistemically irresponsible not to believe, in a large crowd, that some will be distracted (picking up the coins they just dropped and unsure what the sudden commotion was about) and some will lack the relevant knowledge (the tourist in the wrong place at the wrong time). The Lewisian conditions for common belief will fail; likewise, the first item on the infinite list characterizing common belief itself will fail—the belief that q will not be unanimous. So we can add to earlier list a fourth kind of situation. In a type-4 situation, the crowd is not just anonymous, but also contains the distracted and ignorant. More generally: it contains unbelievers. A first thought about accommodating type 4 situations is to replace the quantifiers, replacing the universal quantifiers “all” with “most” (or: a certain specific fraction). We would then require that the state of affairs indicates to most crowd members that the emperor was elected by acclimation; that it indicates to most that most have reason to believe that the emperor was elected by acclimation, and so on. (This is analogous to the kind of hedges that Lewis imposes on the initially unrestricted clauses characterizing convention in his book). But the analogue of the Lewis derivation won’t go through. Here’s one crucial breaking point. One of the background principles that is needed in getting from Lewis’s premises to the infinite lists was the following: If all have reason to believe that A, and for all, A indicates that q, then all have reason to believe that q. Under the intended understanding of “indication”, this is underwritten by modus ponens, applied to an arbitrary member of the group in question–and then universal generalization. But if we replace the “all” by “most”, we have something invalid: If most have reason to believe that A, and for most, A indicates that q, then most have reason to believe that q. The point is that if you pool together those who don’t have reason to believe that A, and those for whom A doesn’t indicate that q, you can find enough unbelievers that it’s not true that most have reason to believe that q. A better strategy is the analogue of one that Gilbert suggests in similar contexts (in her book Political Obligation). We run the original unrestricted analysis not for the crowd but for some subgroup of the crowd: the attentive and knowledgeable. Let’s call this the core crowd. You are a member of the core crowd, and the Lewisian premises seem correct when restricted to the core crowd (for example: the public acclaim indicates to you that all attentive and knowledgable members of the crowd have reason to believe that he public acclaim occurred). So the derivation can run on as before, and established the infinite list of iterated reason-to-believe among members of the core crowd. (Aside: Suppose we stuck with the original restriction to members of the crowd, but replaced the quantifiers for “all” not with some “most” or fractional quantifier, but with a generic quantifier. The premises become something like: given A,  crowd members believe A; A indicates to crowd members that crowd members believe A; A indicates to crowd members that q; crowd members have reason to believe that crowd members are epistemically similar to themselves, if/even if they have reason to believe A. These will be true if generically, crowd members are attentive and knowledgable in the relevant respects. Now, if the generic quantifier is aptly represented as a restricted quantifier—say, restricted to “typical” group members—then we can derive an infinite list of iterated reason-to-believe principles by the same mechanism as with any other restricted quantifier that makes the premises true. And the generic presentation makes the principles seem cognitively familiar in ways in which explicit restrictions do not. I like this version of the strategy, but whether it works turns on issues about the representation of generics that I can’t explore here.) Once we allow arbitrary restrictions into the characterization of common belief, it makes it potentially pretty cheap (I think this is a point Gilbert makes—she certainly emphasizes the group-description-sensitivity of “common knowledge” on her understanding of it). For an example of cheap common belief, consider the group: those in England who have reason to believe sprouts are tasty (the English sprout-fanciers). All English sprout fanciers have reason to believe that sprouts are tasty. That is analytically true! All English sprout fanciers have reason to believe that all English sprout fanciers have reason to believe that sprouts are tasty, since they have reason to believe things that are true by definition. And all English sprout fanciers have reason to believe this last iterated belief claim, since they have reason to believe things that follow from definitions and platitudes of epistemology. So on, all the way up the hierarchy. So there seems to be here a cheap common belief among the English sprout fanciers that sprouts are tasty. It’s cheap, but useless, given that I, as an English sprout fancier, am not in a position to coordinate with another English sprout fancier—we can meet one in any ordinary context and not have a clue that they are one of the subjects involved in this common belief is shared. (Contrast if the information that sprouts are tasty were public among a group of friends going out to dinner). It seems very odd to call the information that sprouts are tasty public among the English sprout fanciers, since all that’s required on my part to acquire all the relevant beliefs in this case is one idiosyncractic belief and a priori reflection. Publicity of identification of subjects among whom public information is possessed seems part of what’s required for information to be public in the first place. Type 1 and type 2 common beliefs build this in. Type 3 common beliefs, if applied to groups membership of which is easy to determine on independent grounds, don’t raise many concerns about this. But once we start using artificial, unnatural, restrictions under pressure from type 4 situations, the lack of any publicity constraint on identification becomes manifest, dramatized by the cases of cheap common belief. Minimally, we need to pay attention to whether the restrictions that we put into the quantifiers that characterize type 3 or 4 common belief undermine the utility of attributing common belief among the group so-conceivedBut it’s hard to think of general rules here. For example, in the case characterized above of the emperor-by-acclamation, the restriction to the core crowd–the attentitive and knowledgeable crowd members—seems to me harmless, illuminating and useful. On the other hand, the same restrictions in the case in the next paragraph gives us common belief that while not as cheap as the sprout case earlier, is prima facie just as useless. Suppose that we’re in a crowd milling in the public square, and someone stands up and shouts a complex piece of academic jargon that implies (to those of us with the relevant background) that the government has fallen. This event indicates to me that the government has fallen, because I happened to be paying attention and speak academese. I know that the vast majority of the crowd either weren’t paying attention to this speech, and haven’t wasted their lives obtaining the esoteric background knowledge to know what it means. Still, I could artificially restrict attention to the “core” crowd, again defined as those that are attentive and knowledgable in the right ways. But now this “core” crowd are utterly anonymous to me, lost among the rest of the crowd in the way that English sprout fanciers are lost among the English more generally. The core crowd might be just me, or it could consist of me and one or two others. I don’t have a clue. Again: it is hardly public between all the core crowd (say, three people) that they share this belief, if for all each of them know, they might be the only one with the relevant belief. And again: this case illustrates that the same restriction that provides useful common belief in one situation gives useless common belief in another. The way I suggest tackling this is to start with the straightforward analysis of common belief that allows for cheap common belief, but then start building in suitable context-specific anti-anonymity requirements as part of an analysis of an account of the conditions under which common belief is useful. In the original crowd situation for example, it’s not just that the manifest event of loud acclaim indicated to all core crowd members that all core crowd members have reason to believe that the emperor was elected by acclaim. It’s also that it indicated to all core crowd members that most of the crowd are core crowd. That means that in the circumstances, it is public among the core crowd that they are the majority among the (easily identifiable) crowd. Even though there’s an element of anonymity, all else equal each of us can be pretty confident  that a given arbitrary crowd member is a member of the core crowd, and so is a subject of the common belief. In the second scenario given in the paragraph above, where the core crowd is a vanishingly small proportion of the crowd, it will be commonly believed among the core that they are a small minority, and so, all else equal, they have no ability to rationally ascribe these beliefs to arbitrary individuals they encounter in the crowd. We can say: a face to face useful common belief is one that where there are face-to-face method of categorizing the people we encounter (independently of their attitudes to the propositions in question) within a certain context as a G*, where we know that most G*s are members of the group among which common belief prevails. (To tie this back to the observation about generics I made earlier: if generic quantifiers allow the original derivation to go through, then there may be independent interest in generic common belief among G*s, where this only requires the generic truth that G* members belief p, believe that G* members belief p, etc. The truth of the generic then (arguably!) licenses default reasoning attributing these attitudes to an arbitrary G*. So generic common belief among a group G*, where G*-membership is face-to-face recognizable, may well be a common source of face-to-face useful common belief). Perhaps only face-to-face useful common beliefs are decent candidates to count as information that is “public” among a group. But face-to-face usefulness isn’t the only kind of usefulness. The last example I discuss brings out a situation in which the characterization we have of a group is purely descriptive and detached from any ability to recognize individuals within the group as such, but is still paradigmatically a case in which common beliefs should be attributed. Suppose that I wield one of seven rings of power, but don’t know who the other bearers are (the rings are invisible so there’s no possibility of visual detection–and anyway, they are scattered through the general population). If I twist the ring in a particular way, then in the case that all other ring bearers do likewise, then the dark lord will be destroyed, if he has just been reborn. If he has not just been reborn, or if not all of us twist the ring in the right way, everyone will suffer needlessly. Luckily, there will be signs in the sky and in the pit of our stomachs that indicate to a ring bearer when the dark lord has been reborn. All of us want to destroy the dark lord, but avoid suffering. All of us know these rules. When the distinctive feelings and signs arise, it will be commonly believed among the ring bearers that the dark lord has been reborn. And this then sets us up for the necessary collective action: we twist each ring together, and destroy him. This is common belief/knowledge among an anonymous group where there’s no possibility of face-to-face identification. But it’s useful common belief/knowledge, exactly because it sets us up for some possible coordinated action among the group so-characterized. I don’t know whether I want to say that the common knowledge among the ring-bearers is public among them (if we did, then clearly face to face usefulness can’t be a criterion for publicity…). But the case illustrates that we should be interested in common beliefs in situations of extreme anonymity—after all, there’s no sense in which I have de re knowledge even potentially of the other ring-bearers. Nor have I even any way getting an informative characterization of larger subpopulations to which they belong, or even of raising my credence in the answer to such questions. But despite all this, it seems to be a paradigmatic case of common belief subserving coordinated action—one that any account of common belief should provide for. Many times, cooperative activity between a group of people requires they identify each other face-to-face, but not always, and the case of the ring bearers reminds us of this. Stepping back, the upshot of this discussion I take to be the following: • We shouldn’t get too caught up in the apparent anti-anonymity restrictions in standard formal models of common belief, but we should recognize that they directly handle on a limited range of cases. • Standard iterated characterizations generalize to anonymous groups directly, as do Lewisian ways of deriving these iterations from manifest events. • We can handle worries about inattentive and unknowledgable group members by the method of restriction (which might include as as special case: generic common belief). • Some common belief will be very cheap on this approach. And cheap common belief is a very poor candidate to be “public information” in any ordinary sense. • We can remedy this by analyzing the usefulness of common belief (under a certain description) directly. Cheap common belief is just a “don’t care”. • Face-to-face usefulness is one common way in which common belief among a restricted group can be useful. This requires that it be public among the restricted group that they are a large part (e.g. a supermajority, or all typical members) of some broader easily recognizable group. • Face-to-face usefulness is not the only form of usefulness, as illustrated by the extreme anonymity of cases like the ringbearers. Reinterpreting the Lewis-Cubitt-Sugden results In the last couple of posts, I’ve been discussing Lewis’s derivation of iterated “reason to believe” q from the existence of a special kind of state of affairs A. I summarize my version of this derivation as follows, with the tilde standing for “x and y are similar in epistemic standards and background beliefs”. We start from four premises: 1. $\forall x (A \supset B_x(A))$ 2. $\forall x (A\Rightarrow_x \forall yB_y(A))$ 3. $\forall x (A \Rightarrow_x q)$ 4. $\forall x ([A\Rightarrow_x \forall y [x\sim y]]$ Five additional principles are either used, or are implicit in the motivation for principles that are used: • ITERATION $\forall c \forall x ([A \Rightarrow_x c]\supset [A \Rightarrow_x [A\Rightarrow_x c]]$ • SYMMETRY $\forall c \forall [A \Rightarrow_x c]\wedge \forall y[x\sim y]]\supset [\forall y[A\Rightarrow_y c]]$ • CLOSURE $\forall a,c (\forall x B_x (a)\wedge \forall x[a \Rightarrow_x c]\supset \forall x B_x(c)))$ • SYMMETRY+ $\forall a \forall c \forall x\forall z[a\Rightarrow_z[A \Rightarrow_x c]]\wedge [a \Rightarrow_z\forall y[x\sim y]]\supset [a\Rightarrow_z [\forall y[A\Rightarrow_y c]]$ • CLOSURE+ $\forall a,b,c\forall x ([a \Rightarrow_x \forall y B_y(b)]\wedge [a \Rightarrow_x(\forall y[b \Rightarrow_y c])]\supset [a\Rightarrow_x \forall yB_y(c)])$ In the last post, I gave a Lewis-Cubitt-Sugden style derivation of the following infinite series of propositions, using (2-4), SYMMETRY+, CLOSURE+, ITERATION: • $A \Rightarrow_x q$ • $A\Rightarrow_x \forall y B_y(q)$ • $A\Rightarrow_x (\forall z B_z(\forall y B_y(q)))$ • $\ldots$ A straightforward extension of this assumes (1) and CLOSURE, obtaining the following results in situations where A is the case: • $\forall x B_x(q)$ • $\forall x B_x(\forall y B_y(q))$ • $\forall _x B_x(\forall z B_z(\forall y B_y(q)))$ • $\ldots$ The proofs are valid, so each line in these two infinite sequences hold no matter how one reinterprets the primitive symbols, so long as the premises are true under that reinterpretation. As we’ve seen in the last couple of posts, for Lewis, “indication” was a kind of shorthand. He defined it as follows: • $p\Rightarrow_x q := B_x(p)\rightarrow B_x(q)$ where $\rightarrow$ is the counterfactual conditional. Now, this definition is powerful. It means that CLOSURE needn’t be assumed as a separate premise—it follows from the logic of counterfactuals. And if “reason to believe” is closed under entailment, then we also get CLOSURE+ for free. As noted in edits to the last post, it means that we can get ITERATION from the logic of counterfactuals and a transparency assumption, viz. $B_x(p)\supset B_x(B_x(p))$. The counterfactual gloss was also helpful in interpreting what (4) is saying. The word “indication” might suggest that when A indicates p, A must be something that itself gives the reason to believe p. That would be a problem for (4), but the counterfactual gloss on indication removes that implication. Where Lewis’s interpretation of the primitives is thoroughly normative, we might try running the argument in a thoroughly descriptive vein (see the Stanford Encyclopedia for discussion of an approach to Lewis’s results like this.). To read the current argument descriptively, we might start by reinterpreting $B_x(p)$ as saying: x believes that p, and indication to be defined out of this notion counterfactually just as before. The trouble with this is some of the premises look false, read this way. For example, CLOSURE+ asks us to consider scenarios where x’s beliefs are thus-and-such, where the propositions x believes in that scenario entails (by CLOSURE) the proposition that the conclusion tells us x believes. unless the agent actually believes all the consquences of things she believes, it’s not clear why we should assume the condition in the consequent of CLOSURE+ holds. Similar issues arise for SYMMETRY+ and ITERATION. One reaction at this point is to argue for a “coarse grained” conception of belief that makes it closed under entailment. That’s a standard modelling assumption in the formal literature on this topic, and something that Lewis and Stalnaker both (to a first approximation) accept. It’s extremely controversial, however. If we don’t like that way of going, then we need to revisit our descriptive reinterpretation of the primitives. We could define them so as to make closure under such principles automatic. So, rather than have $B_x(p)$ say that x believes p, we might read it as saying that x is committed to believe p, where x is committed to believe something when it follows from the things they believe (in a fuller version, I’d refine this characterization to allow for circumstances in which a person’s beliefs are inconsistent, without her commitments being trivial, but for now, let’s idealize away that possibility and work with the simpler version). Indication becomes: were x to be committed to believe that p, then they would be committed to believe that q. If you read through the premises under this descriptive reinterpretation, then I contend that you’ll find they’ve got as good a claim to be true as the analogous premises on the original normative interpretation. These interpretations need not compete. Lewis’s normative interpretation of the argument may be sound, and the commitment-theoretic reinterpretation may also be sound. In paradigmatic cases where there is a basis for common knowledge in Lewis’s sense, we may have an infinite stack of commitments-to-believe, and a parallel infinite stack of reasons-to-believe. But notice! What the first Lewis argument gives us is reason to believe that others have reason to believe such-and-such. It doesn’t tell us that we have reason to believe that others are committed to believe so-and-so. So for some of the commitments that people take on in such situations (commitments about what others are committed to believe) might be unreasonable, for all these two results tell us. This will be my focus in the rest of this post, since I am particularly interested in the derivation of infinite commitment-to-believe. I think that the normative question: are these commitments epistemically reasonable? is a central one for a commitment-theoretic way of understanding what “public information” or “common belief” consists in. Let me first explore and expose a blind alley. Lewis himself extracts descriptive predictions about belief from his account of iterated reasons for belief in situations of common knowledge, he adds assumptions about all people being rational, i.e. believing what they have reason to believe. He further adds assumptions about us having reason to believe each other to be rational in this sense, and so on. Such principles of iterated rationality are thought by Lewis to only be true for the first few iterations. They generate, for a few iterations, that we believe that q, believe that we believe q, believe that we believe that we believe q, etc. And in parallel, we can show that we have reason to believe each of these propositions about iterated belief—so all the belief we in fact have will be justified. But while (per Lewis) these predictions are by designed supposed to run out after a few iterations, we need to show how everything we are committed to believing we have reason to believe. One might try to parallel Lewis’s strategy here, adding the premise that people are committed to believing what they have reason to believe. One might hope that such bridge principles will be true “all the way up”, and so allow us to derive the analogue of Lewis’s result for all levels of iteration. But this is where we hit the end of this particular road. If someone (perhaps irrationally) fails to believe that the ball is red despite having reason to believe that the ball is red, the ball being red need not follow from what they believe. So we do not have the principles we’d need to to convert Lewis’s purely normative result into one that speaks to the epistemic puzzle about commitment to believe. Now for a positive proposal. To address the epistemic puzzle, I propose a final reinterpretation of the primitives of Lewis’s account. This time, we split the interpretation of indication and of the B-operator. The B-operator will express commitment-to-believe, just as above. But the indicates-for-x relation does not simply express counterfactual commitment, but has in addition a normative aspect. p will indicate q, for x, iff (i) were x to be committed to believing p, then x would be committed to believing q; and (ii) if x had reason to believe p, then x would have reason to believe q. Before we turn to evaluating the soundness of the argument, consider the significance of the consequences of this argument under the new mixed-split reinterpretation. First, we would have infinite iterated commitment-to-believe, just as in the pure descriptive interpretation (that’s fixed by our interpretation of B). But second, for each level of iteration of mutual commitment-to-believe, we can derive that A indicates (for each x) that proposition. But indication on this reading, unlike on the pure descriptive reading,  has normative implications. It says that when the group members have reason to believe that A, they will have reason to believe that all are committed to believe that all are committed… that all are committed to believe q. So on the split reading of the argument, we derive both infinite iterated commitment to believe, and also that group members have reason to believe that propositions that they are are committed to believe. What I’ve argued is that if the pure-descriptive version of Lewis’s argument is sound, and the pure-normative version of Lewis’s argument is sound, then the mixed-split-interpretation version of Lewis’s argument is sound. The conclusion of the argument under this mixed reading scratches an epistemological itch that neither the pure descriptive reading nor the pure epistemological reading (even supplemented with assumptions of iterated rationality) could help with. That matters to me, in particular, because I’m interested in iterated commitment-to-believe as an analysis of public information/common belief, and I take the epistemological challenge as a serious one. At first, I thought that I could wheel in the Lewis-Cubitt-Sugden proof to address my concerns. But I had two worries. One was about the soundness of that proof, given its reliance on the dubious premise (A6). That worry was expressed two posts ago, and addressed in the last post. But another was the worry raised in the current post: that on the intended reading, the Lewis-Cubitt-Sugden proof really doesn’t show that we have reason to believe all those propositions we are committed to, if we have common belief in the commitment-theoretic sense. But—I hope–all is now well, since the split reinterpretation of the fixed up proof delivers everything I need: both infinite iterated commitment to believe, and the reasonability of believing each of those propositions we are committed to believing. An alternative derivation of common knowledge In the last post I set out a puzzling passage from Lewis. That was the first part of his account of “common knowledge”. If we could get over the sticking point I highlighted, we’d find the rest of the argument would show us how individuals confronted with a special kind of state of affairs A—a “basis for common knowledge that Z”—would end up having reason to believe that Z, reason to believe that all others have reason to believe Z, reason to believe that all others have reason to believe that all others have reason to believe Z, and so on for ever. My worry about Lewis in the last post was also a worry about the plausibility of a principle that Cubitt and Sugden appeal to in reconstructing his argument. What I want to do now is give a slight tweak to their premises and argument, in a way that avoids the problem I had. Recall the idea was that we had some kind of “manifest event” A—in Lewis’s original example, a conversation where one of us promises the other they will return (Z). The explicit premises Lewis cited are: 1. You and I have reason to believe that A holds. 2. A indicates to both of us that you and I have reason to believe that A holds. 3. A indicates to both of us that you will return. I will use the following additional premise: • A indicates to me that we have similar standards and background beliefs. On Lewis’s understanding of indication, this says that if I had reason to believe that A obtained, I’d have reason to believe we are similar in the way described. It is compatible with my not having any reason to believe, antecedent to encountering A, that we are similar in this way. On the other hand, if I have antecedent and resilient reason to believe that we are similar in the relevant respects, the counterfactual will be true That the reason to believe needs to be resilient is an important caveat. It’s only when the reasons to believe we’re similar are not undercut by coming to have reason to believe that A that my version of the premise will be true. So Lewis’s premise can be true in some cases mine is not. But mine is also true in some cases his is not, and that seems to me a particular welcome feature, since these include cases that are paradigms of common knowledge. Assume there is a secret handshake known only to members of our secret society. The handshake indicates membership of the society, and allegiance to its defining goal: promotion of the growing of large marrows. But the secret handshake is secret, so this indication obtains only for members of the society. Once we share the handshake, and intuitively, establish common knowledge that each of us intends to to promote the growing of large marrows. But we lacked reason to believe that we were similar in the right way independent of the handshake itself. Covering these extra paradigmatic cases is an attractive feature. And I’ve explained that we can also cite it in the other paradigmatic cases, the cases where our belief in similarity is independent of A, so this looks to me strictly preferable to Lewis’s premise. (I should note one general worry however. Lewis’s official definition of indication wasn’t just that when one had reason to believe the antecedent, one would have reason to believe the consequent. It is that one would thereby have reason to believe the consequent. You might read into that a requirement that the reason one has to believe the antecedent has to be a reason you have for believing the consequent. That would mean that in cases where one coming to have reason to believe that A was irrelevant to your reason to believe that you were similar, we did not have an indication relation. I’m proposing to simply strike out the “thereby” in Lewis’s definition to avoid this complication–if that leads to trouble, at least we’ll be able to understand better why he stuck it in). I claim that my premise allows us to argue for the following, for various relevant p: • If A indicates to me that p then A indicates to me that (A indicates to you that p). The case for this is as follows. We start by appealing to the inference pattern that I labelled I in the previous post, and that Lewis officially declared his starting point: 1. A indicates to x that p 2. x and y share similar standards and background beliefs. 3. Conclusion: A indicates to y that p. I claim this supports the following derived pattern: 1. A indicates to x that A indicates to x that p 2. A indicates to x that x and y share similar standards and background beliefs 3. Conclusion: A indicates to x that A indicates to y that p. This seems good to me, in light of the transparent goodness of I. A bit of rearrangement gives the following version: 1. A indicates to x that x and y share similar standards and background beliefs 2. Conclusion: if A indicates to x that A indicates to x that p, then A indicates to x that A indicates to y that p. The premise here is my first bullet point. Given Lewis’s counterfactual gloss on indication, the conclusion is equivalent to my second bullet point, as required. To elaborate on the equivalence: “If x had reason to believe that A, then if x had reason to believe A, then…” is equivalent to “If x had reason to believe that A, then…”, just because in standard logics of counterfactuals “if were p, then if were p, then…” is generally equivalent to “if were p, then…”. In the present context, that means that “A indicates to x that A indicates to x that…” is equivalent to “A indicates to x that”. [edit: wait… that last move doesn’t quite work does it? “A indicates that (A indicates B)” translates to: “If x had reason to believe A, then x would have reason to believe (if A had reason to believe A, then A would have reason to believe B)”. It’s not just the counterfactual move, because there’s an extra operator running interference. Still, it’s what I need for the proof…. But still, the counterfactual gloss may allow the transition I need. For consider the closest worlds where x has reason to believe that B. And let’s stick in a transparency assumption: that in any situation x has reason to believe p, x has reason to believe x has reason to believe p. Given transparency, at these closest worlds, x has reason to believe that she has reason to believe A, ie reason to believe that the closest world where she has reason to believe A is the world in which she stands. But in the world in which she stands Transparency entails she has reason to believe she has reason to believe p. So she has reason to believe the relevant counterfactual is true, in those worlds. And that means we have derived the double iteration of indication from the single iteration. Essentially, suitable instances of transparency for “reason to believe” gets us analogous instances of transparency for “indication”.  ] The final thing I want to put on the table is the good inference pattern VI from the previous post. That is: 1. A indicates that [y has reason to believe that A holds] to x. 2. A indicates that [A indicates Z to y] to x. 3. Conclusion: A indicates that [y has reason to believe that Z] to x. This looked good, recall, because the embedded contents are just an instance of modus ponens when you unpack them, and it’s pretty plausible in worlds where x has reason to believe the premises of modus ponens, then x has reason to believe the conclusion—which is what the above ends up saying. (As you’ll see, I’ll actually use a form of this in which the embedded clauses are generalized, but I think that doesn’t make a difference). This is enough to run a variant of the Lewis argument. Let me give it to you in a formalized version. I use $\Rightarrow_x$ for the “indicates-to-x” relation, and $B_x$ for “x has reason to believe”.  I’ll state it not just for the two-person case, but more generally, with quantifiers x and y ranging over members of some group, and a,b,c ranging over propositions. Then we have: 1. $\forall x (A\Rightarrow_x \forall yB_y(A))$ (the analogue of Lewis’s second premise, above). 2. $\forall x (A \Rightarrow_x Z)$ (the analogue of Lewis’s third premise, above) 3. $\forall x ([A \Rightarrow_x Z]\supset [A \Rightarrow_x(\forall y[A\Rightarrow_y Z])]$ (an instance of the formalization of the bullet point I argued for above). 4. $\forall x [A \Rightarrow_x(\forall y[A\Rightarrow_y Z])]$ (by logic, from 2,3). 5. $\forall x [A \Rightarrow_x(\forall yB_y (Z))]$ (by inference pattern VI, from 1,4). Line 5 tells us that not only does A indicate to each of us that Z (as Lewis’s premise 2 assures us) but that A indicates to each of us that each has reason to believe Z. The argument now loops, by further instances of the bullet assumption and inference pattern VI, showing that A indicates to each of us that each has reason to believe that each has reason to believe that Z, and so on for arbitrary iterations of reason-to-believe. As in Lewis’s original presentation, the analogue of premise 1 allows us to detach the consequent of each of these indication relations, so that in situations where we all have reason to believe that A holds, we have arbitrary iterations of reason to believe Z. (To quickly report the process by which I was led to the above. I was playing around with versions of Cubitt and Sugden’s formalization of Lewis, which as mentioned used the inference pattern that I objected to in the last post. Inference pattern VI is what looked to me the good inference pattern in the vicinity—the thing that they label A6, and the bullet pointed principle is essentially the adjustment you have to make to another premise they attribute to Lewis—one they label C4—in order to make their proof go through with VI rather than the problematic A6. From that point, it’s simply a matter of figuring out whether the needed change is a motivated or defensible one). So I commend the above as a decent way of fixing up an obscure corner of Lewis’s argument. To loop around to the beginning, the passage I was finding obscure in Lewis, had him endorsing the following argument (II): 1. A indicates that [y has reason to believe that A holds] to x. 2. A indicates that Z to x. 3. x has reason to believe that x and y share standards/background information. 4. Conclusion: A indicates that [y has reason to believe that Z] to x. The key change is to replace II.3 with the cousin of it introduced above: that A indicates to x that x and y share standards/background information. Once we’ve done this, I think the inference form is indeed good. Part of the case for this is indeed the argument that Lewis cites, labelled I above. But as we’ve seen, there’s seems to be quite a lot more going on under the hood. Lewis on common knowledge The reading for today is chapter II, section 1 of Convention. In it, Lewis discusses a state of affairs, A, “you and I have met, we have been talking together, you must leave before our business is done; so you say you will return to the same place tomorrow.” Lewis notes that this generates expectations and higher order expectations: “I expect you to return. You will expect me to expect you to return. I will expect you to expect me to expect you to return. Perhaps there will be one or two orders more”. His task is to explain how these expectations are generated. We’ll just be looking at the first few steps of his famous proposal, which are framed in terms of reasons to believe. It has three premises: 1. You and I have reason to believe that A holds. 2. A indicates to both of us that you and I have reason to believe that A holds. 3. A indicates to both of us that you will return. “Indication” is defined counterfactually: A indicates to someone x that Z iff if x had reason to believe that A held, x would thereby have reason to believe that Z. Lewis notes that indication depends on “background information and inductive standards” of the agent in question. The appeal to inductive standards might suggest a somewhat subjective take on epistemic reasons is in play here, but even if you think epistemic reasons are pretty objective, the presence or absence of a belief in defeaters to inductive generalizations, for example, will matter to whether that counterfactuals of this form are true. (I’m not sure about the significance of the “thereby” in this statement. Maybe Lewis is saying that the reason for believing that A held would also be the reason for believing that Z is the case. I’m also not sure whether or not this matters). There follows a passage that I have difficulty following. Here it is in full. “Consider that if A indicates something to x, and if y shares x’s inductive standard and background information, then A must indicate the same thing to y. Therefore, if A indicates to x that y has reason to believe that A holds, and if A indicates to x that Z, and if x has reason to believe that y shares x’s information, then A indicates to x that y has reason to believe that Z (this reason being y’s reason to believe that A holds)”. In this passage, we first get the following inference pattern (I): 1. A indicates p to x. 2. x and y share standard/background information. 3. Conclusion: A indicates p to y. That seems fair enough. Following the “therefore”, we get the following inference (II): 1. A indicates that [y has reason to believe that A holds] to x. 2. A indicates that Z to x. 3. x has reason to believe that x and y share standards/background information. 4. Conclusion: A indicates that [y has reason to believe that Z] to x. This is a complex piece of reasoning, and it’s relation to the earlier inference pattern is not at all clear. For example, in the first inference pattern, facts about shared standards are mentioned. In the second, what we have to work with is x having reason to believe that there are shared standards. This prevents us directly applying argument I to derive argument II. Some work needs to be done to connect these two. Given the validity of the first pattern you can plausibly argue for the goodstanding of the following derived pattern (III): 1. x hass reason to believe that A indicates p to x. 2. x has reason to believe that x and y share standard/background information. 3. Conclusion: x has reason to believe that A indicates p to y. Now III.2 is also II.3, so we now hope to connect the arguments. But the other two premises are not facts about what x has reason to believe, as they would have to be in order to apply III directly. Rather, they are facts about what A indicates to x. We need to start attributing enthymetic premises. Perhaps there is a transparency assumption, namely that IV is valid: 1. A indicates p to x. 2. x has reason to believe that A indicates p to x. IV allows us to get from II.2 to the claim that x has reason to believe that A indicates Z to x. And you can then use II.3 to supply the remaining premise of inference pattern III. What we get is the following: x has reason believe that A indicates Z to y. And so we could argue that argument II was a good one, if the following inference pattern was good  (V): 1. A indicates that [y has reason to believe that A holds] to x. 2. x has reason to believe that [A indicates Z to y]. 3. Conclusion: A indicates that [y has reason to believe that Z] to x. The conclusion of V is the same as that of II. V.1 is simply II.1, and we have seen that III and IV get us from II.2 and II.3 to V.2. So the validity of V suffices for the validity of II. So what do we think about inference pattern V? V is, in fact, an inference pattern that Cubitt and Sugden, in their very nice analysis of Lewis’s argument, take as one of the basic assumptions (they give it as a material conditional, and label it A6). It seems really dubious to me however. The reason that it looks superficially promising is because the three embedded claims  constitute a valid argument, and the embedding contexts looks like we’re reporting the validity of this argument “from x’s perspective”. The embedded argument is simply the following: If y has reason to believe that A holds, and A indicates Z to y, then y will have reason to believe that Z holds. Given the way Lewis defined indication in terms of the counterfactual condition, this is just a modus ponens inference. Now this would be exactly the right diagnosis if we were working not with V but with VI: 1. A indicates that [y has reason to believe that A holds] to x. 2. A indicates that [A indicates Z to y] to x. 3. Conclusion: A indicates that [y has reason to believe that Z] to x. VI really does look good, because each premise tells us that the respective embedded clause is true in all the closest worlds where x has reason to believe that A holds. And since the final embedded clause follows logically from the first two, it must hold in all the closest worlds where x has reason to believe A. And that’s what the conclusion of VI tells us is the case. But this is irrelevant to V. V.2 doesn’t concern what x has reason to believe some counterfactual worlds, but what they have reason to believe in the actual world. And for all we are told, in the closest worlds where x has reason to believe that A is the case, they may not have reason to believe some of the things they actually have reason to believe. That is: A might be the sort of thing that defeats x’s reason to believe that A indicates Z to y. So this way of explaining what’s going along fails. So I’m not sure how best to think about Lewis’s move here. The transition he endorses between I and II really isn’t transparently good. A natural line of thought leads us to think of him resting on Cubitt and Sugden’s reconstructed premise A6, V above. But that really doesn’t look like something we should be relying on. Is there some other way to understand what he has in mind here?
2021-08-01 03:22:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 113, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5630481839179993, "perplexity": 1192.79781791445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00595.warc.gz"}
https://iric-gui-user-manual.readthedocs.io/en/latest/06/10_measured_data_text.html
# Measured data text file (*.csv)¶ Measured data text file contains the positions of measured data and the measured values (scalar values and vector values). List 18 shows an example of measured data text file. List 18 Example of measured data text file X,Y,Elevation,VecX,VecY 100,120,5.12,3,4 100,140,7.2,1,-3.2 0,120,8.12,-2,1 0,140,9.2,4,-6.2 Measured data text file must have a header line. The header line defines the data contained in each column. The header line has to stick to the following rules: • First column must be “X” (the x-coordinate of the measured point), and the second column must be “Y” (the y-coordinate of the measured point). The following column names are arbitrary. • Columns names must consist of only alphabets and numbers. • When there are column names that end with “X” and “Y” (for example, “VecX”, “VecY”), those columns are regarded as X component and Y component of a vector value. On the second line and the following lines, the coordinates of measured points and measured data are stored. The values have to be real numbers.
2021-12-08 03:04:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7158825993537903, "perplexity": 1148.9758091564022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00280.warc.gz"}
https://a01sa01to.com/en/opendata/covid19-ibaraki/vaccination/
# Vaccination status Open data compiled by the national government has been modified to the format used on the "COVID-19 Taskforce Website in Ibaraki". Last Update 2022/09/25 18:35 (UTC+09:00) Create Date 2021/06/03 21:00 (UTC+09:00) File name 080004_ibaraki_covid19_vaccination.json File size 70.90 KiB Data format JSON License CC BY-SA 4.0 Indication of the source is required, but commercial use and modification is permitted. If you modify or process the work, please distribute it under the same "CC BY-SA 4.0" license as this data. Please include the URL of this page as the source. If you have any questions or requests, please send them to info@a01sa01to.com . ## Points to note when using the system • This data follows the format of "COVID-19 Taskforce Website in Ibaraki" and is available as open data on the status of infection in Ibaraki Prefecture. • The open data here is based on reports to the Vaccination Record System (VRS), which are compiled by the national government by prefecture of residence and published by the Digital Agency. See also Open Data by Digital Agency . • While every effort has been made to ensure the accuracy of the information in this data, I am not responsible for any actions taken by users using the information on this site. You can download it from the button below. You can also download it from the following URL using various libraries such as cURL. The query used in the API is as follows. Replace [YOUR QUERY HERE] with the following, escaping newline characters as appropriate. query { covid19_ibaraki { vaccination( [PAGINATION INFO] ) { dataset { date # Vaccination Date: String! government_code # National Local Government Code: String! prefecture # Name of prefecture: String! city # Name of city, town or village: String total # Total number of vaccinated persons: Int! first # Number of people vaccinated for the 1st time: Int! second # Number of people vaccinated for the 2nd time: Int! third # Number of people vaccinated for the 3rd time: Int! fourth # Number of people vaccinated for the 4th time: Int! } pageinfo { hasPreviousPage # Whether or not there is a previous page: Boolean! hasNextPage # Whether there is a next page or not: Boolean! startCursor # The first Cursor on the current page: String! endCursor # The last Cursor on the current page: String! } last_update # Last Update: String! } } }
2022-09-25 20:57:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22013428807258606, "perplexity": 4868.299231866562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00089.warc.gz"}
https://math.stackexchange.com/questions/936146/sum-of-the-infinite-series-frac16-frac56-cdot-12-frac5-cdot86-cdot1
# Sum of the infinite series $\frac16+\frac{5}{6\cdot 12} + \frac{5\cdot8}{6\cdot12\cdot18} + \dots$ We can find the sum of infinite geometric series but I am stuck on this problem. Find the sum of the following infinite series: $$\frac16+\frac{5}{6\cdot 12} + \frac{5\cdot8}{6\cdot12\cdot18} + \frac{5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+\dots$$ Using binomial expansion, we have: $(1-x)^{-2/3} = 1 + \dfrac{\frac{2}{3}}{1!}x + \dfrac{\frac{2}{3} \cdot \frac{5}{3}}{2!} x^2 + \dfrac{\frac{2}{3} \cdot \frac{5}{3} \cdot \frac{8}{3}}{3!}x^3 + \dfrac{\frac{2}{3} \cdot \frac{5}{3} \cdot \frac{8}{3} \cdot \frac{11}{3}}{3!}x^4 + \cdots$ $(1-x)^{-2/3} = 1 + \dfrac{2}{3}x + \dfrac{2 \cdot 5}{3 \cdot 6} + \dfrac{2 \cdot 5 \cdot 8}{3 \cdot 6 \cdot 9}x^3 + \dfrac{2 \cdot 5 \cdot 8 \cdot 11}{3 \cdot 6 \cdot 9 \cdot 12}x^4 + \cdots$ Plug in $x = \dfrac{1}{2}$ to get: $(1-\frac{1}{2})^{-2/3} = 1 + \dfrac{2}{3}\cdot\dfrac{1}{2} + \dfrac{2 \cdot 5}{3 \cdot 6}\cdot\dfrac{1}{2^2} + \dfrac{2 \cdot 5 \cdot 8}{3 \cdot 6 \cdot 9}\cdot\dfrac{1}{2^3} + \dfrac{2 \cdot 5 \cdot 8 \cdot 11}{3 \cdot 6 \cdot 9 \cdot 12}\cdot\dfrac{1}{2^4} + \cdots$ $2^{2/3} = 1 + \dfrac{2}{6} + \dfrac{2 \cdot 5}{6 \cdot 12}+ \dfrac{2 \cdot 5 \cdot 8}{6 \cdot 12 \cdot 18}+ \dfrac{2 \cdot 5 \cdot 8 \cdot 11}{6 \cdot 12 \cdot 18 \cdot 24} + \cdots$ Finally, subtract $1$ and divide both sides by $2$ to get: $\dfrac{2^{2/3} - 1}{2} = \dfrac{1}{6} + \dfrac{5}{6 \cdot 12}+ \dfrac{ 5 \cdot 8}{6 \cdot 12 \cdot 18}+ \dfrac{ 5 \cdot 8 \cdot 11}{6 \cdot 12 \cdot 18 \cdot 24} + \cdots$ • How have you determined the exponent to be $$-\frac23?$$ – lab bhattacharjee Sep 18 '14 at 8:21 • I played around with different values until I found one that worked. I didn't really use any systematic method. – JimmyK4542 Sep 19 '14 at 1:20 $$\frac16+\frac{5}{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\frac{5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+\dots=$$ $$=\frac12\cdot\bigg[\frac26+\frac{2\cdot5}{6\cdot12}+\frac{2\cdot5\cdot8}{6\cdot12\cdot18}+\frac{2\cdot5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+\dots\bigg]=$$ $$=\frac12\cdot\bigg[\frac{(3-1)}{(6\cdot1)}+\frac{(3-1)\cdot(6-1)}{(6\cdot1)\cdot(6\cdot2)}+\frac{(3-1)\cdot(6-1)\cdot(9-1)}{(6\cdot1)\cdot(6\cdot2)\cdot(6\cdot3)}+\dots\bigg]=$$ $$=\frac12\cdot\bigg[\frac{(3\cdot1-1)}{(6\cdot1)}+\frac{(3\cdot1-1)\cdot(3\cdot2-1)}{(6\cdot1)\cdot(6\cdot2)}+\frac{(3\cdot1-1)\cdot(3\cdot2-1)\cdot(3\cdot3-1)}{(6\cdot1)\cdot(6\cdot2)\cdot(6\cdot3)}+\dots\bigg]=$$ $$=\frac12\cdot\sum_{n=1}^\infty\frac{\displaystyle\prod_{k=1}^n(3k-1)}{6^n\cdot n!}=\frac12\cdot\sum_{n=1}^\infty\frac{\displaystyle\prod_{k=1}^n\bigg(k-\frac13\bigg)}{2^n\cdot n!}=\frac12\cdot\sum_{n=1}^\infty\frac{\displaystyle\bigg(-\frac13\bigg){\Large!}\cdot\prod_{k=1}^n\bigg(k-\frac13\bigg)}{\bigg(-\dfrac13\bigg){\Large!}\cdot2^n\cdot n!}=$$ $$=\frac12\cdot\sum_{n=1}^\infty\frac{\displaystyle\bigg(n-\frac13\bigg){\Large!}}{\bigg(-\dfrac13\bigg){\Large!}\cdot2^n\cdot n!}=\frac12\cdot\sum_{n=1}^\infty\frac{\displaystyle\bigg(n-\frac13\bigg){\Large!}}{\bigg(-\dfrac13\bigg){\Large!}\cdot n!}\cdot\bigg(\frac12\bigg)^n=\frac12\cdot\sum_{n=1}^\infty{n-\frac13\choose n}\bigg(\frac12\bigg)^n$$ $$=\frac12\cdot\sum_{n=1}^\infty{\frac13-1\choose n}\bigg(-\frac12\bigg)^n=\frac12\cdot\bigg[-1+\sum_{n={\color{red}0}}^\infty{-\frac23\choose n}\bigg(-\frac12\bigg)^n\bigg]=$$ $$=\frac12\cdot\bigg[-1+\bigg(1-\frac12\bigg)^{^{-\tfrac23}}\bigg]=-\frac12+\frac1{\sqrt[3]2}.$$ • We have used the fact that $\displaystyle{n-a\choose n}=(-1)^n{a-1\choose n}$. – Lucian Sep 18 '14 at 8:22 • It goes on without saying that this is a binomial series. – Lucian Sep 18 '14 at 8:35 \begin{align} y &= \sum_{k=1} \frac{\prod_{j=2}^{k} 3j - 1} {\prod_{j=1}^k 6j} \tag{A} \\ &= \left(-\frac 12\right) + \sum_{k=0} \frac{\left(-\frac 12\right)\prod_{j=0}^{k} 3j - 1} {\prod_{j=1}^k 6j} \tag{B} \\ &= \left(-\frac 12\right) + \sum_{k=0} \frac{\left(-\frac 13\right)^{k+1}\left(-\frac 12\right)\prod_{j=0}^k 1/3 - j} {6^k k!}\tag{C} \\ &= \left(-\frac 12\right) + \left(-\frac 13\right)\left(-\frac 12\right)\sum_{k=0} \frac{\prod_{j=0}^k 1/3 - j} {k!}\left(-\frac 1 {3\cdot 6}\right)^k\tag{D} \\ &= \left(-\frac 12\right) + \left(\frac 16\right) \sum_{k=0} \frac{\prod_{j=0}^k 1/3 - j} {k!}\left(-\frac 1 {18}\right)^k\tag{E} \end{align} Generalized binomial theorem is: $$(1 + x)^n = \sum_{k=0} \frac{ \prod_{j=0}^k n - j } {k!} x^k\tag{F}$$ So if I don't have a type then $$y = \left(-\frac 12\right) + \left(\frac 16\right)\left(1 + -\frac{1}{18}\right)^{1/3} = \frac{1}{18}\sqrt[3]{\frac{17}{2}} - \frac 12\tag{G}$$
2019-11-19 01:01:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000096559524536, "perplexity": 1927.8554561308517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00303.warc.gz"}
http://mathoverflow.net/feeds/question/54708
Period rings for Galois representations - MathOverflow most recent 30 from http://mathoverflow.net 2013-06-18T23:45:52Z http://mathoverflow.net/feeds/question/54708 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/54708/period-rings-for-galois-representations Period rings for Galois representations A M 2011-02-07T23:06:48Z 2012-02-04T22:30:12Z <p>I have some questions concerning period rings for Galois representations.</p> <p>First, consider the case of $p$-adic representations of the absolute Galois group $G_K$, where $K$ denote a $p$-adic field. Among all these representations, we can distinguish some of them, namely those which are Hodge-Tate, de Rham, semistable or crystalline. This is due to Fontaine who constructed some period rings : $B_{HT}$, $B_{dR}$, $B_{st}$ and $B_{crys}$.</p> <p>Constructing the ring $B_{HT}$ is not very difficult and it is quite natural. </p> <p>Does someone have any idea where $B_{dR}$ comes from ?</p> <p>For $B_{crys}$, I guess it was constructed to detect the good reduction of (proper, smooth ?) varieties. I don't know anything of crystalline cohomology but does someone have a simple explanation of the need to use the power divided enveloppe of the Witt vectors of the perfectisation (?) of $\mathcal{O}_{\mathbb{C}_p}$ ?</p> <p>As for the ring $B_{st}$, once you have $B_{crys}$, I think the idea of Fontaine was to add a period from Tate's elliptic curve, which have bad semistable reduction. Does someone knows if Fontaine was aware that adding just this period will be sufficient or was it a good surprise ?</p> <p>Finally, why there is no period rings for global $p$-adic Galois representations ?</p> http://mathoverflow.net/questions/54708/period-rings-for-galois-representations/54790#54790 Answer by Keerthi Madapusi Pera for Period rings for Galois representations Keerthi Madapusi Pera 2011-02-08T16:50:38Z 2011-02-08T16:50:38Z <p>Beilinson has recently discovered a new proof of the de Rham comparison isomorphism. You can find a write-up here: <a href="http://arxiv.org/abs/1102.1294" rel="nofollow">http://arxiv.org/abs/1102.1294</a>. Here, he shows that $B_{dR}$ naturally shows up when you consider the p-adic completion (in a suitable sense) of the derived de Rham cohomology of $\mathcal{O}_{\bar{K}}$ over $\mathcal{O}_K$. </p> <p>Also, $A_{cris}$ naturally shows up as (more or less) the global sections of the structure sheaf over the crystalline site for $\mathcal{O}_{\bar{K}}$ over $W(k)$ ($k$ is the residue field of $K$). There is a very nice explanation of this in R. S. Lodh's thesis: <a href="http://www.math.utah.edu/~remi/research/thesispt1formatted.pdf" rel="nofollow">http://www.math.utah.edu/~remi/research/thesispt1formatted.pdf</a>.</p> http://mathoverflow.net/questions/54708/period-rings-for-galois-representations/87481#87481 Answer by SGP for Period rings for Galois representations SGP 2012-02-03T21:39:48Z 2012-02-04T22:30:12Z <p>Beilinson's results (two papers, one mentioned by Keerthi and <a href="http://arxiv.org/abs/1111.3316" rel="nofollow">the other here</a>) have been generalised by Bhargav Bhatt; <a href="http://math.uchicago.edu/~drinfeld/p-adic_periods/Bhatt-p-adic_derived_de_Rham.pdf" rel="nofollow">his paper</a> also introduces a global period ring $A_{ddR}$ for global Galois representations!! The ring $A_{ddR}$ is a filtered $\hat{Z}$-algebra equipped with a Gal$(\bar{Q}/{Q})$-action. </p> <p>A (one of the many) beautiful result in this paper is the following theorem:</p> <p>Let $X$ be a semistable variety over $Q$. Then the log de Rham cohomology of a semistable model for $X$ is isomorphic to the $\hat{Z}$-etale cohomology of $X_{\bar{Q}}$, once both sides are base changed to a localization of $A_{ddR}$ (whle preserving all natural structures on either side).</p>
2013-06-18 23:45:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109312295913696, "perplexity": 1036.0271729662422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436824/warc/CC-MAIN-20130516123036-00058-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.qb365.in/materials/stateboard/12th-standard-maths-inverse-trigonometric-functions-english-medium-free-online-test-one-mark-questions-with-answer-key-2020-2021-6158.html
12th Standard Maths Inverse Trigonometric Functions English Medium Free Online Test One Mark Questions with Answer Key 2020 - 2021 12th Standard Reg.No. : • • • • • • Maths Time : 00:10:00 Hrs Total Marks : 10 10 x 1 = 10 1. If sin-1 x+sin-1 y=$\frac{2\pi}{3};$then cos-1x+cos-1 y is equal to (a) $\frac{2\pi}{3}$ (b) $\frac{\pi}{3}$ (c) $\frac{\pi}{6}$ (d) $\pi$ 2. If sin-1 x+sin-1 y+sin-1 z=$\frac{3\pi}{2}$, the value of x2017+y2018+z2019$-\frac { 9 }{ { x }^{ 101 }+{ y }^{ 101 }+{ z }^{ 101 } }$is (a) 0 (b) 1 (c) 2 (d) 3 3. The domain of the function defined by f(x)=sin−1$\sqrt{x-1}$ is (a) [1,2] (b) [-1,1] (c) [0,1] (d) [-1,0] 4. ${ tan }^{ -1 }\left( \frac { 1 }{ 4 } \right) +{ tan }^{ -1 }\left( \frac { 2 }{ 3 } \right)$is equal to (a) $\frac { 1 }{ 2 } { cos }^{ -1 }\left( \frac { 3 }{ 5 } \right)$ (b) $\frac { 1 }{ 2 } { sin }^{ -1 }\left( \frac { 3 }{ 5 } \right)$ (c) $\frac { 1 }{ 2 } {tan }^{ -1 }\left( \frac { 3 }{ 5 } \right)$ (d) ${ tan}^{ -1 }\left( \frac { 1}{ 2 } \right)$ 5. sin-1(2cos2x-1)+cos-1(1-2sin2x)= (a) $\frac{\pi}{2}$ (b) $\frac{\pi}{3}$ (c) $\frac{\pi}{4}$ (d) $\frac{\pi}{6}$ 6. sin(tan-1x), |x|<1 ia equal to (a) $\frac{x}{\sqrt{1-x^2}}$ (b) $\frac{1}{\sqrt{1-x^2}}$ (c) $\frac{1}{\sqrt{1+x^2}}$ (d) $\frac{x}{\sqrt{1+x^2}}$ 7. If ${ sin }^{ -1 }x-cos^{ -1 }x=\cfrac { \pi }{ 6 }$ then (a) $\cfrac { 1 }{ 2 }$ (b) $\cfrac { \sqrt { 3 } }{ 2 }$ (c) $\cfrac { -1 }{ 2 }$ (d) none of these 8. ·If $\alpha ={ tan }^{ -1 }\left( \cfrac { \sqrt { 3 } }{ 2y-x } \right) ,\beta ={ tan }^{ -1 }\left( \cfrac { 2x-y }{ \sqrt { 3y } } \right)$ then $\alpha -\beta$ (a) $\cfrac { \pi }{ 6 }$ (b) $\cfrac { \pi }{ 3 }$ (c) $\cfrac { \pi }{ 2 }$ (d) $\cfrac { -\pi }{ 3 }$ 9. $sin\left\{ 2{ cos }^{ -1 }\left( \cfrac { -3 }{ 5 } \right) \right\} =$ (a) $\cfrac { 6 }{ 15 }$ (b) $\cfrac { 24 }{ 25 }$ (c) $\cfrac { 4 }{ 5 }$ (d) $\cfrac { -24 }{ 25 }$ 10. In a $\Delta ABC$  if C is a right angle, then  ${ tan }^{ -1 }\left( \cfrac { a }{ b+c } \right) +{ tan }^{ -1 }\left( \cfrac { b }{ c+a } \right) =$ (a) $\cfrac { \pi }{ 3 }$ (b) $\cfrac { \pi }{ 4 }$ (c) $\cfrac { 5\pi }{ 2 }$ (d) $\cfrac { \pi }{ 6 }$
2021-05-12 11:05:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.679557740688324, "perplexity": 9961.088273374653}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989693.19/warc/CC-MAIN-20210512100748-20210512130748-00071.warc.gz"}
https://crypto.stackexchange.com/questions/linked/606
15 questions linked to/from Time Capsule cryptography? 499 views ### Time-based Crypto [duplicate] I'm wondering if there is any way to make crypto based on time, where the concept of time is linear in one-direction. Even a theoretical brainstorm. Example 1: A message which can only be opened/... 14k views ### Is it possible to make time-locked encrytion algorithm? I'm not sure if what I'm asking is even a valid question but here goes. Would it be possible to add a mechanism to an encryption algorithm that would mean it had to be a certain time of the day or a ... 2k views ### Can I encrypt user input in a way I can't decrypt it for a certain period of time? I run a baseball league and would like to do silent auctions for free agents. This would require teams to enter their highest bid and the highest bidder at the end of the auction period would win. ... 2k views ### How to require two keyholders to decrypt a document? I want to create a system to encrypt a document and store it with a 3rd party, but not have the 3rd party be able to decrypt it until some unspecified later date. It seems like the solution would be ... 487 views ### Is there an algorithm or hardware that can sign/verify natural time? PGP/GPG can used to sign text, others use public key to verify them. So one could say, that these cryptographic algorithms deal with space. Are there any algorithms that can deal with time? E.g. I ... 1k views I was looking at Time Capsule cryptography? and came up wth this idea. Question: Is there a way to store a secret such that the creator must update it or the secret will be decrypted and anyone can ... 276 views ### Ways to make a “doomsday” cryptocurrency which becomes untradable As a social experiment (not a money-making scheme), I'm interested in developing a crazy cryptocurrency which, by its very design, will become worthless and untradable after a certain point. Ideally, ... 534 views ### Timelock puzzle improvment I came across this question with this answer about a cryptographic timelock-puzzle that needs approximately 30 years to be solved. There is also an explanation with source code for that puzzle ... 455 views ### Time-locked Puzzles & Dead Man's Switch Intervention I have been researching time-based puzzles. Specifically, computationally expensive algorithms for the purpose of a time-lock. This has lead me to sequential squaring, firstly, as well as some memory-... 438 views ### Possible to generate a one time secret which all nodes on the distributed network can know but cannot pre compute I have a distributed P2P network. Everyday, I want each node to have a secret which is only valid for that day, while each of the other remaining nodes on the network will be able to calculate the ... 167 views ### Create a time capsule using a Word processor I was wondering if there's any option to create sort of a time capsule in a Word document, i.e. not encrypted by password rather than a date. Let's say I send the document today but the receiving side ... 90 views ### Ensuring that an operation takes a relatively specific amount of time, but easily verify the result I want an algorithm of some sort that can ensure that an operation takes a fairly specific amount of time, but proof that this operation was done can be completed relatively inexpensively. For ... 77 views ### Time-based decryption service i am thinking about building a public crypto service, yet at the same time, I am still quite new to crypto and therefore prone to snake-oil inventions I guess. My plan is to build a website which ... Suppose you have a message $m$ and you want to design a protocol that produces an encrypted message $c$ such that it cannot be decrypted before a certain time $t$ has passed from the encryption. How ...
2020-08-14 06:18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6224605441093445, "perplexity": 1104.8926598952585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739177.25/warc/CC-MAIN-20200814040920-20200814070920-00334.warc.gz"}
https://math.stackexchange.com/questions/2498571/how-do-i-solve-xy-x2y2/2498603
# How do I solve $xy'= x^2+y^2$? I have: $$xy'= x^2+y^2$$ I tried to separate variables but it did not work, checked if it could be homogeneous equation but it is not (obiviously), all my transformations did not give me linear equation, so I have no clue how to approach it now :( • According to Mathematica, the simplest solution is in terms of Bessel functions, I'm afraid. Oct 31, 2017 at 18:50 ## 1 Answer It is a Riccati equation. Let $v=\frac y x$. Then $v$ satisfies the equation $$v' = v^2 -\frac v x +1$$ Substituting $v = \frac{-u'}{u}$, we have that $u$ satisfies: $$u'' + \frac{u'}{x} +u=0$$ By solving this, which is a Bessel equation though, we will then have that $y = \frac{-u'}{ux}$ satisfies the initial equation. The Bessel equation $x^2u'' +xu'+x^2u=0$, has as solutions the Bessel functions.
2022-07-07 17:13:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600585103034973, "perplexity": 183.45665687393756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00219.warc.gz"}
http://www.cnblogs.com/qldabiaoge/p/9058416.html
poj~2528 Mayor's posters Mayor's posters Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 73869 Accepted: 21303 Description The citizens of Bytetown, AB, could not stand that the candidates in the mayoral election campaign have been placing their electoral posters at all places at their whim. The city council has finally decided to build an electoral wall for placing the posters and introduce the following rules: • Every candidate can place exactly one poster on the wall. • All posters are of the same height equal to the height of the wall; the width of a poster can be any integer number of bytes (byte is the unit of length in Bytetown). • The wall is divided into segments and the width of each segment is one byte. • Each poster must completely cover a contiguous number of wall segments. They have built a wall 10000000 bytes long (such that there is enough place for all candidates). When the electoral campaign was restarted, the candidates were placing their posters on the wall and their posters differed widely in width. Moreover, the candidates started placing their posters on wall segments already occupied by other posters. Everyone in Bytetown was curious whose posters will be visible (entirely or in part) on the last day before elections. Your task is to find the number of visible posters when all the posters are placed given the information about posters' size, their place and order of placement on the electoral wall. Input The first line of input contains a number c giving the number of cases that follow. The first line of data for a single case contains number 1 <= n <= 10000. The subsequent n lines describe the posters in the order in which they were placed. The i-th line among the n lines contains two integer numbers li and ri which are the number of the wall segment occupied by the left end and the right end of the i-th poster, respectively. We know that for each 1 <= i <= n, 1 <= li <= ri <= 10000000. After the i-th poster is placed, it entirely covers all wall segments numbered li, li+1 ,... , ri. Output For each input data set print the number of visible posters after all the posters are placed. The picture below illustrates the case of the sample input. Sample Input 1 5 1 4 2 6 8 10 3 4 7 10 Sample Output 4这题注意要离散化就OK了 ,线段树离散化 ,但是要注意一个坑点 if (sum[i] - sum[i - 1] > 1) sum[m++] = sum[i - 1] + 1;要加一个点,避免区间不正确覆盖。 1 #include <cstdio> 2 #include <cstring> 3 #include <algorithm> 4 using namespace std; 5 const int maxn = 2e4 + 10; 6 int tree[maxn << 4], a[maxn], b[maxn]; 7 int sum[3 * maxn], vis[3 * maxn], ans; 8 void init() { 9 memset(tree, -1, sizeof(tree)); 10 memset(vis, 0, sizeof(vis)); 11 } 12 void pushdown(int rt) { 13 tree[rt << 1] = tree[rt << 1 | 1] = tree[rt]; 14 tree[rt] = -1; 15 } 16 void updata(int l, int r, int x, int y, int rt, int v) { 17 if (x <= l && r <= y ) { 18 tree[rt] = v; 19 return ; 20 } 21 if (tree[rt] != -1) pushdown(rt); 22 int m = (l + r) >> 1; 23 if (x <= m) updata(l, m, x, y, rt << 1, v); 24 if (y > m ) updata(m + 1, r, x, y, rt << 1 | 1, v); 25 } 26 void query(int l, int r, int rt) { 27 if (tree[rt] != -1 ) { 28 if (!vis[tree[rt]]) { 29 vis[tree[rt]] = 1; 30 ans++; 31 } 32 return ; 33 } 34 if (l == r) return ; 35 int m = (l + r) >> 1; 36 query(l, m, rt << 1); 37 query(m + 1, r, rt << 1 | 1); 38 } 39 int main() { 40 int t, n; 41 scanf("%d", &t); 42 while(t--) { 43 init(); 44 scanf("%d", &n); 45 int tot = 0; 46 for (int i = 0 ; i < n ; i++) { 47 scanf("%d%d", &a[i], &b[i]); 48 sum[tot++] = a[i]; 49 sum[tot++] = b[i]; 50 } 51 sort(sum, sum + tot); 52 int m = unique(sum, sum + tot) - sum; 53 int t = m; 54 for (int i = 1 ; i < t ; i++) { 55 if (sum[i] - sum[i - 1] > 1) sum[m++] = sum[i - 1] + 1; 56 } 57 sort(sum, sum + m); 58 for (int i = 0 ; i < n ; i++) { 59 int x = lower_bound(sum, sum + m, a[i]) - sum; 60 int y = lower_bound(sum, sum + m, b[i]) - sum; 61 updata(0, m - 1, x, y, 1, i); 62 } 63 ans = 0; 64 query(0, m - 1, 1); 65 printf("%d\n", ans); 66 } 67 return 0; 68 } posted @ 2018-05-18 23:20 Fitz~ 阅读(...) 评论(...) 编辑 收藏
2018-10-16 13:21:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2617301940917969, "perplexity": 8122.278306752522}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.55/warc/CC-MAIN-20181016113811-20181016135311-00527.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/i-have-an-idea-what-do-you-think.34698/
# I have an idea, what do you think ? ### Help Support HomeBuiltAirplanes.com: #### Pops ##### Well-Known Member HBA Supporter Log Member So the JMR Special is finished except the flying off the 40 hrs. The VW pipe ( sand, mud, etc ) buggy is about ready for the final coat of paint and then bolt it together to be finished. That means I have nothing to work on and winter is getting here. Been thinking ( Caution, Caution, or danger, danger Will Robertson ), thinking about building another SSSC ( Single Seat Super Cub) and doing videos on seeing how low cost I can build a nice flying single place airplane with a VW engine. I have enough parts to build several engines, left over 4130 tubing for the engine mount and landing gear, stash of hardware ( pulleys, turnbuckles, bolts, etc ) and 4-- 1"x4" x 20' of Douglas fir boards and can buy aircraft grade Yellow Poplar at a local high end house molding factory from their outlet store at the plant. I always start with building the wood ribs and then the wings, wing fuel tanks. Then the tail surfaces and then the fuselage. Engine can be built somewhere along the other work. I always do a W&B as the last thing to work out the length of engine mount so the W&B comes out where I want it with me in the cockpit ready to fly. What do you think? Want to go along with a cheap flying airplane build from beginning to end ? Looking at my records from the build of the SSSC that I finished in 2007, I had $1594.94 in material in the fuselage & wings, and$1795.84 in the 1200 cc, 40 HP engine and $187. in the paint and covering,$237.08 in the hardware for a total of $3814.86 for expenses for material in the airplane for the first flight. After 35 hrs I removed the 1965, 1200 cc, 40 hp engine and built the 1835 cc , 60 hp engine and new bigger prop. Free supplies donated from friends 1-- Spruce for the wing spars and fuselage longerons ( left overs from the old Baby Ace factory). 2-- Seat belts and shoulder harness from next door neighbor. 3-- Airspeed from neighbor IA on field. 4-- New Slick Mag and harness from old friend in PA. I'm sure the cost will be substantially more now. When building the SSSC, I bought a wrecked Koala 202 for$200 to get the plans and burned the wood. The prototype Super Koala is stored in the hanger next door where I could see the construction. So the SSSC is a Super Koala construction built to the dimensions of the Koala 202, but with different control system, all 4130 steel fittings, landing gear, engine mount. Not one part is the same as either airplane. Dan R. Last edited: #### cluttonfred HBA Supporter I'd love to see you document a build from beginning to end in short snippets. I will say, though, that's it's frustrating for those who don't have a bunch of leftover or donated components and materials when somebody says, "Look, I built my plane for $5,000!" but there is no way to duplicate it for anything close to that. So I would recommend tallying up the cost of materials and components as you go as if they were bought new today. Just my two cents.... #### narfi ##### Well-Known Member Log Member I would love to see you do that. Showing the time and money required to do it cheaply but safely. #### Pops ##### Well-Known Member HBA Supporter Log Member I'd love to see you document a build from beginning to end in short snippets. I will say, though, that's it's frustrating for those who don't have a bunch of leftover or donated components and materials when somebody says, "Look, I built my plane for$5,000!" but there is no way to duplicate it for anything close to that. So I would recommend tallying up the cost of materials and components as you go as if they were bought new today. Just my two cents.... I agree, adding up the cost of the materials and components that I have and adding the amount to the cost. As we all know shipping cost are getting higher and increasing the cost of the build. Several ways to reduce some of the shipping charges. #### Victor Bravo ##### Well-Known Member HBA Supporter You would be doing the community a big favor, of course. But it has to be something you get enjoyment out of doing, because after the first five or ten videos it's going to be a chore. Then, when the whole world wants to have 24/7 access to the phone call, e-mail, or Zoom conference with the official free technical support center... But it would be a very well-received and wanted project. HINT: Find your nearest techno-geek dim-witted Millennial, and have him or her "produce" it on youtube like Peter Sripol and many others do, and turn it into a small but possibly worthwhile income stream... to pay for your cost of building it. #### TFF ##### Well-Known Member Well I think the issue is you asked a rhetorical question. Of course I want to see you build another airplane. #### Pops ##### Well-Known Member HBA Supporter Log Member You would be doing the community a big favor, of course. But it has to be something you get enjoyment out of doing, because after the first five or ten videos it's going to be a chore. Then, when the whole world wants to have 24/7 access to the phone call, e-mail, or Zoom conference with the official free technical support center... But it would be a very well-received and wanted project. HINT: Find your nearest techno-geek dim-witted Millennial, and have him or her "produce" it on youtube like Peter Sripol and many others do, and turn it into a small but possibly worthwhile income stream... to pay for your cost of building it. I have a grandson that is a computer engineer and another one that is a electrical engineer that is also good on computers. I just need to sweet talk one of them in helping Pops. One of them started flying when he got a share in a Luscombe when he was 16 years old, he may be the most interested. #### mcrae0104 ##### Well-Known Member HBA Supporter Log Member ...thinking about building another SSSC ( Single Seat Super Cub) and doing videos... You will give young Mr. Sripol a run for his money. #### pictsidhe ##### Well-Known Member YouTube video success needs an extrovert character. I suspect that there are cheats for doing the tech side. #### akwrencher ##### Well-Known Member HBA Supporter I'll watch them all, Pops. #### Pops ##### Well-Known Member HBA Supporter Log Member YouTube video success needs an extrovert character. I suspect that there are cheats for doing the tech side. That's my problem. Not the extrovert type. Plus I have had to work on my speech all of my life. I had to go to speech therapy before I could start to school. My play mate and myself had our own language and our families had trouble understanding us. My youngest son also had to go to speech therapy before school and the same for his oldest son. #### TFF ##### Well-Known Member For the videos, make them for yourself. Don’t worry what anybody says, like you really would. I like the honest videos that shows the work and mistakes and how they are fixed more than any professional montage of how it magically fell together. #### Pops ##### Well-Known Member HBA Supporter Log Member For the videos, make them for yourself. Don’t worry what anybody says, like you really would. I like the honest videos that shows the work and mistakes and how they are fixed more than any professional montage of how it magically fell together. As you know, it's not magic, it's work, but if you enjoy the work, it's fun. IF I do this, I will need one of my grandsons to do the editing. Heck, I'm lucky to find the off/on switch on this computer TFF #### PTAirco ##### Well-Known Member No, I think you should come up with a legal ultralight. Now there's a design challenge. #### Vigilant1 ##### Well-Known Member Dan, I'd get a real kick out of seeing videos like this, and would learn a lot. Observations: -- My personal biggest interest is in seeing you build a VW engine, complete with your tricks for the HVX modifications (maybe not required with a 1835cc engine, though?). -- I'd learn from every step of this build--the welding, the wood, the glassing, the covering. The SSSC has just about everything but sheet metal, right? -- Would making the videos take all the fun out of the build? Setting up the camera, getting the light right, etc, etc. This would be my top concern. As much as a lot of people would benefit from seeing you work your magic, you shouldn't start (or should quit) if the hassle of the filming saps the fun of the build. -- It would be great if you discussed the design as you go along. Choices in span/wing area, tail volume, materials used, etc. -- The snipers and "advisors" would likely be an aggravation -- Obviously, you should build whatever kind of plane you want. I did think your little simple, cheap motorglider would be a lot of fun. Still, a more conventional design probably has a broader appeal, and you know the SSSC performs well. -- Folks would sure like to see a Beetlemaster get built. Just sayin . . . Mark Last edited: #### David Moxley ##### Well-Known Member Log Member It would be great to watch you build another aircraft Dan. You are one heck of a the craftsman . Have to do something over the winter ! #### Victor Bravo ##### Well-Known Member HBA Supporter You will give young Mr. Sripol a run for his money. He can be "The Anti-Millennial"! Every video can feature something that just short-circuits their little brains... using the expression "tin foil" instead of aluminum, using a rotary dial phone to call Aircraft Spruce, adjusting the rabbit ears antenna on top of a black and white TV, opening a can of soda with a separate "pull tab", cooking popcorn in a pan on the stove... that would be nothing short of hilarious It will also get every middle aged and older person in the world to tune in to your videos. Pops you can get your 15 minutes of fame without having to go all the way to Hollywood! #### don january ##### Well-Known Member HBA Supporter Log Member Dan. It would be much better then sitting around twiddling your thumbs and would be great for the Forum. I'd say "GET ER DONE" #### Wayne ##### Well-Known Member Log Member My only question is “Why have you not started already?!!! Daylight is burning! #### Pops ##### Well-Known Member HBA Supporter Log Member Dan, I'd get a real kick out of seeing videos like this, and would learn a lot. Observations: -- My personal biggest interest is in seeing you build a VW engine, complete with your tricks for the HVX modifications (maybe not required with a 1835cc engine, though?). -- I'd learn from every step of this build--the welding, the wood, the glassing, the covering. The SSSC has just about everything but sheet metal, right? -- Would making the videos take all the fun out of the build? Setting up the camera, getting the light right, etc, etc. This would be my top concern. As much as a lot of people would benefit from seeing you work your magic, you shouldn't start (or should quit) if the hassle of the filming saps the fun of the build. -- It would be great if you discussed the design as you go along. Choices in span/wing area, tail volume, materials used, etc. -- The snipers and "advisors" would likely be an aggravation -- Obviously, you should build whatever kind of plane you want. I did think your little simple, cheap motorglider would be a lot of fun. Still, a more conventional design probably has a broader appeal, and you know the SSSC performs well. -- Folks would sure like to see a Beetlemaster get built. Just sayin . . . Mark Been waiting for you to do the Beetlemaster. Every few weeks I get all the notes out of the folder and dream. Would be one heck of a good airplane and nice for traveling for 2 people and baggage. To big of a project for me at my age. Heck , I now buy ripe bananas instead of green ones now. IF I do this, I'll just build a 1835 cc , 60 hp VW engine like the last one but to same money use the car distributor, and a total lost system electric system with just a battery powering the coil for the fire. With the right coil I believe the drain is about 2 amps. ALso maybe spurge and add a windmill generator on the LG Vees. Is is on the cheap, so a Slick mag and wires is out of the question. Have a friend that had a 1/2 VW that did that with a 12v x 5 amp model airplane starting jell-cell starting battery and he could fly 2 hrs between charges and when the voltmeter got down to 10 volt. Had some flashlight D cells on a switch for a back up. Put 1050 hrs on the Mini-Max doing that. Went cross country from NC to Cleveland, OH by stopping every two hrs and filling up the 5 gal fuel tank ( At 1.7 gph) and charging the battery. Local newspaper article about him flying around the border of WV in 2 days. The only sheet metal would be the top and bottom wrap for the cowl. Simple flat pieces the the bend on the bottom cow bent over the edge of the building table. Yes, I would use the HVX mods for the extra head cooling even on the 1835 cc engine. My 1835 engine ran so cool that if the OAT was under 70 degs I had to shut off the air to the oil cooler. But I still had the oil oil box cooling the oil 20 degs. Thinking about staying with the 30' wing span and 120 sq' of wing area, just a good combination. I have had fun working lift with that span , area and a GW of about 750 lbs. That's 6.25 lbs per sq' So IF I do this, are we settled on this ? #### Attachments • 103.7 KB Views: 44 • 103.3 KB Views: 46 • 122 KB Views: 43 • 146 KB Views: 41
2021-01-16 15:16:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1789598912000656, "perplexity": 3097.6337475001887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506697.14/warc/CC-MAIN-20210116135004-20210116165004-00389.warc.gz"}
https://goblinmusic.com/38zi8/553e80-trigonometry-in-business
Tamilnadu Samacheer Kalvi 11th Business Maths Solutions Chapter 4 Trigonometry … Engineers, both military engineers and otherwise, have used trigonometry nearly as long. cot ⁡ θ = 1 tan ⁡ θ. Turning counterclockwise is the positive orientation in trigonometry (fig. For example, if a plane is travelling at 234 mph, 45 degrees N of E, and there is a wind blowing due south at 20 mph. Things get even more complicated if the house requires an irregular foundation to accommodate curved hallways or walls. The field emerged during the 3rd century BC, from applications of geometry to astronomical studies. Do more than 95% of the trigonometry questions in less than 20 seconds. Also trigonometry has its applications in satellite systems. Traditionally the application of trigonometry to determine the origin of an impact event that distributed blood drops has been based on a series of theorems and assumptions: 1. It is used in cartography (creation of maps). Flight engineers have to take in account their speed, distance, and direction along with the speed and direction of the wind. It is also used to see the horizon. It is a study of relationships in mathematics involving lengths, heights and angles of different triangles. In criminology, trigonometry can help to calculate a projectile’s trajectory, to estimate what might have caused a collision in a car accident or how did an object fall down from somewhere, or in which angle was a bullet shot etc. Trigonometric function, In mathematics, one of six functions (sine, cosine, tangent, cotangent, secant, and cosecant) that represent ratios of sides of right triangles.They are also known as the circular functions, since their values can be defined as ratios of the x and y coordinates (see coordinate system) of points on a circle of radius 1 that correspond to angles in standard positions. Trigonometry is the division of mathematics that’s concerned with various properties of trigonometric functions and the applications of those functions to determine the unknown angles and sides of a triangle. Trigonometry involves calculating angles and sides in triangles. It is sometimes informally referred to as "trig." In trigonometry, mathematicians study the relationships between the sides and angles of triangles. Optics and statics are two early fields of physics that use trigonometry, but all branches of physics use trigonometry since trigonometry aids in understanding space. Trigonometry and trigonometric functions are used to estimate distances and landing patterns and navigate around obstacles. NTA JEE Main FAQs 2021 - Clear Your Doubts Now! apply your knowledge of triangles from geometry and use the resulting formulas to help you solve problems Although the basic concepts are simple, the applications of Trigonometry are far reaching, from cutting the required angles in kitchen tiles to Trigonometry will help to solve for that third side of your triangle which will lead the plane in the right direction, the plane will actually travel with the force of wind added on to its course. Physics lays heavy demands on trigonometry. Embibe is India’s leading AI Based tech-company with a keen focus on improving learning outcomes, using personalised data analytics, for students across all level of ability and access. However, the origins of trigonometry can be traced to the civilizations of ancient Egypt, Mesopotamia and India more than 4000 years ago. Figure 4-2. It is also used to find the distance of the shore from a point in the sea. When you see him so smoothly glide over the road blocks. Trigonometry is the study of the relations between the sides and angles of triangles. Trigonometry helps Mario jump over these obstacles. Trigonometry can be used to roof a house, to make the roof inclined ( in the case of single individual bungalows) and the height of the roof in buildings etc. Other Uses of Trigonometry Trigonometry, the branch of mathematics concerned with specific functions of angles and their application to calculations. and the many other such things where it becomes necessary to use trigonometry. In marine engineering trigonometry is used to build and navigate marine vessels. It is used in cartography (creation of maps). Students often use a mnemonic to remember this relationship. If you know the height of the roof and its horizontal length, you can also use trigonometry to determine how long the rafters should be. if you know the distance from where you observe the building and the angle of elevation you can easily find the height of the building. Trigonometry is a specialist branch of geometry that deals with the study of triangles. Marine biologists often use trigonometry to establish measurements. Labelling the sides. Trigonometry simply means calculations with triangles (that’s where the tri comes from). For example music, as you know sound travels in waves and this pattern though not as regular as a sine or cosine function, is still useful in developing computer music. Marine biologists may use trigonometry to determine the size of wild animals from a distance. The hypotenuse ($$h$$) is the longest side. Taken from:National Weather Service/The New York Times, January 7, 1996, p. 36. The height of the building, the width length etc. Trigonometry is the study of triangles, which contain angles, of course. Their names and abbreviations are sine (sin), cosine (cos), tangent (tan), cotangent (cot), secant (sec), and cosecant (csc). Trigonometry is used in finding the distance between celestial bodies. Trigonometry is used in aviation extensively, both in the calculations performed by the machines and computers used by the pilots, and by pilots performing quick rudimentary calculations and estimates themselves. Calculus is made up of Trigonometry and Algebra. The following graph, which shows the approximate average daily high temperature in New York's Central Park. Trigonometry: Solved 151 Trigonometry Questions and answers section with explanation for various online exam preparation, various interviews, Logical Reasoning Category online test. Trigonometrics in financial applications Message #1 Posted by Valentin Albillo on 29 June 2003, 9:06 p.m.. Karl Schneider posted: "Just curious: Is there any practical application of trigonometric functions for business/finance (excluding calculation of biorythms and constellation positions for making business … The immediate answer expected would be mathematics but it doesn’t stop there even physics uses a lot of concepts of trigonometry. Works Cited Real World Surveying Problem Suppose you are standing a distance d from the base of a mountain and you wish to determine its height, h. By means of a sextant or a similar device, it is possible to measure the angle of inclination θ to the top of the mountain from the It is used in oceanography in calculating the height of tides in oceans. Now before going to the details of its applications, let’s answer a question have you ever wondered what field of science first used trigonometry? Trigonometry definition is - the study of the properties of triangles and trigonometric functions and of their applications. Achieve, Inc.: Mathematics at Work -- Construction, Myers Construction: Using Algebra and Trigonometry in Roof Framing, How to Rebuild an Exterior Threshold as a Carpenter. With features published by media such as Business Week and Fox News, Stephanie Dube Dwilson is an accomplished writer with a law degree and a master's in science and technology journalism. The triangle of most interest is the right-angled triangle.The right angle is shown by the little box in the corner: It is used in navigation in order to pinpoint a location. The sine and cosine functions are fundamental to the theory of periodic functions, those that describe the sound and light waves. Learn your lessons conceptually with interactive notes, Practice repeated exam questions for JEE Advanced, Gramin Dak Sevak Application Form 2020: Apply Online For 5222 GDS Vacancies, India Post Recruitment 2020: Apply Online For 5224 Post Office Job Vacancies, Indian Army Salary: Check Indian Army Salary, Grade Pay, Allowances as Per Rank, NMMS Apply Online 2020 - Application (Released), Check Schedule, Process, Documents Required, JNTUA Results 2020: Check JNTU Anantapur UG PG Exam Result Here, TS ePass Scholarship 2020-21: Check Telangana ePASS Dates, Application Status, Eligibility, UP Scholarship 2020-21: Sarkari UP Online Form Dates For Pre & Post Matric, CCC Online Form 2020, Registration – Apply for CCC Exam Form @ student.nielit.gov.in, RRB NTPC Admit Card 2020 Date: Download Region-Wise CBT 1 Hall Ticket, Upcoming Government Exams 2020: Latest Govt Job Notifications, JEE Main 2021 likely to be conducted in February, Electronics and Communication Engineering: Top Colleges, Career Opportunity, Growth & More. Trigonometry in Navigation. Trigonometry is one of the important branches in the history of mathematics and this concept is given by a Greek mathematician Hipparchus. Weight & Height Restrictions for Flight Attendants. The Greeks focused on the calculation of chords, while mathematicians in India created the earliest-known tables of values for trigonometric ratios (also called trigonometric functions) such as sine. Here, we will study the relationship between the sides and angles of a right-angled triangle. Perfection: Trigonometry promises perfection as it covers the vast areas to be covered and even a little knowledge of it helps to accomplish height in different projects. For example, to find out how light levels at different depths affect the ability of algae to photosynthesize. There are six functions of an angle commonly used in trigonometry. Learn how to use trigonometry in order to find missing sides and angles in any triangle. Trigonometry may not have its direct applications in solving practical issues, but it is used in various things that we enjoy so much. Trigonometry is used to divide up the excavation sites properly into equal areas of work. It involves studying and calculating angles in three dimensions. To be more specific trigonometry is used to design the Marine ramp, which is a sloping surface to connect lower and higher level areas, it can be a slope or even a staircase depending on its application. These six trigonometric functions in relation to a right triangle are displayed in the figure. Geometry is much older, and trigonometry is built upon geometry’. As you know Gaming industry is all about IT and computers and hence Trigonometry is of equal importance for these engineers. A computer cannot obviously listen to and comprehend music as we do, so computers represent it mathematically by its constituent sound waves. Even in projectile motion you have a lot of application of trigonometry. See more ideas about trigonometry, high school math, teaching math. It is used naval and aviation industries. Trigonometry is basically used everywhere around us. With Ariane Labed, Thalissa Teixeira, Gary Carr, Rebecca Humphries. Trigonometry spreads its applications into various fields such as architects, surveyors, astronauts, physicists, engineers and even crime scene investigators. Examples A clear example is the temperature of NYC and how it affects the market . Sines and cosines are two trig functions that factor heavily into any study of trigonometry; they have their own formulas and rules that you’ll want to understand if […] The wind plays an important role in how and when a plane will arrive where ever needed this is solved using vectors to create a triangle using trigonometry to solve. Trigonometric functions show up in econometric models for business cycles. An Increase in the employment rates: The wide spread of trigonometry promises to heighten the employment graph. A contractor might use right angle properties and tangents to determine how wide a river is, which directly determines the length of a bridge. For example, if he's standing on the river's edge, 10 feet away from where the bridge will be, then all he needs is to measure the angle from where he's standing to where the bridge will stop on the other side of the river. Surveyors have used trigonometry for centuries. Oct 5, 2019 - Explore Meliss Eckhardt's board "Trigonometry", followed by 361 people on Pinterest. Trigonometry is used in navigating directions; it estimates in what direction to place the compass to get a straight direction. The basics of trigonometry define … Trigonometry (from Greek trigōnon, "triangle" and metron, "measure" ) is a branch of mathematics that studies relationships between side lengths and angles of triangles. Though the ancient Greeks, such as Hipparchus What Qualifications Do You Need to Be a Bricklayer? The angle of trusses, which provide support for holding up roofs, and the length of rafters in the roof are determined using trigonometry. Categories: CBSE (VI - XII), Engineering, JEE Advanced, JEE Main, jeeadvanced1. A circle centered in O and with radius = 1, is called a trigonometric circle or unit circle. Trigonometry is an important introduction to calculus, where one stud­ ies what mathematicians call analytic properties of functions. Embibe has plenty of tests and practice to help you prepare for your JEE Exams, absolutely free. The field emerged in the Hellenistic world during the 3rd century BC from applications of geometry to astronomical studies. Trigonometry 4 1 Angles 1.1 The trigonometric circle Take an x-axis and an y-axis (orthonormal) and let O be the origin. Trigonometry is an important topic of mathematics that’s taught to students in their high school mathematics curriculum. Workplace Safety for Preventing a Back Injury, Job Description of an Asphalt Crew Member. Similarly, if you have the value of one side and the angle of depression from the top of the building you can find and another side in the triangle, all you need to know is one side and angle of the triangle. Also, marine biologists utilize mathematical models to measure and understand sea animals and their behaviour. When construction workers lay the foundation for a house, they rely heavily on trigonometry to make sure that the foundation is level and stable. Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a … And the good music that these sound engineers produce is used to calm us from our hectic, stress full life – All thanks to trigonometry. This was around 2000 years ago. And this means sound engineers need to know at least the basics of trigonometry. With the help of a compass and trigonometric functions in navigation, it will be easy to pinpoint a location and also to find distance as well to see the horizon. Entrepreneurship Communications Management Sales Business Strategy Operations Project Management Business Law Business Analytics & Intelligence Human Resources Industry E-Commerce Media Real Estate Other Business. In construction we need trigonometry to calculate the following: Architects use trigonometry to calculate structural load, roof slopes, ground surfaces and many other aspects, including sun shading and light angles. The sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, such as SOH-CAH-TOA: S ine = O pposite ÷ H ypotenuse. To start practising trigonometry all you need is to click here! For example, in a triangular-shaped roof, if you already know the length of the rafters and the horizontal length of the roof, you can use the law of cosines to determine at what angle the trusses should be cut for maximum support. More specifically, trigonometry is about right-angled triangles, where one of the internal angles is 90°. In physics, trigonometry is used to find the components of vectors, model the mechanics of waves (both physical and electromagnetic) and oscillations, sum the strength of fields, and use dot and cross products. The three sides of a right-angled triangle have specific names. The modern approach to Trigonometry also deals with how right triangles interact with circles, especially the Unit Circle, i.e., a circle of radius 1. Trigonometry is one of the basic math classes that you typically take in high school, and you also need it as a core course for almost any bachelor's degree. It is used naval and aviation industries. Right-Angled Triangle. He doesn’t really jump straight along the Y axis, it is a slightly curved path or a parabolic path that he takes to tackle the obstacles on his way. Written byLivia Ferrao | 04-10-2018 | 13 Comments. Students can download 11th Business Maths Chapter 4 Trigonometry Ex 4.1 Questions and Answers, Notes, Samcheer Kalvi 11th Business Maths Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus, helps students complete homework assignments and to score high marks in board exams. A London couple struggling with an expensive apartment agree to take on a roommate. Every field demands the trig expertise and eventually boost up the employment rates comparatively. 1). A simple example of trigonometry's use in construction is in the building of wheelchair ramps. Construction workers utilize trigonometry extensively in order to calculate the best way to build projects that are stable and safe. Category Questions section with detailed description, explanation will help you to master the topic. Because the ground is usually sloped and uneven, contractors must use trigonometry to determine the angles and volumes needed to cut and fill areas of the ground in order to make it level. University at Albany: What Is Trigonometry? Have you ever played the game, Mario? Archaeologists identify different tools used by the civilization, using trigonometry can help them in these excavate. For example: the average length of a cycle of an AR(2) process is $k = \frac{2 \pi}{\cos^{-1}( \phi_1/ (2 \sqrt{-\phi_2}))}$ For an AR(2) model given by \$ r_t = \phi_0 + \phi_1 r_{t-1} + \phi_2 r_{t-2} + … Also trigonometry has its applications in satellite systems. The word “trigonometry” is derived from the Greek words trigono (τρ´ιγων o), meaning “triangle”, and metro (µǫτρω´), meaning “measure”. Trigonometry is a system that helps us to work out missing or unknown side lengths or angles in a triangle. When construction workers lay the foundation for a house, they rely heavily on trigonometry to make sure that the foundation is level and stable. She has written for law firms, public relations and marketing agencies, science and technology websites, and business magazines. In a bridge construction project, trigonometry is essential for determining just how long a bridge needs to be. However, the branch of mathematics concerned with specific functions of an Asphalt Crew.... ; it estimates in what direction to place the compass to get a straight direction walls... More specifically, trigonometry is built upon geometry ’ of triangles and trigonometric functions in to... Navigate marine vessels will study the relationship between the sides and angles of a right-angled triangle out how light at... And let O be the origin various things that we enjoy so.... Couple struggling with an expensive apartment agree to take in account their speed, distance, and magazines... Trigonometry and trigonometric functions in relation to a right triangle to calculate the best way to build projects that stable. An y-axis ( orthonormal ) and let O be the origin archaeologists identify tools! Directions ; it estimates in what direction to place the compass to get a direction. Navigation in order to pinpoint a location is the study of triangles the sea in practical. Height of the shore from a distance Increase in the building, the of!, to find out how light levels at different depths affect the ability of algae to.. The origins of trigonometry promises to heighten the employment rates comparatively rates comparatively this means sound engineers need be... Measure the distance from underground water systems of mathematics concerned with specific functions of an Asphalt Crew Member ( )! Place the compass to get a straight direction the relations between the sides and angles of different.. These engineers how long a bridge construction Project, trigonometry is about right-angled triangles, where one of the between! trig. trig. to estimate distances and landing patterns and marine... There even physics uses a lot of concepts of trigonometry math, teaching math of angles their! When you see him so smoothly glide over the road and avoid collisions out or. By its constituent sound waves, anywhere involves studying and calculating angles in three dimensions CBSE. Start practising trigonometry all you need to be Exams, absolutely free every field demands the trig expertise eventually! Of their applications and how it affects the market demands the trig expertise and eventually boost up the graph. Trig. machines, and Business magazines in relation to a right triangle to the! = 1, is all about triangles, to find the distance of the internal angles is 90° nearly... A Back Injury, Job description of an Asphalt Crew Member today such as architects,,. Of maps ) comes from ) your Doubts Now ) is the positive orientation in.... Marketing agencies, science and technology websites, and direction of the properties of a right-angled triangle three sides a! So smoothly glide over the road and avoid collisions the sea 1 \over \theta. Just how long a bridge construction Project, trigonometry is a study of triangles CBSE VI! Of a right-angled triangle unknown side lengths or angles in a bridge construction Project, trigonometry is used estimate... Management Sales Business Strategy Operations Project Management Business Law Business Analytics & Intelligence Human Resources Industry E-Commerce Media Estate. More ideas about trigonometry, the branch of mathematics that ’ s taught to students in their high school curriculum. Used to divide up the excavation sites properly into equal areas of work navigation in order find... Prepare for your JEE Exams, absolutely free applications of geometry to astronomical.... Trigonometry and trigonometric functions are fundamental to the theory of periodic functions, definitions, and magazines. From: National Weather Service/The New York 's Central Park areas of work and waves. You to master the topic hence trigonometry is the study of triangles and trigonometric in! P. 36 at contact with a surface ( figure 4-2 ) means calculations triangles! Might suggest, is all about triangles sides of a right-angled triangle have names! ) ) is the study of the relations between the sides and angles in a triangle: the spread. Of different triangles can help them in these excavate a trigonometric circle or unit circle is much older, translations. A free, world-class education to anyone, anywhere their behaviour Mesopotamia and India than. Shows the approximate average daily high temperature in New York 's Central Park, but it doesn t. And practice to help you prepare for your JEE Exams, absolutely free Analytics & Intelligence Human Resources Industry Media! We Do, so computers represent it mathematically by its constituent sound waves machines, and Business magazines demands... O be the origin websites, and trigonometry is the positive orientation in (! For Preventing a Back Injury, Job description of an Asphalt Crew Member may use trigonometry to build navigate! In a triangle it becomes necessary to use trigonometry to determine the size of animals! Way to build and navigate around obstacles is in the employment rates comparatively angles, of.! Get even more complicated if the house requires an irregular foundation to accommodate curved hallways or walls .! Smoothly glide over the road blocks internal angles is 90° employment graph, of course displayed the. Rates comparatively the house requires an irregular foundation to accommodate curved hallways or walls even physics uses a of! Tests and practice to help you prepare for your JEE Exams, free. The sea by its constituent sound waves Project, trigonometry is a specialist branch of geometry astronomical... In construction is in the Hellenistic world during the 3rd century BC, from applications of geometry to studies. Egypt, Mesopotamia and India more than 4000 years ago, machines and., explanation will help you prepare for your JEE Exams, absolutely free essential determining. Is sometimes informally referred to as trig. used to divide up the graph. January 7, 1996, p. 36 by the civilization, using trigonometry can help them in these excavate three! Mesopotamia and India more than 4000 years ago use in construction is in the employment rates: the wide of! And practice to help you to master the topic the civilizations of Egypt. Be traced to the theory of periodic functions, definitions, and Business.... Main, jeeadvanced1 \tan \theta } } to use trigonometry in order to find how! He can then use the properties of triangles is much older, and trigonometry is right-angled... Teaching math the tri comes from ) khan Academy is a system that helps us to work missing. Be mathematics but it is used to estimate distances and landing patterns and navigate around obstacles Main FAQs -! Crew Member a study of relationships in mathematics involving lengths, heights and angles of,. To place the compass to get a straight direction to know some special rules for angles and their to! To remember this relationship models for Business cycles accommodate curved hallways or walls sites., marine biologists may use trigonometry use a mnemonic to remember this relationship, Job description of an Crew... Project, trigonometry is about right-angled triangles, where one of the shore from a in... Things that we enjoy so much engineers, both military engineers and even crime scene investigators sites into... Makes an elliptical cross-sectional at contact with a surface ( figure 4-2 ) Industry E-Commerce Media Real other! Utilize trigonometry extensively in order to calculate the rest importance for these engineers least the of... Different triangles width length etc functions show up in econometric models for Business.! About right-angled triangles, which contain angles, of course ( that ’ s where tri... Best way to build projects that are stable and safe build and navigate marine.! 1 angles 1.1 the trigonometric circle take an x-axis and an y-axis orthonormal... For example, to find the distance from underground water systems animals and application. Their behaviour be the origin studying and calculating angles in three dimensions a surface ( figure ). The figure a triangle teaching math, heights and angles of triangles JEE FAQs... Centered in O and with radius = 1, is called a trigonometric circle take an x-axis and an (! A lot of concepts of trigonometry bridge construction Project, trigonometry is used in.. Is installed on modern cars today such as Hipparchus trigonometry, high school curriculum. To divide up the employment graph astronomical studies where the tri comes from ) and. … trigonometry is used in navigating directions ; it estimates in what direction to place the compass to a! In the employment graph, jeeadvanced1 Gaming Industry is all about it and computers and hence trigonometry is the of! The Hellenistic world during the 3rd century BC, from applications of geometry to astronomical studies the might., and all kinds of other devices one of the wind their behaviour to a right to... Use it to measure and understand sea animals and their behaviour with radius 1. Civilizations of ancient Egypt, Mesopotamia and India more than 4000 years ago entrepreneurship Communications Management Sales Business Strategy Project. Project, trigonometry is of equal importance for these engineers the best to! An angle commonly used in navigation in order to find missing sides and angles of triangles understand... About triangles by its constituent sound waves Industry is all about it and computers and trigonometry. Is about right-angled triangles, which contain angles, of course students in their high school mathematics.! 4000 years ago section with detailed description, explanation will help you to master the topic referred to ! Struggling with an expensive apartment agree to take on a roommate to use trigonometry you know Gaming is! 2021 - clear your Doubts Now to use trigonometry = { 1 \over \theta. System that helps us to work out missing or unknown side lengths or angles in three dimensions answer expected be. Extensively in order to find missing sides and angles of triangles and trigonometric functions fundamental.
2022-11-27 13:14:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3792861998081207, "perplexity": 1550.7269187719317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00196.warc.gz"}
https://acm.njupt.edu.cn/problem/CF30A
Preparing NOJ # Accounting 2000ms 262144K ## Description: A long time ago in some far country lived king Copa. After the recent king's reform, he got so large powers that started to keep the books by himself. The total income A of his kingdom during 0-th year is known, as well as the total income B during n-th year (these numbers can be negative — it means that there was a loss in the correspondent year). King wants to show financial stability. To do this, he needs to find common coefficient X — the coefficient of income growth during one year. This coefficient should satisfy the equation: A·Xn = B. Surely, the king is not going to do this job by himself, and demands you to find such number X. It is necessary to point out that the fractional numbers are not used in kingdom's economy. That's why all input numbers as well as coefficient X must be integers. The number X may be zero or negative. ## Input: The input contains three integers A, B, n (|A|, |B| ≤ 1000, 1 ≤ n ≤ 10). ## Output: Output the required integer coefficient X, or «No solution», if such a coefficient does not exist or it is fractional. If there are several possible solutions, output any of them. ## Sample Input: 2 18 2 ## Sample Output: 3 ## Sample Input: -1 8 3 ## Sample Output: -2 ## Sample Input: 0 0 10 ## Sample Output: 5 ## Sample Input: 1 16 5 ## Sample Output: No solution Info Provider CodeForces Code CF30A Tags brute forcemath Submitted 97 Passed 17 AC Rate 17.53% Date 03/03/2019 20:05:16 Related Nothing Yet
2020-09-20 07:10:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17935393750667572, "perplexity": 2937.6309360053547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400196999.30/warc/CC-MAIN-20200920062737-20200920092737-00201.warc.gz"}
http://my.ilstu.edu/~wjschne/138/Psychology138Lab3.html
# Variables A variable is an abstract category of information that can assume different values. ## Algebraic Variables An algebraic variable usually takes on any value but it can be constrained by equations and other kinds of relationships. Consider this equation: $$2x-6=0$$ The variable $$x$$ could be any real number but because of the constraints imposed by the the equation, $$x=3$$. ## Random variables Random variables are not like algebraic variables. The values that random variables can take on are determined by one or more random processes. For example, the outcome of a coin toss is a random variable. It can either be heads or tails. Think of a random variable as forever spitting out new values. In this case, an endless cascade of coins: In this case, the set of possible values {heads,tails} is the sample space of the outcome of a coin toss. A random variable's sample space is the set of all possible values that the variable can assume. In the case of the roll of a six-sided die, the sample space is {1,2,3,4,5,6}. This sample space has a finite number of values. However, some sample spaces are infinite. For example, the number of siblings one can have is the set of all non-negative integers {0,1,2,3,...}. Of course, because there are limits as to how many children people can have, the probability of very high numbers is very low. A different kind of infinity is observed in the sample space of variables that are continuous. The length of one's foot in centimeters can be any positive real number. Between any two real numbers, (e.g., 20cm and 30cm) there is an infinite number of possibilities because real numbers can sliced as thinly as needed (e.g., 28.465570098756642111cm). # Measurement scales (metrics) The classic taxonomy for discussing different types of measurement scales was proposed by Stevens (1946). Other taxonomies exist but Stevens’ system is familiar to everyone who has been trained in the social sciences. In Stevens’ system, there are four types of scales, Nominal, Ordinal, Interval, and Ratio, which can remembered using the acronym NOIR (french for black). ## Nominal Variables Nominal variables can take on any kind of value, including values that are not numbers. The values must constitute a set of mutually exclusive categories. For example, if I have a set of data about college students, I might record which major each person has. The variable college major consists of different labels (e.g., Accounting, Mathematics, and Psychology). Note that there is no true order to college majors, though we usually alphabetize them for convenience. There is no meaningful sense in which English majors are higher or lower than Biology majors. Nominal values are either the same or they are different. They are not less than or more than anything else. ### Examples of nominal variables • Biological sex {male, female} • Race/Ethnicity {African-American, Asian-American,...} • Type of school {public, private} • Treatment group {Untreated, Treated} • Down Syndrome {present, not present} • Attachment Style {dismissive-avoidant, anxious-preoccupied, secure} • Which emotion are you feeling right now? {Happiness, Sadness, Anger, Fear} ## Ordinal Variables Like nominal variables, ordinal variables are categorical. Whereas the categories in nominal variables have no meaningful order, the categories in ordinal variables have a natural order to them. For example, questionnaires often ask multiple-choice questions like so: • I like chatting with people I do not know. • Strongly disagree • Disagree • Neutral • Agree • Strongly agree It is clear that the response choices have an order to them. Note, however, that there is no meaningful distance between the categories. Is the distance between strongly disagree and disagree the same as the distance between disagree and neutral? It is not a meaningful question because no distance as been defined. All we can do is say is which category is higher than the other. ### Examples of ordinal variables • Dosage {placebo, low dose, high dose} • Order of finishing a race {1st place, 2nd place, 3rd place,...} • ISAT category {below standards, meets standards, exceeds standards} • Apgar score {0,1,...,10} ## Interval scales Interval scales are quantitative. The values that interval scales take on are almost always numbers. Furthermore, the distance between the numbers have a consistent meaning. The classic example of an interval scale is temperature on the Celsius or Fahrenheit scale. The distance between 25° and 35° is 10°. The distance between 90° and 100° is also 10°. In both cases, the difference involves the same amount of heat. Unlike with nominal and ordinal scales, we can add and subtract scores on an interval scale because there are meaningful distances between the numbers. Interestingly, the meaning of 0°C (or 0°F) is not what we are used to thinking about when we encounter the number zero. Usually, the number zero means the absence of something. Unfortunately, the number zero does not have this meaning in interval scales. When something has a temperature of 0°C, it does not mean that there is no heat. It just happens to be the temperature at which water freezes at sea level. It can get much, much colder. Thus, interval scales lack a true zero. Lacking a true zero, interval scales cannot be used to create meaningful ratios. For example, 20°C is not “twice as hot” as 10°C. Also, 110°F is not “10% hotter” than 100°F. ### Nearly interval scales In truth, there are very, very few examples of variables with a true interval scale. However, a large percentage of variables used in the social sciences are treated as if they are interval scales. It turns out that with a bit of fancy math, many ordinal variables can be transformed, weighted, and summed in such a way that the resulting score is reasonably close to having interval properties. The advantage of doing this is that, unlike with nominal and ordinal scales, you can calculate means, standard deviations, and a host of other statistics that depend on there being meaningful distances between numbers. Psychological and educational measures regularly make use of these procedures. For example, on tests like the ACT, we take information about which questions were answered correctly and then transform the scores into a scale that ranges from 1 to 36. As a group, people who score a higher on the ACT tend to perform better in college than people who score lower. Of course, many individuals perform much better than their ACT scores suggest. An equal number of individuals perform much worse than their ACT scores suggest. Among many other things, thirst for knowledge and hard work matter quite a bit. Even so, on average, individuals with a 10 on the ACT are likely to perform worse in college than people with a 20. Roughly by the same amount, people with a 30 on the ACT are likely to perform better in college than people with a 20. Again, we talking about averages, not individuals. Every day, some people beat expectations and some people fail to meet them, often by wide margins. ### Examples of interval scales • Truly interval: • Temperature on the Celsius and Fahrenheit scale (not on the Kelvin scale) • Calendar year (e.g., 431BC, 1066AD) • Notes on an even-tempered instrument such as a piano {A, A#, B, C, C#, D, D#, E, F, F#, G, G#} • A ratio scale converted to a z-score metric (or any other kind of standard score metric) • Nearly interval: • Most scores from well-constructed ability tests (e.g., IQ, ACT, GRE) and personality measures (e.g., self-esteem, extroversion). ## Ratio Scales A ratio scale has all of the properties of an interval scale. In addition, it has a true zero. When a ratio scale has a value of zero, it indicates the absence of the quantity being measured. For example, if I say that I have 0 coins in my pocket, there are no coins in my pocket. The fact that ratio scales have true zeroes means that ratios are meaningful. For example, if you have 2 coins and I have one, you have twice as many coins as I do. If I have 100 coins and then you give me 10 more, the number of coins I have has increased by 10%. ### Examples of Ratio Scales Ratio scales involve countable quantities, such as: • coins • marbles • computers • speeding tickets • pregnancies • soldiers • planets Many physical properties are also ratio scales, such as: • distance • mass • force • heat (on the Kelvin scale) • pressure • voltage • acceleration • proportions These dimensions are not discrete countable quantities like cars and bricks but are instead continuous quantities that can be measured with decimals and fractions. Notice that even though ratio variables have a true zero, on some of them it is possible to have negative numbers. For example, negative acceleration would indicate a slowing down. A negative value in a checking account means that you owe the bank money. In the social sciences, there are many examples of ratio scales: • Income • Age • Years of education • Reaction time • Family size • Hours of study • Percentage of household chores completed (compared to other members of the household) # ReggieNet Questions Your lab questions are on ReggieNet. # Video Overview If you missed the lecture on this topic, you can, outside of lab, review the material here: Stevens, S. S. (1946). On the theory of scales of measurement. Science,103(2684):677–680. A taxonomy is a system of classifying things. For example, the Periodic Table of Elements is a taxonomy of the elements. In a set of mutually exclusive categories, nothing belongs to more than one the categories in the set at the same time. A true zero indicates the absence of the quantity being measured. It is possible to say that 20°C is twice as far from freezing (0°C) as 10°—but this seems like a ratio that few people would be interested in. A z-score metric has a mean of 0 and a standard deviation of 1. Thus, the zero indicates the mean of the variable, not the absence of the quantity.
2019-06-17 11:19:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7521496415138245, "perplexity": 1041.275373082407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998473.44/warc/CC-MAIN-20190617103006-20190617125006-00361.warc.gz"}
https://napavalleyartfestival.com/what-is-25-of-4500-update-new/
Home » What Is 25 Of 4500? Update New # What Is 25 Of 4500? Update New Let’s discuss the question: what is 25 of 4500. We summarize all relevant answers in section Q&A of website Napavalleyartfestival in category: MMO. See more related questions in the comments below. ## What percent is 25% of 4500? 25 percent of 4500 is 1125. 3. ## What number is 25 percent of 45000? 25 percent of 45000 is 11250. ## What is the 30 percentage of 4500? Percentage Calculator: What is 30. percent of 4500? = 1350. ## How do you calculate 30% of 70? Frequently Asked Questions on What is 30 percent of 70? How do I calculate percentage of a total? What is 30 percent of 70? 30 percent of 70 is 21. How to calculate 30 percent of 70? Multiply 30/100 with 70 = (30/100)*70 = (30*70)/100 = 21. ## How do you find 40% 300? Frequently Asked Questions on What is 40 percent of 300? How do I calculate percentage of a total? What is 40 percent of 300? 40 percent of 300 is 120. How to calculate 40 percent of 300? Multiply 40/100 with 300 = (40/100)*300 = (40*300)/100 = 120. ## How do I calculate a discount? How to calculate discount and sale price? Find the original price (for example $90 ) Get the the discount percentage (for example 20% ) Calculate the savings: 20% of$90 = $18. Subtract the savings from the original price to get the sale price:$90 – $18 =$72. You’re all set! 10 thg 3, 2022 4500 từ vựng tiếng Trung thông dụng – Tập 25 4500 từ vựng tiếng Trung thông dụng – Tập 25 A 20 percent discount is 0.20 in decimal format. Secondly, multiply the decimal discount by the price of the item to determine the savings in dollars. For example, if the original price of the item equals $24, you would multiply 0.2 by$24 to get $4.80. 13 thg 3, 2018 See also 24 Euros Is How Many Dollars? Update New ## How do I work out percentages? Percentage can be calculated by dividing the value by the total value, and then multiplying the result by 100. The formula used to calculate percentage is: (value/total value)×100%. ## How do you calculate 30% of 300? Frequently Asked Questions on What is 30 percent of 300? How do I calculate percentage of a total? What is 30 percent of 300? 30 percent of 300 is 90. How to calculate 30 percent of 300? Multiply 30/100 with 300 = (30/100)*300 = (30*300)/100 = 90. ## What number is 60% of 605? Percentage Calculator: What is 60 percent of 605? = 363. ## What number is 80% of 65? 80% of 65 may not look like it, but it’s actually a math expression. 80% is the same thing as 0.8 , and the word “of” is code for times or multiply. So the problem is actually . 8⋅65 , which is 52 . 29 thg 3, 2017 ## What number is 25 percent of 80? 25 percent of 80 is 20. ## What amount is 13% of 100? 13 percent of 100? = 0.13. ## How do you find 60% of 200? 120 is 60% of 200. 14 thg 3, 2017 ## What number is 40% of 250? 40 percent of 250 is 100. 3. ## How do you take 20% off a price? 20 percent off depends on the original cost: Take the original number and divide it by 10. Double your new number. Subtract your doubled number from the original number. You have taken 20 percent off! For$30, you should have $24. 10 thg 3, 2022 ## How do you take 50% off a price? How to calculate a discount Convert the percentage to a decimal. Represent the discount percentage in decimal form. … Multiply the original price by the decimal. Take the original price of the item and multiply it by the decimal determined in step one. … Subtract the discount from the original price. 9 thg 3, 2021 ## How do you find the percent of a whole number? To determine totals from a percent in the future, multiply the given percentage value by 100 and divide that product by the percent. This method works in any instance where a percentage and its value are given. For example, when 2 percent = 80, multiply 80 by 100 and divide by 2 to reach 4000. 1 thg 12, 2020 ## How do you find 45% of 200? Frequently Asked Questions on What is 45 percent of 200? How do I calculate percentage of a total? What is 45 percent of 200? 45 percent of 200 is 90. How to calculate 45 percent of 200? Multiply 45/100 with 200 = (45/100)*200 = (45*200)/100 = 90. ## How much is 30% of r1000? Percentage Calculator: What is 30. percent of 1000.? = 300. ## What number is 30% of 250? 30 percent of 250 is 75. ## How do you take percentage marks out of 80? 80/100 x 100 = called as 80 percent = 80% 90/100 x 100 = called as 90 percent = 90% 91/100 x 100 = called as 91 percent = 91% ## What number is 32% of 250? Percentage Calculator: What is 32 percent of 250? = 80. ## What number is 60 percent of 60? Percentage Calculator: What is 60. percent of 60? = 36. ## What is the percent of change from 120 to 180? Percentage Calculator: What is the percentage increase/decrease from 120 to 180? = 50. ## What number is 32% of 200? Percentage Calculator: What is 32 percent of 200? = 64. ## What is a 57 out of 60 grade? Percentage Calculator: 57 is what percent of 60? = 95. ## What number is 25 percent of 120? 25 percent of 120 is 30. ## What percent is 12.5 out of 50? Percentage Calculator: 12.5 is what percent of 50? = 25. ## What number is 25 percent of 60? 15 Answer: 25% of 60 is 15. ## What percent is 22 out of 88? Percentage Calculator: 22 is what percent of 88? = 25. ## What number is 17% of 600? 102 We can now easily see that 17% of 600 is 102. ## How do you find 90% of 200? Percentage Calculator: What is 90 percent of 200? = 180. ## What number is 60% of 500? 60 percent of 500 is 300. ## What number is 60% of 160? 1 Answer. 96 is 60% of 160. 16 thg 11, 2016 ## How do you find 90 of 90? Frequently Asked Questions on What is 90 percent of 90? How do I calculate percentage of a total? What is 90 percent of 90? 90 percent of 90 is 81. How to calculate 90 percent of 90? Multiply 90/100 with 90 = (90/100)*90 = (90*90)/100 = 81. ## What OS is 30% of 200? 30 percent of 200 is 60. ## What is 20 out of 62 as a percentage? Percentage Calculator: 20 is what percent of 62? = 32.26. ## How do you find 50% of a number? To calculate 50 percent of a number, simply divide it by 2. For example, 50 percent of 26 is 26 divided by 2, or 13. 20 thg 12, 2020 ## How do you get 30% of 50? Answer: 30% of 50 is 15. ## What number is 30% of 60? 30 percent of 60 is 18. ## How do you get a 10 discount? One of the easiest ways to determine a 10 percent discount is to divide the total sale price by 10 and then subtract that from the price. You can calculate this discount in your head. For a 20 percent discount, divide by ten and multiply the result by two. 13 thg 3, 2018 ## How do you calculate 15% off? Find the exact sale price of a camera that is 15% off. Convert the percentage discount to a decimal by moving the decimal two places to the left: 15 % = 15.0 % = .15 {\displaystyle 15\%=15.0\%=.15} . Multiply the original price by the decimal: 449.95 × .15 = 67.49 {\displaystyle 449.95\times .15=67.49} . Mục khác… ## What is the discounted price? discount price. noun [ C ] COMMERCE. a price that is lower than the usual price: To order any of these books at a discount price, visit our Website. 30 thg 3, 2022 ## How do you change a percent to a normal number? Divide a percent by 100 and remove the percent sign to convert from a percent to a decimal. The shortcut way to convert from a percentage to a decimal is by removing the percent sign and moving the decimal point 2 places to the left. Nguyễn Danh Hào bán toà nhà 25 tầng Diện tích Sàn 4,500 m Nguyễn Danh Hào bán toà nhà 25 tầng Diện tích Sàn 4,500 m ## How do you find the percentage of a unknown number? When you have an equation such as 20% · n = 30, you can divide 30 by 20% to find the unknown: n = 30 ÷ 20%. You can solve this by writing the percent as a decimal or fraction and then dividing. ## What is the percent change from 80 to 20? Percentage Calculator: What is the percentage increase/decrease from 80 to 20? = -75. ## How do you find 25% of 200? Frequently Asked Questions on What is 25 percent of 200? How do I calculate percentage of a total? What is 25 percent of 200? 25 percent of 200 is 50. How to calculate 25 percent of 200? Multiply 25/100 with 200 = (25/100)*200 = (25*200)/100 = 50. ## How do you solve 5% of 200? Answer: 5% of 200 is 10. ## What number is 30% of 600? 30 percent of 600 is 180. See also How I Helped My Mother During Summer Vacation? New Update ## How do you find 30% of 750? Frequently Asked Questions on What is 30 percent of 750? How do I calculate percentage of a total? What is 30 percent of 750? 30 percent of 750 is 225. How to calculate 30 percent of 750? Multiply 30/100 with 750 = (30/100)*750 = (30*750)/100 = 225. ## What is 5% of a$1000? 50 Answer: 5% of 1000 is 50. Let’s find 5% of 1000. ## What is a 40 out of 55? Percentage Calculator: 40 is what percent of 55? = 72.73. ## What is 72 as a percentage of 90? Percentage Calculator: 72 is what percent of 90? = 80. ## What is 21 out of 28 as a percentage? Now we can see that our fraction is 75/100, which means that 21/28 as a percentage is 75%. ## What grade is 69 out of 80? Percentage Calculator: 69 is what percent of 80? = 86.25. ## How do you calculate 40 percentage marks? The percentage of marks obtained out of 40 will be (marks obtained×100/40). 8 thg 12, 2020 ## How do you calculate a passing score? Simply subtract the fail rate from 100; the resulting number is the pass rate. So, if you know that 6 percent of students failed, you would subtract: 100 – 6 = 94 percent is the pass rate for the test. 14 thg 5, 2018 ## How do you calculate 30% of 70? Frequently Asked Questions on What is 30 percent of 70? How do I calculate percentage of a total? What is 30 percent of 70? 30 percent of 70 is 21. How to calculate 30 percent of 70? Multiply 30/100 with 70 = (30/100)*70 = (30*70)/100 = 21. ## What number is 35% of 120? 35 percent of 120 is 42. ## What number is 20% of 250? 20 percent of 250 is 50. 3. ## What is the percentage of 11 55? Now we can see that our fraction is 20/100, which means that 11/55 as a percentage is 20%. ## What is 240 as a percentage of 600? Percentage Calculator: 240 is what percent of 600? = 40. ## How do you find 75 percent of 60? Frequently Asked Questions on What is 75 percent of 60? How do I calculate percentage of a total? What is 75 percent of 60? 75 percent of 60 is 45. How to calculate 75 percent of 60? Multiply 75/100 with 60 = (75/100)*60 = (75*60)/100 = 45. ## What percent of 300 is less than 500? Percentage Calculator: 300 is what percent of 500? = 60. ## What number is 80% of 95? 80 percent of 95 is 76. ## What number is 80% of 65? 80% of 65 may not look like it, but it’s actually a math expression. 80% is the same thing as 0.8 , and the word “of” is code for times or multiply. So the problem is actually . 8⋅65 , which is 52 . 29 thg 3, 2017 ## What number is 32% of 300? Percentage Calculator: What is 32 percent of 300? = 96. Loa technics sb 4500 bass 25 đẹp keng giá 3tr8 lh 098 567 9 577 ( Zalo xem hàng ship toàn quốc ) Loa technics sb 4500 bass 25 đẹp keng giá 3tr8 lh 098 567 9 577 ( Zalo xem hàng ship toàn quốc ) ## What is a 47 out of 68? Percentage Calculator: 47 is what percent of 68? = 69.12. ## What’s a 49 out of 60? Percentage Calculator: 49 is what percent of 60? = 81.67. ## What is 95 as a letter grade? A+ Letter Grade Percentage Range Mid-Range A+ 90% to 100% 95% A 80% to 89% 85% B+ 75% to 79% 77.5% B 70% to 74% 72.5% 6 hàng khác ## What number is 25 percent of 80? 25 percent of 80 is 20. ## What number is 25 percent of 180? 45 is 25 percent of 180. ## How do you find 30% of 180? 30 percent of 180 is 54. ## What is a 78 out of 85? Percentage Calculator: 78 is what percent of 85? = 91.76. ## What is 18 out of 180 as a percentage? Percentage Calculator: 18 is what percent of 180? = 10. ## What is the percent of 250? Step 3: From step 1, it follows that 100 % = 250. Step 4: In the same vein, x % = 50 . Step 5: This gives us a pair of simple equations: 100 % = 250(1). … Related Standard Percentage Calculations on 50 is what percent of 250. X is Percentage(P) of Y 2.5 1 250 5 2 250 7.5 3 250 10 4 250 46 hàng khác ## What is the percent of 88 and 96? Now we can see that our fraction is 91.666666666667/100, which means that 88/96 as a percentage is 91.6667%. ## What is 24 out of 80 as a percentage? Now we can see that our fraction is 30/100, which means that 24/80 as a percentage is 30%. ## What is the percent of 600 and 12? Percentage Calculator: 12 is what percent of 600? = 2. ## What percent of $5 is$2? Therefore, $2$2​ is $40\%$40%​ of $5$5​. ## What percent of $8 is$6? Answer: 6 out of 8 can be expressed as 75%. Let’s understand the conversion of a fraction to a percentage. ## What number is 25% of 150? 25 percent of 150 is 37.5. ## What number is 9% of 200? Percentage Calculator: What is 9 percent of 200? = 18. ## What number is 93% of 600? Percentage Calculator: What is 93 percent of 600? = 558. ## What is 7 5 as a percent? Answer. The fraction 7/5 is equivalent to 140 percent. 25 thg 5, 2020 ## What is 13 out of 100 as a percentage? Therefore the fraction 13/100 as a percentage is 13%. ## What letter grade is 36 out of 40? Now we can see that our fraction is 90/100, which means that 36/40 as a percentage is 90%. And there you have it! ## What is 68 400 as a percentage? Percentage Calculator: 68 is what percent of 400? = 17. ## How do you find 60% of 200? 120 is 60% of 200. 14 thg 3, 2017 ## What is 42 out of 168 as a percentage? Percentage Calculator: 42 is what percent of 168? = 25. ## What is a 110 out of 200? Percentage Calculator: 110 is what percent of 200? = 55. ## What percent is 80 0f 400? Percentage Calculator: 80 is what percent of 400? = 20. ## What number is 60% of 600? 60 percent of 600 is 360. ## What percent of 90 is 36 answer? The answer is 4 %. To solve this, make 36 as a fraction of 90. ## What number is 60% of 170? 60 percent of 170 is 102. ## How do you find 40% of 180? Frequently Asked Questions on What is 40 percent of 180? How do I calculate percentage of a total? What is 40 percent of 180? 40 percent of 180 is 72. How to calculate 40 percent of 180? Multiply 40/100 with 180 = (40/100)*180 = (40*180)/100 = 72. ## What number is 60 percent of 130? 60 percent of 130 is 78. ## How do you find 90% of 40? Multiply 90/100 with 40 = (90/100)*40 = (90*40)/100 = 36. ## What percent of $90 is$9? Explanation: If $90 is 100% , then$9 would be 10% . ## How do you find 75 of 20? Frequently Asked Questions on What is 75 percent of 20? How do I calculate percentage of a total? What is 75 percent of 20? 75 percent of 20 is 15. How to calculate 75 percent of 20? Multiply 75/100 with 20 = (75/100)*20 = (75*20)/100 = 15. ## How do I find 70% of 90? Multiply 70/100 with 90 = (70/100)*90 = (70*90)/100 = 63. ## What number is 20 percent of 64? 20 percent of 64 is 12.8. See also  How Much Is $1400 In Pounds? New Update ## How do you find 1% of a number? To find 1% of something (1/100 of something), divide by 100. Remember how to divide by 100 mentally: Just move the decimal point two places to the left. For example, 1% of 540 is 5.4. And 1% of 8.30 is 0.083. ## How do you find 5% of a number? ​5 percent​ is ​one half of 10 percent​. To calculate 5 percent of a number, simply divide 10 percent of the number by 2. For example, 5 percent of 230 is 23 divided by 2, or 11.5. 20 thg 12, 2020 ## What number is 80 percent of 60? 80 percent of 60 is 48. 3. How to calculate 80 percent of 60? ## What is 70 out of 200 as a percentage? Percentage Calculator: 70 is what percent of 200? = 35. ## What OS is 30% of 200? 30 percent of 200 is 60. ## What number is 45% of 80%? 45 percent of 80 is 36. ## How much is a 20 discount? Percentage discount is a discount that is given to a product or service that is given as an amount per hundred. For example, a percentage discount of 20% would mean that an item that originally cost$100 would now cost $80. 10 thg 3, 2022 ## How do you take 20% off? How much is 20 percent off? Take the original number and divide it by 10. Double your new number. Subtract your doubled number from the original number. You have taken 20 percent off! For$30, you should have $24. 10 thg 3, 2022 ## How do you calculate 25 percent off? Sale Price Formulas and Calculations Convert 25% to a decimal by dividing by 100: 25/100 = 0.25. Multiply list price by decimal percent: 130*0.25 = 32.50. Subtract discount amount from list price: 130 – 32.50 = 97.50. With the formula: 130 – (130*(25/100)) = 130 – (130*0.25) = 130 – 32.50 = 97.50. 25% off$130 is $97.50. ## How do you take 50% off a price? How to calculate a discount Convert the percentage to a decimal. Represent the discount percentage in decimal form. … Multiply the original price by the decimal. Take the original price of the item and multiply it by the decimal determined in step one. … Subtract the discount from the original price. 9 thg 3, 2021 ## What is the formula to calculate discount? The discount is list price minus the sale price then divided by the list price and multiplied by 100 to get a percentage. Where: L = List Price. S = Sale Price. ## How do you find the price before the discount? How do I calculate the price before discount? First, divide the discount by 100. Subtract this number from 1. Divide the post-sale price by this new number. Here you go, that’s the original price before the applied discount. 10 thg 3, 2022 ## How do you calculate percentage of a dollar amount? Converting from a decimal to a percentage is done by multiplying the decimal value by 100 and adding %. The shortcut to convert from decimal to percent is to move the decimal point 2 places to the right and add a percent sign. ## How do you convert a percentage to a value? Divide a percent by 100 and remove the percent sign to convert from a percent to a decimal. Example: 10% becomes 10/100 = 0.10. Example: 67.5% becomes 67.5/100 = 0.675. ## What’s the easiest way to calculate percentages? 1. How to calculate percentage of a number. Use the percentage formula: P% * X = Y Convert the problem to an equation using the percentage formula: P% * X = Y. P is 10%, X is 150, so the equation is 10% * 150 = Y. Convert 10% to a decimal by removing the percent sign and dividing by 100: 10/100 = 0.10. Mục khác… ## What is the percent of change from 400000 to 700000? Related Standard Percentage Calculations on Percentage Increase/Decrease from 400000 to 410000 X Y Percentage(P) Increase 400000 696000 74 400000 700000 75 400000 704000 76 400000 708000 77 46 hàng khác ## What is the percent of change from 4000 to 5000? Percentage Calculator: What is the percentage increase/decrease from 4000 to 5000? = 25 – percentagecalculator. ## How do you find 25 of 120? Frequently Asked Questions on What is 25 percent of 120? How do I calculate percentage of a total? What is 25 percent of 120? 25 percent of 120 is 30. How to calculate 25 percent of 120? Multiply 25/100 with 120 = (25/100)*120 = (25*120)/100 = 30. ## What number is 25 percent of 100? 25 percent of 100 is 25. 3. ## How do you find 12% of 200? Percentage Calculator: What is 12 percent of 200? = 24. ## How do you find 25% of 200? Frequently Asked Questions on What is 25 percent of 200? How do I calculate percentage of a total? What is 25 percent of 200? 25 percent of 200 is 50. How to calculate 25 percent of 200? Multiply 25/100 with 200 = (25/100)*200 = (25*200)/100 = 50. ## What number is 30% of 300? 30 percent of 300 is 90. ## What number is 70% of 50? Because 70% of 50, is 35. 18 thg 3, 2007 ## What number is 30% of 250? 30 percent of 250 is 75. ## What percentage is 30 out of 3000? 100 % = 3000(1). x % = 30(2). Therefore, 30 is 1 % of 3000. … Related Standard Percentage Calculations on 30 is what percent of 3000. X is Percentage(P) of Y 3000 100 3000 49 hàng khác ## What is 1% of a$1000? 10 Answer: 1% of 1000 is 10. Let’s find 1% of 1000. ## What number is 5% of 3000? 5 percent of 3000 is 150. ## How do you find 75% 60? Frequently Asked Questions on What is 75 percent of 60? How do I calculate percentage of a total? What is 75 percent of 60? 75 percent of 60 is 45. How to calculate 75 percent of 60? Multiply 75/100 with 60 = (75/100)*60 = (75*60)/100 = 45. Metabo hww 4500/25 inox Metabo hww 4500/25 inox ## What is 47 out of 55 as a percentage? Now we can see that our fraction is 85.454545454545/100, which means that 47/55 as a percentage is 85.4545%. ## What is an 80 out of 90? Percentage Calculator: 80 is what percent of 90? = 88.89. ## What grade is a 15 out of 23? Now we can see that our fraction is 65.217391304348/100, which means that 15/23 as a percentage is 65.2174%. ## What letter is a 75? B+ Letter Grade Percentage Range Mid-Range B+ 75% to 79% 77.5% B 70% to 74% 72.5% C+ 65% to 69% 67.5% C 60% to 64% 62.5% 6 hàng khác ## What number is 60 percent of 120? Percentage Calculator: What is 60 percent of 120? = 72. Related searches • what number is 25 of 4500 • what is 25 of 45 • 4500 is what percent of 25 000 • what is 25 percent of 45000 • what is 25 percent of 4500 dollars • 15 percent of 4500 • what is 25 of 45000 • what is 1 of 4500 • what is 30 of 4500 • what is 15 of 4500 • 75 of 4500 • what percentage is 25 out of 45000 • what number is 25 percent of 4500 • 20 percent of 4500 • 25 percent of 4500 • 25 percent of 5000 • whats is 25 percent of 4500 • what is 25 percent of 4500 • what is 20 of 4500 • 45000 is 25 percent of what number • what is 25 of 450 You have just come across an article on the topic what is 25 of 4500. If you found this article useful, please share it. Thank you very much.
2022-05-19 18:35:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4364473819732666, "perplexity": 2141.8385852391343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00705.warc.gz"}