| url (string, 14-2.42k chars) | text (string, 100-1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k-1.1k chars) |
|---|---|---|---|
https://www.ideals.illinois.edu/handle/2142/70414
|
## Files in this item
8823187.pdf (10 MB, PDF); no description provided
## Description
Title: Quasi-Elastic Light Scattering Diagnostics for a High Voltage Spark Discharge
Author(s): Lovik, Mark Alan
Doctoral Committee Chair(s): Scheeline, Alexander
Department / Program: Chemistry
Discipline: Chemistry
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Chemistry, Analytical
Abstract: Quasi-elastic light scattering was used as a diagnostic probe into analyte material transport processes in a unipolar, positionally stable, high voltage spark discharge. The material transport by particulate intermediates was studied. Instrumentation was constructed to allow spatially-, temporally-, and angularly-resolved scattering measurements on the positionally stable spark discharge. Active polarization control on the light scattering instrument was characterized using ellipsometry and Mueller calculus. Mie scattering theory was used in the determination of particle size in an aluminum cathode model system. Two possible production mechanisms were expected to be able to account for the observed spark-generated particulates. Particle production could be a result of Joule heating of the electrode, followed by shock wave ejection of the molten cathodic material. Condensation of analyte vapor in the spark discharge could also account for spark-generated particulates. From crude velocity measurements made on the particle system, an upper velocity range of 8 meters/second was calculated for the observed particle. This velocity was calculated to be inconsistent with a cathodically ejected particle model. Particulates were observed to remain within the spark environment for about 0.5 seconds, and comparison of experimental data with Mie scattering calculations provided a mean size estimate of 4.4 $\mu$m for the particle system. Temporally-resolved measurements of particle scattering showed the point of condensation of analyte to particulates to occur from 300-400 $\mu$s after spark initiation. Cross-correlation between scattering by particulates and spark sampling phenomena, seen as analyte emission, indicated partial correlation between particle production and sampling variation in the spark. The temporal interdependence between particle scattering and analyte emission variation was inconclusive. Statistical variability due to low particle counts in scattering measurements may have been the limiting aspect in correlation measurements.
Issue Date: 1988
Type: Text
Description: 364 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1988.
URI: http://hdl.handle.net/2142/70414
Other Identifier(s): (UMI)AAI8823187
Date Available in IDEALS: 2014-12-15
Date Deposited: 1988
|
2017-01-20 02:07:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3966847062110901, "perplexity": 7463.946199113141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00126-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://learn.careers360.com/ncert/question-multiply-and-express-as-a-mixed-fraction-a-3-multiplied-by-5-1-by-5/
|
# Multiply and express as a mixed fraction: (a) 3 multiplied by 5 1/5
6. Multiply and express as a mixed fraction :
$(a) 3\times 5\frac{1}{5}$ $(b) 5\times 6\frac{3}{4}$ $(c) 7\times 2\frac{1}{4}$
$(d) 4\times 6\frac{1}{3}$ $(e) 3\frac{1}{4}\times 6$ $(f) 3\frac{2}{5}\times 8$
$(a) 3\times 5\frac{1}{5}$
On multiplying, we get
$\Rightarrow 3\times\frac{5\times5+1}{5}=3\times\frac{26}{5}=\frac{3\times26}{5}=\frac{78}{5}$
Converting this into a mixed fraction, we get
$\Rightarrow \frac{78}{5}=\frac{75+3}{5}=\frac{75}{5}+\frac{3}{5}=15+\frac{3}{5}=15\frac{3}{5}$
$(b) 5\times 6\frac{3}{4}$
On multiplying, we get
$\Rightarrow 5\times\frac{6\times4+3}{4}=5\times\frac{27}{4}=\frac{5\times27}{4}=\frac{135}{4}$
Converting this into a mixed fraction, we get
$\Rightarrow \frac{135}{4}=\frac{132+3}{4}=\frac{132}{4}+\frac{3}{4}=33+\frac{3}{4}=33\frac{3}{4}$
$(c) 7\times 2\frac{1}{4}$
On multiplying, we get
$7\times 2\frac{1}{4}=7\times \frac{4\times2+1}{4}=7\times\frac{9}{4}=\frac{7\times9}{4}=\frac{63}{4}$
Converting it into a mixed fraction, we get
$\frac{63}{4}=\frac{60+3}{4}=\frac{60}{4}+\frac{3}{4}=15+\frac{3}{4}=15\frac{3}{4}$
$(d) 4\times 6\frac{1}{3}$
On multiplying, we get
$4\times 6\frac{1}{3}=4\times\frac{3\times6+1}{3}=4\times\frac{19}{3}=\frac{4\times 19}{3}=\frac{76}{3}$
Converting it into a mixed fraction,
$\frac{76}{3}=\frac{75+1}{3}=\frac{75}{3}+\frac{1}{3}=25+\frac{1}{3}=25\frac{1}{3}$
$(e) 3\frac{1}{4}\times 6$
Multiplying them, we get
$3\frac{1}{4}\times 6=\frac{4\times3+1}{4}\times6=\frac{13}{4}\times6=\frac{13\times6}{4}=\frac{78}{4}$
Now, converting the resulting fraction into a mixed fraction, we get
$\frac{78}{4}=\frac{76+2}{4}=\frac{76}{4}+\frac{2}{4}=19+\frac{1}{2}=19\frac{1}{2}$
$(f) 3\frac{2}{5}\times 8$
On multiplying, we get
$3\frac{2}{5}\times 8=\frac{5\times3+2}{5}\times8=\frac{17}{5}\times8=\frac{17\times8}{5}=\frac{136}{5}$
Converting this into a mixed fraction, we get
$\frac{136}{5}=\frac{135+1}{5}=\frac{135}{5}+\frac{1}{5}=27+\frac{1}{5}=27\frac{1}{5}$
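All six answers are easy to double-check mechanically. Here is a short illustrative sketch using Python's `fractions` module (the `to_mixed` helper name is ours, not part of the original solution):

```python
from fractions import Fraction

def to_mixed(fr: Fraction) -> str:
    """Render an improper fraction as a mixed fraction string, e.g. 78/5 -> '15 3/5'."""
    whole, rem = divmod(fr.numerator, fr.denominator)
    return f"{whole}" if rem == 0 else f"{whole} {rem}/{fr.denominator}"

# Products for parts (a)-(f), with each mixed operand written as an improper fraction
problems = {
    "a": 3 * Fraction(26, 5),   # 3 x 5 1/5
    "b": 5 * Fraction(27, 4),   # 5 x 6 3/4
    "c": 7 * Fraction(9, 4),    # 7 x 2 1/4
    "d": 4 * Fraction(19, 3),   # 4 x 6 1/3
    "e": Fraction(13, 4) * 6,   # 3 1/4 x 6
    "f": Fraction(17, 5) * 8,   # 3 2/5 x 8
}

for part, value in problems.items():
    print(part, value, "=", to_mixed(value))
# Expected: a 78/5 = 15 3/5, b 135/4 = 33 3/4, c 63/4 = 15 3/4,
#           d 76/3 = 25 1/3, e 39/2 = 19 1/2 (Fraction reduces 78/4), f 136/5 = 27 1/5
```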
|
2020-02-21 16:25:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9063940644264221, "perplexity": 10861.070935265025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00220.warc.gz"}
|
https://www.physicsforums.com/threads/properties-of-linear-transformation-did-my-professor-make-an-error.102434/
|
# Properties of linear transformation, did my professor make an error?
1. Dec 2, 2005
### mr_coffee
Hello everyone, I'm studying an example my professor did, and it isn't making sense to me... here is the original transformation:
T [s] = [3s - t]
..[t]....[2t + 7s]
He wants to determine whether the following transformation is linear.
Here is what he wrote on the board:
http://img205.imageshack.us/img205/8454/lastscan0mm.jpg [Broken]
Why does he say:
T[x*s1]
..[x*s2]
then when he finally proves that it passes the 2nd test of linear transformations, he writes:
x*T[s1]
.....[t1]
when right up there he has s2 on the bottom, did he mean to say:
T[x*s1]
..[x*t1]
Last edited by a moderator: May 2, 2017
2. Dec 2, 2005
### Galileo
That's just a typo, of course. The whole subscript thing is also redundant; that's probably why he confused s2 and t1. s2 and t1 are both referred to as the second component of the vector.
3. Dec 2, 2005
### mr_coffee
So is it supposed to be:
T[x*s1]
..[x*t1]
?
I don't want to miss it on the exam, because this was the practice exam.
4. Dec 2, 2005
### Fermat
In the example you attached, replace each occurrence of t1 with s2.
The result would then be,
$$T \begin{array}{|c|} \alpha s_1 \\ \alpha s_2 \\ \end{array} = \alpha T \begin{array}{|c|} s_1 \\ s_2 \\ \end{array}$$
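As a quick numerical sanity check of the two linearity properties for this particular map T(s, t) = (3s - t, 2t + 7s), here is a small sketch (not from the thread; the names are ours):

```python
import numpy as np

def T(v: np.ndarray) -> np.ndarray:
    """The transformation from the example: T([s, t]) = [3s - t, 2t + 7s]."""
    s, t = v
    return np.array([3 * s - t, 2 * t + 7 * s])

rng = np.random.default_rng(0)
for _ in range(5):
    u = rng.standard_normal(2)
    v = rng.standard_normal(2)
    alpha = rng.standard_normal()
    # Homogeneity: T(alpha * v) should equal alpha * T(v)
    assert np.allclose(T(alpha * v), alpha * T(v))
    # Additivity: T(u + v) should equal T(u) + T(v)
    assert np.allclose(T(u + v), T(u) + T(v))
print("Both linearity properties hold for the sampled vectors.")
```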
|
2017-09-23 02:28:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4822767972946167, "perplexity": 3047.818813739856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689413.52/warc/CC-MAIN-20170923014525-20170923034525-00525.warc.gz"}
|
http://list.seqfan.eu/pipermail/seqfan/2009-July/001861.html
|
# [seqfan] A property of 163
Tanya Khovanova mathoflove-seqfan at yahoo.com
Fri Jul 10 15:50:30 CEST 2009
Dear SeqFans,
I received the following submission for my number gossip page (numbergossip.com) from Anand Deopurkar:
"A unique property of 163: It is the largest number n such that the integers in the imaginary quadratic extension Q(\sqrt -n) have the unique factorization property."
Can someone confirm this?
He also sent a sequence which is not in the database:
"Integers in the following imaginary quadratic fields Q(\sqrt -n) have the unique factorization property: n = 1,2,4,7,11,19,43,67,163. So you could add this as a rare property for those integers as well."
|
2022-01-28 18:58:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6810343265533447, "perplexity": 2997.5566795289596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306335.77/warc/CC-MAIN-20220128182552-20220128212552-00403.warc.gz"}
|
https://www.ativotecnologia.com/auditor-general-lqnmhj/find-the-reference-angle-for-330-degrees-5f3e0e
|
The reference angle of an angle θ in standard position is the acute angle formed by the terminal side of θ and the horizontal (x-) axis; it always remains between 0° and 90°, even for large angles. Coterminal angles are angles in standard position that share the same terminal side; they differ by whole rotations, β = α ± 360°·k in degrees or β = α ± 2π·k in radians, and any two coterminal angles have the same reference angle. For a negative angle, add a positive rotation of 360° until the result is positive: for example, -240° + 360° = 120°, and since 120° is positive you can stop there.
If the angle is larger than 360°, first "unwind" it by subtracting 360° until it lies between 0° and 360°; for instance, 544° - 360° = 184°. The reference angle then depends on the quadrant: in quadrant I it is θ itself, in quadrant II it is 180° - θ, in quadrant III it is θ - 180°, and in quadrant IV it is 360° - θ. By these rules the reference angle of 125° is 55°, of 5π/4 is π/4, and of 11π/6 is π/6. To evaluate trigonometric functions of angles beyond 90°, combine the reference angle with the sign of each function in that quadrant (the CAST rule).
330° lies in quadrant IV, so its reference angle is 360° - 330° = 30°; the same is true for -330°, which is coterminal with 30°. Tangent is negative in the fourth quadrant, so tan 330° = -tan 30° = -1/√3, and therefore cot 330° = 1/tan 330° = -√3. Likewise, cos 330° = cos(360° - 30°) = cos 360° cos 30° + sin 360° sin 30° = √3/2. With the reference angle in hand, the exact trigonometric function values of any angle whose reference angle measures 30°, 45°, or 60° can be read off directly.
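These rules reduce to a short computation: bring the angle into [0°, 360°) and fold it onto the x-axis. An illustrative sketch (the function name and test values are ours):

```python
def reference_angle(theta_deg: float) -> float:
    """Reference angle in degrees: the acute angle between the terminal side
    of theta and the x-axis, after reducing theta to [0, 360)."""
    theta = theta_deg % 360          # coterminal angle in [0, 360)
    if theta <= 90:                  # quadrant I (or on the positive x-axis)
        return theta
    if theta <= 180:                 # quadrant II
        return 180 - theta
    if theta <= 270:                 # quadrant III
        return theta - 180
    return 360 - theta               # quadrant IV

print(reference_angle(330))    # 30
print(reference_angle(-330))   # 30  (coterminal with 30 degrees)
print(reference_angle(-240))   # 60  (coterminal with 120 degrees)
print(reference_angle(544))    # 4   (coterminal with 184 degrees)
```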
|
2021-11-30 05:47:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7869499325752258, "perplexity": 1198.5964568582178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00065.warc.gz"}
|
https://tutorme.com/tutors/72898/interview/
|
Tutor profile: Mohamed J.
Experienced Tutor. Math Enrichment at Rochester Math and Science Academy.
Questions
Subject: Linear Algebra
Question:
Solve the following system of equations $$x+y+z=27$$ $$x+y-2z=9$$ $$2x+3y+z=59$$
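Mohamed's worked answer is not shown on the page; for reference, here is a quick illustrative check of the system (not his solution):

```python
import numpy as np

# Coefficient matrix and right-hand side for
#    x +  y +  z = 27
#    x +  y - 2z =  9
#   2x + 3y +  z = 59
A = np.array([[1, 1, 1],
              [1, 1, -2],
              [2, 3, 1]], dtype=float)
b = np.array([27, 9, 59], dtype=float)

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # 10.0 11.0 6.0

# By hand: subtracting the second equation from the first gives 3z = 18, so z = 6;
# then x + y = 21 and 2x + 3y = 53 give y = 11 and x = 10.
```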
|
2019-09-17 12:16:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3618261218070984, "perplexity": 9615.360461427998}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00197.warc.gz"}
|
https://uol.de/en/lcs/topics-for-theses
|
# Topics for Theses
## Contact
Prof. Dr. Claus Möbus
Room: A02 2-226
Tel: +49 441 / 798-2900
claus.moebus@uol.de
-------------------------------------------
Secretary
Manuela Wüstefeld
Room: A02 2-228
Tel: +49 441 / 798-4520
manuela.wuestefeld@uol.de
-------------------------------------------
## Topics for Bachelor and Master Theses
Bachelor or Master theses in probabilistic modelling, machine learning or applied artificial intelligence are supervised by me and other researchers.
The work begins and ends with a presentation. In the initial presentation you introduce the topic and the milestone plan. This is done in my research seminar "Probabilistic Modeling" (inf533 and/or inf534). You should register via Stud.IP for this seminar and also qualify for a certificate of successful participation. This can be achieved by the (successful) presentation and the written milestone description.
In the final presentation you summarize the results of the work (if necessary with a demonstrator). This takes place in the corresponding seminar ("Oberseminar", etc.) of the thesis co-examiner. Depending on the special field of the thesis (e.g. probabilistic robotics, computational intelligence, machine learning, business intelligence), the following colleagues (Prof. Fränzle, Prof. Kramer, Prof. Sauer) are recommended as co-examiners.
Interested students should consult this schedule ("Leitfaden") and are invited to contact me via email:
Prof. Dr. Claus Möbus
## Algorithm for Generating the Smallest Sigma-Algebra $$\sigma(\varepsilon)$$ from a Set System $$\varepsilon$$
"A σ-algebra ... is a set system in measure theory, i.e. a set of sets. A σ-algebra is characterized by its closedness with respect to certain set-theoretic operations. σ-algebras play a central role in modern stochastics and integration theory, since they appear there as definition domains for measures and contain all sets to which one assigns an abstract volume or probability, respectively" (Wikipedia, 2021/03/07)
Every primer on stochastics (Behrends, 2013, p.11; Hable, 2015, p.9; Halpern, 2017, p.14ff; Hübner, 2009, p.17) contains the definition of the measure-theoretic concept of a sigma-algebra. The algebra contains the sets which define those events that can be measured by probabilities. After the definition, typical textbooks (e.g. Hable, 2015, p.10) present either some trivial examples with countable sets (e.g. Omega = {1, 2, 3, 4}) or some very abstract, sometimes counterintuitive examples.
In practical applications one is interested only in a special set $$\varepsilon$$ of events which should be measured by probabilities. This set is in most cases not a full-fledged sigma-algebra, so more sets have to be added to the set of interest $$\varepsilon$$ to "fill the gap" and embed $$\varepsilon$$ into a sigma-algebra. Once this has been accomplished, we say that "$$\sigma(\varepsilon)$$ has been generated by the set system $$\varepsilon$$". Sometimes (e.g. Pollard, 2010, p.19f) trivial examples with countable sets (e.g. $$\varepsilon =\{\{a, b, c\},\{c, d, e\}\}, \Omega=\{a, b, c, d, e\}$$; Fig.1, Fig.2) or very abstract, sometimes counterintuitive examples are given.
Despite its theoretical importance, no stochastics textbook presents an efficient algorithm for the generation of the smallest $$\sigma(\varepsilon)$$.
Such an algorithm should be developed for the finite case in a BSc or MSc thesis.
An algorithmic solution sketch was provided by Behrends (2013, p.16) and in a personal communication. "...this is an interesting question for which I know of no theoretical research. I think that the complexity grows strongly exponentially. More precisely, it looks like this. Let $$\Omega$$ have $$r$$ elements, let $$k$$ subsets $$E_1,...,E_k$$ be given, and one is interested in the sigma algebra $$\Sigma$$ generated by the $$E_i$$. To do this, one must know the atoms of $$\Sigma$$, the minimal nontrivial elements. If there are $$n$$ pieces, $$\Sigma$$ has $$2^n$$ elements.
And how do you find the atoms? The easiest way is by induction on $$k$$. For $$k=1$$ there are (at most) 2 atoms, and at the transition $$k \rightarrow k+1$$ one only has to form the intersections of the atoms for $$k-1$$ with $$E_k$$ and $$\Omega\setminus E_k$$. In short: there will be at most $$2^k$$ atoms, i.e. $$\Sigma$$ can have $$2^{2^k}$$ elements.
And this can happen in the worst case. For example, if $$\Omega=\{0,1\}^s$$ and one chooses $$E_i$$ as "i-th entry equals 1" for $$i=1,..,s$$, the atoms are the one-element sets, so there are $$2^s$$ atoms. Unfortunately, I cannot contribute further subtleties. For example, how elaborate is it to compute the intersection of two subsets of an r-elementary set? I guess $$2r$$ steps, and thus we end up with an effort of $$2r \cdot 2^{2^k}$$." (personal email of E.Behrends, 2021/03/04)
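For the finite case, Behrends' induction translates almost directly into code: refine the current atoms by splitting each of them with the next generator and its complement, then take all unions of atoms. The sketch below is my own illustration in Python (the thesis itself asks for pseudocode plus a Julia implementation); it uses the small textbook example from above.

```python
from itertools import combinations

def atoms(omega, generators):
    """Atoms of the sigma-algebra generated by `generators` over finite `omega`,
    built by successive refinement (induction over the generators)."""
    parts = [frozenset(omega)]
    for E in map(frozenset, generators):
        refined = []
        for atom in parts:
            for piece in (atom & E, atom - E):   # split each atom by E and its complement
                if piece:
                    refined.append(piece)
        parts = refined
    return parts

def generated_sigma_algebra(omega, generators):
    """All unions of atoms: 2^n sets if there are n atoms."""
    ats = atoms(omega, generators)
    sigma = set()
    for r in range(len(ats) + 1):
        for combo in combinations(ats, r):
            sigma.add(frozenset().union(*combo))
    return sigma

omega = {"a", "b", "c", "d", "e"}
eps = [{"a", "b", "c"}, {"c", "d", "e"}]            # the textbook example from above
print(len(atoms(omega, eps)), len(generated_sigma_algebra(omega, eps)))   # 3 atoms -> 8 sets
```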
In a BSc thesis, the algorithm for a finite $$\Omega$$ should be formulated in pseudocode, its complexity should be specified, and it should be implemented in Julia.
In an MSc thesis, building on the results of the BSc thesis, the algorithmic idea should, if possible, be transferred to the transfinite domain. A near-algorithmic solution sketch is provided in Behrends (2013, p.16, p.29f).
Interested students should apply with proof of their competence in stochastics and machine learning.
References
• BEHRENDS, E., Elementare Stochastik - Ein Lernbuch, von Studierenden mitentwickelt - , Springer Spektrum, 2013
• HABLE, R., Einführung in die Stochastik - Ein Begleitbuch zur Vorlesung - , Springer Spektrum, 2015
• HALPERN, J.Y., Reasoning About Uncertainty, 2/e, MIT Press 2017
• HÜBNER, G., Stochastik - Eine anwendungsorientierte Einführung für Informatiker, Ingenieure und Mathematiker, 5.Auflage, Vieweg-teubner, 2009
• POLLARD, D., A User's Guide to Measure Theoretic Probability, Cambridge University Press, 2010
Interested students should consult this schedule ("Leitfaden") and are invited to contact me via email:
Prof. Dr. Claus Möbus
## Paradigmatic Problems for the Probabilistic Programming Languages TURING.jl and WebPPL
WebPPL and Turing.jl are relatively new Turing-complete universal probabilistic programming languages (PPLs). WebPPL is embedded in the functional part of JavaScript (JS). Turing.jl is embedded in Julia. Both WebPPL and Turing.jl are open source and experimental.
PPLs are used for building generative probabilistic models (GPMs). These models represent causal background knowledge which is characteristic of experts. In contrast, deep learning models only represent shallow knowledge, which is useful for pattern matching and computer vision; in that respect the latter have become rather successful. This paper describes, with many examples, the fundamental difference between DL models and GPMs (Lake et al., 2016).
The thesis should survey models programmed in WebPPL and Turing.jl according to a metric measure (e.g. code length). This collection is called the paradigmatic example set WebPPL-problems $$\cup$$ Turing.jl-problems. It should be partitioned into the joint set WebPPL-problems $$\cap$$ Turing.jl-problems and the two difference sets Turing.jl-probs \ WebPPL-probs and WebPPL-probs \ Turing.jl-probs. The last two sets are of special interest: do they exist by chance, or are there fundamental difficulties in formulating a problem solution in one of the two languages?
Let's take the example of "penalty kicks" from the football world. There are two agents in the penalty shootout: the goalkeeper and the shooter. Each agent has certain preferences for certain shots and certain defensive measures, e.g. the left upper corner. At the same time, each agent has guesses about the opponent's preferences. We know that the language constructs of WebPPL are sufficient to model such situations. But what about Turing.jl?
This question should be clarified in the thesis.
Interested students are invited to contact me via email:
Prof. Dr. Claus Möbus
## Bayesian Portfolio Optimisation through Diversification of Risks
At the latest when prices collapse, some securities owners wish they had done something to spread and minimise risk. According to Martin Weber (Professor at the University of Mannheim), a layman cannot perform better than the market, but he or she can do something for risk management in the portfolio. The classical non-Bayesian procedure was described theoretically by Nobel Prize winner Markowitz in 1952 in the article Portfolio Selection; Markowitz was awarded the Nobel Prize for this in 1990. Weber gives practical advice in chapter 6 of his book 'Genial einfach investieren; Mehr müssen Sie nicht wissen - das aber unbedingt!'. Somewhat more mathematical - but still easy to read - is the treatment of portfolio optimisation in Section 4.2 Diversification of Risks in the book by Cottlin & Döhler, Risikoanalyse, 2013, 2/e.
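For orientation, the core of the classical (non-Bayesian) Markowitz calculation is a small quadratic optimisation; the global minimum-variance portfolio, for example, has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). Below is a hedged numerical sketch with made-up covariances for three hypothetical ETFs; it is not the web app from the earlier BSc thesis.

```python
import numpy as np

# Illustrative annualised covariance matrix for three hypothetical ETFs
Sigma = np.array([[0.040, 0.010, 0.004],
                  [0.010, 0.025, 0.006],
                  [0.004, 0.006, 0.020]])
ones = np.ones(3)

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
inv_ones = np.linalg.solve(Sigma, ones)
w = inv_ones / (ones @ inv_ones)

print("weights:", np.round(w, 3), "sum:", w.sum())
print("portfolio variance:", w @ Sigma @ w)
```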
In a previous BSc thesis the problem was solved up to a portfolio of 3 assets in the form of ETFs (see below news on this website).
There are now two new challenges. (1) In another BSc thesis, the optimization problem with more than 3 ETFs is to be solved, and the existing webapp is to be further developed and evaluated for usability with laypeople interested in finance.
(2) An MSc thesis will investigate how the classical Markowitz approach can be extended the Bayesian way. For this purpose, the posterior distribution of the securities' shares in the portfolio must be calculated. A literature search is expected in the first third of the thesis. In the second third, implementation options are to be examined. In the last third, a small demonstrator is to be built.
Previous knowledge: The candidate should have successfully studied WI (business informatics), have a basic knowledge of descriptive statistics (mean, standard deviation, variance, correlation, regression, etc.) and of machine learning, understand chapter 6 of Weber's book and the existing BSc thesis, and be able to create dynamic web pages.
Interested students are invited to contact me via email:
Prof. Dr. Claus Möbus
## Probabilistic Modeling with Model Fragments, Patterns, or Templates
WebPPL is a web-based probabilistic programming language (PPL) embedded in JavaScript. PPLs are used for implementing probabilistic models in domains with uncertain knowledge (cognition, medicine, traffic, finance, etc.). Depending on the situation, problem-specific questions are formalized as unknown (conditional) probabilities. Models and programs generate answers by numerical inference processes.
The interactive tutorial "Probabilistic Models of Cognition" provides a variety of WebPPL models. There is nearly always a fixed sequence of modelling steps: 1) modelling the causal process of interest (root causes, exposures -> syndromes -> symptoms), 2) observation of evidence (data), 3) (diagnostic) inference, most often contrary to the causal direction (symptoms -> syndromes -> exposures).
Challenge: The research question is whether the modelling process can be improved substantially by a library of model fragments, patterns, or templates.
Prerequisites: The topic is suited for a master thesis. The necessary background can be acquired by successful participation in the seminar "Probabilistic Modeling I & II" (Inf533, Inf534) and by studying the above-mentioned tutorial.
|
2021-06-24 17:51:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6070844531059265, "perplexity": 2170.362220592173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00358.warc.gz"}
|
http://math.stackexchange.com/questions/35374/minimal-cut-in-a-network
|
# Minimal cut in a network
How do you find the minimal cut in a network?
If you give an example or some background, it'll be easier for us to answer. Do you have a particular network in mind? – Ben Derrett Apr 27 '11 at 8:16
## 1 Answer
I refer you here. It's a pretty simple algorithm.
Or do a search for "network flow" and take your pick of the results. If you prefer hard copy, most textbooks on Discrete Mathematics will have a chapter on network flows, with a discussion of the algorithm. – Gerry Myerson Apr 27 '11 at 13:14
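For concreteness (an illustration, not part of the original answers): by the max-flow min-cut theorem you can compute a maximum s-t flow and read off a minimal cut; the networkx library packages this as `minimum_cut`. A sketch on a made-up graph:

```python
import networkx as nx

# A small directed network with edge capacities (made-up example)
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
cut_edges = [(u, v) for u, v in G.edges if u in source_side and v in sink_side]

print(cut_value)   # equals the max-flow value by the max-flow min-cut theorem
print(cut_edges)   # the edges crossing the minimal cut
```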
|
2014-07-30 17:31:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8106022477149963, "perplexity": 607.4033274008268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270877.35/warc/CC-MAIN-20140728011750-00240-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/an-algebra-problem-by-anthony-pham/
|
# An algebra problem by Anthony Pham
Algebra Level pending
There is a sequence of integers $$a_{1} ,a_{2},a_{3},\ldots$$ such that $$a_{n}=a_{n-1}-a_{n-2}$$, for every $$n>2$$. If the sum of the first 1996 terms is 2015, and the sum of the first 2015 terms is 1996, what is the sum of the first 2019 terms?
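The problem page gives no posted solution; here is one hedged way to check an answer numerically (the approach and names are ours). The recurrence $$a_n = a_{n-1} - a_{n-2}$$ is periodic with period 6 and each block of six consecutive terms sums to 0, so every prefix sum is a linear function of $$a_1$$ and $$a_2$$; the two given sums then determine $$a_1$$ and $$a_2$$, and the requested sum follows.

```python
import numpy as np

def prefix_sum(a1, a2, n):
    """Sum of the first n terms of a_k = a_{k-1} - a_{k-2} with the given a_1, a_2."""
    terms = [a1, a2]
    while len(terms) < n:
        terms.append(terms[-1] - terms[-2])
    return sum(terms[:n])

# Each prefix sum is linear in (a1, a2); extract its coefficients by probing with the
# basis vectors (1, 0) and (0, 1), then solve for a1, a2 from the two given sums.
M = np.array([[prefix_sum(1, 0, 1996), prefix_sum(0, 1, 1996)],
              [prefix_sum(1, 0, 2015), prefix_sum(0, 1, 2015)]], dtype=float)
a1, a2 = np.linalg.solve(M, np.array([2015.0, 1996.0]))

print(a1, a2)                      # -1977.0 19.0
print(prefix_sum(a1, a2, 2019))    # 38.0 -- sum of the first 2019 terms
```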
|
2017-01-18 06:08:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262683153152466, "perplexity": 99.94306293106757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://blog.melindalu.com/month-2021-08/
|
# Building: How to build an outdoor climbing cube part 2 — the climbing surface
Thing practiced: building the holiest wall
(This is the second part of 3 — part 1 covers design, foundation, and framing; and part 3 will cover flooring, finishing touches, and routesetting.)
## So far
There’s a frame, but no surface to mount climbing holds to. What next?
Tools used for climbing surface (utility rating out of 5):
• Stanley yellow panel carry handle (★★★★½)
• putty knife (★★★★)
• hand sanding block (★★)
• rubber mallet (★★★)
• 500 golf tees (★★★½)
• paint rollers and tray (★★★★)
• paint can opener (★★★★★)
• paint stirrer (★★★½)
• two Wal-Board Tools 8" taping knives (★★★★)
• Wal-Board Tools 14" mud pan (★★★★)
• a sacrificial sheet of 1" x 4' x 8' foam insulation (★★★★)
• Makita 7.25" circular saw with Diablo 40-tooth blade (★★★★½)
• tape measure (★★★★½)
• regular Sharpies (★★★★)
• metallic Sharpies (★★★★)
• 2' level (★★★½)
• Bosch 12V drill (★★★★½)
• Black and Decker drill (★★)
• Black and Decker 20V impact driver (★★★★½)
• Fisch 7/16" Forstner bit (★★★★★)
• standard drill bit and impact bit set (★★★★)
• reciprocating saw
• hammer (★★★)
Other tools used:
• whiteboard
• 8x11 paper
• several cameras
• JBL Charge 3 (★★★★★)
Parts list (climbing surface):
• about 800 2-hole T-nuts from Escape Climbing with accompanying mounting screws
• 11 sheets 3/4" x 4' x 8' marine-grade CDX plywood
• 3 Simpson Strong-Tie galvanized heavy angles
• 6 1/2" bolts/nuts/washers
• 400 Deckmate #9 x 3" screws
• 300 GRK #9 x 2.5" R4 screws
• 40 GRK 5/16" x 4" RSS structural screws
• 40 GRK 1/4" x 2.5" RSS structural screws
• 2 gallons KILZ 2 interior/exterior primer
• 2 gallons Behr porch and patio paint (silver gray low-lustre enamel)
• 7 gallons drywall mud
• 15 pounds play sand
• 3 lbs DAP Plastic Wood-X wood filler
## Planning
### 1. Deciding on the plywood to use
Based mainly on the perceived safety of obtaining sheets mid-pandemic, we decided to go with 3/4"-thick marine-grade, pressure-treated plywood from Economy Lumber Oakland. Each sheet was a standard 4' x 8', and we decided to get 11 sheets to cover all our surfaces and leave some extra for adding volumes later.
### 2. Choosing the T-nut layout
Climbing holds are mounted to a climbing surface (whether made of wood or not) with 3/8"-16 bolts threaded into 3/8"-16 T-nuts that are attached firmly to the climbing surface.
Before getting started with preparing our climbing surface, we needed to decide how we would lay out our T-nuts. To do this, we looked around at other people’s builds, and drew alternatives out on sheets of 8.5” x 11” paper, and settled on a grid with holes sqrt(32) == 5.66” apart, diagonally (equivalent to rows of holes 8" apart horizontally, spaced 4" apart vertically, and offset horizontally by 4" per row).
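To sanity-check hole counts and spacing before marking, the offset grid can be generated programmatically. A small sketch of the layout described above (our own illustration, with made-up margins rather than the exact layout we marked; dimensions in inches):

```python
import math

def tnut_grid(width_in=48, height_in=96, row_spacing=4, col_spacing=8, margin=4):
    """Offset T-nut grid: rows 4" apart vertically, holes 8" apart within a row,
    every other row shifted 4" -> nearest neighbours sqrt(32) ~ 5.66" apart."""
    holes = []
    row = 0
    y = margin
    while y <= height_in - margin:
        offset = (row % 2) * (col_spacing / 2)   # shift alternate rows by 4"
        x = margin + offset
        while x <= width_in - margin:
            holes.append((x, y))
            x += col_spacing
        y += row_spacing
        row += 1
    return holes

holes = tnut_grid()
print(len(holes), "holes per 4' x 8' sheet (with these illustrative margins)")
# Diagonal spacing between a hole and its nearest neighbour in the next row:
print(math.hypot(4, 4))   # 5.656... inches, i.e. sqrt(32)
```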
### 3. Setting up the workspace
Four feet by eight feet of plywood turn out to be way more cumbersome to handle in practice than they are in theory, so we needed to amass a few tools and clean out some space to make it possible to work with our plywood sheets.
First, we needed to figure out a way to cut our sheets, without snagging our saw blade (i.e. if the sheets collapse in from two sawhorses supporting each short end) or wasting too much wood cutting into our supports (i.e. if we used long 2x6s to support the sheet along its length). After trying to come up with cleverer options, we decided to just get a cheap piece of 4' x 8' foam insulation from Home Depot to place under an entire sheet of plywood as we cut it, and be okay with shredding the insulation over the course of the project. This worked pretty well.
Then, learning from the failures of the stubborn approach to post digging last time, we decided to purchase a sheet-goods lifter. This is a $5 plastic handle-type thing that you use to make it possible to get both arms around a wide sheet to carry it safely. This, disappointingly, also worked pretty well.
Then we spent some time moving lumber/other stuff around our home until there was a large-enough space to have three flat-sheet-of-plywood-size spaces: one for resting, one for marking and drilling, and one for one of: cutting, T-nut mounting, priming, texturing, or painting.
## Plywood
### 4. Cutting to size and filling in the surface with wood putty
We tried to design the shape of our wall to use an intact sheet of plywood where possible, to minimize excess cutting. The whole climbing cube is pretty small, though, so there were only three sections that were 4' x 8' or larger flat — so there were only three sheets of plywood we could keep whole.
For the rest of the climbing surface, we tried to close-pack our sections as tightly as possible to minimize material waste:
To make the cuts, we placed the foam insulation directly underneath the sheet being cut, marked our cuts, then just ran the circular saw across the plywood — leaving cuts in the foam but a neat and unburred plywood cut, and no cut-up sawhorses.
Because we were using CDX plywood (which has faces of quality C and D — not good), there were significant flaws and gouges in most of the sheets. We tried our best to fill these with wood filler, then sand the surface down to be reasonably flat (but not perfect — we knew that we’d still be texturing and painting on top).
### 5. Marking and drilling
After each sheet was cut to size and roughly-flat, we marked and drilled holes for our T-nuts. We tried to stick to the 5.66" grid described in 2., with some holes offset to avoid hitting the framing members that would sit behind them.
To make the holes the T-nuts would fit into, we drilled using a 7/16" Forstner bit, which was the perfect size to have the T-nuts sit snugly. (Forstner bits are expensive but make fantastically precise, crisp-edged holes — if we’d used standard bits, the holes would likely be too ragged to fit our T-nuts perfectly.)
We thought it’d be important to make the T-nut holes as perfectly perpendicular to the surface as possible, to make sure the holds would mount securely. To try to achieve this without too much tooling, we used a block of wood with a hole drilled through it as a guide. This was imprecise enough (but I was lazy enough to not make a proper jig) that we ended up abandoning this partway through and just doing this by eye. So far this has turned out to be workable.
### 6. Mounting the T-nuts
We chose 2-hole T-nuts from Escape Climbing, which hit a sweet spot in optimizing for barrel length, corrosion-resistance, and availability during the pandemic. (In an ideal world these would be stainless and I would’ve gotten to order them from the McMaster-Carr catalog, but that option was prohibitively more expensive.)
To mount the T-nuts into the plywood, we hammered them in with a big mallet, predrilled pilot holes in each screw hole, and attached with two (provided) screws.
### 7. Teeing
Now, with the sheets cut to size and full of T-nuts, we were ready to add the surface finish to the plywood. To avoid getting any primer/texture/paint into our arduously-mounted T-nut threads, we stole an idea we saw on YouTube and plugged them with golf tees.
### 8. Priming
To improve adhesion of our texture to the plywood, we started with a latex primer layer. We used KILZ 2 interior/exterior primer, which seemed to be reasonably priced for the quality.
### 9. Surface texturing
We were originally going to skip this step and just use paint with some sand in it directly on top of the primer. We tried this on one sheet, though, and found the wood texture was too pronounced to allow for smearing, and gave us a headache.
We thought we’d be consigned to misery forever, but our friend Kris suggested that we look into rock texture, and instead of giving up we figured out how to mix up something nicely-thick that would still stick well to the plywood: about 2:3:8 by volume of sand, primer, and drywall mud. We applied this with drywall taping tools instead of a paint roller, although in retrospect it might’ve been easier to thin it with water and apply with a paint roller.
### 10. Painting
By this point we were really bored of doing this. But, painting is fun? We chose Behr porch and patio paint because their display at Home Depot was appealing and because climbing walls, porches, and patios are all walked on with feet.
## Putting it all together
### 11. Mounting the wall G plywood to frame G
We had decided (when framing) to use our existing fence posts (which were already concreted down 16") as our main frame, with 2x6s added horizontally to carry load to the posts. Because these 2x6s were placed with their 5.5" face flush to the fence, in order to place the plywood flush against them, we needed to drill out holes where the T-nut backs and hold bolts would intersect with the girts.
After preparing the framing members, mounting the sheets was straightforward: we used a handful of large structural screws and several handfuls of wood/decking screws per sheet to attach the plywood both directly into the fenceposts and to the girts.
### 12. Mounting the wall F plywood to frame F
Mounting the surface for the right half of wall F was even more straightforward: because the 2x6 framing stringers were mounted in a standard balloon-frame configuration, we had been able to drill our T-nut holes to avoid hitting the stringers. We mounted the sheets using structural screws and wood/decking screws both directly to the posts and to the stringers across the sheet.
### 13. Mounting the WALL·E plywood to FRAME·E
For the cave wall (WALL·E) (and the corner of wall F above), we again begged for help from our friend S, who agreed to help us make this work. This was trickier than the other walls for a few reasons:
• the sections were overhung at different angles, so we couldn’t rest them on each other while drilling, and attaching the plywood sheets while holding them in place was physically taxing;
• the wall was taller, so we needed to do this work on a ladder;
• I had dug an asymmetric and partial hole in the ground to get started on the flooring, so using the ladder was a fun and dangerous activity; and
• senioritis.
Eventually, with the help of S’s reciprocating saw, we got this up.
### 14. Finishing: covering fastener holes and filling in cracks and edges
We wanted the sheets to feel like one continuous surface (and for climbers not to be hurt by protruding edges), so we filled the holes and the sub-millimeter gaps between plywood sheets with wood putty, drywall mud, and paint.
Now we have a cube! Can we resist climbing on it until we have proper safety matting in place?
### P.S.: references
Thanks to the Vancouver Carpenter for being very handsome and teaching us about drywall.
# Building: How to build an outdoor climbing cube part 1 — design, foundation, framing
Thing practiced: building the tiniest and most-exposed of houses
(This is the first part of 3 — part 2 covers how we built the climbing surface; and part 3 will cover flooring, finishing touches, and routesetting.)
As a staying-inside project this pandemic, V, some friends, and I built this miniature climbing gym. Its name is minicube:
Tools used for foundation and framing (utility rating out of 5):
• Makita 7.25" circular saw (★★★★½)
• Rockwell 4.5" compact circular saw (★★½)
• 2 Swanson Tool speed squares (★★★★★)
• Bosch 12V drill (★★★★½)
• Black and Decker 20V impact driver (★★★★½)
• 16-inch-long 3/8" and 1/2" spade bits
• standard drill bit and impact bit set
• tape measure (★★★★½)
• 2' level (★★★½)
• post level (★★½)
• shovel (★★★)
• hammer (★★★)
• regular Sharpies (★★★½)
• metallic Sharpies (★★★★)
• no. 2 pencils (★★★)
• T-bevel (★★)
• some string (★★★½)
Information tools used:
• whiteboard (★★★★½)
• 8x11 paper (★★★★)
• 4x6 and 5x8 index cards (★★★★)
• blue painters/leftover skeleton tape (★★★★)
• Jira (★★★)
• SketchUp trial (★★★)
Other tools used:
• several cameras
• JBL Charge 3
Parts list (foundation and framing only):
• 750 lbs Quikrete quick-setting concrete
• 2 pcs 6x6 x 16' pressure-treated lumber for primary posts
• 2 pcs 4x4 x 12' pressure-treated lumber for secondary posts
• 4 pcs 2x6 x 10' pressure-treated lumber for bottom cross-bracing
• 50 pcs 2x6 x 12' douglas fir framing lumber for the rest of the framing
• 1/2" carriage bolts/nuts/washers
• 3/8" carriage bolts/nuts/washers
• Simpson Strong-Tie LUS25Z 2x6 face-mount joist hangers
• Simpson Strong-Tie 12 A21, 26 A23Z, 22 A34Z, 4 A35Z angles, 4 H2.5Z hurricane ties, and 1 TP15 tie plate
• 600 Simpson Strong-Tie #9 x 1.5" SD connector screws
• 300 Simpson Strong-Tie #9 x 2.5" SD connector screws
• 12 GRK 5/16" x 4" RSS structural screws
• 12 GRK 1/4" x 2.5" RSS structural screws
• 20 GRK #9 x 2.5" R4 screws
• a handful of nails
• Rustoleum gray paint
## Why
We saw Stacey’s, and it created a sense of wonder in our hearts. The dream: of a place of refuge a few steps away from one’s usual cares; a place where the outlines of the self could be more distinct. A place that would let us dream in peace; to try on alternate futures and feel how they fit. Something uncomplicated and fun.
## Design
### 1. Exploration, goals, and constraints
Our goals were (1) to have a climbing area where people can recreate, (2) with a cave, (3) and a few corners for stemmy climbs; (4) without being annoying to the neighbors.
Constraints: it all had to fit in a 110" x 110" footprint, without anchoring to the existing building. Figuring out how to preserve the existing sunlight was also important — building the walls too high would block too much light, but too-short walls aren’t climbable.
After a few experiments, we decided on a three-sided configuration with one taller (but still short) wall with 10° and 65° overhangs, close to the existing building (marked as WALL·E), and two shorter vertical walls on the other two sides (walls F and G). WALL·E would be about 10' high, while walls F and G would be about 6' high.
## Design
### 2. Engineering design
After we had a rough idea of what we wanted to build, it was time to engineer it. This was fun: I got to revisit beam theory, read about/measure West Oakland soil densities, read and reread the Oakland building code; and a bunch of other desk work.
After we finished all the load calculations (love a cantilever), we chose our materials and drew it up in SketchUp:
## Design
### 3. Getting ready for fabrication
Fabrication seemed straightforward: we just needed to sink the footings, build the structure, then clad it with a climbing surface.
We assembled the materials:
Then we warmed up and tested our tools by ~~building some~~ watching V build some sawhorses.
## Foundation
### 4. Digging
First, we called before we dug.
Next we had to kill all the plants that were previously growing in that space (RIP wisteria), and take a few inches of surface soil off to maximize available height.
Then, in an excruciatingly tedious process, we dug our post holes 42" deep (note to diggers: please don’t be stubborn, just borrow/rent/buy a post hole digger).
## Foundation
### 5. Setting posts for WALL·E
We used pressure-treated lumber from Economy Lumber Oakland for our posts. For WALL·E proper, we used one 6x6 for the leftmost post, and two 4x4s for the center and right posts.
Because the posts would be holding up about 1,500 lbs of dead weight and potentially several times that in (dynamic) live loads, we needed to make sure the foundation was sound.
To set each one, we filled the bottom 4" of each post hole with gravel, put a few nails in the post to get extra grip for the concrete, poured concrete for the remaining 3+' to ground level, then smoothed the top of the concrete away from the post to try to direct any rainwater runoff away from the post.
## Framing
### 6. Doing the rest of the framing for WALL·E
There were two parts to our framing: horizontal girts to tie the posts together and serve as the main load-bearing structure, and a standard-ish balloon frame to carry load from the climbing surfaces back to the posts. We used pressure-treated 2x6s for the parts of the frame that would touch the ground, and standard 2x6 douglas fir, painted with RUST-OLEUM silver gray, for the rest.
We framed everything using 2x6s at about 16" on-center, with slight deviations depending on hole placement for the climbing-hold holes (more on that in the next post).
First, we put up WALL·E’s horizontal girts. We attached these to the posts with 1/2" and 3/8" carriage bolts because the girts would be part of the critical load path from our climbers to the ground, and these would be critical joints. (This meant that we got to use these absurdly-long drill bits.)
For the 10° and 65° overhanging walls, we used joist hangers to carry load from the framing members to the post/girt structure. Joist hangers are the fasteners used to attach floor/ceiling boards to the vertical frames in a standard wood-framed house. In our climbing cube, the overhanging walls carry load more like a ceiling/floor than like a vertical wall in a house.
The first section of wall framing we put up was the 10° section. We cut, painted, and beveled the wood stringers, then attached them to the girts, posts, and themselves with joist hangers, 1/2" bolts, various other Simpson Strong-Ties, and structural and wood screws:
After that, we put up the framing for the 65° section. This was basically the same as for the 10° section, except that the forces here would be larger, so we sized up the structural elements and fasteners commensurately:
We put up a minimal vertical frame at the bottom of WALL·E for mounting the kicker board:
That was as far as we wanted to get with framing before putting in our last two support posts.
## Foundation
### 7. Setting the last two posts (in wall F)
The last two posts were one pressure-treated 6x6 (our longest post: 112" above ground, 42" below ground) to support WALL·E from the top and wall F, and one pressure-treated 4x4 for the right side of wall F. We used the same process as for the earlier three posts, but this time with some video:
## Framing
### 8. Doing the framing for walls F and G
Framing up walls F and G was much more straightforward than WALL·E — these would be vertical walls, going up only to our fence line (about 6'). For wall G, we planned to use our existing fence posts (which were already concreted down 16") as our posts, adding only horizontal girts for bracing and mounting the climbing surface to.
First we had to remeasure and replan, to see if our original plan still made sense. On a second pass, we decided to cut the height of these two walls, to (1) block less sunlight, (2) not be such an eyesore for the neighbors, and (3) avoid digging any more post holes.
For wall F (the vertical wall adjoining the cave wall), we put up a straightforward balloon frame in place using 2x6s, Simpson Strong-Ties, and some screws — a relief after WALL·E.
For wall G (a totally-vertical 16'-wide traverse wall), we cross-braced our 4x4 fenceposts with horizontal 2x6 girts attached with structural screws.
### 9. Back to WALL·E: framing the topmost section
To put up the framing for the 0° vertical section at the top of WALL·E, we procrastinated for two weeks, decided we might never complete the project, then called in help from pro contractor team S and C. With them on the team it was a breeze and a joy.
Phew. Framing complete. Next post: okay but what about the climbing part? (e.g., what's the fastest way to mount 800 T-nuts?)
### P.S.: references
Special thanks to jennsends and MattBangsWood for information, dispelling our distrust of YouTube (a little), and inspiration.
|
2021-11-29 06:06:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3020128011703491, "perplexity": 8218.585641247884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358688.35/warc/CC-MAIN-20211129044311-20211129074311-00605.warc.gz"}
|
https://www.lmfdb.org/L/2/1300
|
Results (1-50 of at least 1000)
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim arith $\mathbb{Q}$ self-dual $\operatorname{Arg}(\epsilon)$ $r$ First zero Origin
2-1300-13.5-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 13.5 $0.0$ $0$ $-0.0467$ $0$ $0.977407$ Modular form 1300.1.t.a.1201.1
2-1300-13.5-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 13.5 $0.0$ $0$ $-0.0467$ $0$ $1.40076$ Modular form 1300.1.t.b.1201.1
2-1300-13.8-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 13.8 $0.0$ $0$ $0.0467$ $0$ $1.16770$ Modular form 1300.1.t.a.801.1
2-1300-13.8-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 13.8 $0.0$ $0$ $0.0467$ $0$ $1.75504$ Modular form 1300.1.t.b.801.1
2-1300-1300.1003-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1003 $0.0$ $0$ $0.397$ $0$ $1.79652$ Modular form 1300.1.cy.a.1003.1
2-1300-1300.1039-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1039 $0.0$ $0$ $-0.0200$ $0$ $1.20643$ Modular form 1300.1.bd.a.1039.1
2-1300-1300.1039-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1039 $0.0$ $0$ $0.0800$ $0$ $1.56100$ Modular form 1300.1.bd.b.1039.1
2-1300-1300.1047-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1047 $0.0$ $0$ $0.222$ $0$ $2.03727$ Modular form 1300.1.cy.a.1047.1
2-1300-1300.1087-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1087 $0.0$ $0$ $-0.0332$ $0$ $1.38121$ Modular form 1300.1.ca.a.1087.1
2-1300-1300.1103-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1103 $0.0$ $0$ $0.0205$ $0$ $1.63200$ Modular form 1300.1.ct.a.1103.1
2-1300-1300.1123-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1123 $0.0$ $0$ $0.0332$ $0$ $1.11804$ Modular form 1300.1.ca.a.1123.1
2-1300-1300.1163-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1163 $0.0$ $0$ $0.426$ $0$ $0.349263$ Modular form 1300.1.cy.a.1163.1
2-1300-1300.1203-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1203 $0.0$ $0$ $0.105$ $0$ $1.33876$ Modular form 1300.1.ct.a.1203.1
2-1300-1300.1219-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1219 $0.0$ $0$ $0.0253$ $0$ $1.58718$ Modular form 1300.1.cp.a.1219.1
2-1300-1300.1219-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1219 $0.0$ $0$ $0.392$ $0$ $2.60125$ Modular form 1300.1.cp.b.1219.1
2-1300-1300.1227-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1227 $0.0$ $0$ $0.183$ $0$ $1.94954$ Modular form 1300.1.cf.a.1227.1
2-1300-1300.123-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.123 $0.0$ $0$ $0.0861$ $0$ $2.03782$ Modular form 1300.1.cy.a.123.1
2-1300-1300.1231-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1231 $0.0$ $0$ $-0.350$ $0$ $0.405306$ Modular form 1300.1.cl.a.1231.1
2-1300-1300.1231-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1231 $0.0$ $0$ $0.182$ $0$ $1.61681$ Modular form 1300.1.cl.b.1231.1
2-1300-1300.1239-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1239 $0.0$ $0$ $0.227$ $0$ $1.51709$ Modular form 1300.1.cp.b.1239.1
2-1300-1300.1239-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1239 $0.0$ $0$ $-0.00537$ $0$ $1.83820$ Modular form 1300.1.cp.a.1239.1
2-1300-1300.1263-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1263 $0.0$ $0$ $0.00743$ $0$ $1.42481$ Modular form 1300.1.cy.a.1263.1
2-1300-1300.1267-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.1267 $0.0$ $0$ $-0.0205$ $0$ $1.65907$ Modular form 1300.1.ct.a.1267.1
2-1300-1300.163-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.163 $0.0$ $0$ $-0.484$ $0$ $0.287807$ Modular form 1300.1.ct.a.163.1
2-1300-1300.167-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.167 $0.0$ $0$ $0.183$ $0$ $1.54475$ Modular form 1300.1.cy.a.167.1
2-1300-1300.179-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.179 $0.0$ $0$ $0.112$ $0$ $1.32915$ Modular form 1300.1.cp.b.179.1
2-1300-1300.179-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.179 $0.0$ $0$ $0.145$ $0$ $2.12309$ Modular form 1300.1.cp.a.179.1
2-1300-1300.187-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.187 $0.0$ $0$ $-0.276$ $0$ $0.990915$ Modular form 1300.1.cf.a.187.1
2-1300-1300.191-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.191 $0.0$ $0$ $0.129$ $0$ $1.69481$ Modular form 1300.1.cl.a.191.1
2-1300-1300.191-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.191 $0.0$ $0$ $0.0626$ $0$ $1.77408$ Modular form 1300.1.cl.b.191.1
2-1300-1300.203-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.203 $0.0$ $0$ $0.406$ $0$ $0.0442066$ Modular form 1300.1.cf.a.203.1
2-1300-1300.211-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.211 $0.0$ $0$ $-0.129$ $0$ $1.38322$ Modular form 1300.1.cl.a.211.1
2-1300-1300.211-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.211 $0.0$ $0$ $-0.0626$ $0$ $1.86932$ Modular form 1300.1.cl.b.211.1
2-1300-1300.223-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.223 $0.0$ $0$ $-0.332$ $0$ $0.646656$ Modular form 1300.1.cy.a.223.1
2-1300-1300.227-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.227 $0.0$ $0$ $-0.430$ $0$ $0.789268$ Modular form 1300.1.ct.a.227.1
2-1300-1300.259-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.259 $0.0$ $0$ $0.0200$ $0$ $1.09939$ Modular form 1300.1.bd.a.259.1
2-1300-1300.259-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.259 $0.0$ $0$ $-0.0800$ $0$ $1.72492$ Modular form 1300.1.bd.b.259.1
2-1300-1300.267-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.267 $0.0$ $0$ $-0.397$ $0$ $0.827276$ Modular form 1300.1.cy.a.267.1
2-1300-1300.323-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.323 $0.0$ $0$ $-0.109$ $0$ $0.993780$ Modular form 1300.1.ct.a.323.1
2-1300-1300.327-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.327 $0.0$ $0$ $0.484$ $0$ $1.99595$ Modular form 1300.1.ct.a.327.1
2-1300-1300.383-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.383 $0.0$ $0$ $0.196$ $0$ $1.35600$ Modular form 1300.1.cy.a.383.1
2-1300-1300.423-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.423 $0.0$ $0$ $-0.0241$ $0$ $1.54075$ Modular form 1300.1.ct.a.423.1
2-1300-1300.427-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.427 $0.0$ $0$ $-0.426$ $0$ $2.50710$ Modular form 1300.1.cy.a.427.1
2-1300-1300.439-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.439 $0.0$ $0$ $-0.267$ $0$ $1.05834$ Modular form 1300.1.cp.b.439.1
2-1300-1300.439-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.439 $0.0$ $0$ $0.165$ $0$ $1.17812$ Modular form 1300.1.cp.a.439.1
2-1300-1300.447-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.447 $0.0$ $0$ $-0.186$ $0$ $1.37302$ Modular form 1300.1.cf.a.447.1
2-1300-1300.459-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.459 $0.0$ $0$ $-0.165$ $0$ $0.768021$ Modular form 1300.1.cp.a.459.1
2-1300-1300.459-c0-0-1 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.459 $0.0$ $0$ $0.267$ $0$ $2.22221$ Modular form 1300.1.cp.b.459.1
2-1300-1300.463-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.463 $0.0$ $0$ $-0.183$ $0$ $1.54949$ Modular form 1300.1.cf.a.463.1
2-1300-1300.47-c0-0-0 $0.805$ $0.648$ $2$ $2^{2} \cdot 5^{2} \cdot 13$ 1300.47 $0.0$ $0$ $-0.143$ $0$ $0.757009$ Modular form 1300.1.ca.a.47.1
|
2022-09-30 15:18:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992088675498962, "perplexity": 464.995121657726}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00479.warc.gz"}
|
https://deepai.org/publication/fine-grained-reductions-from-approximate-counting-to-decision
|
# Fine-grained reductions from approximate counting to decision
The main problems in fine-grained complexity are CNF-SAT, the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). In this paper, we consider the approximate counting version of each problem; thus instead of simply deciding whether a witness exists, we attempt to (multiplicatively) approximate the number of witnesses. In each case, we provide a fine-grained reduction from the approximate counting form to the usual decision form. For example, if there is an O(c^n)-time algorithm that solves k-SAT for all k, then we prove there is an O((c+o(1))^n)-time algorithm to approximately count the satisfying assignments. Similarly, we get that the exponential time hypothesis (ETH) is equivalent to an approximate counting version. This mirrors a result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985), who proved such a result in the classical polynomial-time setting, and a similar result due to Müller (IWPEC 2006) in the FPT setting. Our algorithm for polynomial-time problems applies in a general setting in which we approximately count edges of a bipartite graph to which we have limited access. In particular, this means it can be applied to problem variants in which significant improvements over the conjectured running time bounds are already known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. Moreover, our overhead is only polylogarithmic, so it can be applied to subpolynomial improvements such as the n^3/2^{Θ(√(log n))}-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014).
## 1 Introduction
### 1.1 Approximate counting and decision in coarse-grained settings
It is clearly at least as hard to count objects as it is to decide their existence, and very often it is harder. For example, Valiant [21] defined the class $\#\mathsf{P}$ as the natural counting variant of $\mathsf{NP}$, and Toda [19] proved that $\mathsf{P}^{\#\mathsf{P}}$ contains the entire polynomial hierarchy. The decision counterparts of many $\#\mathsf{P}$-complete problems are in $\mathsf{P}$; for example, counting perfect matchings is $\#\mathsf{P}$-complete and detecting one is in $\mathsf{P}$.
However, the situation changes substantially if we consider approximate counting rather than exact counting. For all real $\varepsilon > 0$ and $N, \hat N \ge 0$, we say that $\hat N$ is an $\varepsilon$-approximation to $N$ if $(1-\varepsilon)N \le \hat N \le (1+\varepsilon)N$ holds. Clearly, computing an $\varepsilon$-approximation to $N$ is at least as hard as deciding whether $N > 0$ holds, but surprisingly, in many settings it is no harder. Indeed, Sipser [16] and Stockmeyer [17] proved implicitly that every problem in $\#\mathsf{P}$ has a polynomial-time randomised $\varepsilon$-approximation algorithm using an $\mathsf{NP}$-oracle; the result is later proved explicitly in Valiant and Vazirani [22]. This is a foundational result in the wider complexity theory of polynomial approximate counting initiated by Dyer, Goldberg, Greenhill and Jerrum [5].
Another example arises in parameterised complexity. Here, the usual goal is to determine whether an instance of size $n$ with parameter $k$ can be solved in “FPT time” $f(k)\cdot\operatorname{poly}(n)$ for some computable function $f$. Hardness results are normally presented using the $\mathsf{W}$-hierarchy (see for example [6]). Müller [14] has proved that for any problem in #W[1], there is a randomised $\varepsilon$-approximation algorithm using a W[1]-oracle which runs on size-$n$, parameter-$k$ instances in time $g(k)\cdot\operatorname{poly}(n,1/\varepsilon)$ for some computable $g$. He also proves analogous results for the rest of the hierarchy.
We consider the subexponential-time setting. The popular (randomised) exponential time hypothesis (ETH) introduced by Impagliazzo and Paturi [9] asserts that there exists $\delta > 0$ such that no randomised algorithm can solve an $n$-variable instance of $3$-SAT in time $2^{\delta n}$. (To streamline our discussion, we ignore the detail that some papers only allow for deterministic algorithms. Throughout the paper, we require randomised algorithms to have success probability at least $2/3$ unless otherwise specified.) We prove that ETH is equivalent to a seemingly weaker approximate counting version.
###### Theorem 1.
ETH is true if and only if there exists $\delta > 0$ such that, for every fixed $0 < \varepsilon < 1$, no randomised $\varepsilon$-approximation algorithm can run on $n$-variable instances of $\#3$-SAT in time $2^{\delta n}$.
Note that the usual argument of Valiant and Vazirani [22] does not apply in this setting without modification, as it adds clauses of linear width to the instance. Our proof will take a similar approach, but making use of a sparse hashing technique due to Calabro, Impagliazzo, Kabanets and Paturi [2]. (We give more detail in Section 1.4.)
### 1.2 Fine-grained results in the exponential setting
In fine-grained complexity, we are concerned not only with the classification of algorithms into broad categories such as polynomial, FPT, or subexponential, but with their more precise running times. A more fine-grained analogue of ETH is known as the strong exponential time hypothesis (SETH, see Impagliazzo, Paturi and Zane [10]), which asserts that for all $\delta > 0$, there exists $k$ such that no randomised algorithm can solve $n$-variable instances of $k$-SAT in time $2^{(1-\delta)n}$.
An analogue of Theorem 1 for SETH is implicit in Thurley [18], who provides a randomised approximate counting algorithm that makes use of a decision oracle: SETH is true if and only if for all , there exists such that no randomised -approximation algorithm can run on -variable instances of -SAT in time . However, this result is not ideal from a fine-grained perspective, as it does not guarantee that solving -SAT and approximating #-SAT have similar time complexities in the limit as . Indeed, given an algorithm for -SAT with running time , Thurley’s approximation algorithm has worst-case running time , that is, the exponential savings over exhaustive search goes down from for decision to using Thurley’s algorithm. Traxler [20] proved that if we allow super-constant clause width instead of considering -SAT, then the same savings can be achieved for approximate counting and decision. We strengthen Traxler’s result so that it applies in the setting of SETH.
###### Theorem 2.
Let . Suppose that for all , there is a randomised algorithm which runs on -variable instances of -SAT in time . Then for all and all , there is a randomised -approximation algorithm which runs on -variable instances of #-SAT in time .
In particular, if SETH is false, then Theorem 2 yields an efficient algorithm for approximating #-SAT for sufficiently large . Note that there is no particular reason to believe that an efficient decision algorithm would yield an efficient counting algorithm directly. Indeed, when , the most efficient known algorithms run in time for decision (due to Hertli [7]), in time for -approximate counting (due to Thurley [18]), and in time for exact counting (due to Kutzkov [12]).
It remains an open and interesting question whether a result analogous to Theorem 2 holds for fixed , that is, whether deciding -SAT and approximating #-SAT have the same time complexity up to a subexponential factor. For (large) fixed , the best-known decision, -approximate counting, and exact counting algorithms (due to Paturi, Pudlák, Saks, and Zane [15], Thurley [18], and Impagliazzo, Matthews, and Paturi [8], respectively) all have running time , but with progressively worse constants in the exponent. Our reduction has an overhead of , so we do not get improved approximate counting algorithms for fixed .
### 1.3 Fine-grained results in the polynomial-time setting
Alongside SAT, perhaps the most important problems in fine-grained complexity are 3SUM, Orthogonal Vectors (OV), and All-Pairs Shortest Paths (APSP). All three problems admit well-studied notions of hardness, in the sense that many problems reduce to them or are equivalent to them under fine-grained reductions, and they are not known to reduce to one another. See Williams [26] for a recent survey. We prove analogues of Theorem 2 for 3SUM and OV. While it is not clear what a “canonical” counting version of APSP should be, we nevertheless prove an analogue of Theorem 2 for the Negative-Weight Triangle problem (NWT) that is equivalent to APSP under subcubic reductions. Our results, together with previous decision algorithms, immediately imply three new approximate counting algorithms. (However, we believe that two of these, Theorems 4 and 8, may also be derived more directly by modifying their known decision algorithms.)
3SUM asks, given three integer lists $A$, $B$ and $C$ of total length $n$, whether there exists a tuple $(a,b,c) \in A \times B \times C$ with $a+b+c=0$. (Frequently the input is taken to be a single list rather than three; it is well-known that the two versions are equivalent.) It is easy to see that 3SUM can be solved in $O(n^2)$ operations by sorting $C$ and iterating over all pairs in $A \times B$, and it is conjectured that for all $\varepsilon > 0$, 3SUM admits no $O(n^{2-\varepsilon})$-time randomised algorithm. We obtain an analogue of Theorem 2 for the natural counting version of 3SUM, in which we approximate the number of tuples $(a,b,c) \in A \times B \times C$ with $a+b+c=0$. (See Section 2 for details on our model of computation.)
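For concreteness, here is a minimal Python sketch of the quadratic-time decision and exact-counting baselines; the list names are illustrative and this is not the reduction proved below.

```python
from collections import Counter

def has_3sum(A, B, C):
    """Decide whether some (a, b, c) in A x B x C has a + b + c = 0.
    With a set over C, the loop over A x B dominates: O(|A||B|) expected time."""
    c_set = set(C)
    return any(-(a + b) in c_set for a in A for b in B)

def count_3sum(A, B, C):
    """Exact number of witnesses, for comparison with approximate counting."""
    c_count = Counter(C)
    return sum(c_count[-(a + b)] for a in A for b in B)

assert has_3sum([1, 2], [3, 4], [-5, 7])
assert count_3sum([1, 2], [3, 4], [-5, 7]) == 2   # (1, 4, -5) and (2, 3, -5)
```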
###### Theorem 3.
If 3SUM with integers from has a randomised -time algorithm, then there is an -time randomised -approximation algorithm for #3SUM.
Note that normally is assumed to be at most , in which case our algorithm has only polylogarithmic overhead over decision. Thus independently of whether or not the 3SUM conjecture is true, we can conclude that 3SUM and, say, -approximating #3SUM have essentially the same time complexity.
Chan and Lewenstein [3] have proved that the 3SUM conjecture fails when the problem is restricted to instances in which elements of one list are somewhat clustered. This is an interesting special case with several applications, including monotone multi-dimensional 3SUM with linearly-bounded coordinates — see the introduction of [3] for an overview. By Chan and Lewenstein’s algorithm combined with an analogue of Theorem 3, we obtain the following result.
###### Theorem 4.
For all , there is a randomised -approximation algorithm with running time for instances of #3SUM with integers in such that at least one of , , or may be covered by intervals of length .
We next consider OV, which asks, given two lists $A$ and $B$ of total length $n$ of zero-one vectors over $\{0,1\}^d$, whether there exists an orthogonal pair $(a,b) \in A \times B$. It is easy to see that OV can be solved in $O(n^2 d)$ operations by iterating over all pairs, and it is conjectured that for all $\varepsilon > 0$, when $d = \omega(\log n)$, OV admits no $O(n^{2-\varepsilon})$-time randomised algorithm. This conjecture is implied by SETH [23], and Abboud, Williams and Yu [1] proved that it fails when $d = O(\log n)$. We obtain an analogue of Theorem 2 for the natural counting version of OV, in which we approximate the number of orthogonal pairs.
###### Theorem 5.
If OV with vectors in dimensions has a randomised -time algorithm, then there is a randomised -time -approximation algorithm for #OV.
Note that it is impossible to decide OV in time , so when is polylogarithmic (as is the usual assumption), our algorithm has only polylogarithmic overhead over decision. Thus our result is able to turn the -time algorithm of [1] into an approximate counting algorithm, but Chan and Williams [4] already gave a deterministic exact counting algorithm of similar complexity.
A version of OV in which the real zero-one vectors are replaced by arbitrary vectors over finite fields or rings is also studied, and there are efficient randomised algorithms due to Williams and Yu [25]. Their algorithms do not immediately generalise to counting, but by an analogue of Theorem 5, we nevertheless obtain efficient approximate counting algorithms.
###### Theorem 6.
Let be a constant prime power. Then there is a randomised -approximation algorithm for -vector instances of #OV over (resp. ) in time (resp. ).
Note that the dependence on may be close to best possible; under SETH, for all and with , OV over (resp. ) cannot be solved in time (resp. ) for all but finitely many values of [25]
Finally, we study the Negative-Weight Triangle problem (NWT) of deciding whether an edge-weighted tripartite graph contains a triangle of negative total weight, which Williams and Williams [27] have shown is equivalent to APSP under fine-grained reductions. It is easy to see that NWT can be solved in $O(n^3)$ operations by checking every possible triangle, and it is conjectured that for all $\varepsilon > 0$, NWT admits no $O(n^{3-\varepsilon})$-time randomised algorithm. We obtain an analogue of Theorem 2 for the natural counting version of NWT, in which we approximate the number of negative-weight triangles.
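Again for concreteness, a brute-force counter for negative-weight triangles in a tripartite graph; the data layout and names are illustrative assumptions.

```python
def count_negative_triangles(X, Y, Z, w):
    """Count triangles (x, y, z) with w(x,y) + w(y,z) + w(x,z) < 0.
    X, Y, Z are the three vertex classes; w maps unordered pairs (frozensets)
    to integer weights. Runs in O(|X||Y||Z|) time."""
    def weight(u, v):
        return w[frozenset((u, v))]

    return sum(
        1
        for x in X for y in Y for z in Z
        if weight(x, y) + weight(y, z) + weight(x, z) < 0
    )

# Tiny example: one triangle totals -1, the other totals +3.
w = {frozenset(p): wt for p, wt in [
    (("x1", "y1"), 2), (("y1", "z1"), -4), (("x1", "z1"), 1),
    (("x1", "y2"), 1), (("y2", "z1"), 1),
]}
print(count_negative_triangles(["x1"], ["y1", "y2"], ["z1"], w))  # -> 1
```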
###### Theorem 7.
If NWT for -vertex graphs with weights from has a randomised -time algorithm, then there is a randomised -time -approximation algorithm for #NWT.
Note that it is impossible to decide NWT in time , so when is polynomially bounded, our algorithm has only polylogarithmic overhead over decision. Note also that [27] provides a subcubic reduction from listing negative-weight triangles to NWT, although it has polynomial overhead and so does not imply our result. Together with an algorithm of Williams [24], Theorem 7 implies the following.
###### Theorem 8.
There is a randomised -approximation which runs on -vertex instances of #NWT with polynomially bounded weights in time .
### 1.4 Techniques
We first discuss Theorems 1 and 2, which we prove in Section 3. In the polynomial setting, the standard reduction from approximating $\#k$-SAT to deciding $k$-SAT is due to Valiant and Vazirani [22], and runs as follows. If a $k$-CNF formula $F$ has few solutions, then using a standard self-reducibility argument, one can count the number of solutions with a number of calls to a $k$-SAT-oracle roughly proportional to that count. Otherwise, for any $m$, one may form a new formula $F_m$ by conjoining $F$ with $m$ uniformly random XOR clauses. It is relatively easy to see that as long as the number of satisfying assignments of $F$ is substantially greater than $2^m$, then the number of satisfying assignments of $F_m$ is concentrated around a $2^{-m}$ fraction of it. Thus by choosing $m$ appropriately, one can count the satisfying assignments of $F_m$ exactly, then multiply the result by $2^m$ to obtain an estimate for the number of satisfying assignments of $F$.
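The following Python sketch illustrates this classical estimate-by-hashing idea in miniature; the width-3 XOR constraints, the enumeration that stands in for an oracle-based exact counter, and all names are illustrative assumptions rather than the algorithm analysed in Section 3.

```python
import random

def approx_count(sat, n, threshold=32):
    """Estimate |{x in {0,1}^n : sat(x)}| by conjoining random sparse XOR
    constraints until at most `threshold` solutions survive, counting those
    exactly, and rescaling by 2^m.  Enumeration stands in for the
    decision-oracle-based exact counter, so this sketch is exponential in n."""
    def surviving_solutions(constraints):
        sols = []
        for x in range(2 ** n):
            bits = [(x >> i) & 1 for i in range(n)]
            if sat(bits) and all(sum(bits[j] for j in idxs) % 2 == parity
                                 for idxs, parity in constraints):
                sols.append(bits)
                if len(sols) > threshold:
                    return None            # still too many; hash harder
        return sols

    for m in range(n + 1):
        constraints = [(random.sample(range(n), min(3, n)), random.randrange(2))
                       for _ in range(m)]
        sols = surviving_solutions(constraints)
        if sols is not None:
            return len(sols) * 2 ** m
    return 0

# Vectors of length 10 with an even number of ones: true count is 512.
print(approx_count(lambda bits: sum(bits) % 2 == 0, n=10))
```

With only constant-width XORs the estimate is not tightly concentrated, which is exactly the issue the sparse-hashing argument below addresses by averaging over many independent hashes.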
Unfortunately, this argument requires modification in the exponential setting. If has variables, then each uniformly random XOR has length and therefore cannot be expressed as a width- CNF without introducing new variables. It follows that (for example) will contain variables. This blowup is acceptable in a polynomial setting, but not an exponential one — for example, given a -time algorithm for -SAT, it would yield a useless -time randomised approximate counting algorithm for #-SAT. We can afford to add only constant-length XORs, which do not in general result in concentration in the number of solutions.
We therefore make use of a hashing scheme developed by Calabro, Impagliazzo, Kabanets, and Paturi [2] for a related problem, that of reducing -SAT to Unique--SAT. They choose a -sized subset of uniformly at random, where is a large constant, then choose variables binomially at random within that set. This still does not yield concentration in the number of solutions of
, but it turns out that the variance is sufficiently low that we can remedy this by summing over many slightly stronger independently-chosen hashes.
Our results in Section 1.3 follow from a more general theorem, in which we consider the problem of approximately counting edges in an arbitrary bipartite graph to which we have only limited access. In particular, we only allow adjacency queries and independence queries: An adjacency query checks whether an edge exists between two given vertices of the graph, and an independence query checks whether a given set of vertices is an independent set in the graph. The standard approach (as used by Thurley [18]) would be to use random adjacency queries to handle instances with many edges, and independence queries and self-reducibility to handle instances with few edges. This approach requires polynomially many independence queries, which is too many to establish the tight relationship between approximate counting and decision required for the results of Section 1.3. In contrast, our main algorithm (Theorem 10) approximates the number of edges in such a graph in quasi-linear time and makes only poly-logarithmically many independence queries.
Using this algorithm, we obtain the results for polynomial-time problems in a straightforward way. For example, in the proof of Theorem 5 for OV, the vertices of the bipartite graph are the input vectors and the edges correspond to orthogonal pairs. An adjacency query corresponds to an orthogonality check, which can be done in time $O(d)$ in $d$ dimensions, and an independence query corresponds to a call to the decision oracle for OV on a sub-instance, which takes the running time of the assumed decision algorithm. Since only poly-logarithmically many independence queries occur, Theorem 5 follows.
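To make the correspondence concrete, here is a small Python sketch of the two oracles in the OV case; the names are ours, and the brute-force independence oracle merely stands in for whichever fast OV decision algorithm is assumed.

```python
def adjacency_oracle(a, b):
    """Edge test: are the 0-1 vectors a and b orthogonal? O(d) time."""
    return all(x * y == 0 for x, y in zip(a, b))

def independence_oracle(S_left, S_right):
    """Independent-set test: does the sub-instance (S_left, S_right) contain
    no orthogonal pair?  In the reduction this is one call to the assumed OV
    decision algorithm; here it is brute force for illustration."""
    return not any(adjacency_oracle(a, b) for a in S_left for b in S_right)

A = [(1, 0, 1), (1, 1, 0)]
B = [(0, 1, 0), (1, 1, 1)]
print(adjacency_oracle(A[0], B[0]))   # True: (1,0,1) . (0,1,0) = 0
print(independence_oracle(A, B))      # False: the instance contains an edge
```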
Our algorithm for Theorem 10 works roughly as follows. Let be the bipartite graph whose edges we are trying to count, and let and be the vertex classes of . Using binary search together with our independence oracle, we can quickly find non-isolated vertices of . If contains few such vertices, then by the standard self-reducibility argument used in Theorems 1 and 2, we can quickly determine exactly, so suppose contains many such vertices. If every vertex of is contained in only a small proportion of its edges, then we can approximately halve by passing to a uniformly random subset of , and proceed similarly to Valiant and Vazirani. However, in general this will not be the case and the number of edges in the resulting graph will not be concentrated. We must therefore detect and remove problematic vertices as we go. The procedure we use for this is quite technical, and forms the bulk of the proof, so we defer further explanation to Section 4.
## 2 Preliminaries
### 2.1 Notation
We write $\mathbb{N}$ for the set of all positive integers. For a positive integer $k$, we use $[k]$ to denote the set $\{1,\dots,k\}$. We use $\log$ to denote the base-$2$ logarithm, and $\ln$ to denote the base-$e$ logarithm.
We consider graphs to be undirected, and write . For all , we use to denote the neighbourhood of . For convenience, we shall generally present bipartite graphs as a triple in which is a partition of and .
When stating quantitative bounds on running times of algorithms, we assume the standard word-RAM machine model with logarithmic-sized words. We assume that lists and functions in the problem input are presented in the natural way, that is, as an array using at least one word per entry. In general, we shall not be overly concerned with logarithmic factors in running times. We shall write when for some constant , as . Similarly, we write when for some constant , as .
We require our problem inputs to be given as finite binary strings, and write $\{0,1\}^*$ for the set of all such strings. A randomised approximation scheme for a function $f : \{0,1\}^* \to \mathbb{N}$ is a randomised algorithm that takes as input an instance $x \in \{0,1\}^*$ and a rational error tolerance $\varepsilon > 0$, and outputs a rational number $\hat N$ (a random variable depending on the “coin tosses” made by the algorithm) such that, for every instance $x$, $\mathbb{P}\big((1-\varepsilon)f(x) \le \hat N \le (1+\varepsilon)f(x)\big) \ge 2/3$. All of our approximate counting algorithms will be randomised approximation schemes.
### 2.2 Probability theory
We will require some well-known results from probability theory, which we collate here for reference. First, we state Chebyshev’s inequality.
###### Lemma 1.
Let $X$ be a real-valued random variable with mean $\mu$ and let $t > 0$. Then $\mathbb{P}(|X - \mu| \ge t) \le \operatorname{Var}(X)/t^2$.
We also use the following concentration result due to McDiarmid [13].
###### Lemma 2.
Suppose $f$ is a real function of independent random variables $X_1,\dots,X_m$, and let $\mu = \mathbb{E}(f(X_1,\dots,X_m))$. Suppose there exist $c_1,\dots,c_m > 0$ such that for all $i \in [m]$ and all pairs of inputs differing only in the $i$th coordinate, the value of $f$ changes by at most $c_i$. Then for all $t > 0$,
$$\mathbb{P}\big(|f(X_1,\dots,X_m)-\mu| \ge t\big) \le 2e^{-2t^2/\sum_{i=1}^m c_i^2}.$$ ∎
Finally, we use the following Chernoff bounds, proved in (for example) Corollaries 2.3-2.4 and Remark 2.11 of Janson, Łuczak and Rucinski [11].
###### Lemma 3.
Suppose $X$ is a binomial or hypergeometric random variable with mean $\mu$. Then:
(i) for all $0 < \varepsilon \le 3/2$, $\mathbb{P}(|X - \mu| \ge \varepsilon\mu) \le 2e^{-\varepsilon^2\mu/3}$;
(ii) for all $t \ge 7\mu$, $\mathbb{P}(X \ge t) \le e^{-t}$. ∎
## 3 From decision to approximate counting CNF-SAT
In this section we prove our results for the satisfiability of CNF formulae, formally defined as follows.
Problem: $k$-SAT. Input: A $k$-CNF formula $F$. Task: Decide if $F$ is satisfiable.
Problem: $\#k$-SAT. Input: A $k$-CNF formula $F$. Task: Compute the number of satisfying assignments of $F$.
We also define a technical intermediate problem. For all $s \in \mathbb{N}$, we say that a matrix $A$ is $s$-sparse if every row of $A$ contains at most $s$ non-zero entries. In the following definition, $k$ and $s$ are constants.
Problem: . Input: An $n$-variable Boolean formula of the form $F \wedge (Ax = b)$. Here $F$ is a $k$-CNF formula, $A$ is an $s$-sparse matrix over $\mathbb{F}_2$ for some number of rows $m$, and $b \in \mathbb{F}_2^m$. Task: Decide if $F \wedge (Ax = b)$ is satisfiable.
We define the growth rate of as the infimum over all such that has a randomised algorithm that runs in time and outputs the correct answer with probability at least . Our main reduction is encapsulated in the following theorem.
###### Theorem 9.
Let with , let , and suppose . Then there is a randomised approximation scheme for #-SAT which, when given an -variable formula and approximation error parameter , runs in time .
Before we prove this theorem, let us derive Theorems 1 and 2 as immediate corollaries.
###### Theorem 1 (restated).
ETH is true if and only if there exists $\delta > 0$ such that, for every fixed $0 < \varepsilon < 1$, no randomised $\varepsilon$-approximation algorithm can run on $n$-variable instances of $\#3$-SAT in time $2^{\delta n}$.
###### Proof.
First note that we may use any randomised approximation scheme for #-SAT to decide -SAT with success probability at least 2/3 by taking and outputting ‘yes’ if and only if the result is non-zero. Thus the backward implication of Theorem 1 is immediate. Conversely, suppose ETH is false. A well-known result of Impagliazzo, Paturi and Zane [10, Lemma 10] then implies that for all constant and , there is a randomised algorithm which can decide -SAT in time with success probability at least . Hence for all constant , by the natural reduction from to -SAT, we obtain . The result therefore follows by Theorem 9. ∎
###### Theorem 2 (restated).
Let . Suppose that for all , there is a randomised algorithm which runs on -variable instances of -SAT in time . Then for all and all , there is a randomised -approximation algorithm which runs on -variable instances of #-SAT in time .
###### Proof.
Suppose that is as specified in the theorem statement. Then for all constant , by the natural reduction from to -SAT, we have . Thus the result again follows by Theorem 9. ∎
### 3.1 Proof of Theorem 9
Given access to an oracle that decides satisfiability queries, we can compute the exact number of solutions of a formula with few solutions using a standard self-reducibility argument given below (see also [18, Lemma 3.2]).
Algorithm Sparse$(F, N)$. Given an instance $F$ of the intermediate problem above on $n$ variables, a threshold $N \in \mathbb{N}$, and access to an oracle for that problem, this algorithm computes the number of satisfying assignments of $F$ if that number is at most $N$; otherwise it outputs FAIL.
(S1)
(Query the oracle) If $F$ is unsatisfiable, return $0$.
(S2)
(No variables left) If $F$ contains no variables, return $1$.
(S3)
(Branch and recurse) Let $F_0$ and $F_1$ be the formulae obtained from $F$ by setting the first free variable in $F$ to 0 and 1, respectively. If $\mathrm{Sparse}(F_0, N) + \mathrm{Sparse}(F_1, N)$ is at most $N$, then return this sum; otherwise abort the entire computation and return FAIL.
###### Lemma 4.
Sparse is correct and runs in time at most with at most calls to the oracle. Moreover, each oracle query is a formula with at most variables.
###### Proof.
Consider the recursion tree of Sparse on inputs and . At each vertex, the algorithm takes time at most to compute and , and it issues a single oracle call. For convenience, we call the leaves of the tree at which Sparse returns 0 (in (S1)) or 1 (in (S2)) the 0-leaves and 1-leaves, respectively.
Let be the number of 1-leaves. Each non-leaf is on the path from some 1-leaf to the root, otherwise it would be a 0-leaf. There are at most such paths, so there are at most non-leaf vertices in total. Finally, every 0-leaf has a sibling which is not a 0-leaf, or its parent would be a 0-leaf, so there are at most 0-leaves in total. Overall, the tree has at most vertices. An easy induction using (S3) implies that , and certainly , so the running time and oracle access bounds are satisfied. Correctness likewise follows by a straightforward induction. ∎
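As a loose illustration of the self-reducibility idea behind Sparse (not a transcription of it), the following Python sketch counts satisfying assignments of a small CNF using only a satisfiability test plus branching, aborting once a cap is exceeded; the CNF encoding and the brute-force stand-in oracle are assumptions for the example.

```python
from itertools import product

def count_with_oracle(clauses, n_vars, cap, partial=None):
    """Count satisfying assignments of a CNF (a list of clauses, each a list of
    signed variable indices) by branching, using only a satisfiability test;
    returns None ("FAIL") as soon as the running count would exceed `cap`."""
    partial = partial or {}

    def sat_oracle(assignment):
        # Brute-force stand-in for the decision oracle.
        free = [v for v in range(1, n_vars + 1) if v not in assignment]
        def satisfied(full):
            return all(any(full[abs(l)] == (l > 0) for l in cl) for cl in clauses)
        return any(satisfied({**assignment, **dict(zip(free, bits))})
                   for bits in product([False, True], repeat=len(free)))

    if not sat_oracle(partial):
        return 0
    if len(partial) == n_vars:
        return 1
    v = next(v for v in range(1, n_vars + 1) if v not in partial)
    total = 0
    for bit in (False, True):
        sub = count_with_oracle(clauses, n_vars, cap, {**partial, v: bit})
        if sub is None:
            return None
        total += sub
        if total > cap:
            return None
    return total

print(count_with_oracle([[1, 2]], n_vars=2, cap=10))  # (x1 or x2) has 3 solutions
```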
When our input formula has too many solutions to apply Sparse efficiently, we first reduce the number of solutions by hashing. In particular, we use the same hash functions as Calabro et al. [2]; they are based on random sparse matrices over and formally defined as follows:
###### Definition 5.
Let $m, n, s \in \mathbb{N}$ with $s \le n$. An $(m,s)$-hash is a random $m \times n$ matrix $A$ over $\mathbb{F}_2$ defined as follows. For each row $i \in [m]$, let $R_i$ be a uniformly random size-$s$ subset of $[n]$. Then for all $i \in [m]$ and all $j \in R_i$, we choose the values $A_{ij}$ independently and uniformly at random, and set all other entries of $A$ to zero.
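A short Python sketch of sampling such a sparse hash and testing $Ax = b$ for a 0-1 assignment; the function names and shapes are illustrative.

```python
import random

def sample_sparse_hash(m, n, s, rng=random):
    """Sample an m x n matrix over GF(2) with at most s non-zeros per row,
    together with a uniform target vector b, as in the sparse-hash definition."""
    A = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in rng.sample(range(n), s):   # R_i: a uniform size-s subset of [n]
            A[i][j] = rng.randrange(2)      # uniform bit on the chosen support
    b = [rng.randrange(2) for _ in range(m)]
    return A, b

def satisfies_hash(x, A, b):
    """Check Ax = b over GF(2)."""
    return all(sum(a * xi for a, xi in zip(row, x)) % 2 == bi
               for row, bi in zip(A, b))

A, b = sample_sparse_hash(m=3, n=10, s=4)
print(satisfies_hash([0] * 10, A, b))  # True exactly when b happens to be all zeros
```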
For intuition, suppose is an -variable -CNF formula and is the set of satisfying assignments of , and that holds for some small . It is easy to see that for all and uniformly random , if is an -hash, then the number of satisfying assignments of has expected value . (See Lemma 6.) If were concentrated around its expectation, then by choosing an appropriate value of , we could reduce the number of solutions to at most , apply Sparse to count them exactly, then multiply the result by to obtain an approximation to . This is the usual approach pioneered by Valiant and Vazirani [22].
In the exponential setting, however, we can only afford to take , which means that is not in general concentrated around its expectation. In [2], only very limited concentration was needed, but we require strong concentration. To achieve this, rather than counting satisfying assignments of a single formula , we will sum over many such formulae. We first bound the variance of an individual -hash when and are suitably large. Our analysis here is similar to that of Calabro et al. [2], although they are concerned with lower-bounding the probability that at least one solution remains after hashing and do not give bounds on variance.
###### Lemma 6.
Let and let . Suppose and . Let and suppose . Let be an -hash, and let be uniformly random and independent of . Let . Then and .
###### Proof.
For each , let be the indicator variable of the event that . Exposing implies that for all , and hence
$$\mathbb{E}(|S'|) = \sum_{x \in S} \mathbb{P}(I_x) = 2^{-m}|S|.$$
We now bound the second moment. We have
$$\mathbb{E}(|S'|^2) = \sum_{(x,y)\in S^2} \mathbb{E}(I_x I_y) = \sum_{(x,y)\in S^2} \mathbb{P}(Ax = Ay = b) = \sum_{(x,y)\in S^2} \prod_{i=1}^{m} \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big). \tag{1}$$
It will be convenient to partition the terms of this sum according to Hamming distance, which we denote by $d(\cdot,\cdot)$. Write $h$ for the binary entropy function $h(x) = -x\log x - (1-x)\log(1-x)$, write $h^{-1}$ for its left inverse, and let $\alpha = h^{-1}(\delta)$. Then it follows immediately from (1) that
$$\mathbb{E}(|S'|^2) = \sum_{\substack{(x,y)\in S^2 \\ d(x,y)\le \alpha n}} \prod_{i=1}^{m} \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) + \sum_{\substack{(x,y)\in S^2 \\ d(x,y) > \alpha n}} \prod_{i=1}^{m} \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big). \tag{2}$$
Denote the projection of any vector $x \in \{0,1\}^n$ onto the coordinates in $R_i$ by $x_{R_i}$. For any $i \in [m]$ and any $x, y \in S$ we have
$$\mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) = \mathbb{P}\big((Ax)_i = (Ay)_i\big)\cdot \mathbb{P}\big((Ay)_i = b_i \mid (Ax)_i = (Ay)_i\big) = \mathbb{P}\big((Ax)_i = (Ay)_i\big)/2.$$
Since $(A(x-y))_i = 0$ whenever $x_{R_i} = y_{R_i}$, it follows that
$$\mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) = \Big(1 - \mathbb{P}\big((A(x-y))_i \ne 0 \mid x_{R_i} \ne y_{R_i}\big)\cdot \mathbb{P}\big(x_{R_i} \ne y_{R_i}\big)\Big)\Big/2.$$
Since every non-empty set has an equal number of odd- and even-sized subsets, on exposing $R_i$ it follows that
$$\mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) = \frac{1}{2}\left(1 - \frac{\mathbb{P}(x_{R_i} \ne y_{R_i})}{2}\right) = \frac{1 + \mathbb{P}(x_{R_i} = y_{R_i})}{4} \qquad \text{for all } x, y \in S. \tag{3}$$
In particular, this implies $\mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) \le 1/2$, and hence $\prod_{i=1}^m \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) \le 2^{-m}$. Since a ball of Hamming radius $\alpha n$ in $\{0,1\}^n$ contains at most $2^{\delta n}$ points, it follows that
$$\sum_{\substack{(x,y)\in S^2 \\ d(x,y)\le \alpha n}} \prod_{i=1}^{m} \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) \le |S|\,2^{\delta n}\cdot 2^{-m} = \frac{\mathbb{E}(|S'|)^2\, 2^{m+\delta n}}{|S|} \le \mathbb{E}(|S'|)^2. \tag{4}$$
Now suppose $d(x,y) > \alpha n$. Since $R_i$ is a uniformly random size-$s$ subset of $[n]$ by definition,
$$\mathbb{P}\big(x_{R_i} = y_{R_i}\big) \le \binom{n - \lceil \alpha n\rceil}{s}\bigg/\binom{n}{s} \le \big(1 - \lceil \alpha n\rceil/n\big)^s \le (1-\alpha)^s \le e^{-\alpha s}.$$
Hence by (3), we have $\mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) \le (1 + e^{-\alpha s})/4$. It follows that
$$\sum_{\substack{(x,y)\in S^2 \\ d(x,y) > \alpha n}} \prod_{i=1}^{m} \mathbb{P}\big((Ax)_i = (Ay)_i = b_i\big) \le \frac{|S|^2}{2^{2m}}\big(1 + e^{-\alpha s}\big)^m \le \frac{|S|^2}{2^{2m}}\, e^{m e^{-\alpha s}}. \tag{5}$$
Combining (2), (4) and (5), we obtain
$$\operatorname{Var}(|S'|) = \mathbb{E}(|S'|^2) - \mathbb{E}(|S'|)^2 \le \frac{|S|^2}{2^{2m}}\, e^{m e^{-\alpha s}}.$$
Since , we have . It follows that , and so . Since and , the result follows. ∎
We now state the algorithm we will use to prove Theorem 9, then use the lemmas above to prove correctness. In the following definition, is a rational constant with .
Algorithm (#-SAT). Given an -variable instance of #-SAT, a rational number , and access to an oracle for for some , this algorithm computes a rational number such that with probability at least , .
(A1)
(Brute-force on constant-size instances) If , solve the problem by brute force and return the result.
(A2)
(If there are few satisfying assignments, count them exactly) Let , and apply Sparse to and . Return the result if it is not equal to FAIL.
(A3)
(Try larger and larger equation systems) For each :
1. (Set maximum number of solutions to find explicitly) Let .
2. For each :
• (Prepare query) Independently sample an -hash and a uniformly random vector . Let .
• (Ask oracle using subroutine) Let be the output of .
• (Bad hash or too small) If , go to next in the outer for-loop.
• Otherwise, set .
3. (Return our estimate) Return .
(A4)
(We failed, return garbage) Return .
###### Lemma 7.
is correct for all and runs in time at most . Moreover, the oracle is only called on instances with at most variables.
###### Proof.
Let be an instance of #-SAT on variables, and let . The running time of the algorithm is dominated by (A2) and (A3b). Clearly (A2) takes time at most by Lemma 4. In the inner for-loop, the number controls the maximum running time we are willing to spend. In particular, again by Lemma 4, the running time for one iteration of the inner for-loop is if and otherwise it is bounded by but the remaining iterations of the inner loop are then skipped. It is easy to see that holds at any point of the inner loop, and hence the overall running time is as required. Likewise, the oracle access requirements are satisfied, so it remains to prove the correctness of .
If terminates at (A1) or (A2), then correctness is immediate. Suppose not, so that holds, and the set of solutions of satisfies . Let , and note that and . The formulas are oblivious to the execution of the algorithm, so for the analysis we may view them as being sampled in advance. Let be the set of solutions to . For each with , let be the following event:
$$\left|\;\sum_{i=1}^{2^t} |S_{m,i}| - 2^{-m}|S|\;\right| \le 2^{-m-(t-\delta n/2)/2}\cdot |S|.$$
Thus implies . By Lemma 6 applied to , , , and , for all and we have and . Since the ’s are independent, it follows by Lemma 1 that
P(Em)≥1−2t⋅|S|22δn/32−2m−2t2−2m−t+δn/2|S|2≥1−2−δn/4≥1−1/n2.
Thus a union bound implies that, with probability at least , the event occurs for all with simultaneously. Suppose now that this happens. Then in particular, we have . But then, if reaches iteration , none of the calls to Sparse fail in this iteration and we have for all . Thus returns some estimate in (A3c) while . Moreover, since occurs, this estimate satisfies as required. Thus behaves correctly with probability at least , and the result follows. ∎
###### Theorem 9 (restated).
Let with , let , and suppose . Then there is a randomised approximation scheme for #-SAT which, when given an -variable formula and approximation error parameter , runs in time .
###### Proof.
If , then we solve the #-SAT instance exactly by brute force in time , so suppose . By the definition of , there exists a randomised algorithm for with failure probability at most and running time at most . By Lemma 3(i), for any constant , by applying this algorithm times and outputting the most common result, we may reduce the failure probability to at most . We apply to and , using this procedure in place of the -oracle. If we take sufficiently large, then by Lemma 7 and a union bound, the overall failure probability is at most , and the running time is as required. ∎
## 4 General fine-grained result
We first define the setting of our result.
###### Definition 8.
Let $G$ be a bipartite graph. We define the independence oracle of $G$ to be the function that, given a set $S$ of vertices of $G$, returns 1 if and only if $S$ is an independent set in $G$. We define the adjacency oracle of $G$ to be the function that, given two vertices $u$ and $v$ of $G$, returns 1 if and only if $\{u,v\} \in E(G)$.
We think of edges of as corresponding to witnesses of a decision problem. For example, in OV, they will correspond to pairs of orthogonal vectors. Thus calling will correspond to verifying a potential witness, and calling will correspond to solving the decision problem on a sub-instance. Our main result will be the following.
###### Theorem 10.
There is a randomised algorithm with the following properties:
(a) is given as input two disjoint sets and and a rational number ;
(b) for some bipartite graph with , has access to the independence and adjacency oracles of ;
(c) returns a rational number such that holds with probability at least ;
(d) runs in time at most ;
(e) makes at most calls to the independence oracle.
Throughout the rest of the section, we take to be the bipartite graph of the theorem statement and . Moreover, for all , we define and .
We briefly compare the performance of the algorithm of Theorem 10 with that of the standard approach of sampling to deal with dense instances combined with brute force counting to deal with sparse instances (as used in Thurley [18]). Suppose is constant, that we can evaluate in time for some , that we can evaluate in time , and that the input graph contains edges for some . Then sampling requires time, and brute force enumeration of the sort used in Sparse (p. 3.1) requires time. The worst case arises when , in which case the algorithm requires time. However, the algorithm of Theorem 10 requires only time in all cases. Thus it has only polylogarithmic overhead over deciding whether the graph contains edges at all.
Similarly to Section 3, we shall obtain our approximation by repeatedly (approximately) halving the number of edges in the graph until few remain, then counting the remaining edges exactly. For the halving step, rather than hashing, if our current graph is induced by for some then we shall simply delete half the vertices in chosen uniformly at random. However, if any single vertex in is incident to a large proportion of the remaining edges, then the edge count of the resulting graph will not be well-concentrated around its expectation and so this approach will fail. We now prove that this is essentially the only obstacle.
###### Definition 9.
Given , we say a non-empty set is -balanced if every vertex in has degree at most .
###### Lemma 10.
Let , suppose is -balanced, and suppose . Let be a random subset formed by including each vertex of independently with probability . Then with probability at least , we have and
$$\left|\partial(X') - \tfrac{1}{2}\partial(X)\right| \le \sqrt{\xi \log n}\cdot \partial(X)$$
|
2021-06-13 11:51:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9102056622505188, "perplexity": 853.863319948638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608702.10/warc/CC-MAIN-20210613100830-20210613130830-00369.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0EPY
|
Lemma 81.10.3. In Situation 81.2.1 let $f : X \to Y$ be an étale morphism of good algebraic spaces over $B$. If $Z \subset Y$ is an integral closed subspace, then $f^*[Z] = \sum [Z']$ where the sum is over the irreducible components (Remark 81.5.1) of $f^{-1}(Z)$.
Proof. The meaning of the lemma is that the coefficient of $[Z']$ is $1$. This follows from the fact that $f^{-1}(Z)$ is a reduced algebraic space because it is étale over the integral algebraic space $Z$. $\square$
|
2023-03-23 18:15:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9917306900024414, "perplexity": 203.3991778859212}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00101.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-6-inverse-functions-6-2-exponential-functions-and-their-derivatives-6-2-exercises-page-420/89
|
## Calculus 8th Edition
$\frac{1}{2}e^{2x}+2x-\frac{1}{2}e^{-2x}+C$
$\int(e^x+e^{-x})^2dx$ $=\int((e^x)^2+2e^xe^{-x}+(e^{-x})^2)dx$ $=\int(e^{2x}+2+e^{-2x})dx$ $=\int e^{2x}dx+\int 2 dx+\int e^{-2x}dx$ For the first integral, let $u=2x$. Then $du=2dx$, and $\frac{1}{2}du=dx$. For the third integral, let $v=-2x$. Then $dv=-2dx$, and $-\frac{1}{2}dv=dx$. $=\int e^u*\frac{1}{2}du+2x+C+\int e^v*(-\frac{1}{2})dv$ $=\frac{1}{2}e^u+2x-\frac{1}{2}e^v+C$ $=\frac{1}{2}e^{2x}+2x-\frac{1}{2}e^{-2x}+C$
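A quick symbolic check of this antiderivative (a verification aid using SymPy, not part of the textbook solution):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (sp.exp(x) + sp.exp(-x))**2
antiderivative = sp.Rational(1, 2)*sp.exp(2*x) + 2*x - sp.Rational(1, 2)*sp.exp(-2*x)

# The derivative of the proposed antiderivative should equal the integrand.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # -> 0
```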
|
2018-10-24 03:33:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993300437927246, "perplexity": 110.56265897936156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583518753.75/warc/CC-MAIN-20181024021948-20181024043448-00334.warc.gz"}
|
https://pypi.org/project/zc.parse_addr/
|
## Project description
Parse network addresses of the form: HOST:PORT
>>> import zc.parse_addr
>>> zc.parse_addr.parse_addr('1.2.3.4:56')
('1.2.3.4', 56)
It would be great if this little utility function was part of the socket module.
|
2022-09-27 09:33:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837156891822815, "perplexity": 13388.626127973404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00586.warc.gz"}
|
https://brilliant.org/wiki/differential-equations-homogeneous-equations-with/
|
Differential Equations - Homogeneous Equations with Constant Coefficients
Cite as: Differential Equations - Homogeneous Equations with Constant Coefficients. Brilliant.org. Retrieved from https://brilliant.org/wiki/differential-equations-homogeneous-equations-with/
|
2022-05-22 11:43:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 143, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7467796206474304, "perplexity": 2940.756008898079}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00122.warc.gz"}
|
https://math.stackexchange.com/questions/2898142/local-construction-of-a-map-on-a-manifold
|
# Local construction of a map on a manifold
Assume $(M,J)$ is a smooth manifold equipped with an almost complex structure. Let $p$ be a point of $M$, and define the linear map $\Theta_p \colon T_pM \to T_pM$ so that $$\Theta_p = -\frac{1}{2}\mathrm{id}_{T_pM} + \frac{\sqrt{3}}{2}J_p.$$ One has that $\Theta_p^3 = \mathrm{id}_{T_pM}$, so $\Theta_p$ is a linear isomorphism of $T_pM$. The goal is to show there exists a neighbourhood of $p$, $U(p)$, and a map $\theta_p \colon U(p) \to U(p)$ such that $\theta_{p*} = \Theta_p$.
I have tried to solve this exercise finding a candidate for $\theta$: locally around $p$ one can assume to have a metric and then a connection, so we can construct an exponential map $\exp_p \colon T_pM \to U(p)$, where $U(p)$ is some neighbourhood of $p$. If $t$ is a real parameter in a sufficiently small open interval containing $0$, then $\exp_p tX_p = \gamma(t,p)$, where $\gamma$ is a curve on $M$ such that $\gamma(0,p) = p$ and $\gamma'(0,p) = X_p$. Hereafter, $Y_p$ is a tangent vector at $p$ and $\alpha(t,p)$ its integral geodesic. Defining $\theta_p := \exp_p \circ \thinspace \Theta_p \circ\exp_p^{-1}\colon U(p) \to U(p)$, I get \begin{align} \theta_{p*}(Y_p) & = \frac{\mathrm{d}}{\mathrm{dt}}\left(\exp_p \circ \thinspace \Theta_p \circ \exp_p^{-1}(\alpha(t,p)) \right)\Bigr\lvert_{t=0} \\ & = \frac{\mathrm{d}}{\mathrm{dt}}\left(\exp_p \circ \thinspace \Theta_p(tY_p) \right)\Bigr\lvert_{t=0} \\ & = \frac{\mathrm{d}}{\mathrm{dt}}\left(\exp_p\left(t\left( -\frac{1}{2}Y_p+\frac{\sqrt{3}}{2}J_pY_p\right)\right)\right)\Biggr\lvert_{t=0} \\ & = -\frac{1}{2}Y_p+\frac{\sqrt{3}}{2}J_pY_p = \Theta_p(Y_p). \end{align} What I am not convinced of is that I am fixing a curve $\alpha$ which is the geodesic associated to some vector $Y_p$. I do not know if this can be an arbitrary curve, and then I am not quite sure I solved the problem. Probably the point is the definition of the exponential map. Other solutions are also welcome, of course.
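A quick numeric sanity check of the algebraic fact $\Theta_p^3 = \mathrm{id}$ used above, taking the standard almost complex structure $J$ on $\mathbb{R}^2$ as a concrete example; this only checks the linear algebra, not the manifold construction.

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # standard complex structure on R^2, J^2 = -I
Theta = -0.5 * np.eye(2) + (np.sqrt(3) / 2) * J   # a rotation by 120 degrees

print(np.allclose(np.linalg.matrix_power(Theta, 3), np.eye(2)))  # True
```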
|
2019-05-22 18:55:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995450973510742, "perplexity": 101.00956921456604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00339.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-4-integrals-4-4-indefinite-integrals-and-the-net-change-theorem-4-4-exercises-page-337/31
|
## Calculus 8th Edition
$\displaystyle \frac{256}{15}≈17.066667$
$\displaystyle \int_1^4\sqrt{t}(1+t)dt = \int_1^4t^{1/2}(1+t)dt=\int_1^4(t^{1/2}+t^{3/2})dt$ Use the reverse power rule (add one to the exponent and divide by it) to evaluate the integral: $\displaystyle (\frac{t^{3/2}}{3/2}+\frac{t^{5/2}}{5/2})|_1^4$ $\displaystyle (\frac{2t^{3/2}}{3}+\frac{2t^{5/2}}{5})|_1^4$ $\displaystyle [\frac{2(4)^{3/2}}{3}+\frac{2(4)^{5/2}}{5}]-[\frac{2(1)^{3/2}}{3}+\frac{2(1)^{5/2}}{5}]$ $\displaystyle [\frac{2(2)^{3}}{3}+\frac{2(2)^{5}}{5}]-[\frac{2}{3}+\frac{2}{5}]$ $\displaystyle [\frac{2(8)}{3}+\frac{2(32)}{5}]-\frac{2}{3}-\frac{2}{5}$ $\displaystyle \frac{16}{3}+\frac{64}{5}-\frac{2}{3}-\frac{2}{5}$ $\displaystyle \frac{14}{3}+\frac{62}{5}$ $\displaystyle \frac{14\cdot5}{15}+\frac{62\cdot3}{15}=\frac{70+186}{15}$ $\displaystyle \frac{256}{15}≈17.066667$
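As a quick numerical cross-check (not part of the textbook solution; it assumes SciPy is available), the definite integral can be evaluated by quadrature and compared with 256/15:

from scipy.integrate import quad  # numerical quadrature
from math import sqrt

# integrand of the definite integral from the exercise
f = lambda t: sqrt(t) * (1 + t)

value, error_estimate = quad(f, 1, 4)
print(value)       # approximately 17.066666...
print(256 / 15)    # exact answer as a decimal, 17.066666...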
|
2019-12-08 06:20:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9236096143722534, "perplexity": 196.37672043778153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540506459.47/warc/CC-MAIN-20191208044407-20191208072407-00019.warc.gz"}
|
https://plainmath.net/44614/find-the-area-of-the-parallelogram-with-vertices-a-3-0
|
# Find the area of the parallelogram with vertices A(−3, 0),
Find the area of the parallelogram with vertices A(−3, 0), B(−1, 7), C(9, 6), and D(7, −1).
limacarp4
Given vertices A(−3, 0), B(−1, 7), C(9, 6), D(7, −1)
$\stackrel{\to }{AB}=\left(-1-\left(-3\right),7-0\right)=\left(2,7\right)$
$\stackrel{\to }{BC}=\left(9-\left(-1\right),6-7\right)=\left(10,-1\right)$ The area of parallelogram with adjacent sides AB and BC is the length of their cross product $|\stackrel{\to }{AB}×\stackrel{\to }{BC}|$
Now
$|\stackrel{\to }{AB}×\stackrel{\to }{BC}|=|\begin{array}{ccc}i& j& k\\ 2& 7& 0\\ 10& -1& 0\end{array}|$
$=\left(0-0\right)i-\left(0-0\right)j+\left(-2-70\right)k$
$=-72k$
Therefore, the area is,
$|\stackrel{\to }{AB}×\stackrel{\to }{BC}|=\sqrt{{\left(-72\right)}^{2}}=72$
levurdondishav4
Area of triangle ABC:
$=\frac{1}{2}|-3\left(7-6\right)+\left(-6-63\right)|$
$=\frac{1}{2}|-3-69|$
$=\frac{1}{2}\left[72\right]$
$=36$ sq units
Area of triangle ACD:
$=\frac{1}{2}|-3\left(6+1\right)+\left(-9-42\right)|$
$=\frac{1}{2}|-21-51|$
$=\frac{1}{2}\left[72\right]$
$=36$ sq units
Area of parallelogram ABCD = Area of triangle ABC + Area of triangle ACD
$=36+36$
$=72$ sq units
Therefore, the area of the parallelogram is $72$ sq. units, in agreement with the cross-product calculation above.
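As an independent sanity check (not part of either answer above), the shoelace formula gives the same area. A minimal Python sketch, assuming the vertices are taken in the order A, B, C, D around the parallelogram:

# Shoelace formula for the area of a simple polygon given ordered vertices
def shoelace_area(points):
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # next vertex, wrapping around
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

vertices = [(-3, 0), (-1, 7), (9, 6), (7, -1)]  # A, B, C, D in order
print(shoelace_area(vertices))  # expected: 72.0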
|
2022-09-26 14:58:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 98, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896500468254089, "perplexity": 1806.2039849673026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00545.warc.gz"}
|
http://mathoverflow.net/feeds/user/8003
|
User philip engel - MathOverflow, most recent 30 from http://mathoverflow.net (feed: http://mathoverflow.net/feeds/user/8003, retrieved 2013-05-19, licence: http://www.creativecommons.org/licenses/by-nc/2.5/rdf)

Question: Complete curves in $M_g$ and Theta Characteristics (2013-04-30, last activity 2013-05-01) - http://mathoverflow.net/questions/129201/complete-curves-in-m-g-and-theta-characteristics

Let $g\geq 3$. Following the reference below, the locus of curves in $M_g$ with an effective even theta characteristic has codimension $1$. (Those are the curves $C$ with an effective line bundle $L$ such that $L\otimes L=K$ and $h^0(L)\equiv 0\mod 2$.)

(QUESTION) Are there complete curves in $M_g$ that avoid this locus? I know that this is not the case when $g=3$ because the curves with an effective even theta characteristic are hyper-elliptic and any complete curve in $M_3$ must intersect the hyper-elliptic locus.

The main reason I ask is because I'd like to construct surfaces in $M_{2g}$ as follows: Take a family $F\rightarrow B$ of genus $g$ curves that avoids all theta-null divisors, i.e. all half-canonical bundles have $0$ or $1$ section. Since the parity of $h^0(K^{1/2})$ is constant in families, the ineffective theta characteristics are locally constant and form an etale cover of $B$. There is a map $F\times_BF\rightarrow Jac_BF$ sending $(x,y)\mapsto O(x-y)$ which contracts the diagonal to the zero section. After a suitable base change, one can compose with a map $$Jac_BF\rightarrow Pic_B^{g-1}F$$ sending $L\rightarrow L\otimes K^{1/2}$ for an ineffective $K^{1/2}$. Thus, under the map $$\phi:F\times_BF\rightarrow Pic^{g-1}_BF$$ $\phi(\Delta)$ does not intersect the theta divisor $\Theta\subset Pic^{g-1}_BF$ (excuse the overuse of the word theta!).

Hence $\phi^{-1}(\Theta)$ parameterizes a surface of distinct pairs of points in $F_b$. After another base change, one can take the double branched cover of $F_b$ branched at these two points and get a surface inside $M_{2g}$. Will this technique ever work to construct surfaces in $M_{2g}$? That, by the way, is the motivation for the question...

Reference: http://www.ams.org/journals/tran/1982-271-02/S0002-9947-1982-0654853-6/S0002-9947-1982-0654853-6.pdf

Answer by Philip Engel for "Enriques classification of algebraic surfaces" (2013-04-30) - http://mathoverflow.net/questions/129054/enriques-classification-of-algebraic-surfaces/129216#129216

All of the following is from Beauville's wonderful, short, dense book: "Complex Algebraic Surfaces." I really recommend it if you want to learn about the classification or even general techniques in surface theory.

For a minimal surface, $P_{12}=1$ alone is equivalent to $\kappa=0$. It probably is necessary to go through the classification to get this result. The number $12$ is not random: it arises because of the bi-elliptic surfaces, which are quotients of a product of two elliptic curves by a finite group. These finite groups all have elements whose order divides $12$, which is why the number $12$ appears.

The key step is that $P_{12}=0$ implies $S$ is ruled. This follows from two cases:

(1) Suppose $q=0$. Because $P_{12}=0$ we also have $P_2=0$. Castelnuovo's rationality criterion then applies to show $S$ is rational.

(2) Suppose $q\geq 1$. Because $P_{12}=0$ we also have $p_g=0$. One can show relatively easily using the Albanese fibration that the only possibility for $S$ not to be ruled is when $q=1$ and $b_2=2$. Then lots of work shows that if $S$ were not ruled we would have $S=(B\times F)/G$ for curves $B$ and $F$ and a finite group $G$ (and a number of other technical restrictions). By analyzing the canonical bundle on the resulting surface, one can show that $P_{12}$ is never zero. Thus we have a contradiction, so $S$ is ruled. Furthermore, $P_{12}=1$ if and only if both $B$ and $F$ are elliptic curves.

Once this difficult step is out of the way, we know that $\kappa=0$ implies $P_{12}=1$. Conversely, if $P_{12}=1$ then $S$ is non-ruled, and one can show $\chi(O_S)\geq 0$. Since $p_g=0$ or $1$, the list of possibilities for $q$ is finite. After analyzing the five cases (one turns out to be impossible), dealing with the most difficult ones by invoking part (2) from above, one can show that $P_{12}=1$ does in fact imply $\kappa=0$.

Finally, the remaining case is $P_{12}\geq 2$. In this case, it is simple to show that $\kappa=2$ if and only if $K^2>0$. The backward direction is an application of Riemann-Roch, and the forward direction is proven by contrapositive. If $K^2=0$, then the mobile part $M$ of $|nK|$ satisfies $M^2=0$ and hence $\kappa=1$.

Hope this helps. - Phil

Question: Contracting a curve of negative self-intersection on a surface (2013-03-01) - http://mathoverflow.net/questions/123375/contracting-a-curve-of-negative-self-intersection-on-a-surface

It is easy to show using birational factorization that the only curves on a surface which can be contracted to get an algebraic, smooth surface are smooth $(-1)$-curves. Furthermore, I know of examples of smooth curves of some other negative self-intersection which can be contracted in the algebraic category to result in a singular surface (where the order of the singularity is equal to the negative self-intersection?).

Question 1: Given a curve of negative self-intersection on a complex surface, what is the construction (in the analytic category) of its contraction?

Question 2: Given the curve, what conditions on it determine whether the contraction is algebraic or not?

Good, clear, elementary references would be fine, as an alternative to an answer!

Answer by Philip Engel for "difference of curve classes" (2012-10-25, edited 2012-10-26) - http://mathoverflow.net/questions/110615/difference-of-curve-classes/110629#110629

Every divisor class $D$ on a surface is the difference of two smooth, connected curves. Choose a very ample divisor $A$ and an $n$ so that $D+nA$ is also very ample. Then $(D+nA)-nA=D$, so $D$ is the difference of two curves. They may be chosen smooth by Bertini's Theorem.

ADDED LATER: They may also be chosen to be connected. The Lefschetz hyperplane theorem shows that hyperplane sections of surfaces are connected.

Answer by Philip Engel for "Checkmate in $\omega$ moves?" (2012-01-05) - http://mathoverflow.net/questions/63423/checkmate-in-omega-moves/84946#84946

The white queen moves anywhere to the east, then the black rooks force the king east, back-rank-mate style, until they've either skewered, pinned, or forked the queen and king. Worst case scenario, black loses 2 rooks, and can still mate. If black ever doesn't check, white will have a perpetual. Note that black can't force mate, as white's strategy can always be "go in a northerly direction to escape check."

(Diagram: http://www.freeimagehosting.net/56042)

P.S. Sorry, I switched the colors.

Answer by Philip Engel for "Can any smooth hyperelliptic curve be embedded in a quadric surface?" (2011-12-28) - http://mathoverflow.net/questions/79546/can-any-smooth-hyperelliptic-curve-be-embedded-in-a-quadric-surface/84461#84461

An explicit realization of degree $2$ and degree $g+1$ maps that separate points can be provided. Suppose the equation of a hyperelliptic curve is $$C:y^2=f(x)$$ with $\deg(f)=2g+2$. "Complete the square" by writing $$f(x)=r(x)^2+q(x)$$ with $\deg(r)=g+1$ and $\deg(q)\leq g$. Then, the maps $$A:(x,y) \mapsto x$$ $$B:(x,y)\mapsto y-r(x)$$ are degree $2$ and degree $g+1$ respectively. The map $B$ is degree $g+1$ because if we assume that $B(x,y)=c$, then we get the equation $c(c+2r(x))=q(x)$, which generically has $g+1$ solutions. Furthermore $(A,B)$ is clearly injective. So, the image $$(A,B):C\mapsto C'\subset\mathbb{P}^1\times\mathbb{P}^1$$ is a degree $(2,g+1)$ curve that $C$ normalizes, and the logic of Jack's answer applies.

Question: Is there a way to define Hecke operators "inherently" as certain endomorphisms of the Jacobian? (2010-07-29) - http://mathoverflow.net/questions/33754/is-there-a-way-to-define-hecke-operators-inherently-as-certain-endomorphisms-of

From the Eichler-Shimura relation, we have a formula for $T_p$ when we reduce $\textrm{End}(\textrm{Jac}(X))$ mod $p$. Explicitly, $T_p=\textrm{Frob}_p+p\textrm{Frob}_p^{-1}$. Is there a way to define the Hecke operator as a lift of this operator satisfying certain other properties? Is there a definition of $T_p$ which does not rely on a moduli space interpretation or double coset operators, but "inherently" from the Jacobian? Excuse the vague formulation of this question; I am just learning about this stuff.

Comment by Philip Engel (2013-05-14) on http://mathoverflow.net/questions/9754/magic-trick-based-on-deep-mathematics/31656#31656 : Haha, my friend and I barely managed to work through the logic with the audience choosing the numbers 2 and 3!

Comment by Philip Engel (2013-05-01) on http://mathoverflow.net/questions/129201/complete-curves-in-m-g-and-theta-characteristics : Thanks for the reference, this is exactly what I was looking for.

Comment by Philip Engel (2013-05-01) on http://mathoverflow.net/questions/129054/enriques-classification-of-algebraic-surfaces/129216#129216 : One direct way to show $\kappa(S)=-\infty$ implies $S$ is ruled is to use Iitaka's Conjecture: If $S$ is a surface and $S\rightarrow B$ is a fibration with generic fiber $F$ then $\kappa(S)\geq\kappa(B)+\kappa(F)$. Applying this to the Albanese fibration solves case (2) above instantaneously, because it implies that $F$ must be rational. This applies the stronger assumption $P_n=0$ for all $n$ rather than $P_{12}=0$ though. I think there really will be no way to get the specific number $12$ without some classification.

Comment by Philip Engel (2013-03-06) on http://mathoverflow.net/questions/123375/contracting-a-curve-of-negative-self-intersection-on-a-surface/123376#123376 : Thanks for your answer, it partially resolves the case of my second question, by giving criteria for contractibility in the algebraic category in certain cases. I mainly wanted an explicit construction {\it in the analytic category} of the contraction. (The conditions would of course be weaker if we allow the contraction not to be an algebraic surface.) Since the proposition above seems to be the best result about contractibility, I assume it is hard then to determine whether the resulting surface is algebraic...

Comment by Philip Engel (2012-11-01) on http://mathoverflow.net/questions/110615/difference-of-curve-classes/110629#110629 : This works whether or not $D$ is torsion, for any $(1,1)$-class in $H^2(X,\mathbb{Z})$.

Comment by Philip Engel (2012-10-26) on http://mathoverflow.net/questions/110615/difference-of-curve-classes/110629#110629 : Thanks, Henri. So that settles it for all surfaces!

Comment by Philip Engel (2012-10-25) on http://mathoverflow.net/questions/110615/difference-of-curve-classes/110629#110629 : Yeah, dunno what I was thinking... I totally just got it backwards.

Comment by Philip Engel (2012-01-06) on http://mathoverflow.net/questions/63423/checkmate-in-omega-moves/84946#84946 : Good observation; removing the pawn surely goes a long way in proving the existence of a perpetual. A queen sufficiently far away always has at least 5 possible check squares. They can't all be blocked, since there are only 4 rooks.

Comment by Philip Engel (2010-07-29) on http://mathoverflow.net/questions/33754/is-there-a-way-to-define-hecke-operators-inherently-as-certain-endomorphisms-of : Optimally, $X$ would be any Riemann surface such that the endomorphism ring of its Jacobian is defined over $\mathbb{Q}$. In this case, the definition of $T_p$ couldn't rely on an interpretation as a moduli space or quotient of the upper half-plane. This definition would coincide with the one we know for modular curves.
|
2013-05-19 02:01:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9177544116973877, "perplexity": 514.0408733903873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00088-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.xarg.org/puzzle/project-euler/problem-9/
|
# Project Euler 9 Solution: Special Pythagorean triplet
Problem 9
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
$$a^2 + b^2 = c^2$$
For example, $$3^2 + 4^2 = 9 + 16 = 25 = 5^2$$.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product abc.
## Solution
A problem like this is perfect for Haskell, since we can translate the mathematical description almost one-to-one into a program:
solution n = head [a * b * c |
c <- [1..n],
b <- [1..c],
a <- [1..b],
a^2 + b^2 == c^2,
a + b + c == n]
However, it takes quite some time to find the solution with this brute-force attempt. One improvement is to use the fact that $$a+b+c=n$$, which lets us construct the variable c as $$c=n - a - b$$ (we could construct c as $$c=\sqrt{a^2+b^2}$$, but this is computationally much more expensive; of course we could also construct a or b this way, but that would require an additional check that the result does not become negative). This minimal change reduces the execution time tremendously:
solution n = head [a * b * c |
b <- [1..n],
a <- [1..b],
let c = n - a - b,
a^2 + b^2 == c^2]
In order to keep the relation $$a<b<c$$, we would need to add a check that $$b<c$$, but since $$a^2+b^2=c^2 \Rightarrow b<c$$ for $$a>0$$, we can skip this test. Based on the fact that $$a<b<c$$ and $$a+b+c=n$$, we can conclude even more, namely that $$a<n/3$$ and $$a<b<n/2$$. With this, we can further shrink the search space:
solution n = head [a * b * c |
a <- [1..quot n 3],
b <- [a..quot n 2],
let c = n - a - b,
a^2 + b^2 == c^2]
We learned quite a lot already, but the search space is still too large, so back to the drawing board. We know $$a+b=n-c$$ and $$a^2+b^2=c^2$$, which implies that $$2ab = (n-c)^2 - c^2$$. Subtracting this from the second equation, we get $$a^2 - 2ab + b^2 = c^2 - (n-c)^2 + c^2 = c^2 - n^2 + 2nc$$. Since $$(a-b)^2 = c^2 - n^2 + 2nc$$, we know the right-hand side must be a perfect square. With this we have completely eliminated a and b! Constructing the triplet this way, we find a solution whenever the integer square root exists, and the product can be written as $$abc = n(n/4)^2 - n(c - n/4)^2$$, which is maximal when $$c$$ is closest to $$n/4$$.
solution n = head [ a * b * c |
c <- [1 + quot n 3 .. quot n 2],
let sqa_b = c * c - n * n + 2 * n * c,
let a_b = floor(sqrt(fromIntegral sqa_b)),
let b = quot (n - c + a_b) 2,
let a = n - b - c,
a_b * a_b == sqa_b]
Or the same implemented in JavaScript:
function solution(n) {
    // The largest side c must satisfy n/3 < c < n/2.
    for (var c = (n / 3 + 1) | 0; c < n / 2; c++) {
        // (b - a)^2 = c^2 - n^2 + 2nc, as derived above.
        // For small c this is negative; Math.sqrt then yields NaN and the test below fails.
        var sqa_b = c * c - n * n + 2 * n * c;
        var a_b = Math.floor(Math.sqrt(sqa_b));
        if (a_b * a_b == sqa_b) {
            // Recover b from a + b = n - c and b - a = a_b, then a and the product.
            var b = (n - c + a_b) / 2;
            var a = n - b - c;
            return a * b * c;
        }
    }
    return -1;
}
solution(1000);
« Back to problem overview
|
2019-03-18 14:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5130863189697266, "perplexity": 814.9157575018908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201329.40/warc/CC-MAIN-20190318132220-20190318154220-00523.warc.gz"}
|
http://www.kassfm.co.ke/best-selling-qviirh/archive.php?7dcb78=valence-electrons-and-oxidation-numbers-worksheet
|
The number of valence electrons on an atom is equal to its group number. Valence electrons are the electrons that occupy the outermost shells or orbitals of an atom; as these electrons are weakly attracted to the nucleus, they can easily be lost or shared with other atoms. This tells you how an element will combine with other elements. Elements in the same group of the periodic table have the same valency, and metallic elements usually exhibit only positive oxidation numbers due to low Z eff values. Valence is also known as oxidation state: the oxidation number of an atom is the charge which the atom appears to have when its valence electrons are counted, and in a cation the oxidation number is equal to the number of these electrons which have been removed. The number reflects how many electrons an atom will accept (negative number) or donate (positive number) to form a chemical bond. In order to keep track of electron transfers in oxidation-reduction reactions, it is convenient to introduce the concept of oxidation numbers. The main difference between oxidation number and valency is that oxidation number is the charge of the central atom of a coordination compound if all bonds around that atom were ionic bonds, whereas valency is the maximum number of electrons that an … The most stable valence is one that fills or half …

Oxidation Numbers Worksheet Directions: Use the Rules for Assigning Oxidation Numbers to determine the oxidation number assigned to each element in each of the given chemical formulas (for example SiO2, Na2O, Cl2, NaCl). Rules to apply: the sum of all oxidation numbers in a molecule or ion must add up to the total charge; hydrogen is usually +1, except when bonded to Group I or Group II, when it forms hydrides, -1; oxygen is usually -2, except when it forms a O-O single bond; the oxidation number of fluorine in a compound is always -1; noble gases have an oxidation number of 0. The maximum positive oxidation number for any representative element is equal to the group number, from +1 (alkali metals) to +7 (halogens). If you need more than one polyatomic ion copy to make oxidation numbers add up to zero, use parentheses.

Worksheet prompts: When the element with atomic number 118 is discovered, what family will it be in? What element in period 3 is a metalloid? What is the most common oxidation number for calcium? Name two more elements with that oxidation number and explain your choice (any of Be, Mg, Se, Ba, Ra – same valence electrons / same family). Write the electron configuration for Fe2+ (the electron configuration of iron, Fe, is [Ar]4s2 3d6). In each of the following equations indicate the element that has been oxidized and the one that has been reduced. Show the valence electrons and how they are either shared between the atoms or how they are transferred between atoms; for Lewis structures, write the element symbol and draw only the valence electrons. The sum of the valence electrons is 5 (from N) + 6 (from O) = 11; the odd number immediately tells us that we have a free radical, so we know that not every atom can have eight electrons in its valence shell. (Carbon is in the 4th … On your worksheet, try these elements on your own: a) H b) P c) Ca d) Ar e) Cl f) Al.) If you get stuck, try asking another group for help. Step 1 – Have students work together to complete Section A on their worksheet related to valence electrons and oxidation numbers and then discuss their answers.

Example chart – Element / # of protons / # of electrons / # of valence electrons / oxidation number (charge):
Sodium 11 10 1 1+
Chlorine 17 18 7 1-
Beryllium 4 2 2 2+
Fluorine 9 10 7 1-
Lithium 3 2 1 1+
Oxygen 8 10 6 2-
Phosphorus 15 18 5 3-

Example chart – Element / # of protons / # of electrons / # of valence electrons / oxidation number:
Sodium 11 11 1 +1
Calcium 20 20 2 +2
Aluminum 13 13 3 +3
Chlorine 17 17 7 -1
Beryllium 4 4 2 +2
Fluorine 9 9 7 -1
Lithium 3 3 1 +1
Iodine 53 53 7 -1
Oxygen 8 8 6 -2
Potassium 19 19 1 +1
Magnesium 12 12 2 +2
Phosphorus 15 15 5 -3

For each of the listed combinations of elements, compose the chemical formula if they were to form an ionic bond. The worksheet pairs a metal with a nonmetal to determine the chemical formula that the two ions will form in an ionic bond. Remember, nonmetals want to gain valence electrons to reach a stable arrangement (negative ions are negative because they gain electrons), while metals lose their valence electrons – Calcium, for example, has 2 valence electrons, so it will lose 2 electrons.

Worksheet 25 – Oxidation/Reduction Reactions: the idea is that the number of electrons lost in the oxidation reaction equals the number of electrons gained in the reduction reaction, so the number of electrons produced must equal the number of electrons consumed. Multiply one or both half-equations so they contain equal numbers of electrons, then put the two half-reactions together; notice that the number of electrons equals the change in oxidation number. For example: 2 Br- → Br2 + 2e- and 5e- + 8H+ + MnO4- → Mn2+ + 4H2O.

Transition metal complexes: complete the 'oxidation number' column of the table by working out the oxidation number of each of the transition metal cations (this has been completed for the complexes in the first two rows of the table). The sum of the charges of the metal cation and its ligands adds up to give the charge of the complex ion. Transition metal cations have a d^Z configuration, where Z is the number of valence electrons left over after ionization: Z = number of valence electrons on the atom – charge of the cation = group number – oxidation number. Ni, for example, is in group 10, so Ni2+ has (10 – 2) = 8 valence electrons left: it has a d8 configuration. There are five d-orbitals and, as each orbital can accommodate two electrons, there is space for a maximum of ten electrons; to minimize repulsion, electrons occupy orbitals singly before they pair up. Indicate if the complex is paramagnetic or not in the final column of the table. (*en is NH2CH2CH2NH2 and can bond through lone pairs on both N atoms – a bidentate ligand.) What is the coordination number and approximate geometry of the Fe(III) atom? What are the coordination numbers and approximate geometries of the Fe atoms? Using your answer, predict the net number of unpaired electrons in the reduced form containing one Fe(II) and one Fe(III), and suggest why magnetic studies indicate no net unpaired electrons when both iron atoms are present as Fe(III) in this cluster. Describe how this is achieved with 5 ligands. Use your knowledge of acid-base chemistry to suggest what the effect of low pH will be on the ligands in the complex.

Essential concepts: ions, ion notation, anions, cations, Bohr model, oxidation number, valence electrons, ionic bonding, ionic compounds, ionic nomenclature.
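The rule that oxidation numbers must sum to the overall charge can be checked mechanically. Below is a minimal Python sketch (the helper function and dictionaries are illustrative, not part of the worksheet; the example species SiO2 and Na2O appear above):

# Check that assigned oxidation numbers sum to the overall charge of the species.
def check_oxidation_numbers(counts, numbers, total_charge=0):
    # counts: element -> number of atoms; numbers: element -> assigned oxidation number
    total = sum(counts[el] * numbers[el] for el in counts)
    return total == total_charge

# SiO2: Si is +4, O is -2  ->  +4 + 2*(-2) = 0
print(check_oxidation_numbers({"Si": 1, "O": 2}, {"Si": +4, "O": -2}))  # True
# Na2O: Na is +1, O is -2  ->  2*(+1) + (-2) = 0
print(check_oxidation_numbers({"Na": 2, "O": 1}, {"Na": +1, "O": -2}))  # True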
|
2021-01-17 22:16:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3955813944339752, "perplexity": 2158.9800340407883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513194.17/warc/CC-MAIN-20210117205246-20210117235246-00681.warc.gz"}
|
https://stats.stackexchange.com/questions/203095/linearity-test-for-cublic-splines
|
# linearity test for cublic splines
Consider two models in which a continuous variable is modeled as a restricted cubic spline (RCS) or entered linearly. If one carries out a test for linearity, why are the degrees of freedom for the test equal to the number of knots minus 2? This is noted in the following question.
If there were no linear tail restrictions you would have one d.f. each for the basic quadratic and cubic terms plus one for each knot. The linear tail restriction gets rid of the quadratic and cubic terms plus the two final terms that have to do with differences in cubes. What is left is $k-2$ nonlinear terms if you have $k$ knots, as illustrated in the sketch below. My Course Notes go into more detail in Chapter 2.
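A numerical illustration (a sketch, not part of the original answer): with $k$ knots, a restricted cubic spline basis has exactly $k-2$ nonlinear terms, each of which is linear beyond the outer knots. The construction below uses one common truncated-power form (shown without the usual scaling factor); the knot locations are illustrative assumptions.

import numpy as np

def rcs_basis(x, knots):
    # Nonlinear terms of a restricted (linear-tail) cubic spline; returns k-2 columns.
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    def tp(u):                      # truncated third power (u)_+^3
        return np.where(u > 0, u, 0.0) ** 3
    cols = []
    for j in range(k - 2):
        col = (tp(x - t[j])
               - tp(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
               + tp(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(col)
    return np.column_stack(cols)

knots = [1.0, 2.0, 4.0, 7.0, 9.0]               # k = 5 knots (illustrative)
x = np.linspace(20, 30, 50)                     # an equally spaced grid beyond the last knot
B = rcs_basis(x, knots)
print(B.shape[1])                               # 3 nonlinear terms = k - 2
print(np.allclose(np.diff(B, n=2, axis=0), 0))  # True: each term is linear in the right tail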
|
2021-09-24 14:19:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5709605813026428, "perplexity": 485.1861664727835}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00576.warc.gz"}
|
https://physics.stackexchange.com/questions/497662/work-done-on-a-quantum-state
|
# Work done on a quantum state
I have a Hamiltonian $$H_{\lambda(t)}$$, where $$\lambda(t)$$ characterizes a time-dependent path in parameter space. The parameter is changed in finite time from $$\lambda(t_i)$$ to $$\lambda(t_f)$$. At $$t=t_i$$ the system is in the initial state $$|\psi\rangle$$. What is the work done on the system? Is it well-defined?
Edit: Adding more details for clarity.
The question has been inspired by the attempt to understand Jarzynski's equality for quantum systems. There is a lot of controversy about the definition of work done in literature.
Here are two papers which highlight this observation: "No-go theorem for the characterisation of work fluctuations in coherent quantum systems" and "Fluctuation theorems: Work is not an observable".
Arnold Neumaier stated on physics forums that
"the dynamics is unitary, and decoherence or measurement don't figure. Work is undefined in this generality.".
I agree with this viewpoint.
It is my understanding that the back-reaction on the probe that creates the driving field is important to understand the definition of work in a general setting.
My goal is to define work in a way that can be motivated from experimental considerations. It seems clear to me that the two-point measurement scheme, where energy is measured before and after the evolution, does not take the coherence of the state into account in defining the work done. There are other proposals, and none (to the extent I have evaluated them) seemed satisfactory.
• Sorry for the d/v but your rep would imply that you are aware that you need to show your own thoughts and work. – StudyStudy Aug 19 at 19:10
• @StudyStudy The question stands independent of my own research as it is a well posed question. However since you asked, one protocol that is used is as follows "An established protocol to measure work involves a double projective measurement of the energy of the system at the beginning and at the end of the evolution. Such a two-measurement protocol (TMP) can be described in terms of classical conditional probabilities" ref:arxiv.org/abs/1504.01574 . However there is a doubt about whether it can be used in this context. – Prathyush Aug 19 at 19:22
• Why can't two projective measurement be used to define work in this context? Are you worried about initial coherences? – Sunyam Aug 20 at 15:10
• @Sunyam Yes, I am worried about initial coherences. If you look at the discussion on Physics Forums, Arnold says that if the evolution is unitary then work cannot be defined. I think the correct answer to this question is that work cannot be defined in a context-independent way, and one has to consider the back-reaction on the object that is responsible for the change in the Hamiltonian. It's an actively debated topic in the literature. I may eventually write an answer based on my literature survey. – Prathyush Aug 20 at 16:40
• Your bounty description expands on the question a lot. Without it, it would be unclear what you are asking. I recommend editing your question and adding this additional information. (Then, it would be a quite interesting question imo.) – Noiralef Aug 24 at 10:30
What is the work done on the system? Is it well-defined?
We can choose a definition that behaves the way we want it to in certain situations, but whatever definition we choose, it won't be perfect (Examples 1-4). Accounting for back-reaction doesn't fix this. Instead, it highlights a deeper ambiguity (Example 5).
That's okay, because the concept(s) of "work" is not one of the foundational concepts in our current understanding of nature. Rather than trying to perfect the definition of "work," we can do things the other way around. For any experiment that we would traditionally describe in terms of "work," we can try to describe the experiment in more fundamental terms instead.
The rest of this answer consists of examples illustrating why I think an unambiguous unified definition of "work" is neither feasible nor necessary. Contents:
• Example 1: Charged particle in a uniform electric field
• Example 2: A contrived superposition
• Example 3: Statistical mechanics
• Example 4: Jarzynski's equality (quantum version)
• Example 5: Accounting for back-reaction
## Example 1: Charged particle in a uniform electric field
This is a special case of the general setup described in the OP. In this example, we would normally say that work is being done, even though $$H(t)$$ is independent of $$t$$.
Consider a charged particle in an external potential whose gradient represents an external electric field. In the usual representation, the Hamiltonian is $$H = \frac{-(\hbar \nabla)^2}{2m}+V(x),$$ using one-dimensional space for simplicity. Take the electric field (proportional to $$\nabla V$$) to be uniform in some large region of space, and take the particle's initial state to be a gaussian wavepacket with zero net momentum, localized somewhere near the middle of that region.
We already know what happens: the particle accelerates. More accurately, the expectation value of the position operator accelerates. By analogy with classical mechanics, we would say that the electric field is doing "work" on the particle.
In this case, "work" refers to a change in (the expectation value of) part of the system's energy, namely the particle's kinetic energy. The keyword is part, a theme that will be revisited in Example 5.
## Example 2: A contrived superposition
Consider the single-charged-particle model again, with a Hamiltonian of the form shown above. This time, take the field to be uniform in two large regions of space, $$R$$ and $$L$$, with the electric field vector directed to the right in $$R$$ and to the left in $$L$$. Take the initial state to be a superposition of two right-moving gaussian wavepackets, one in the middle of $$R$$ and one in the middle of $$L$$.
Again, we already know what happens in this situation: the wavepacket in one region accelerates, and the wavepacket in the other region decelerates. But we have just one particle, whose state is a quantum superposition of these two wavepackets, one accelerating and one decelerating, and we can choose the magnitudes of the field in each region so that the expectation value of the particle's momentum doesn't change at all.
How should we quantify the "work" done on the particle in this case? Is "work" even a useful concept here?
## Example 3: Statistical mechanics
Consider the first law of thermodynamics: $$T\,dS=dE+p\,dV$$. We typically describe the overall change in the system's energy (the $$dE$$ term) as being partly due to "work" (the $$p\,dV$$ term) and partly due to heat transfer (the $$T\,dS$$ term).
In statistical mechanics, we interpret $$S(E,V)$$ as the log of the number of microstates compatible with the given constraints on the total energy $$E$$ and the total volume $$V$$, and the first law is simply the definitions of $$T$$ and $$p$$.
In a system with an enormous number of particles, most of the microstates compatible with the volume constraint $$V$$ are states in which the particles are distributed throughout the whole available volume. States in which a significant part of the available volume is empty hardly affect the number $$S(E,V)$$ at all.
But in the opposite extreme of a system with just one particle localized somewhere inside a volume of astronomical proportions, as far as laboratory-scale experiments with that particle are concerned, counting energy eigenstates is no longer a useful thing to do, and defining "work" in terms of changes in that volume is likewise practically useless. The definition of "work" used in Example 1 is more useful in this extreme, at least if we exclude situations like the one in Example 2.
The message here is that different definitions of work are useful under different situations.
## Example 4: Jarzynski's equality (quantum version)
For the purpose of highlighting how it defines "work," here's a quick review of the quantum version of Jarzynski's equality. Consider a system with time-dependent Hamiltonian $$H(t)$$. Use the Schrödinger picture, and let $$\rho(t)$$ denote the density matrix representing the state at time $$t$$. Then $$\rho(t) =u(t)\rho(0) u^\dagger(t) \tag{1}$$ with $$\frac{d}{dt}u(t)=-iH(t)u(t) \hskip2cm u(0)=1. \tag{2}$$ (I'm using a lowercase $$u$$ here for consistency with Example 5.) For each $$t$$, let $$\mathcal{E}(t)$$ denote a complete set of orthonormal eigenstates of $$H(t)$$, and define $$E(t,k) := \langle k|H(t)|k\rangle \hskip1cm \text{for each} \hskip1cm |k\rangle\in\mathcal{E}(t) \tag{3}$$ and $$Z(t) := \text{trace} \left(e^{-\beta H(t)} \right) = \sum_{|k\rangle\in\mathcal{E}(t)} e^{-\beta E(t,k)}. \tag{4}$$ For simplicity, I'll keep the inverse temperature $$\beta$$ fixed. If the initial state is $$\rho(0)=\frac{e^{-\beta H(0)}}{Z(0)}, \tag{5}$$ and if $$H(t)$$ is measured at $$t=0$$ and again at $$t=1$$, then the joint probability of getting the outcomes $$|j\rangle$$ and $$|k\rangle$$, respectively, is $$p(j,k) = \big|\langle k|u(1)|j\rangle\big|^2 \frac{e^{-\beta E(0,j)}}{Z(0)} \tag{6}$$ with $$|j\rangle\in\mathcal{E}(0)$$ and $$|k\rangle\in\mathcal{E}(1)$$. The quantum version of Jarzynski's equality says $$\sum_{j,k}p(j,k) e^{-\beta W(j,k)} = \frac{Z(1)}{Z(0)} \tag{7}$$ with $$W(j,k) := E(1,k)-E(0,j)$$. The proof is easy: the factors $$e^{\pm\beta E(0,j)}$$ cancel each other, and then evaluating the sum over $$j$$ eliminates the dependence on $$u(1)$$. Equation (7) is a special case of equation (2.7) in Jarzynski Relations for Quantum Systems and Some Applications.
Even though the proof is easy, equation (7) is interesting because the left-hand side involves $$u(1)$$, which says something about how the system gets from $$t=0$$ to $$t=1$$, whereas the right-hand side only involves the initial and final Hamiltonians $$H(0)$$ and $$H(1)$$, regardless of what happens in between. We can make it sound even more interesting by referring to the quantity $$W(j,k)$$ as "work."
In Example 1, where $$H(t)$$ is independent of $$t$$, this definition of "work" would give $$\sum_{j,k}p(j,k)f\big(W(j,k)\big) = \sum_{j}p(j,j)f\big(W(j,j)\big)= f(0),$$ for any function $$f$$ and for any initial state (not just for a thermal state), so this definition of work is not equivalent to the definition we would normally use in a context like Example 1. Again, different definitions of work are useful in different situations.
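Equation (7) is also easy to verify numerically for a small system. Below is a minimal sketch (not part of the original answer), with randomly chosen $$H(0)$$ and $$H(1)$$ and an arbitrary unitary standing in for $$u(1)$$; NumPy and SciPy are assumed to be available:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, beta = 4, 0.7

def random_hermitian(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A + A.conj().T) / 2

H0, H1 = random_hermitian(d), random_hermitian(d)
u1 = expm(-1j * random_hermitian(d))   # any unitary will do; eq. (7) does not depend on the path

E0, V0 = np.linalg.eigh(H0)            # eigenvalues/eigenvectors of H(0)
E1, V1 = np.linalg.eigh(H1)            # eigenvalues/eigenvectors of H(1)
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()

lhs = 0.0
for j in range(d):
    for k in range(d):
        p_jk = abs(V1[:, k].conj() @ u1 @ V0[:, j]) ** 2 * np.exp(-beta * E0[j]) / Z0  # eq. (6)
        W = E1[k] - E0[j]                                                              # W(j,k)
        lhs += p_jk * np.exp(-beta * W)

print(lhs, Z1 / Z0)   # the two numbers agree, verifying eq. (7)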
## Example 5: Accounting for back-reaction
The OP considers a time-dependent Hamiltonian $$H(t)$$, which we often do as a way of accounting for how external devices affect the system of interest when back-reaction is negligible. In the paper Fluctuation relations and strong inequalities for thermally isolated systems, Jarzynski acknowledges the limitations of such a model:
It is hard to imagine an experimental situation in which a macroscopic quantum system is so utterly isolated from its environment that it evolves unitarily as an external parameter is varied from one value to another –- surely a wayward photon or gas molecule will scatter off the system, spoiling unitarity. ... These considerations highlight the idealizations that are made (and should always be kept in mind) when choosing specific dynamical equations of motion to model the evolution of a many-particle system.
To account for back-reaction, we can use a more realistic model that includes the microscopic quantum dynamics of whatever devices are affecting the subsystem of interest, together with the subsystem of interest itself, all as part of one big truly-closed system with an overall time-independent Hamiltonian $$H$$. Such a model automatically includes back-reaction: $$H$$ is time-independent, so the total energy is conserved. However, doing this doesn't make "work" unambiguous. Instead, it exposes a deeper ambiguity.
One example of such a model is QED + QCD (quantum electrodynamics combined with quantum chromodynamics), which is rich enough to account for both the molecular and nuclear dynamics of most of the laboratory-scale experiments that might be associated with Jarzynski's equality. In QED+QCD, what part of the total system's energy would we use to define the work done on the subsystem of interest? The answer depends on the state, because the state defines what objects are present, what lab equipment is present, and how everything is configured, both macroscopically and microscopically. Even in the context of a given state, the concept of a subsystem's "energy" is fundamentally ambiguous. The Hamiltonian of QED+QCD involves one electron field, one up-quark field, and so on, and all of the objects and lab equipment and air molecules are described in terms of these same quantum fields. In QED+QCD, there is no general or perfect way to separate the energy of a subsystem from the energy of the rest of the world.
The difficulty is not limited to QED+QCD. To formulate it in general terms, consider any quantum system with a time-independent Hamiltonian $$H$$. In the Schrödinger picture, which I'm using here, the model is defined by its set $$\Omega$$ of time-independent observables, together with the Hamiltonian $$H$$ that generates time-evolution. A subsystem is simply a subset $$\omega\subset\Omega$$.
(A less general but more popular definition of "subsystem" involves writing the Hilbert space $$\cal{H}$$ as a tensor product $$\cal{H}=\cal{H}_1\otimes\cal{H}_2$$ and taking $$\omega$$ to be all of the observables in $$\Omega$$ that act non-trivially only on one factor $$\cal{H}_1$$.)
Defining the state of the subsystem is trivial. Let $$\psi(\cdots,t) := \frac{ \big\langle\psi(t)\,\big|\cdots\big|\,\psi(t)\big\rangle }{ \big\langle\psi(t)\,\big|\,\psi(t)\big\rangle} \tag{8}$$ be the state of the complete system at time $$t$$. The dots "$$\cdots$$" are a placeholder for an operator: if $$X$$ is an observable, then $$\psi(X,t)$$ is the expectation value of $$X$$ at time $$t$$. The Hamiltonian $$H$$ enters via the relationship $$\psi(\cdots,t) = \psi\big(U^\dagger(t)\cdots U(t),0\big) \tag{9}$$ with the unitary operators $$U(t)$$ defined by $$\frac{d}{dt} U(t)=-iH\,U(t). \tag{10}$$ The state of the subsystem $$\omega\subset\Omega$$ at time $$t$$ is simply the restriction of the function $$\psi(\cdots,t)$$ to observables in $$\omega$$. There is no need to factorize the Hilbert space, no need to take a "partial trace," or anything like that. Those can be useful tools for calculations, but they're not needed conceptually.
Equation (7) refers to the subsystem's Hamiltonian. What does that mean in the general framework? When does a subsystem act like it has a (possibly time-dependent) Hamiltonian of its own? That's probably a big enough question to fill several research careers, so I won't try to answer it here, but I'll suggest a way to formulate the question. The question asks about the feasibility of an approximation $$\psi(X,t)\approx\psi\big(u^\dagger(t)X u(t),0\big) \hskip1cm \text{for all } X\in\omega\subset\Omega, \tag{11}$$ with unitary operators $$u(t)$$ given by (2) for some time-dependent Hamiltonian $$H(t)$$ that commutes with everything that commutes with everything in $$\omega$$. In other words, we want $$H(t)$$ to belong to the double commutant of $$\omega$$. In other other words, we want $$H(t)$$ to belong to the von Neumann algebra generated by $$\omega$$.
If we can find such an $$H(t)$$, then we can use it to define the energy of the subsystem, which is a prerequisite for defining "work." In general, we have no guarantee that such an $$H(t)$$ exists, but at least we have a mathematical formulation of the question.
Equation (7) assumes that the system is initially in a thermal state (5). How can we express an initial condition like (5) in the general framework? One way would be to use the approximation (11) to justify reverting to a model with no back-reaction, but that would defeat the purpose of using the more general framework.
Here's another way: Suppose that $$H(t)$$ satisfies the conditions shown above, and to simplify the language and notation, suppose that its spectrum is discrete. Let $$P(t,k)$$ be the projection operator onto the eigenvector $$|k\rangle$$ of $$H(t)$$ with eigenvalue $$E(t,k)$$. Then an initial condition like (5) can be expressed like this: $$\psi\big(P(0,k),0\big) \propto e^{-\beta E(0,k)} \tag{12}$$ with a proportionality factor that is independent of $$k$$. By construction, the projection operators $$P(t,k)$$ belong to the von Neumann algebra generated by $$\omega$$, so the condition (12) describes the state of the subsystem without assuming anything further about the state of the complete system.
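Here is an equally small sketch of condition (12) (again a toy illustration of my own, assuming a two-level subsystem Hamiltonian with arbitrarily chosen eigenvalues). A thermofield-double-like purification shows that a pure state of the complete system can satisfy (12) exactly:
```python
# Toy check of condition (12): a pure global state whose restriction to the
# projectors P(0,k) of a two-level subsystem Hamiltonian has the Gibbs form.
import numpy as np

beta = 0.7
E = np.array([0.0, 1.0])                  # assumed eigenvalues of H(0)
w = np.exp(-beta * E)
w /= w.sum()                              # Gibbs weights e^{-beta E_k} / Z

# |psi> = sum_k sqrt(w_k) |k>_subsystem |k>_environment  (pure global state)
psi = np.zeros(4)
psi[0] = np.sqrt(w[0])                    # |0>|0>
psi[3] = np.sqrt(w[1])                    # |1>|1>

for k in range(2):
    P_k = np.zeros((2, 2)); P_k[k, k] = 1.0          # projector onto |k>
    P_full = np.kron(P_k, np.eye(2))                  # acts only on the subsystem
    print(k, psi @ P_full @ psi, w[k])                # the two columns agree
```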
Equations (8)-(11) would provide a general approach to accounting for back-reaction, but it all relies on an identification of the subsystem of interest as a subset $$\omega\subset\Omega$$ of the system's observables. Thinking about the QED+QCD example makes it clear that in general, a state-independent definition of "subsystem" does not exist. Never mind the ambiguities about how to define the subsystem's energy, which of course is a prerequisite for defining "work." Here we have a much deeper ambiguity: the very concept of "subsystem of interest" is state-dependent, because the existence and configuration of the whole experiment is state-dependent. In relativistic quantum field theory, observables are not tied to particles, because particles are just phenomena. Observables are tied to regions of space (or spacetime in the Heisenberg picture). This is a deep obstacle to the existence of any definition of "work" that behaves as desired in arbitrary quantum systems.
|
2019-09-21 02:52:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 110, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814332902431488, "perplexity": 233.77134956073763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574182.31/warc/CC-MAIN-20190921022342-20190921044342-00057.warc.gz"}
|
https://www.vox.me.uk/post/2014/12/personal-image-sharing-and-embedding/
|
# Personal Image Sharing and Embedding
I’ve been looking at writing more of my blog posts on my iPad. However, up until recently I’ve been using Flickr for storing all my images and displaying them on my site. This has worked well in the past but has been a long-winded process - on the iMac, the image is copied into iPhoto and then uploaded to Flickr from there. Adding the image to the blog post is achieved by inserting the image using MarsEdit.
The issue is attaching the image inline on the iPad. Whilst I can happily upload images onto Flickr, I can’t then get the inline link attachment from the Flickr app itself. I could mess about with mobile Safari and the Flickr website, but the non-mobile Flickr site is pretty poor on the iPad, so I’ve decided to go another way.
## Transmit
Earlier in 2014, Panic released the excellent Transmit FTP app onto the iOS app store. My decision to move in this direction was influenced in no uncertain terms by Macdrifter. This guide helped me set out the path that I was to take.
I use the excellent Fastmail to host my email, which also comes with FTP space (5GB on my plan). This allows me to host a very basic website (csalter.me.uk and vox.me.uk) on the Fastmail servers - my sites aren’t popular enough to reach the bandwidth limits set, so it’s a good way of hosting a static site. However, this space wasn’t being used and, as I can host from this location, I decided that I’d upload images here and have them linked to on the blog.
However, after some playing around with the basic idea from Macdrifter, it needed some tweaking to suit me, my host and my blogs.
When uploading to Fastmail for testing, I found that the images I was uploading were a bit big and were taking a while to download - the downside of hosting off the Fastmail servers is that they’re great at email, but perhaps I was asking too much, hosting 5 x 5MB images per post on the server and getting Tumblr to resize them dynamically for me. So the best bet seemed to be to resize them before uploading them to the Fastmail servers.
On the Mac, I currently use ImageMagick to resize them with the following commands:
<code>mogrify -resize 307200@ -format JPG *.JPG
mogrify -resize 307200@ -format jpg *.jpg
mogrify -resize 307200@ -format jpeg *.jpeg
mogrify -resize 307200@ -format png *.png
mogrify -resize 307200@ -format PNG *.PNG
</code>
This gives me an image that is about 640 x 480 (though if it’s in portrait, it’ll be 480 x 640 - the pixel count resizes approximately to this size and keeps the photo ratio). This is uploaded onto the Fastmail server with Transmit.
This allows me to copy and paste the URL of the image and paste it directly into my text editor.
I can now achieve the same on the iPad with the recently released Workflow app. The workflow can be found here. It basically takes a photo from your photo roll, resizes it to 640 x 480 and then allows you to open it using an extension, which Transmit has. This then lets you upload to the Fastmail server, using a previously saved bookmark.
Currently, it’s very basic as the iPhone/iPad version shown doesn’t take into account the orientation of the photo and so it’ll always output as 640 x 480, thus ruining portrait photos (like the image of the workflow above would be). I think this could be done in Python within the workflow app, but I would need to learn more Python for that!
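For what it’s worth, here’s a rough sketch of what that Python step might look like (untested guesswork on my part - it assumes a Pillow-capable Python environment such as Pythonista, and the file names are just placeholders). It mirrors the mogrify commands above by targeting roughly 307,200 pixels while keeping the aspect ratio, so portrait photos would come out as portrait:
```python
# Rough sketch only: resize to ~307,200 pixels (about 640 x 480) while
# preserving the original aspect ratio. Paths are placeholders.
from PIL import Image
import math

TARGET_PIXELS = 307200            # roughly 640 x 480 worth of pixels

def resize_keep_ratio(src_path, dst_path):
    img = Image.open(src_path)
    w, h = img.size
    scale = math.sqrt(TARGET_PIXELS / float(w * h))
    if scale < 1.0:               # shrink only; never enlarge small images
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                         Image.LANCZOS)
    img.save(dst_path)

resize_keep_ratio("IMG_0001.jpg", "IMG_0001_small.jpg")
```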
## Conclusions
With Workflow, I can easily take pictures on my iPhone and upload them to Fastmail’s web service, where I can then copy and paste the URL from Transmit into my Editorial app where I’m writing. Unlike my previous workflow of using Flickr, this allows me to upload to the same location whether I’m using my iPad or my iMac (or even my Windows work laptop, as occasionally happens!) In addition, I’m also retaining control of the images and in theory, I should be able to keep the URLs the same in the future, even if I move from Fastmail as it’s on one of my domain names.
Overall, I’m pretty happy with the change. The only issue I do have is that Flickr would link to the full size image - my current setup only shows the smaller picture. However, I don’t think this is too much of a loss and it saves on the bandwidth on the Fastmail servers.
In the future, I could potentially look at using Hazel to automate some of the resize tasks on the iMac. Likewise, I have a small server with BuyVM - Workflow allows me to run a script on something via SSH, so I could potentially have Workflow (and Hazel) upload an image there, run a script to resize the image using ImageMagick and then move it to my Fastmail folder. But that’s for the future, as it’s working fine at the minute.
|
2019-01-24 09:26:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17780455946922302, "perplexity": 2244.5486337387215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519757.94/warc/CC-MAIN-20190124080411-20190124102411-00552.warc.gz"}
|
https://docs.ocean.dwavesys.com/projects/dwave-networkx/en/latest/reference/algorithms/generated/dwave_networkx.algorithms.max_cut.weighted_maximum_cut.html
|
# dwave_networkx.algorithms.max_cut.weighted_maximum_cut¶
weighted_maximum_cut(G, sampler=None, **sampler_args)
Returns an approximate weighted maximum cut.
Defines an Ising problem with ground states corresponding to a weighted maximum cut and uses the sampler to sample from it.
A weighted maximum cut is a subset S of the vertices of G that maximizes the sum of the edge weights between S and its complementary subset.
Parameters:
- G (NetworkX graph) – The graph on which to find a weighted maximum cut. Each edge in G should have a numeric weight attribute.
- sampler – A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a ‘sample_qubo’ and ‘sample_ising’ method, and to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the set_default_sampler function.
- sampler_args – Additional keyword parameters are passed to the sampler.

Returns:
- S (set) – A maximum cut of G.
Notes
Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample.
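A hedged usage sketch (my own example, not taken from the official documentation): a small weighted triangle solved with dimod's brute-force ExactSolver, which exposes the sample_ising and sample_qubo methods described above.
```python
# Usage sketch: weighted maximum cut of a tiny triangle with an exact sampler.
import networkx as nx
import dimod
import dwave_networkx as dnx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 0.5)])  # 'weight' attributes

S = dnx.weighted_maximum_cut(G, sampler=dimod.ExactSolver())
print(S)   # one side of the cut, e.g. {1}; its complement {0, 2} is the other side
```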
|
2023-02-03 17:45:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4995498061180115, "perplexity": 944.5852003097947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00483.warc.gz"}
|
https://analytixon.com/2015/08/21/whats-new-on-arxiv-23/
|
Neural networks have been shown to improve performance across a range of natural-language tasks. However, designing and training them can be complicated. Frequently, researchers resort to repeated experimentation to pick optimal settings. In this paper, we address the issue of choosing the correct number of units in hidden layers. We introduce a method for automatically adjusting network size by pruning out hidden units through $\ell_{\infty,1}$ and $\ell_{2,1}$ regularization. We apply this method to language modeling and demonstrate its ability to correctly choose the number of hidden units while maintaining perplexity. We also include these models in a machine translation decoder and show that these smaller neural models maintain the significant improvements of their unpruned versions.
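(As a generic illustration of the group penalties named in this abstract - not the authors' code - the snippet below computes $\ell_{2,1}$ and $\ell_{\infty,1}$ penalties over a hidden layer's weight matrix, assuming the convention that each hidden unit's incoming weights form one group.)
```python
# Generic sketch of the group penalties (one column of W per hidden unit;
# this grouping convention is an assumption, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))                  # hypothetical input-to-hidden weights

l21   = np.sum(np.linalg.norm(W, axis=0))       # l_{2,1}: sum of per-unit l2 norms
linf1 = np.sum(np.max(np.abs(W), axis=0))       # l_{inf,1}: sum of per-unit max |w|
print(l21, linf1)   # penalties added to the training loss to prune whole units
```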
We address the problem of compressed sensing with Multiple Measurement Vectors (MMVs) when the structure of sparse vectors in different channels depend on each other. ‘The sparse vectors are not necessarily joint sparse’. We capture this dependency by computing the conditional probability of each entry of each sparse vector to be non-zero given ‘residuals’ of all previous sparse vectors. To compute these probabilities, we propose to use Long Short-Term Memory (LSTM) [1], a bottom up data driven model for sequence modelling. To compute model parameters we minimize a cross entropy cost function. We propose a greedy solver that uses above probabilities at the decoder. By performing extensive experiments on two real world datasets, we show that the proposed method significantly outperforms general MMV solver Simultaneous Orthogonal Matching Pursuit (SOMP) and model based Bayesian methods including Multitask Compressive Sensing [2] and Sparse Bayesian Learning for Temporally Correlated Sources [3]. Nevertheless, we emphasize that the proposed method is a data driven method where availability of training data is important. However, in many applications, train data is indeed available, e.g., recorded images or video.
Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and contrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a far more effective regulariser which does not suffer from the same limitations.
Analysis of sequential event data has been recognized as one of the essential tools in the data modeling and analysis field. In this paper, after the examination of its technical requirements and issues to model complex but practical situations, we propose a new sequential data model, dubbed Duration and Interval Hidden Markov Model (DI-HMM), that efficiently represents ‘state duration’ and ‘state interval’ of data events. This has significant implications to play an important role in representing practical time-series sequential data. This eventually provides efficient and flexible sequential data retrieval. Numerical experiments on synthetic and real data demonstrate the efficiency and accuracy of the proposed DI-HMM.
Deploying, configuring, and managing large clusters is a very demanding and cumbersome task due to the complexity of such systems and the variety of skills needed. One needs to perform low-level configuration of the cluster nodes to ensure their interoperability and connectivity, as well as install, configure and provision the needed services. In this paper we address this problem and demonstrate how to build a Big Data analytic platform on Amazon EC2 in a matter of minutes. Moreover, to use our tool, embedded into a public Amazon Machine Image, the user does not need to be an expert in system administration or Big Data service configuration. Our tool dramatically reduces the time needed to provision clusters, as well as the cost of the infrastructure. Researchers enjoy an additional benefit of having a simple way to specify the experimental environments they use, so that their experiments can be easily reproduced by anyone using our tool.
Graphical models use the intuitive and well-studied methods of graph theory to implicitly represent dependencies between variables in large systems. They can model the global behaviour of a complex system by specifying only local factors. This thesis studies inference in discrete graphical models from an algebraic perspective and the ways inference can be used to express and approximate NP-hard combinatorial problems. We investigate the complexity and reducibility of various inference problems, in part by organizing them in an inference hierarchy. We then investigate tractable approximations for a subset of these problems using the distributive law in the form of message passing. The quality of the resulting message passing procedure, called Belief Propagation (BP), depends on the influence of loops in the graphical model. We contribute to three classes of approximations that improve BP for loopy graphs: A) loop correction techniques; B) survey propagation, another message passing technique that surpasses BP in some settings; and C) hybrid methods that interpolate between deterministic message passing and Markov Chain Monte Carlo inference. We then review the existing message passing solutions and provide novel graphical models and inference techniques for combinatorial problems under three broad classes: A) constraint satisfaction problems such as satisfiability, coloring, packing, set / clique-cover and dominating / independent set and their optimization counterparts; B) clustering problems such as hierarchical clustering, K-median, K-clustering, K-center and modularity optimization; C) problems over permutations including assignment, graph morphisms and alignment, finding symmetries and traveling salesman problem. In many cases we show that message passing is able to find solutions that are either near optimal or favourably compare with today’s state-of-the-art approaches.
We consider the problem of identifying patterns in a data set that exhibit anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g.~as measured by nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including non-metric measures, and one can test for anomalies by scalarizing using a non-negative linear combination of them. If the relative importance of the different dissimilarity measures are not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multi-criteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria and shows superior performance on experiments with synthetic and real data sets.
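(Again as a generic illustration rather than the authors' PDA implementation: the helper below extracts the first Pareto front from a matrix of dissimilarities, one row per sample and one column per criterion, which is the non-domination idea the abstract builds on.)
```python
# Generic helper (not the paper's PDA code): indices of the non-dominated rows
# of a dissimilarity matrix D (rows = samples, columns = criteria).
import numpy as np

def pareto_front(D):
    """Row i is kept unless some row j satisfies D[j] <= D[i] everywhere
    and D[j] < D[i] somewhere (i.e. j dominates i)."""
    keep = []
    for i in range(D.shape[0]):
        dominated = np.any(np.all(D <= D[i], axis=1) & np.any(D < D[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

D = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_front(D))   # [0, 1, 2]; the last row is dominated by [2.0, 2.0]
```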
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation, that extends and improves the stochastic approximation annealing (SAA) by using population Monte Carlo ideas. The standard SAA algorithm guarantees convergence to the global minimum when a square-root cooling schedule is used; however the efficiency of its performance depends crucially on its self-adjusting mechanism. Because its mechanism is based on information obtained from only a single chain, SAA may present slow convergence in complex optimisation problems. The proposed algorithm involves simulating a population of SAA chains that interact each other in a manner that ensures significant improvement of the self-adjusting mechanism and better exploration of the sampling space. Central to the proposed algorithm are the ideas of (i) recycling information from the whole population of Markov chains to design a more accurate/stable self-adjusting mechanism and (ii) incorporating more advanced proposals, such as crossover operations, for the exploration of the sampling space. PISAA presents a significantly improved performance in terms of convergence. PISAA can be implemented in parallel computing environments if available. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms the simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo especially in high dimensional or rugged scenarios.
Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task’s runtime states and can recover a failed task by restoring its runtime state using its latest checkpoint. On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE). The passive approach incurs a long recovery latency especially when a number of correlated nodes fail simultaneously, while the active approach requires extra replication resources. In this paper, we propose a new fault-tolerance framework, which is Passive and Partially Active (PPA). In a PPA scheme, the passive approach is applied to all tasks while only a selected set of tasks will be actively replicated. The number of actively replicated tasks depends on the available resources. If tasks without active replicas fail, tentative outputs will be generated before the completion of the recovery process. We also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness of our approach.
Stream mining poses unique challenges to machine learning: predictive models are required to be scalable, incrementally trainable, must remain bounded in size (even when the data stream is arbitrarily long), and be nonparametric in order to achieve high accuracy even in complex and dynamic environments. Moreover, the learning system must be parameterless —traditional tuning methods are problematic in streaming settings— and avoid requiring prior knowledge of the number of distinct class labels occurring in the stream. In this paper, we introduce a new algorithmic approach for nonparametric learning in data streams. Our approach addresses all above mentioned challenges by learning a model that covers the input space using simple local classifiers. The distribution of these classifiers dynamically adapts to the local (unknown) complexity of the classification problem, thus achieving a good balance between model complexity and predictive accuracy. We design four variants of our approach of increasing adaptivity. By means of an extensive empirical evaluation against standard nonparametric baselines, we show state-of-the-art results in terms of accuracy versus model size. For the variant that imposes a strict bound on the model size, we show better performance against all other methods measured at the same model size value. Our empirical analysis is complemented by a theoretical performance guarantee which does not rely on any stochastic assumption on the source generating the stream.
A limitation of many clustering algorithms is the requirement to tune adjustable parameters for each application or even for each dataset. Some algorithms require an \emph{a priori} estimate of the number of clusters while density-based techniques usually require a scale parameter. Other parametric methods, such as mixture modeling, make assumptions about the underlying cluster distributions. Here we introduce a non-parametric clustering method that does not involve tunable parameters and only assumes that clusters are unimodal, in the sense that they have a single point of maximal density when projected onto any line, and that clusters are separated from one another by a separating hyperplane of relatively lower density. The technique uses a non-parametric algorithm—isotonic regression—as the kernel operation repeated at every iteration. We carry out a rigorous hypothesis test for whether pairs of clusters should be merged based upon Monte Carlo sampling of a statistic. We compare the method against k-means++, DBSCAN, and Gaussian mixture algorithms and show in simulations that it performs better than these standard methods in many situations. The algorithm’s utility is also demonstrated in the context of ‘spike sorting’ of neural electrical recordings. The source code for the algorithm is freely available.
Warehouse is one of the important aspects of a company. Therefore, it is necessary to improve Warehouse Management System (WMS) to have a simple function that can determine the layout of the storage goods. In this paper we propose an improved warehouse layout method based on ant colony algorithm and backtracking algorithm. The method works on two steps. First, it generates a solutions parameter tree from backtracking algorithm. Then second, it deducts the solutions parameter by using a combination of ant colony algorithm and backtracking algorithm. This method was tested by measuring the time needed to build the tree and to fill up the space using two scenarios. The method needs 0.294 to 33.15 seconds to construct the tree and 3.23 seconds (best case) to 61.41 minutes (worst case) to fill up the warehouse. This method is proved to be an attractive alternative solution for warehouse layout system.
|
2021-06-20 16:20:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.448417991399765, "perplexity": 661.3830047386327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00061.warc.gz"}
|
http://www.acmerblog.com/hdu-1387-Team-Queue-1823.html
|
2013-12-09
# Team Queue
Queues and Priority Queues are data structures which are known to most computer scientists. The Team Queue, however, is not so well known, though it occurs often in everyday life. At lunch time the queue in front of the Mensa is a team queue, for example.
In a team queue each element belongs to a team. If an element enters the queue, it first searches the queue from head to tail to check if some of its teammates (elements of the same team) are already in the queue. If yes, it enters the queue right behind them. If not, it enters the queue at the tail and becomes the new last element (bad luck). Dequeuing is done like in normal queues: elements are processed from head to tail in the order they appear in the team queue.
Your task is to write a program that simulates such a team queue.
The input will contain one or more test cases. Each test case begins with the number of teams t (1<=t<=1000). Then t team descriptions follow, each one consisting of the number of elements belonging to the team and the elements themselves. Elements are integers in the range 0 – 999999. A team may consist of up to 1000 elements.
Finally, a list of commands follows. There are three different kinds of commands:
ENQUEUE x – enter element x into the team queue
DEQUEUE – process the first element and remove it from the queue
STOP – end of test case
The input will be terminated by a value of 0 for t.
For each test case, first print a line saying "Scenario #k", where k is the number of the test case. Then, for each DEQUEUE command, print the element which is dequeued on a single line. Print a blank line after each test case, even after the last one.
2
3 101 102 103
3 201 202 203
ENQUEUE 101
ENQUEUE 201
ENQUEUE 102
ENQUEUE 202
ENQUEUE 103
ENQUEUE 203
DEQUEUE
DEQUEUE
DEQUEUE
DEQUEUE
DEQUEUE
DEQUEUE
STOP
2
5 259001 259002 259003 259004 259005
6 260001 260002 260003 260004 260005 260006
ENQUEUE 259001
ENQUEUE 260001
ENQUEUE 259002
ENQUEUE 259003
ENQUEUE 259004
ENQUEUE 259005
DEQUEUE
DEQUEUE
ENQUEUE 260002
ENQUEUE 260003
DEQUEUE
DEQUEUE
DEQUEUE
DEQUEUE
STOP
0
Scenario #1
101
102
103
201
202
203
Scenario #2
259001
259002
259003
259004
259005
260001
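A minimal Python sketch of the simulation described above (my own illustration, not an official reference solution; it assumes the whitespace-separated input format shown in the samples). One deque stores the order of the teams currently present, and a per-team deque stores that team's waiting elements, so both ENQUEUE and DEQUEUE run in O(1):
```python
# Team-queue simulation sketch: read all whitespace-separated tokens, then
# process each test case until a team count of 0 is seen.
import sys
from collections import deque

def main():
    data = sys.stdin.read().split()
    pos = 0
    case = 0
    out = []
    while True:
        t = int(data[pos]); pos += 1
        if t == 0:
            break
        case += 1
        team_of = {}                              # element -> team index
        for team in range(t):
            n = int(data[pos]); pos += 1
            for _ in range(n):
                team_of[data[pos]] = team; pos += 1
        team_order = deque()                      # teams present in the queue
        members = [deque() for _ in range(t)]     # waiting elements per team
        out.append("Scenario #%d" % case)
        while True:
            cmd = data[pos]; pos += 1
            if cmd == "STOP":
                break
            if cmd == "ENQUEUE":
                x = data[pos]; pos += 1
                team = team_of[x]
                if not members[team]:             # no teammate waiting yet
                    team_order.append(team)       # so the team joins at the tail
                members[team].append(x)
            else:                                 # DEQUEUE
                team = team_order[0]
                out.append(members[team].popleft())
                if not members[team]:             # last member processed
                    team_order.popleft()
        out.append("")                            # blank line after each case
    sys.stdout.write("\n".join(out) + "\n")

if __name__ == "__main__":
    main()
```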
1. I'm glad you like the site. There isn't a development team at the moment - I maintain the site on my own, it's all built on open-source software, and there isn't much that needs developing; the work is mainly organizing content. Thank you very much for your interest.
|
2016-10-27 05:01:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18701381981372833, "perplexity": 1056.6792228680067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721141.89/warc/CC-MAIN-20161020183841-00101-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://www.zora.uzh.ch/id/eprint/130268/
|
# Search for Resonant Production of High-Mass Photon Pairs in Proton-Proton Collisions at $\sqrt{s}$ = 8 and 13 TeV
CMS Collaboration; Canelli, F; Chiochia, V; Kilminster, B; Robmann, P; et al (2016). Search for Resonant Production of High-Mass Photon Pairs in Proton-Proton Collisions at $\sqrt{s}$ = 8 and 13 TeV. Physical Review Letters, 117(5):051802.
## Abstract
A search for the resonant production of high-mass photon pairs is presented. The analysis is based on samples of proton-proton collision data collected by the CMS experiment at center-of-mass energies of 8 and 13 TeV, corresponding to integrated luminosities of 19.7 and 3.3 fb$^{−1}$, respectively. The interpretation of the search results focuses on spin-0 and spin-2 resonances with masses between 0.5 and 4 TeV and with widths, relative to the mass, between 1.4×10$^{−4}$ and 5.6×10$^{−2}$. Limits are set on scalar resonances produced through gluon-gluon fusion, and on Randall-Sundrum gravitons. A modest excess of events compatible with a narrow resonance with a mass of about 750 GeV is observed. The local significance of the excess is approximately 3.4 standard deviations. The significance is reduced to 1.6 standard deviations once the effect of searching under multiple signal hypotheses is considered. More data are required to determine the origin of this excess.
## Statistics
### Citations
25 citations in Web of Science®
43 citations in Scopus®
Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2016
Deposited On: 11 Jan 2017 11:48
Last Modified: 02 Feb 2018 11:19
Publisher: American Physical Society
ISSN: 0031-9007
OA Status: Hybrid
Publisher DOI: https://doi.org/10.1103/PhysRevLett.117.051802
|
2018-03-18 04:56:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6073862314224243, "perplexity": 1620.2085907419332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645513.14/warc/CC-MAIN-20180318032649-20180318052649-00729.warc.gz"}
|
https://blog.hotwhopper.com/2015/10/extreme-weather-denial-at-wuwt.html?showComment=1444217058561
|
## Extreme weather denial at WUWT
Sou | 4:54 PM
The weather is not being kind to deniers. Just this week there have been record-breaking rains in the USA, where eleven people have reportedly lost their lives. And in Europe, where seventeen people are reported to have been killed. Not just record-breaking, but record-smashing rains. And in the past month there was also the incredible record rain in Japan. And the numerous records being set for tropical cyclones and hurricanes. It's as if the earth is getting sick of us ignoring the signs and has stepped up the pace of climate change.
All this while last year and this year are the two hottest years on record so far. Put all that together with the UN meeting in Paris and you can understand why deniers are losing it.
Anthony Watts has realised that he cannot ignore the rain in the USA, but he's claiming it's just weather (archived here). Which is very inconsistent of him, because he has a record of lying to his readers that extreme events aren't getting more extreme as global warming kicks in. He's also telling lies about the extremely hot waters that the winds blew over, which is part of the reason for the record-smashing rain events. Anthony's telling quite blatant lies now. He seems to not care that he has not a shred of credibility left. (You'll recall that just a few days ago he was also telling his readers that the greenhouse effect isn't real.)
#### Where does Anthony Watts think the water came from? Outer space?
The weird thing about all this is that Anthony is telling fibs and claiming that seas aren't any warmer. So how does he explain the hot seas fuelling the hurricane and rain event? One of the main reasons the weather has been so extreme is because it's been fueled by extremely hot seas, with some parts being the hottest on record.
He's also arguing that there isn't extra water in the atmosphere. So tell me, where does he think all that rain came from? Outer space? Anthony put up a chart from Figure 4 c of a 2012 paper but neglected to mention that the authors stated: "The results of Figs. 1 and 4 have not been subjected to detailed global or regional trend analyses, which will be a topic for a forthcoming paper. Such analyses must account for the changes in satellite sampling discussed in the supplement. Therefore, at this time, we can neither prove nor disprove a robust trend in the global water vapor data."
I'm a bit pressed for time. So I'll just let you know that where I live there are also records being smashed. We've suddenly gone from winter to summer. It's the hottest on record for this time of the year. 35C plus and it's still early springtime. Because of the unseasonably hot weather and the northerly winds, the fires have started up early and with a vengeance.
#### Anthony Watts - the AGU15 Poster Boy!
One more thing. I don't know why, but Anthony announced on Twitter that he's "presenting" at AGU15. I expect he means that he's doing a poster session there in a few weeks' time. I came across his topic when looking for something else. The topic is, of course, US temperature records.
Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network (76932)
Anthony W. Watts, Organization Not Listed, Washington, DC, United States
I don't expect he'll get much out of the experience, other than the opportunity to cadge some more funds from his readers, and whine that no-one took any notice of his poster - or that he heard the imagined sounds of smirking.
Must go. Normal service will be restored shortly.
1. Perhaps we should hold a sweepstake for the first sighting of a 'scientists fraudulently adjusting historical rainfall data' post.
1. I expect some measurements changed from using imperial to metric measurements. That makes the figures look bigger. Part of a commie plot metric is.
2. "Anthony W. Watts, Organization Not Listed, Washington, DC, United States"
What happened to the Open Whatsit Society?
1. Good catch. Whatever did happen to that?
2. What if you called an election and nobody nominated? There was supposed to be an election around late August. I expect John "Resigned from the AGU" Whitman will be back to explain this. Or not.
R the Anon.
3. It's just the Urban Rain Island effect. Most rain gauges are in areas with growing populations, where ... er ... [handwaving] ... microclimate ... poorly sited stations ... [more handwaving] ... and that creates a false trend in the observations.
There's not actually any anomalously high rainfall in South Carolina, it's just URI.
1. Well seeing a lot of the rain gauges got washed away, they can just ignore it :-(
2. Just wild ass guessing, but wouldn't there actually be an Urban Rain Island _deficit_ if, in fact temps were hotter over urban areas?
3. Warmer rising air and turbulence from large structures plus increased aerosols means more rain over and downwind of urban areas.
4. Well Melbourne beat the consecutive 35C days early in Oct record easily. It looks like the cool change has come thru, Sydney will also get a cool change around a day earlier than forecast.
If anyone likes to feel worried, just go check out the record level of heat in the Indian Ocean. Hopefully it will bring rain to the interior and not excess heat, but who can say?
https://www.climate.gov/sites/default/files/indian_ocean_anom_1950-2015_lrg.png
5. Interesting link to the story of Typhoon Etau in Japan.
September is the peak of the typhoon season there, with an average of 2.5 making landfall each year.
Typhoon Vera, in 1959 was the most severe recorded in recent times.
In all, over 5,000 people were killed as a result of Super Typhoon Vera. 38,921 individuals went missing and 1,596,855 people were made homeless. Total damages amounted to between 500-600 billion yen (about $261 million [1959 USD] or $1.67 billion [2007 USD]).
http://www.hurricanescience.org/history/storms/1950s/vera/
1. Just because the climate signal is only beginning to emerge from the weather noise doesn't mean that it's not happening at all. That would be denialism.
2. Gosh, Marke. Who would have thought you could find an extreme weather event somewhere on this planet pre 1990. I'll certainly not forget where I was when I read that.
But Typhoon Vera was especially lethal and destructive not because of its strength but because it occurred before a lot of infrastructure changes were put in place to protect against Typhoons.
"In addition to legislative reform, the breaching of coastal flood defense systems during Vera prompted a redesign of such mechanisms. In Nagoya, regulation was created for coastal construction and their heights. Development of flood defenses in Ise, Osaka, and Tokyo bays was also set into motion. The heights of such defense systems were based on worst-case scenarios and maximum storm surge heights caused by the typhoon"
Surely even a minimally competent climate change denier could find a better example out there for whatever the point is you are trying to make.
3. 'Your Honour, ladies and gentlemen of the jury, let us step back for a moment and consider; there have been forest fires sweeping the surface of this planet since well before the first of any of the Zippo lighters, the ancestors of the one you see before you presented as 'Exhibit A', rolled off the assembly line in 1933 - indeed, science tells us that such devastating conflagrations have occurred routinely since before our species even took the trouble to evolve! - and yet the prosecution would have you believe my client to be somehow guilty of a "crime" they refer to as "arson"...'
4. BBD, you may be right, but "beginning to emerge" is not really a term of statistical certainty.
Millicent, that cuts both ways, 55 years ago there were a lot less people and a lot less expensive infrastructure in the way of such storms.
Bill, if they are convicting him solely on the statistical probability of whether or not that particular conflagration could have occurred without his help, you may be able to successfully defend your client.
5. marke - extremes of heat, drought and floods are more prevalent now, which you'd know if you ever read any science. Check the latest IPCC reports for confirmation or read some journals.
6. marke is idiocy posing as reason. Bert
7. You are being far too generous, Bert. I don't know why marke does what he does, but he's been around climate stuff long enough to know better. And I don't think people who've been following climate science for more than a couple of months can be excused their denial on the grounds of stupidity. There is something more going on.
I'm thinking of writing an article on good and evil.
8. No way you could get any material on good from WUWT.
9. Sou
I'm thinking of writing an article on good and evil.
This is a very difficult matter. I've come to believe that despite the incessant intellectual dishonesty (that they must be aware of, at least some of the time), many vocal contrarians actually believe themselves to be correct and so acting in good faith.
For them, because they are sure that they are correct, lying and backtracking and repeating old lies is excusable, even necessary, as a means to a justified end. What's more, it is clear that most contrarians are convinced that the scientists are falsifying data and exaggerating their results in furtherance of a political agenda, so - from their perspective - why shouldn't they do the same?
In all this murk of stupidity and self-serving dishonesty it becomes very hard to draw the line, which, when crossed, marks the transition from deluded to evil. This is important because true evil requires awareness of wrong-doing and I think that is very difficult to demonstrate in most cases.
10. It is only difficult because as humans, we don't like to discuss subjects like this. We naturally shy away from confrontational topics.
BBD, what you are talking about is the difference between "being evil" and "doing evil". It is very difficult for anyone to determine whether or not an individual is knowingly doing evil. It is much simpler to determine whether a person or organisation is doing harm than whether they are intentionally doing harm.
If societies were willing to excuse individuals on the basis that they didn't really mean to cause harm, we'd not have any prisons, there'd be no war crimes hearings, we'd not be judgmental when writing history.
Thing is, societies do set limits on what is acceptable behaviour. Acting recklessly to hasten the drowning of Bangladesh would be considered unacceptable behaviour - unless you are a climate science denier. Then it's acceptable for whatever reason drives them to push for harmful climate change.
In my view, mitigating climate change is a moral issue, not simply a political or scientific issue.
11. What I find most pathetic about Marke's approach is that it is schizoidal.
When its a 'claim' by a warmist he is careful about definitions and metrics to the extreme end of pedantry.
But then, the next moment, he will tell us stuff like "Typhoon Vera, in 1959 was the most severe recorded in recent times" when the underlying facts clearly do not support making such a claim.
And when that's pointed out he responds with "Millicent, that cuts both ways, 55 years ago there were a lot less people and a lot less expensive infrastructure in the way of such storms". Yes, duh! How many people didn't know that? But the essential problem - how do you show that Vera was the most severe Typhoon to hit Japan when you account for these changes still remains unanswered by Marke. And, apparently, he sees no need to do so.
12. But marke is so delightfully comedic. Sometimes he just makes me guffaw with laughter. I hope he carries on giving me such amusement.
What I think is funny is the huge effort he makes to avoid the facts and come out with a completely unrelated and random point to confound us.
I have always wondered why blinkers work on a horse to make it less nervous and skittish. You would think that wearing blinkers would have the opposite effect and generate fear and nervousness by obscuring what is going on. Apparently intellectual blinkers work on humans to lessen their fear as well.
13. Sou
In my view, mitigating climate change is a moral issue, not simply a political or scientific issue.
Yes, I agree entirely. The problem arises when one sets out to determine what is evil and what is simply morally wrong and stupid and self-serving.
Evil is at once an extreme and yet vaguely defined term used in the context of climate change denial.
I find it difficult to concede that it is possible to do evil without being evil and difficult to agree that one can be evil without being aware that one is doing evil.
Which is why I am very, very wary of introducing the term into discussions about CC.
14. I find it difficult to concede that it is possible to do evil without being evil and difficult to agree that one can be evil without being aware that one is doing evil.
I'd agree with that, to a point, BBD. Which is why I'm coming to the view that it's time those doing evil were called to account.
Some people are sociopaths. It's likely that some deniers (though not most) are also sociopaths. It's debatable whether those people can tell the difference between good and evil, or care.
15. In 1959, Japan's population was over 90 million... so any argument that suggests the island had 'a lot less people' is pretty clueless.
16. I'm thinking of writing an article on good and evil.
This is a very difficult matter.
Not if you approach it with a sense of righteousness.
Surely, then, all else is evil.
17. Asking a denier to do some soul-searching is like asking a rock what it feels like (to be a rock). Introspection is as foreign a concept to a denier who hangs about climate blogs as empathy, or valuing knowledge. They can't relate.
18. Sorry BBD, I had not realized you actually thought you were in the process of saving the world on this rather narrow, dimly lit stage we tramp here. Good luck with that process, and with the delusions of grandeur.
Before I leave you to it, I will attempt to explain my motivation, as Sou, at least, seems concerned at the potential fate of my soul:
Not too many years ago, when the forecasts of perpetual drought began to founder on a reef of floods and blizzards, it was explained that no, actually, we'd experience more extremes instead.
At the time, evidence, peer reviewed or otherwise, of this increase in extremes was in very short supply. So, soon, it came to pass, we had enthusiasts excitedly pointing at every weather event, storm, flood, or other, claiming it as 'the' evidence.
This continues to this day, in spite of the great difficulty in comparing the severity of a storm today with historical storms. Even to the point where we have the Millicents of the world quite convinced that a storm of decades ago which killed thousands and displaced millions must have been surpassed by a modern storm which killed several and caused the precautionary evacuation of 100,000. I would not define that one way or the other, but records, predating the recent events, cite this Typhoon Vera as 'the most severe of recent times'. (Similarly, you can find similar stories of historical Riviera floods, and historical South Carolina rainfall).
Will evidence stack up in time to support this forecast/projection of greater extremes?
Perhaps. But, these storms ain't it yet. In time, they may constitute a part of the evidence, but that will only be visible in retrospect.
So: Are you saving the world, while I sabotage it?
No, while you oversell evidence such as this, you do your cause more harm than good. Your disciples remain steadfast, lapping it up, but any fence sitter thinks, "Hang on, that ain't right".
And thus, in little steps, ye too will doom the world.
19. BBD, Sou, I sense your frustration.
I feel the same thing: when you know you are right, the facts under debate are on your side, and the other party is obviously wrong, that is the feeling you get.
Am I posting climate misinformation in here? No, I have posted some very simple facts in relation to some storms.
It all depends on how you interpret those facts and what sort of statistical occurrences we each need to convince us of our own, or another's viewpoint.
20. BTW, BBD, I'm curious:
What's with the 'sock' comment?
21. The Scorpion and the Frog
A scorpion and a frog meet on the bank of a stream and the
scorpion asks the frog to carry him across on its back. The
frog asks, "How do I know you won't sting me?" The scorpion
says, "Because if I do, I will die too."
The frog is satisfied, and they set out, but in midstream,
the scorpion stings the frog. The frog feels the onset of
paralysis and starts to sink, knowing they both will drown,
but has just enough time to gasp "Why?"
Replies the scorpion: "Its my nature..."
22. The difference between deniers and others is that deniers decide what they "believe" first of all, and then look for information to support their belief and discard all the rest. That's what marke does. Other people, including scientists, look for information - as much as there is. Then they interpret the information based on all the evidence. They don't look for information to support what they want to be convinced of. It's a big difference. Deniers aren't interested in creating knowledge. Nor are they interested in information. They are only interested in "belief". The lack of self awareness, and disinterest in knowledge, is probably why they think that science denial is a valid "viewpoint".
23. How do you know hazym is a sock?
Because he used to use it at Watching the Deniers.
24. marke posted a comment using his 'Hazym' sock.
Nah, it weren't me BBD, I've never had one them Hazym socks.
Check with Sou, she can probably monitor addresses.
25. However, speaking of memory, this strange situation does remind me of a moving story.
THE MAN AND THE ELEPHANT
In 1986, Peter Davies was on holiday in Kenya after graduating from Northwestern University.
While on a hike through the bush, he came across a young bull elephant standing with one leg raised in the air. The elephant seemed distressed, so Peter approached it very carefully. He got down on one knee, inspected the elephant’s foot, and found a large piece of wood deeply embedded in it. As carefully and as gently as he could, Peter worked the wood out with his knife, after which the elephant gingerly put down its foot. The elephant turned to face the man, and, with a rather curious look on its face, stared at him for several tense moments.
Peter stood frozen, thinking of nothing else but being trampled. Eventually the elephant trumpeted loudly, turned, and walked away. Peter never forgot that elephant or the events of that day.
Twenty years later, Peter was walking through the Chicago Zoo with his teenage son Cameron. As they approached the elephant enclosure, one of the creatures turned and walked over to near where Peter and his son were standing. The large bull elephant stared at Peter, lifted its front foot off the ground, then put it down. The elephant did that several times, then trumpeted loudly... all the while staring at Peter intently.
Remembering the encounter in 1986, Peter could not help wondering if this was the same elephant. He summoned up his courage, climbed over the railing, and made his way into the enclosure. He walked right up to the elephant and stared back in wonder. The huge creature turned to face the man, and, with a curious look on its face - a look that was strangely familiar to Peter - stared at him. Peter looked deeply into those large, liquid brown eyes. Was that... was that the spark of recognition he saw there?
The elephant trumpeted again, wrapped its trunk around one of Peter’s legs, picked him up and slammed him against the railing, killing him instantly.
Probably wasn’t the same fucking elephant.
26. The reason deniers have such an amazing ability to be willfully ignorant is explained below. This was written for creationists but is equally applicable to climate change.
Morton's demon
http://rationalwiki.org/wiki/Morton's_demon
Maxwell's demon was a thought experiment in which a demon could stand at a gate between two rooms and open the gate to let fast moving particles into one room and slow moving particles into the opposite room. This would create a temperature differential that could be used to perform work. Since in the thought experiment the demon itself did not need to expend energy to create this differential, it was believed that such a system could create a perpetual motion machine and violate the laws of thermodynamics.
Morton proposed that a similar demon stands at the gate of the mind of creationists and other anti-evolutionists that only allows in evidence confirming their world view, and shuts out any disconfirming evidence. Such a thing would be an extreme case of confirmation bias, but would go beyond such a mere bias to confirming one's thoughts and would stray into willful ignorance. It is this demon that allows them to maintain their world view in the face of overwhelming evidence to the contrary.
27. It certainly isn't a 'strange situation' from where I stand. Its more like business as usual when climate change denial is involved. We see it with WUWT's own pet moderator. We saw it with McIntyre. Reddit's moderators found it wasn't just one sockpuppet it was an army.
Good catch BBD.
28. And can you also tell me the label/name of this debating method for addition to my collection (which is primarily garnered from liberally applied labels in this blog)?
I guess it's "an adhom attack on a randomly nominated scapegoat, with strawmen thrown in for good measure".
29. C'mon BBD, enough is enough. You have always seemed reasonably bright in the past.
I hereby categorically deny ever using a different name in this site.
Occasionally I have had to log in as anon, but I then sign my name as 'marke' at the bottom.
I did not even see this posting you speak of, I have never visited the other sites you mention. I've never used a sock puppet before, so why do you think I would I start now?
You really have to watch that problem of believing something simply because you really want to.
And I'm pretty disappointed in Sou, she could have headed this off.
But perhaps she does not like me? Due to my evilness?
30. Agree. Enough is enough. I don't know who marke is. He's a very private individual. I cannot say whether he is the same person as hazym or Mark or someone else altogether.
In keeping with the comment policy I've deleted several comments from various people, which have been getting too personal. Can we leave it at that, please.
6. "Drought conditions declared over all of South Carolina"
"Below normal rainfall and higher than normal temperatures have caused streams to shrink in South Carolina this summer."
The State, 17 Jul 2015
"South Carolina has experienced drought conditions during eight of the last ten years."
Clemson Cooperative Extension, 21 Sep 2015
What to prepare for?
"The biblical flooding in South Carolina is at least the sixth so-called 1-in-1000 year event in the U.S. since 2010..."
USA Today, 6 Oct 2015
1. Increase in the frequency and severity of extreme weather events, eg. drought and flooding.
7. Published May 8 2014
Rosalind Peterson, President of the U.S. Agriculture Defense Coalition, addressing chemtrails, geoengineering, and SRM at a U.N. meeting.
1. This comment has been removed by a blog administrator.
2. Heh heh. Even in the comments to the debunking which David posted, there are still the chemtrail conspiracy theorists *not getting* the fact that it wasn't Rosalind Peterson that was being debunked, but rather that what Peterson was saying is not evidence supporting the chemtrail conspiracy theorists... if that makes any sense :-)
3. For those that are interested, Google Ben Livingston, the father of weaponized weather.
8. more on AGU
"The relatively few stations in the classes with minimal artificial impact are found to have raw temperature trends that are collectively about 2/3 as large as stations in the classes with greater expected artificial impact. ...The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact..."
So the trick is picking a select subset of stations. I wonder if there is any reason to think that they'd be an accurate representation of the whole US.
9. URL-less Treesong here. Going back to marke's original comment: What makes your citation irrelevant is that you're using damage (which is badly correlated with intensity, as Millicent showed) as a proxy for intensity, which is what is relevant to AGW effects. If you go to 'List of the most intense tropical cyclones' in Wikipedia you'll find thirteen stronger western north Pacific typhoons since then as measured by minimal central pressure, and another thirteen tied at 895 hPa. Doubtless many of those caused far less damage because they didn't make landfall.
As for 'Not too many years ago, when the forecasts of perpetual drought began to founder on a reef of floods and blizzards, it was explained that no, actually, we'd experience more extremes instead.', I call denialist bullshit. Can you support this ridiculous statement? Obviously more water vapor in the atmosphere means more material for floods and blizzards, and climatologists, not being idiots, realized that.
In any case, at least a couple of years ago the consensus as I read it was there weren't sufficient historical statistics for accurate predictions of changes in the frequency and intensity of hurricanes, though the most likely outcome seemed to be about the same frequency and greater average intensity. I don't recall similar discussion of droughts.
Significant increase of heatwaves is a no-brainer. Knock-on weather effects are less obvious.
1. The lack of consensus (which is to say uncertainty) on AGW's effect on storm frequency is long-standing, as is the denialist belief that hurricanes are alarming and so climate scientists, notoriously alarmist, must be predicting more of them. Belief, of course, does not need evidence.
"Wet places will get wetter and dry places drier" is also long-standing, as, I suspect, is marke's belief that "they" can only be saying one thing at a time and it must be whatever he believes they're saying. Which, in turn, is whatever he wants them to be saying.
(You can leave URL blank; I do)
2. My reading of AR5 is that drought is quite a robust prediction. There are two components that are changing: precipitation and evaporation. Both increase.
A precipitation increase means more flooding because, all else being equal, more water falls at any one time.
Whether precipitation or evaporation increases more in a given region is a complicated question. In a closed system, evaporation and precipitation are equal. But places where we care about evaporation aren't closed systems: water that evaporates from your area will precipitate elsewhere, and water that precipitates on your area can flow down rivers instead of evaporating. "Wet places will get wetter and dry places drier" is a prediction that is better than chance but still isn't expected to be very accurate.
|
2021-09-20 04:32:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3638584315776825, "perplexity": 2895.7913649920956}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00019.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/kirchhoff-laws-circuit-analysis.119668/
|
Kirchhoff laws circuit analysis
gbox
Joined Dec 29, 2015
42
I_1=I_2+I_3
If in both meshes I go counterclockwise, I get for the one on the right:
24-7000I_1-4000I_2+18=0
and for the one on the left:
4000I_2-2000I_3-20-18=0
WBahn
Joined Mar 31, 2012
25,927
How can we possibly tell if what you have is correct since you don't define what I_1, I_2, or I_3 are?
gbox
Joined Dec 29, 2015
42
Sorry I will update it
WBahn
Joined Mar 31, 2012
25,927
Good.
Be sure to remember that voltage and current both have polarity, so when you indicate your currents be sure to indicate what direction is positive.
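For what it's worth, the three equations exactly as gbox wrote them form a small linear system that can be solved numerically. The sketch below is only a check of that arithmetic, assuming I_1, I_2 and I_3 are the three branch currents in amps with the sign conventions gbox intended (which is precisely the information WBahn is asking to have pinned down).

```python
import numpy as np

# KCL:        I1 = I2 + I3                    ->  I1 - I2 - I3 = 0
# Right mesh: 24 - 7000*I1 - 4000*I2 + 18 = 0 ->  -7000*I1 - 4000*I2 = -42
# Left mesh:  4000*I2 - 2000*I3 - 20 - 18 = 0 ->   4000*I2 - 2000*I3 =  38
A = np.array([[1.0,     -1.0,     -1.0],
              [-7000.0, -4000.0,   0.0],
              [0.0,      4000.0, -2000.0]])
b = np.array([0.0, -42.0, 38.0])

I1, I2, I3 = np.linalg.solve(A, b)
print(f"I1 = {I1*1e3:.3f} mA, I2 = {I2*1e3:.3f} mA, I3 = {I3*1e3:.3f} mA")
# With these sign conventions the system gives I1 = 2 mA, I2 = 7 mA, I3 = -5 mA;
# whether those signs are physically right depends on the directions gbox defined.
```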
|
2020-08-13 17:35:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008927702903748, "perplexity": 2229.8356376298016}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739048.46/warc/CC-MAIN-20200813161908-20200813191908-00242.warc.gz"}
|
http://www.jmis.org/archive/view_article?pid=jmis-9-2-145
|
Section D
# Technical Evaluation of Engineering Model of Ultra-Small Transmitter Mounted on Sweetpotato Hornworm
Isao Nakajima1,*, Yoshiya Muraki1, Kokuryo Mitsuhashi1, Hiroshi Juzoji2, Yukako Yagi3
1Nakajima Labo, Seisa University, Yokohama, Japan, jh1rnz@aol.com
2EFL Inc., Takaoka, Japan, juzoji@yahoo.co.jp
3Department of Pathology & Lab Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA. yagiy@mskcc.org
*Corresponding Author: Isao Nakajima, +81-90-8850-8380, jh1rnz@aol.com
© Copyright 2022 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Received: Mar 10, 2022; Revised: Apr 12, 2022; Accepted: Apr 27, 2022
Published Online: Jun 30, 2022
## Abstract
The authors are making a prototype flexible board of a radio-frequency transmitter for measuring an electromyogram (EMG) of a flying moth and plan to apply for an experimental station license from the Ministry of Internal Affairs and Communications of Japan in the summer of 2022. The goal is to create a continuous low-dose exposure standard that incorporates scientific and physiological functional assessments to replace the current standard based on lethal dose 50. This paper describes the technical evaluation of the hardware. The signal of a bipolar EMG electrode is amplified by an operational amplifier. This potential is added to a voltage-controlled crystal oscillator (27 MHz, bandwidth: 4 kHz), frequency-converted, and transmitted from an antenna about 10 cm long (diameter: 0.03 mm). The power source is a 1.55-V wristwatch battery that has a total weight of about 0.3 g (one dry battery and analog circuit) and an expected operating time of 20 minutes. The output power is −7 dBm and the effective isotropic radiated power is −40 dBm. The signal is received by a dual-whip antenna (2.15 dBi) at a distance of about 100 m from the moth. The link margin of the communication circuit is above 30 dB within 100 m. The concepts of this hardware and the measurement data are presented in this paper. This will be the first biological data transmission from a moth with an official license. In future, this telemetry system will improve the detection of physiological abnormalities of moths.
Keywords: LD50 (Lethal Dose, 50%); Low Dose Exposure; VCXO
## I. INTRODUCTION
1.1. Fukushima Nuclear Power Plant Accident and Pseudozizeeria maha argia
Otaki et al. collected Pseudozizeeria maha in the Fukushima area in May 2011 and found relatively mild abnormalities in some of them [1]. The severity of abnormalities was higher in the F1 offspring obtained from females of the first instar. These abnormalities were inherited by the F2 generation. Experiments have shown that low-dose external and internal exposure of individuals from non-contaminated areas reproduced the same abnormalities. These results indicate that anthropogenic radionuclides from the Fukushima Nuclear Power Plant have caused morphological and genetic damage to Pseudozizeeria maha. However, only morphological abnormalities such as wing shape and wing pattern were recorded [2-6]. The effects on physiological function are unknown.
1.2. Persistent Low-Dose Exposure
Lethal dose 50 (LD50), a measure of acute toxicity, refers to the amount of pesticide or radiation that kills half of the exposed individuals. Sustained low-dose exposure to pesticides and radiation cannot be assessed simply on the basis of LD50; however, no optimal criteria have been reported. LD50 is the rate of survival, not a measure of physiological dysfunction in healthy organisms due to exposure [7-10].
This study aims to detect the impairment of motor function in insects exposed to sustained low concentrations of radiation by monitoring their physiology and detecting reduced flapping and increased peak-to-peak temporal fluctuation (Δt). Biotelemetry can be used to transmit moth physiological data. The battery limits the amount of data that can be recorded. Nevertheless, data from moths pinned to a platform can be compared and evaluated [11-22].
## II. PREPARATORY EXPERIMENTS
A physiological experiment was performed with a moth suspended from pins connected to copper wires (Fig. 1 and Fig. 2). The hawkmoth periodically contracts the muscles inside the exoskeleton, deforming the exoskeleton so that the wings, connected at the “hinge,” move up and down. Fig. 3 shows the anatomy of the primary flight muscles, which drive a two-phase cycle: the dorsal ventral muscles (DVM) contract, causing a downstroke that creates lift. Electrodes were therefore inserted into the two muscle groups. The preliminary experiments were conducted as follows. We recorded a two-channel electromyogram (EMG) and the angular velocity corresponding to the pitch angle associated with wing flapping for 100 sweetpotato hawkmoths (Agrius convolvuli) (Fig. 4; 50 females and 50 males), with the animals suspended and constrained in air. Overall, the angular velocity and the amplitude of the EMG signals had a high correlation, with a correlation coefficient of R = 0.792. An analysis of the peak-to-peak EMG intervals, which correspond to the RR intervals of ECG signals, indicated a correlation between Δt fluctuation and angular velocity of R = 0.379, so the accuracy of that regression curve was relatively poor. The open question is the origin of the temporal fluctuation of the flapping cycle, which is defined as Δt here.
Fig. 1. Lateral view of sweetpotato hornworm.
Fig. 2. Monitor of EMGs and angular velocity with pins linking copper wires.
Fig. 3. Anatomy of muscles.
Fig. 4. Obtained EMG data and angular velocity; the DLM trace is shown in green and the DVM trace in red.
In such a physiologically abnormal state, with flight restrained by a copper wire, the EMG may differ in a statistically significant way from the EMG in free flight. The peak-to-peak Δt of flapping cannot be obtained with a copper-wire tether, which completely ignores the moth's own lift force. The fluctuation of Δt reflects the time taken to replenish Ca ions in the muscle fibers, a process considered to be intermittent. It is thus necessary to conduct experiments in free flight.
Using a dc amplification circuit without capacitive coupling as the EMG amplification circuit, we confirmed that the baseline changes at the gear change point of wing flapping.
The lift provided by the wing can be expressed as angular velocity × thoracic weight - air resistance - eddy resistance due to turbulence. In future studies, we plan to attach a micro radio transmitter to the moths to gather data on potential energy, kinetic energy, and displacement during free flight for analysis. Such physiological functional evaluations of moths may give insight into damage to insect health due to repeated exposure to multiple agrochemicals and may lead to significant changes in toxicity standards, which are currently based on LD50 values.
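For readers who want to see how the Δt and correlation figures above could be extracted from raw traces, here is a minimal sketch. The EMG and angular-velocity signals are synthetic stand-ins; the sampling rate, wingbeat frequency and peak-detection settings are assumptions for illustration, not values taken from the experiment.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 2000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
flap_hz = 25                                # nominal wingbeat frequency (assumed)

# Synthetic stand-ins: burst-like EMG once per cycle, sinusoidal pitch angular velocity.
emg = np.clip(np.sin(2 * np.pi * flap_hz * t), 0, None) ** 8 + 0.05 * np.random.randn(t.size)
omega = np.sin(2 * np.pi * flap_hz * t) + 0.1 * np.random.randn(t.size)

# Peak-to-peak intervals of the EMG bursts: the Delta-t discussed in the text.
peaks, _ = find_peaks(emg, height=0.5, distance=int(0.5 * fs / flap_hz))
dt = np.diff(t[peaks])
print(f"mean flap period {1e3 * dt.mean():.1f} ms, fluctuation (std) {1e3 * dt.std():.2f} ms")

# Correlation between EMG amplitude and angular-velocity amplitude (cf. R = 0.792).
r = np.corrcoef(np.abs(emg), np.abs(omega))[0, 1]
print(f"correlation coefficient R = {r:.3f}")
```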
## III. METHODS
3.1. Outline of the Biotelemetry System
Hanging a moth by a copper wire and measuring the EMG signal in a restrained state does not replicate the physiological state. If EMGs from moths could be measured during free flight, there is a possibility of obtaining correlations between EMG peak-to-peak fluctuations and lift that could not be obtained in the restrained state. Therefore, we developed a biotelemetry system and measured flight data recorded for approximately 100 m. The average weight of the moths was 1.1 g. Based on empirical data, we considered that an additional 0.28−0.33 g (25%−30% of the moth weight) could be added. The concept of the biotelemetry system is shown in Fig. 5 and the weights of the main components are listed in Table 1.
Fig. 5. Concept of the engineering model.
Table 1. List of the parts and weight.
Parts Weight [g]
Battery 0.1
VCXO 0.08
Microchips 0.05
Solder + copper wire 0.02
Filter+antenna 0.01
Total 0.29
3.2. Block Diagram
The electric potential of the muscle that plays an active part in the flapping of a moth (EMG) was acquired with three electrodes and amplified by a factor of 134 using an operational amplifier. The change in the electric potential is converted into a change in frequency (frequency modulation). Spurious signals are removed with a buffer amplifier and the remaining signals are transmitted at a radio-frequency (RF) power of −7dBm. A block diagram of the transmitter is shown in Fig. 6.
Fig. 6. Block diagram of the transmitter.
3.3. Battery and Power Consumption
For the proposed device, the battery capacity is limited to 8 mAh. Therefore, if the current of the bipolar transistor (2SC4713) in the final stage of the transmitter is not suppressed, the battery will be exhausted immediately. The transmission power can be adjusted by changing the resistance R connected to the emitter of the bipolar transistor. A current of approximately 20 mA flows when R=35 Ω and the operation time is about 24 minutes (Fig. 7). The RF maximum output is 1 mW (0 dBm). Table 2 shows the battery (Sony SR416SW) specifications and Table 3 shows compatible batteries manufactured by other companies.
Fig. 7. Current and battery life.
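As a back-of-the-envelope check of the battery-life figures quoted here and in Section 3.6 (an 8 mAh cell feeding roughly 20-25 mA of total current):

```python
capacity_mah = 8.0                      # Sony SR416SW capacity from Table 2
for current_ma in (20.0, 25.0):         # currents mentioned in the text
    minutes = 60.0 * capacity_mah / current_ma
    print(f"{current_ma:.0f} mA -> about {minutes:.0f} minutes")
# 20 mA gives about 24 minutes and 25 mA about 19 minutes, matching the
# "about 24 minutes" and "about 20 minutes" figures in the paper.
```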
Table 2. Specification of the battery.
Specification Standard SONY SR416SW
Type Silver oxide battery
Nominal voltage 1.55 V
Capacity 8 mAh
Size Diameter 4.8 mm × Height 1.65 mm
Weight 0.1 g
Table 3. Button battery product compatibility.
SONY SR416SW
ENERGIZER 337
RAYOVAC 337
RENATA 337
BULOVA 623
SEIKO SB-A5
CITIZEN 280-75
GP GP337
3.4. Free Space Path Loss
According to the license requirements, a filter was inserted to suppress spurious signals for a transmission power of 1 mW. The free space path loss is calculated by the following equation. The simulation of free space path loss was performed in MATLAB 2018 on a Windows 10 environment.
FSPL = 20 × log10(4 × π × d / λ)
π: pi (3.1415)
d: Distance (m)
λ: Wavelength (m) = c/f
c: Light speed (m/s)
f: Frequency (Hz)
Considering the gain of the transmitting antenna and the gain of the receiving antenna, it can be rewritten as the following Equation (1).
$FSPL = 20\log_{10}(d) + 20\log_{10}(f) + 32.44 - G_{t} - G_{r}.$ (1)
Gt: Transmitting antenna gain (dB)
Gr: Receiving antenna gain (dB)
Because the antenna on the moth is only 10 cm long (diameter: 0.03 mm), its efficiency is poor (effective isotropic radiated power: −40 dBm). Fig. 8 shows the free space path loss when the receiving side uses a 1/2λ dipole antenna. A filter must be inserted to suppress spurious signals to 50 dB or more below the main carrier, as required by law. The bipolar transistor (2SC4713) of the buffer has an amplification ratio of about 8, ensuring an output of 0 dBm (Fig. 8).
Fig. 8. Free space path loss.
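A short numerical check of Equation (1) is given below. One caveat: the 32.44 dB constant in Eq. (1) corresponds to d expressed in kilometres and f in megahertz (with d in metres and f in hertz the constant would be −147.55 dB), so the sketch converts units accordingly. It uses the −40 dBm EIRP and the 2.15 dBi receiving antenna quoted in the abstract; the receiver sensitivity is not stated in the paper, so no link margin is claimed here.

```python
import math

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB; the 32.44 constant assumes distance in km and frequency in MHz."""
    return 20 * math.log10(d_m / 1e3) + 20 * math.log10(f_hz / 1e6) + 32.44

eirp_dbm = -40.0          # effective isotropic radiated power of the moth-borne antenna
rx_gain_dbi = 2.15        # dual-whip receiving antenna
d_m, f_hz = 100.0, 27e6   # 100 m range, 27 MHz carrier

loss = fspl_db(d_m, f_hz)
p_rx = eirp_dbm - loss + rx_gain_dbi
print(f"FSPL = {loss:.1f} dB, received power = {p_rx:.1f} dBm")
# roughly 41 dB of path loss at 100 m, i.e. about -79 dBm at the receiver input
```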
3.5. Voltage-Frequency Conversion
The EMG of the moth was amplified with a gain of 134 using an operational amplifier (NJU77002RB, JRC) and applied to the variable-capacitance diode of the crystal oscillator section. The change in the capacitance of the variable-capacitance diode causes a change in the frequency of the crystal oscillator. This setup is called a voltage-controlled crystal oscillator (VCXO). The relationship between the applied control voltage and frequency is shown in Fig. 9. A potential change from 0 to 1.5 V stays within the 4 kHz bandwidth. The relationship between output power and load potential is shown in Fig. 9, Fig. 10 and Fig. 11.
Fig. 9. Relationship between output power and load potential.
Fig. 10. Loaded voltage and output power of the VCXO.
Fig. 11. Monitor of the output power (−7 dBm) with a spectrum analyzer.
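A first-order model of the V/F conversion is sketched below, assuming a purely linear tuning law in which the 0-1.55 V control range maps onto the 4 kHz bandwidth around the 27 MHz carrier; the measured curves in Fig. 9 and Fig. 10 should be used for any real calibration.

```python
def vcxo_frequency(v_ctrl, f_center=27e6, v_full_scale=1.55, bandwidth=4e3):
    """Assumed linear VCXO tuning: 0..v_full_scale volts spans +/- bandwidth/2 around f_center."""
    v = min(max(v_ctrl, 0.0), v_full_scale)          # clamp to the battery-supply range
    return f_center + (v / v_full_scale - 0.5) * bandwidth

for v in (0.0, 0.775, 1.55):
    print(f"{v:.3f} V -> {vcxo_frequency(v):,.0f} Hz")
```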
3.6. Results of the Experiment
We soldered the devices and components to be mounted on the moth and electrically tested the engineering model before ordering the flexible board. Battery life is about 20 minutes when the total current is about 25 mA. The propagation distance is about 100 m, as simulated from the free space path loss equation. The V/F conversion was linear, and the RF output power of −7 dBm was stable and satisfactory.
## IV. CONSIDERATIONS
4.1. Challenges in Mounting on Flexible Board
Based on the above experimental results, we manufactured a flexible substrate exclusively for hawkmoths; the points to note from design to mounting are described below. For mounting, a flexible substrate with a thickness of 0.2 mm and double-sided copper foil was used as the preliminary board. In this circuit, most of the back side is ground, and because the preliminary board is soldered by hand by a skilled worker, sufficient space is left between the parts to distribute heat (Fig. 12). After the pattern wiring is formed, the time needed to drive off moisture adhering to both layers of the substrate varies depending on the pattern configuration. If the ground plane is relatively wide, as on this board, it has a heat-dissipation effect, and moisture can be removed by heating at 120°C for about 30 minutes. The board could be made even narrower for small flies; if it is to be mounted on small moths and butterflies in the future, it could be shrunk to about 64% of its present size (vertical × horizontal = 4/5 × 4/5 = 0.64). These conditions reflect the technical level of the laboratory; if commercial development becomes possible, that is, if the budget can be secured, it would be desirable to produce a dedicated IC as the V/F device (VCXO).
Fig. 12. Upper: Wiring diagram of the prototype board, middle: 3D mockup of parts, lower: Completed flexible board.
4.2. Multi-Functionalization with Limited Weight
Since the weight of the payload has an upper limit of about 0.28g, we would like to consider carrying two functions with one transistor by the following two methods, and conduct verification experiments in the future. Many papers on techniques for multiplexing with one device and quadrature modulation have been published in the past [23-35].
4.2.1. Two Channels Via Orthogonalization
Because frequency modulation (FM) is used for the first channel, a second channel can be created using amplitude modulation (AM) on the orthogonal axis. However, the disadvantage is that the signal leaks out during FM demodulation if the frequency component of the AM is high due to the group delay characteristic of the detection of the receiver. The first channel, FM, is suitable for EMG with high-frequency components, and the second channel, AM, is suitable for angular velocity with a relatively slow wave motion.
A dual-gated field-effect transistor (e.g., 3SK284, 3SK73) is used for cascade amplification and resistor R2 is used to adjust the bias to modulate the Gate-2 voltage from −0.1 to −0.4 V. If the signal goes to zero, the FM signal cannot be demodulated. Gate-1 is designed to handle crystal oscillation (Fig. 13 and Fig. 14).
Fig. 13. Idea of orthogonalization modulation with a dual-gated FET.
Fig. 14. Suitable voltage for the Gate-2.
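As a purely numerical illustration of the idea (not of the dual-gate FET hardware itself), the sketch below follows one simple reading of the scheme: a synthetic EMG frequency-modulates the carrier while a synthetic angular-velocity signal amplitude-modulates its envelope, with a DC bias keeping the envelope away from zero as required above. Every parameter (sample rate, scaled-down carrier, deviation, modulation depth) is an assumption chosen only to make the example run.

```python
import numpy as np

fs = 200_000                         # sample rate (Hz), illustrative
t = np.arange(0, 0.02, 1 / fs)
fc = 27_000                          # scaled-down stand-in for the 27 MHz carrier
kf = 2_000                           # assumed FM deviation per volt of EMG

emg = 0.5 * np.sin(2 * np.pi * 30 * t)            # channel 1: fast EMG-like signal
omega = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)     # channel 2: slow angular velocity

phase = 2 * np.pi * np.cumsum(fc + kf * emg) / fs # FM: EMG shifts the instantaneous frequency
envelope = 0.6 + 0.4 * omega                      # AM: bias keeps the envelope above zero
s = envelope * np.cos(phase)                      # single carrier carrying both channels
```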
4.2.2 Weight Reduction using Reflex System
A system that amplifies two signals with one amplifier circuit is called a reflex system. Such systems have been implemented in middle- or short-wave receivers. A reflex radio amplifies and detects an RF signal and in parallel amplifies an audio frequency (Fig. 15). A single amplification circuit is used for both RF amplification and audio frequency amplification. Based on this concept, a single transistor can be used for the high-frequency crystal oscillation circuit and the low-frequency biological signal amplification. Because the low-frequency amplification cannot achieve a gain of 20 dB, the resolution in the amplitude direction is lower than that of an operational amplifier. As a countermeasure, the capacitance of the varactor diode can be increased, which will expand the frequency band, increase the FM coefficient, and lead to diffusion gain, compensating to a certain degree for the low resolution. Experiments are required to evaluate the implementation.
Fig. 15. Idea of the reflex system.
4.3. Reasons for New Measurement with ICT to Replace LD50
Genetic damage from radioisotope exposure of sperm, eggs, or the embryo immediately after fertilization has been studied. However, we believe that oxidation of genes in the cells, in nuclear or mitochondrial DNA, associated with continuous low-concentration exposure during the growth process results in a decrease in the function of the living body.
Focusing on the fact that the muscles of the sweetpotato hornworm differentiate greatly from the 1st to the 3rd instar, the hypothesized chain is: continuous low-concentration exposure → cell oxidation → damage to muscle mitochondrial DNA → decrease in ATP in the muscle → decrease in Ca ion uptake from the T-tubules → decrease in myoelectric potential.
As an experimental model, we think that it is possible to put a radioisotope in the artificial feed of Sweetpotato Hornworm and expose it internally, and measure the flight EMG. We are aiming to find the correlation between the amount of radiation loaded on food and the amplitude of the electromyogram.
LD50, the index used so far, is an index of the acute phase in which 50% of individuals die; it is not an index of the chronic phase. For the chronic phase, a value of, for example, 1/1000 of the LD50 is simply applied. Anyone can judge which approach is more scientific.
## V. CONCLUSION
We developed an engineering model of a transmitter for obtaining an EMG of a moth in free flight. From a technical perspective, future development efforts should consider that one element in the reflex system or the cascade method should have multiple functions. If electronic device technology that can quantify EMG during free flight were developed, the detection of physiological abnormalities would allow a physiological standard that replaces the LD50-based standard for continuous low-dose exposure to pesticides and radiation.
## ACKNOWLEDGEMENT
We wish to thank Professor Noriyasu Ando of the Maebashi Institute of Technology, who provided the moths used in the present study. We also deeply thank the timely help given by Prof. Kiyoshi Kurokawa, the National Graduate Institute for Policy Studies. This experimental research was assisted by Ms. Miyoshi Tanaka and Ms. Hiroko Ichimura of Nakajima Labo. Tokai University, Ms. Machiko Yoda and Ms. Megumi Amano of Seisa University.
The flexible board of the transmitter was assembled by Japan System Design Inc. Hiroshima City, Japan. The patent included in this research has been applied by Tasada Works Inc. Takaoka City, Japan.
## REFERENCES
[1].
K. Sakauchi, W. Taira, A. Hiyama, et al., “The pale grass blue butterfly in ex-evacuation zones 5.5 years after the Fukushima nuclear accident: Contributions of initial high-dose exposure to transgenerational effects,” Journal of Asia-Pacific Entomology, vol. 23, no. 1, pp. 242-253, 2020.
[2].
J. Otaki, “Fukushima’s lessons from the blue butterfly: A risk assessment of the human living environment in the post-Fukushima era, Integrated Environmental Assessment and Management, vol. 12, no. 4, pp. 667-672, 2016.
[3].
J. Otaki and W. Taira, “Current status of the blue butterfly in Fukushima research,” Journal Heredity, vol. 109, no. 2, pp. 178-187, 2018.
[4].
J. Otaki, A. Hiyama, A., M. Iwata., and T. Kudo, “Phenotypic plasticity in the rangemargin population of the lycaenid butterfly Zizeeria maha,” BMC Evolutionary Biology, vol. 10, no. 1, pp. 252, 2010.
[5].
J. Otaki, Understanding Low-dose exposure and field effects to resolve the field laboratory paradox: Multifaceted biological effects from the Fukushima nuclear accident,” in N. S. Awwad, S. A. AlFaify (eds.). New Trends in Nuclear Science, London: Intech Open., 2018.
[6].
J. Otaki and W. Taira, “Current status of the blue butterfly in Fukushima research,” Journal of Heredity, vol. 109, no. 2, pp. 178-187, 2018.
[7].
R. Isenring, Pesticides and the Loss of Biodiversity, Pesticide Action Network Europe, March 2010. https://www.pan-europe.info/old/Resources/Briefings/Pesticides_and_the_loss_of_biodiversity.pdf
[8].
Beyond pesticides, “Impacts of pesticides on wildlife,” May 2020, https://www.beyondpesticides.org/programs/wildlife.
[9].
M. DiBartolomeis, S. Kegley, P. Mineau, R. Radford, and K. Klein, “An assessment of acute insecticide toxicity loading (AITL) of chemical pesticides used on agricultural land in the United States,” PLOS-ONE, vol. 14, no. 8, 2019.
[10].
J. Lundgren and S. Fausti, “Trading biodiversity for pest problems,” Science Advances, vol. 1, no. 6, Jul. 2015.
[11].
K. Suzuki and T. Inamuro, “An improved lattice kinetic scheme for incompressible viscous fluid flows,” International Journal of Modern Physics C, vol. 25, no. 1, 2014.
[12].
M. Shindo, T. Fujikawa, and K. Kikuchi, “Analysis of roll rotation mechanism of the butterfly for development of a small flapping robot,” in The 3rd International Conference on Design Engineering and Science (IC DES), vol. 3, 2014.
[13].
F. Lehmann and S. Pick, “The aerodynamic benefit of wing-wing interaction depends on stroke trajectoryin flapping insect wings,” Journal of Experimental Biology, vol. 210, no. 8, pp. 1362-1377, 2007.
[14].
S. Hassler, Winged Victory: Fly-Size Wing Flapper Lifts Off,” IEEE Spectrum, 2008, https://spectrum.ieee.org/aerospace/aviation/winged-victory-flysize-wing-flapper-lifts-off
[15].
T. Deora, N. Gundiah, and S. Sane, “Mechanics of the thorax in flies,” Journal of Experimental Biology, vol. 220, no. 8, pp. 1382-1395, 2017.
[16].
K. Nakada and J. Hata, “Development and physiological assessments of multimedia avian esophageal catheter system,” Journal of Multimedia Information System, vol. 5, no. 2, pp. 121-130, Jun. 2018.
[17].
I. Nakajima, H. Juzoji, K. Ozaki, and N. Nakamura, “Communications protocol used in the wireless token rings for bird-to-bird,” Journal of Multimedia Information System, vol. 5, no. 3, pp. 163-170, Sep. 2018.
[18].
K. Nakada, I. Nakajima, J. Hata, and M. Ta, “Study on vibration energy harvesting with small coil for embedded avian multimedia application,” Journal of Multimedia and Information Systems, vol. 5, no. 1, pp. 47-52, Mar. 2018.
[19].
N. Ando, I. Shimoyama, and R. Kanzaki, “A dual-channel FM transmitter for acquisition of flight muscle activities from the freely flying hawkmoth, Agrius convolvuli,” Journal of Neuroscience Methods, vol. 115, no. 2, pp. 181-187, 2002.
[20].
M. Shimoda, M. Kiuchi, “Oviposition behavior of the sweet potato hornworm, Agrius convolvuli (Lepidoptera; Sphingidae), as analysed using an artificial leaf,” Applied Entomology and Zoology, vol. 33, no. 4. pp. 525-534, 1998.
[21].
A. Zagorinskii, O. Gorbunov, and A. Sidorov, “An experience of rearing some hawk moths (Lepidoptera, Sphingidae) on artificial diets,” Entomological Review, vol. 93, no. 9, pp. 1107-1115, 2013.
[22].
I. Nakajima and Y. Yagi, “basic physiological research on the wing flapping of the sweet potato hawkmoth using multimedia,” Journal of Multimedia Information System, vol. 7, no. 2, pp. 189-196, 2020.
[23].
D. Arbet, M. Kováč, V. Stopjaková and M. Potočný, “Voltage-to-frequency converter for ultra-low-voltage applications,” in 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics(MIPRO), pp. 53-58, 2019.
[24].
Y. Tian, L. Qiao, and G. Qu, “small signal voltage-frequency conversion processing method for SF6 Sensor,” in 2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture(AIAM), pp. 339-342, 2021.
[25].
D, D. Wentzloff, A. Alghaihab, and J. Im, “Ultra-low power receivers for IoT applications: A review” in IEEE Custom Integrated Circuits Conference(CICC) 2020, pp. 1-8, Mar. 2020.
[26].
A. Alghaihab, Y. Shi, J. Breiholz, H. Kim, B. H. Calhoun, and D. D. Wentzloff, “Enhanced interference rejection bluetooth lowenergy back-channel receiver with LO frequency hopping,” IEEE Journal of Solid-State Circuits, vol. 54, no. 7, pp. 2019-2027, Jul. 2019.
[27].
J. Im, H. Kim, and D. D. Wentzloff, “A 220- μW −83-dBm 5.8-GHz third-harmonic passive mixer-first LP-WUR for IEEE 802.11ba,” IEEE Transactions on Microwave Theory and Techniques, vol. 67, no. 7, pp. 2537-2545, 2019.
[28].
J. Moody et al., “Interference robust detector-first near-zero power wake-up receiver,” IEEE Journal of Solid-State Circuits, vol. 54, no. 8, pp. 2149-2162, 2019.
[29].
V. Mangal and P. R. Kinget, “28.1 A 0.42nW 434MHz -79.1dBm wake-up receiver with a time-domain integrator,” in 2019 IEEE International Solid-State Circuits Conference(ISSCC), pp. 438-440, 2019.
[30].
V. Mangal and P. R. Kinget, “A −80.9dBm 450MHz wake-up receiver with code-domain matched filtering using a continuous-time analog correlator,” in 2019 IEEE Radio Frequency Integrated Circuits Symposium(RFIC), pp. 259-262, 2019.
[31].
R. Liu et al., “An 802.11ba 495μW -92.6dBm-Sensitivity BlockerTolerant Wake-up Radio Receiver Fully Integrated with Wi-Fi Transceiver, “ in 2019 IEEE Radio Frequency Integrated Circuits Symposium (RFIC), pp. 255-258, 2019.
[32].
A. Kosari, M. Moosavifar, and D. D. Wentzloff, “A 152μW −99dBm BPSK/16-QAM OFDM Receiver for LPWAN Applications,” in 2018 IEEE Asian Solid-State Circuits Conference (A-SSCC), pp. 303-306, 2018.
[33].
J. Moody et al., “A −106dBm 33nW bit-level duty-cycled tuned RF wake-up receiver,” in 2019 Symposium on VLSI Circuits, pp. C86-C87, 2019.
[34].
P. P. Wang and P. P. Mercier, “28.2 A 220μW −85dBm sensitivity BLE-compliant wake-up receiver achieving −60dB SIR via single-die multi-channel FBAR-based filtering and a 4-dimentional wake-up signature,” in 2019 IEEE International Solid-State Circuits Conference - (ISSCC), 2019, pp. 440-442.
[35].
J. Moody and S. M. Bowers, “Triode-mode envelope detectors for near zero power wake-up receivers,” In 2019 IEEE MTT-S International Microwave Symposium (IMS), pp. 1499-1502, 2019.
## AUTHORS
Isao Nakajima
He is a specially appointed professor of Seisa University and a visiting professor of Nakajima Labo. at the Dept. of Emergency Medicine and Critical Care, Tokai University School of Medicine. He got the Doctor of Applied Informatics (Ph.D.), Graduate School of Applied Informatics University of Hyogo 2009, and the Doctor of Medicine (Ph.D.), Post Graduate School of Medical Science Tokai University 1988, and the Medical Doctor (M.D.) from Tokai University School of Medicine 1980. He has been aiming to send huge multimedia data from moving ambulance via communications satellite to assist patient’s critical condition. A board member of the Pacific Science Congress, a Rapporteur for eHealth of ITU-D SG2.
Yoshiya Muraki
Professor, SEISA University, and Technical Fellow of Fukuda Denshi Co. Ltd. He graduated from Nihon University, College of Industrial Engineering, in March 1974. In April 1974 he started his professional career at Nihon Dengyo Co. Ltd., designing and developing radio communication equipment such as SSB transceivers. In September 1978 he moved to the division of medical telemeters at Fukuda Denshi Co. Ltd. He has been in charge of developing and designing wireless telemetry for medical devices for many years, and was a professor at Tokai University School of Medicine from 2015 to 2019. In this paper, he was in charge of designing the V/F conversion and an operational amplifier with extremely low voltage.
Kokuryo Mitsuhashi
He worked at JVC for 17 years and engaged in research and development of electronic devices. He has developed and designed high-frequency analog circuits that generate plasma for semiconductor manufacturing equipment manufacturers and has taught these technologies at the University of Tokyo and at major semiconductor manufacturing equipment manufacturers.
Hiroshi Juzoji
He graduated from Tokai University School of Medicine in 1986. For years, he has studied telemedicine & eHealth and has developed circuit designs and firmware for special equipment for experimental or practical use. He has visited and installed many small satellite earth stations in the Asia-Pacific region, operating the Asia Pacific Medical Network via ETS-V in 1992. For this study on moth wireless telemetry, he tested the wireless output power and stability.
Yukako Yagi
She is at the Digital Pathology Laboratory of the Josie Robertson Surgical Center, which serves as an incubator to explore, evaluate and develop new technology to advance digital pathology in a clinical setting and actively engages vendors to help improve the technology and develop clinical applicability. Collaborations with clinical departments (e.g., Surgery), Radiology, Medical Physics, and Informatics groups enhance these assessments and create opportunities for multidisciplinary applications. She completed her Doctorate in Medical Science at Tokyo Medical University in Japan. She has a broad interest in various aspects of medical science, including the development and validation of technologies in digital imaging, such as color and image quality calibration, evaluation and optimization, digital staining, 3D imaging, and decision support systems for pathology diagnosis, research and education. Since joining MSK, she has led pioneering work using MicroCT, Whole Slide Imaging (WSI) and Confocal imaging to connect multi-dimensional and multi-modality images (e.g., single-cell to whole-body analysis). She participated in creating image viewers for several imaging modalities and established new.
|
2022-08-12 05:24:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2718590795993805, "perplexity": 5220.818369504728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00039.warc.gz"}
|
https://golem.ph.utexas.edu/~distler/blog/archives/2003_01.shtml
|
## January 31, 2003
### Bombshell
We’ve all seen the gruesome pictures of the Iraqi Kurds, gassed in March 1988 in the town of Halabja near the Iranian border. Proof positive (as reiterated in the State of the Union address this Tuesday evening) that Saddam is an evil tyrant, who would “gas his own people.”
There’s no doubt that Saddam is an evil tyrant.
But, according to Stephen Pelletiere, who is very much in a position to know, it is overwhelmingly likely that it was the Iranians, not the Iraqis, who gassed the Kurds of Halabja.
And the President surely knows this (assuming he wasn’t too busy throwing spitballs during the relevant briefing).
So what was he talking about on Tuesday?
Update: Pelletiere’s conclusions have been hotly disputed in many quarters. Human Rights Watch concluded that it was mustard gas and sarin, which the Iraqis did possess, which killed the inhabitants of Halabja. (The main pillar of Pelletiere’s argument is that hydrogen cyanide — supposedly part of the Iranian, but not Iraqi, arsenal at the time — was used in the attack.) And, even if Pelletiere is right about Halabja, it is almost certain that Saddam’s subsequent murderous campaign against the Kurds included the use of poison gas against other — less famous — targets.
This is an important point: the Administration is not arguing that we should go to war because Saddam killed 100,000 Kurds. The argument (which, when stated plainly, may sound a little callous) is that we should do so because he used WMDs (poison gas) to kill some fraction of them. So “details,” like what exactly happened in Halabja, matter.
Posted by distler at 10:08 PM | Permalink | Followups (1)
## January 30, 2003
### Render Onto …
So I’m not all that pleased at how my last post renders on the Mac (Mach-O Mozilla with the Mathematica Fonts). The “stretchy” characters (the integral sign and the overbar on the anti-D3 branes) … aren’t.
How does it look on other platforms?
Anyone using one of the MathML plugins for (gasp, shudder) Internet Explorer?
On another matter, you might have noticed that the RSS feed for Brad DeLong’s blog has disappeared. For the second time in a week, the RSS parser is choking, because his feed contains “undefined entities” (&nbsp; this time; last time it was &pound;). These are perfectly valid in XHTML, but must not appear in an RSS feed, for it to be valid XML. I think it’s fine to use their numerical equivalents (&#160; and &#163;, respectively). Or you can wrap the whole <description> as CDATA. Or maybe I need a yet-more-liberal parser.
Anyway, in a day or two, Brad will have posted enough new stuff for the offending article to slip off his feed, and then we should be good for another week.
Update: Actually, Brad’s problem comes down to his using faulty old templates to generate his RSS feed. The new MT Templates don’t have these problems.
### Long Live de Sitter!
(Warning! This post uses MathML. The equations will probably look like garbage unless you are viewing it in Mozilla, with the requisite fonts installed. Sorry, that’s just life.)
Kachru et al suggest a way to obtain classically-stable (and quantum-mechanically long-lived) 4D de Sitter solutions of string theory.
The starting point is the class of compactifications introduced by Giddings et al. There, the flux-induced superpotential,
$W = \int_{M} \left( F_{3} - \tau H_{3} \right) \wedge \Omega$
fixes the string coupling and the complex structure of the 3-fold $M$, leaving the Kahler modulus, $\rho$ (we’ll assume only one) as a flat direction.
The next stage (a bit of handwaving, but not implausible) is to assume that nonperturbative effects induce a superpotential for $\rho$ which lifts this remaining flat direction. In a fairly robust fashion, one ends up with a supersymmetric vacuum in 4D anti-de Sitter space.
Now comes the tricky step. We imagine changing the fluxes (a discrete choice) so that the tadpole cancellation condition now requires the presence of one (or a small number of) $\overline{\text{D3}}$ brane(s). This breaks supersymmetry and induces a term in the potential which would normally lead to a runaway behaviour for $\rho$. Naively, I might guess that the coefficient of this term would be large, and that it would totally overwhelm the nonperturbative superpotential which generated a minimum for $\rho$ in the first place.
Well, not according to these guys. They claim that the coefficient can be small (so as not to totally destabilize the minimum) and, moreover, can be fine-tuned (again, we have only discrete choices) to produce a minimum with a small (tiny!) positive cosmological constant.
If you swallow all of this, it’s not too hard to believe the last step: namely, while this minimum is only metastable (we still have $V\to 0$ for $\rho \to \infty$, after all), it can be incredibly long-lived — more than $10^{10}$ years.
Some obvious points:
• If any of this makes sense, it should have a description in terms of 4D supergravity, presumably related to the solutions that I blogged about previously.
• Supersymmetry breaking in the real world is much larger than the scale of the cosmological constant. That’s always been a bit of a problem, but may be less so here. Usually, the problem is discussed assuming the supersymmetric vacuum is 4D flat space. Here, the supersymmetric minimum would more naturally be thought of as being anti-de Sitter. We have to lift the minimum of the potential by a lot (much more than its final value), thereby “explaining” why the supersymmetry breaking scale is so much larger than that of the cosmological constant.
• Of course, the height of the barrier is more directly related to the scale of supersymmetry breaking. So, as potential inhabitants of a false vacuum, we might, perhaps, have reason to be fearful of the next generation of accelerators.
Posted by distler at 12:18 PM | Permalink | Followups (3)
## January 29, 2003
### Morph
Gary Markstein has the answer to the Iraq conundrum.
Posted by distler at 1:30 AM | Permalink | Followups (1)
### Kabat
Dan Kabat was visiting today, and gave a talk about his work with Easther et al on M2-brane cosmology. I’ve commented on this before, so I thought I’d add what I further learned from my conversations with Dan.
One thing that confused me was their restriction to a rectangular torus and to branes with wrappings parallel to the coordinate axes. Choosing a rectangular torus makes the computations much simpler (fewer parameters in the metric). The motivation for the latter is that “diagonally-wrapped” branes would not be compatible with maintaining a rectangular torus.
This is unfortunate, because the diagonally-wrapped guys are secretly rather important in the Brandenburger-Vafa type argument. It is only strings (branes) which are transverse which generically will intersect (and then, only if there are sufficiently few “large” dimensions). So the generic annihilation process is something like a (1,0) string and a (0,1) string annihilating into a (1,1) string.
Diagonally-wrapped strings (and hence, when we consider the back-reaction, non-rectangular tori) are inevitably going to occur. Brandenburger and Vafa never worried about the details of this. They merely noted that even transverse strings will generically never meet in more than 3 large spatial dimensions.
The other thing which vexed me in my original post was the non-semiclassical nature of the M2-brane, which made me wonder whether the naive picture of a gas of free branes could ever make sense (unlike a gas of free strings). If the branes are widely separated, then the “dressing” of the branes by those narrow throats simply represents the interaction of the branes with supergravity. Unlike the string case, there is never a “free gas” limit in which the coupling to supergravity can be made small (so I still don’t know how to do thermodynamics).
When the branes are not widely separated, of course, all bets are off. But, at least when some of the dimensions get large, we are “safe”, so long as we replace these “membrany” effects (to coin a phrase) with the coupling to supergravity.
Anyway, I’m a little clearer on what they’re trying to do so, on that score, Dan’s visit was a resounding success.
## January 26, 2003
### How Slow Do You Want to Go Today?
The activity of the MS-SQL worm slowed large parts of the internet to a crawl yesterday. Thanks, again, to Microsoft for enhancing my internet experience.
Even my banking experience felt the impact.
## January 24, 2003
### Cut Your Own Master Keys
Locksmithing, plumbing, and a few other trades seem to persist in a guild-like mentality, where the “secrets” of the trade are passed on from Masters to Initiates. In the case of locksmiths, this is a signal case of what is elsewhere derided as “Security Through Obscurity.” When the “secret” leaks out, you are stunned to learn just how insecure the system really is.
A standard pin-tumbler lock has P pins, each of which can be cut at H different heights. That means H^P different combinations which, for modest values of H and P, could number in the millions. Since trying each combination involves cutting a blank and inserting it into the lock, this would seem to make a pin-tumbler lock invulnerable to brute force “keyspace search” attacks.
The situation changes dramatically when, in addition to the “change key”, which opens just this particular lock, there’s also a “master key” which opens all similar locks in the building. In this case, each pin has a second cut at some (unknown to you) height. As cryptographer Matt Blaze discovered, such systems (which all of us encounter in our day-to-day lives) are vulnerable to escalation of privileges (the owner of a change key being able to create a master key) through an elementary “Adaptive Oracle” attack.
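A toy simulation of the attack (the step-by-step recipe follows in the next paragraph) might look like the sketch below. It assumes the usual master-keying arrangement in which each pin opens at either its change-key height or its master height; the pin count and number of heights are made-up values, and a real blank is of course filed down from the highest cut rather than tried in arbitrary order.

```python
import random

P, H = 5, 10                          # pins and cut heights (illustrative values)
change = [random.randrange(H) for _ in range(P)]
master = [random.randrange(H) for _ in range(P)]

def lock_opens(key):
    """The 'oracle': your own lock opens if every pin sits at its change OR master height."""
    return all(k in (c, m) for k, c, m in zip(key, change, master))

recovered = list(change)              # start from the cuts you already know
for pin in range(P):
    for height in range(H):           # at most H-1 trial heights per pin
        if height == change[pin]:
            continue
        trial = list(change)
        trial[pin] = height           # vary one pin at a time, as in the recipe below
        if lock_opens(trial):
            recovered[pin] = height   # the lock just revealed the master cut for this pin
            break

print("master key recovered:", recovered == master)
```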
The “Oracle” (which tells you when you’ve guessed right), in this case, is the lock that fits your change key. With P+1 key blanks (costing less than $2) and a small bit of effort, you can create your own master key. The algorithm is so blindingly simple that you can probably guess it from just this description.

No? OK, here’s what you do. Cut a blank to be identical to your change key, except at the location of the first pin, where you leave it uncut. Try it in the lock. If it doesn’t work, start trimming away until you find the height of the second cut. Since there are only H-1 heights to test, you will be done soon. Now take a second blank and repeat the procedure with the second pin. After using P blanks, you have learned the heights of the master cuts on all P pins. Use your last blank to cut yourself a master key.

There are many more details and variations in the paper. And, apparently, this has been known in some circles for a very long time. Now we all know. Thanks to Ed Felten for the links.

Update: Of course, it’s obvious that you only need P, not P+1, blanks. But blanks are cheap, anyway.

Posted by distler at 10:56 AM | Permalink | Followups (2)

## January 23, 2003

### Bugs: Fixed and Unfixed

I learned that, despite being Gecko-based, Chimera can’t render these pages as XML. So back it goes into receiving them as “text/html” (ie, no MathML support). And I filed a bug report with the Chimera team. In case you’re keeping track, that means that the .htaccess file for this blog now looks like

    RewriteEngine On
    RewriteBase /~distler/blog/
    RewriteRule ^$ index.html
    RewriteCond %{HTTP_USER_AGENT} Gecko
    RewriteRule \.html$ - [T=application/xhtml+xml]
    RewriteCond %{HTTP_USER_AGENT} Chimera [OR]
    RewriteCond %{HTTP_USER_AGENT} Safari
    RewriteRule ^$|\.html$ - [T=text/html]

On the other hand, P&L Systems quietly released a new version of their Mesa spreadsheet, which fixes the printing bugs under Jaguar. Since I had previously ragged on them, I want to publicly say, “Thank You”.

Posted by distler at 10:44 AM | Permalink | Post a Comment

## January 22, 2003

### A Look Backwards

So I’ve been blogging for a little over 3 months now. Perhaps it’s time to reflect on the experience. All in all, I have to say that it has been a tremendous waste of time. And I mean that in the best possible sense.

One thing that surprises me is that I have spent much more time futzing with the ‘plumbing’ of this blog than I expected to. Some people seem to change the layout of their blog every other week. The appearance of mine has changed very little. The most dramatic change I made in the superficial appearance was to rationalize the fonts (and sizes) to make the whole thing more readable.

On the other hand, I have thought a lot about some of the defects of the “traditional” blog layout and tried to rectify them here. For instance, consider the Blogroll. Traditionally, this is just a static list of links to other weblogs or sites. Ho hum, why should I click on that link? Or, perhaps, Oh cool, Aaron Bergman (another physicist-blogger) has a weblog, What’s he blogging about? Now, of course, Aaron’s a bad example because his blog doesn’t have an RSS feed. But those which do (indicated in bold on my Blogroll) are syndicated, so you can see what they’re blogging about before clicking over there. That just seems a heckuva lot more useful than a static list of URL’s.

Another thing that’s basically irksome is the ephemeral nature of the weblog. Content that’s more than a week or two old drops off the main page, never to be read again. Yes there are links to the archives. But nobody clicks through to them. And yes, there’s a search engine (a mighty fine one, I might add), but unless you are looking for something specific, you’re unlikely to use it. So I decided to add a list of Random Past Entries (changed once an hour) to the sidebar, in the hope that serendipity might bring to your attention some interesting post from the past. And, again, you shouldn’t have to click through to see if it really was interesting; an excerpt should be available with a mouse-over.

I thought that MathML was going to be easy. It wasn’t. But the next release of MovableType will bring creating posts with embedded MathML a lot closer to “easy” (by converting them on-the-fly from embedded itex). And hopefully the release of the Stix fonts will ameliorate the residual rendering problems, and make the whole process a lot more plug-‘n-play on the user side.

(Those of you who are Internet Explorer users may wonder what the heck I am talking about. The fancy sidebar stuff is only viewable in a Standards-compliant browser, like Netscape 7, Mozilla, or the next release of Apple’s Safari. MathML is natively supported only by the Gecko-based browsers, though I hear there are MathML plugins available for IE.)

The final thing that surprises me, looking back, is that it proves much harder than I thought it would be to say something interesting and relevant about physics in a few paragraphs, with few or no equations. I met a colleague at a symposium back in November, who said Your typical post says, “I read this paper on the archives.
It looks interesting.” He was teasing (I think!). But it does raise the issue of the general shallowness of the blogosphere. Just because the topic happens to be physics doesn’t guarantee any greater level of profundity. All in all, it’s a heckuva lot easier to write about something other than physics. Which, despite my better efforts, seems to be what I’ve been doing much of the time.

Which reminds me of an anecdote. Many years ago, Sacha Polyakov approached me at a cocktail party. He was very apprehensive at the prospect of teaching his first-ever undergraduate course at Princeton. “At least,” he said after we’d discussed it for a while, “it’s only Nonlinear Dynamics, and not Quantum Field Theory.” “Oh,” I said (to one of the great men of modern QFT), “why is that?” “If it were Quantum Field Theory, I’d feel responsible for the subject.”

Posted by distler at 12:58 AM | Permalink | Followups (1)

## January 20, 2003

### SuperFly

I laughed so hard, I nearly p… Oh, never mind, read it yourself.

Posted by distler at 10:28 PM | Permalink | Post a Comment

### Computer Notes

Do I ever get to blog about anything else?

A new version of Kung-Log is out, “rewritten from the ground up in pure Cocoa.” Lots of yummy new features, like syntax-colouring, a customizable “HTML Tags” menu, …

Dave Hyatt’s got Safari’s :hover code working, so the popup RSS Feeds in my Blogroll will work in the next version of Safari, too. They’ve squashed an impressive number of CSS bugs. And they have the rudiments of an XML parser going. Who knows! Safari may even support MathML someday!

Posted by distler at 12:48 PM | Permalink | Post a Comment

## January 19, 2003

### Tough Guy

It’s always good to have a niche. If you’re Mel Ulrich, that niche is “operatic baritone with shaved head, who looks great with his shirt off”. I saw him last year as Stanley Kowalski in Streetcar Named Desire, and tonight as Joseph de Rocher in Dead Man Walking. He’s a fine singer and a good actor, but with his particular mix of qualifications, he is … umh … without competition.

Posted by distler at 1:54 AM | Permalink | Post a Comment

## January 18, 2003

### Little Steps

As many of you know, the reason I started this blog was because I thought that weblogs would prove to be an excellent vehicle for “informal” physics discussions. Being able to type equations (preferably in TeX) and have them display inline in your browser is an essential part of that. My first attempt to put MathML in this blog didn’t work so well. Perhaps you think I entirely abandoned the notion. Not at all.

Some of you might also have noticed my dogged obsession with bringing this blog up to full XHTML 1.1 compliance which must, I admit, seem pretty quixotic. Anyway, there was method to my madness. With a little mod_rewrite trick in the .htaccess file of this blog,

    RewriteEngine On
    RewriteBase /~distler/blog/
    RewriteRule ^$ index.html
RewriteCond %{HTTP_USER_AGENT} Gecko
RewriteRule \.html$ - [T=text/xml]
RewriteCond %{HTTP_USER_AGENT} Safari
RewriteRule \.html$ - [T=text/html]
I can serve up XML to Gecko-based browsers which grock MathML (but only when rendering XML files), while sending “plain old” HTML to other browsers. It’s actually the same file; just the MIME-Type is altered, depending on what browser does the asking.
This only works if my blog is 100% valid XHTML. XML parsers puke at the slightest error and refuse to render the page, unlike your basic HTML parser, which will try to render something out of even the most broken HTML.
I think I’ve succeeded. There were surprising bits of invalid (X)HTML hidden away in obscure corners of MovableType, as this thread delineates. I think I’ve caught everything now, but if you’re running a Gecko-based browser and something here returns an XML parser-error, let me know.
Assuming you have the fonts installed, this means that (MathML)
$i\hbar\frac{\partial \psi}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V(x)\psi$
will look like (screenshot)
in a Gecko-based browser.
But, of course, that’s only the first step. In the forthcoming version of MovableType, Ben and Mena have announced the introduction of a Text-Filtering API. I’ve already mentioned some of the benefits, but for present purposes, the biggest payoff is that we can write posts with embedded itex (a dialect of LaTeX) and have MovableType automatically run them through itex2mml to convert them to embedded MathML.
That’s not quite how I created this post. Instead I ran it through the online itex2mml converter. But you get the idea…
Posted by distler at 4:17 PM | Permalink | Followups (2)
## January 17, 2003
### Wormholes
The beginning of the semester leaves me too busy to post much in the way of serious stuff, but let me commend to your attention Aaron Bergman’s summary of a recent paper by Visser et al.
### Science & Politics
A good post by CalPundit on the “debate” over Global Warming. It scares me when leftists argue that Science is just a matter of social consensus. It scares me even more when right-wingers argue the same.
## January 12, 2003
### Verbosity
In syndicating the RSS feeds of my BlogRoll, I discovered that I had alarmingly increased the size of the main page of this blog.
A little examination revealed the cause: Lawrence Lessig, Sébastien Paquet and Zimran Ahmed decided to include the full content of each of their posts in the <description> field of their RSS feed.
Hey, guys! That’s what the (very optional) <content:encoded> field is for. The <description> field is for a short summary or excerpt of your post.
The extra baggage adds 23KB to the size of my main page. Significant, but not a killer. I’d be happier somehow if they published more, uhm, succinct <description>’s. But, if you really want to, you can read the full content of their blogs right here…
Anyway, here’s a screenshot of what I am really striving for. A much more useful version of the ubiquitous BlogRoll, isn’t it?
Update: Come on, Distler, don’t be a doofus! You can’t rely on people to do the right thing. Use a filter instead.
## January 10, 2003
### Popup Feeds
I didn’t like the “RSS Feeds” at the bottom of the Sidebar. Too ugly. So I got rid of them, in favour of something much, much nicer. Move your mouse over the “Links” section in the Sidebar, and you’ll see what I mean.
Or, at least, you will if you are using a Gecko-based browser, like Mozilla. From the sounds of it, KHTML-based browsers like Safari will support this part of the CSS2 Specification real soon now too.
My apologies to those (you know who you are) whose browsers aren’t Standards-Compliant.
P.S.: OK, OK. I suppose this could be done using Javascript. If anyone has a simple (no browser sniffing!) suggestion for how to do this, I’d be happy to entertain it. But it’s hard to beat 10 lines of CSS code styling nested <ul>’s (the latter generated using the mt-rssfeed plugin) for simplicity. Thanks to Eric Meyer for showing the way.
## January 9, 2003
### DMCA
Is there no end to the mischief engendered by this stupid, stupid law?
As Ars Technica reports, Lexmark has gone to court, arguing that third party Toner Cartridges (!) contravene the DMCA:
In a 17-page complaint filed on Dec. 30, 2002, the company claims the Smartek chip mimics the authentication sequence used by Lexmark chips and unlawfully tricks the printer into accepting an aftermarket cartridge. That “circumvents the technological measure that controls access to the Toner Loading Program and the Printer Engine Program,” the complaint says.
A hearing on the matter is scheduled for today. This is dangerous, dangerous territory. The thinking behind this is mad: putting a chip on anything makes it digital media.
… and hence protected by the DMCA. To which I say, “you ain’t seen nothin’ yet!”
Update: Ed Felten has added his comments on the legal issues involved. I think he’s a little too dismissive of the “circumvention” argument. If reverse-engineering a “cryptographic secret handshake” isn’t “circumvention” under the terms of the DMCA, it’s hard to imagine what is (DeCSS anyone?). And I’m not sure I buy his webserver analogy. Isn’t breaking the copy-protection on a piece of software, allowing you to execute that software, a violation of the DMCA? Or do you actually have to be able to see the source code before it’s a DMCA violation?
### Surfing Safari
Apple’s Safari is out, at least in beta. To the surprise of many, including me, it uses the KHTML rendering engine (the one behind the Konqueror browser), rather than Gecko (which powers Mozilla, Netscape7, Chimera, Phoenix,…).
It’s a nifty-looking browser, incredibly lightweight (a 3+ MB download), and fast as heck (I thought Chimera and Mach-O Mozilla were fast!).
Unfortunately, the KHTML engine does have a few CSS bugs. Mark Pilgrim has a review and a tracker for these CSS bugs. A couple affect the rendering of this blog, which is why it currently looks like crap in Safari (and, I suppose, Konqueror, too).
Fortunately, Dave Hyatt’s blog gives every indication that the Safari Team are working fast to squash the bugs.
Methinks I hear a crackling sound as the Browser Wars heat up again.
## January 7, 2003
### Cut, Cut, Cut
There’s a great scene in an old Laurel & Hardy movie, in which Stan and Ollie are hired as gardeners. Their first task is to trim a hedge and, to save time, they decide to start at opposite ends. Predictably, when they meet in the middle, one has trimmed his side 6 inches lower than the other. So they try again, and this time the other side is lower. The scene ends with Stan running over the former hedge with a lawnmower.
I was reminded of this by Winterspeak’s passionate defence of cutting taxes on dividends. First they cut taxes on capital gains, violating the principle of tax-neutrality. Now, to compensate, they propose cutting taxes on dividends (even lower). You see where this is going …
Except the analogy with Laurel & Hardy isn’t perfect. See, in addition to issuing stock, there’s another way corporations can raise money, namely by borrowing. Taxing interest on corporate bonds at a different rate from the tax on dividends or capital gains is every bit as much of a distortionary violation of tax-neutrality.
You haven’t heard the Bush Administration advocate slashing taxes on interest from corporate bonds…yet. Don’t worry, you will. The analogy with Laurel & Hardy was flawed. Perhaps the Three Stooges would be more apt.
Posted by distler at 9:54 AM | Permalink | Followups (1)
### de Sitter Solutions of Gauged Supergravity
I hadn’t noticed when their paper appeared last May, but Trigiante’s talk at a Workshop in Leuven caught my eye.
They’ve found stable de Sitter solutions to gauged N=2 supergravity. The gauging is always by a product group (SO(2,1)×SO(2) or SO(2,1)×SO(3)), where one of the factors is noncompact and the other factor admits a Fayet-Iliopoulos term.
It would be quite interesting to embed these gauged supergravity theories in string theory. For compact gaugings, the gauged supergravity theory turns out to be a “consistent truncation” of some ten-dimensional string background. The gauge group ends up being the isometry group of the internal space.
I’ve never had much of a conceptual handle on the noncompact gaugings of extended supergravity theories, but if one could find these solutions as consistent truncations of some string background, that would be very exciting.
Posted by distler at 12:00 AM | Permalink | Followups (1)
## January 6, 2003
### Only Words…
The blogosphere is all a-titter about the latest arch-conservative, self-described Orthodox Jewish, UCLA undergrad blogger, Ben Shapiro.
I have to say I found something a little puzzling. It wasn’t the writing which, as expected, was unremarkable (sort of Rush Limbaugh meets Doogie Howser, with a touch of Meir Kahane thrown in). No, what I was puzzled by was, given his prominently-proclaimed commitment to Orthodox Judaism,
1. Where’s the yarmulke?
2. Does he not evince even the slightest respect for the prohibition on lashon harah?
Posted by distler at 2:53 PM | Permalink | Followups (9)
## January 4, 2003
### He Should Be in Pictures!
(Photo courtesy of Tom Tomorrow.)
My hat’s off to Donald Rumsfeld!
The Washington Post and the Manchester Guardian are reporting that, as Reagan’s “Special Envoy to Iraq” during the Iran-Iraq War, Donald Rumsfeld helped pave the way for Saddam to acquire materials for chemical and biological weapons, aware that these weapons were being used “almost daily” in the war with Iran .
In his position, I’d have a hard time keeping a straight face while blustering about Saddam’s possession of chemical and biological weapons. How could I ever maintain the proper tone of righteous indignation, knowing that I had been instrumental in supplying him with those weapons in the first place?
But Rummy … Rummy pulls it off with aplomb! Were it not for these Freedom of Information Act-induced articles, I would never have suspected a thing. He’s that good!
Note to Hollywood: Give the man a role in the next John LeCarré thriller. He looks the part and man can he act!
### More on MT Text-Filtering
I don’t think I explained, in my previous post, why Ben and Mena’s announcement is so exciting. The point isn’t that it’s boring to type HTML tags. The point is maintainability.
For example, consider my recent upgrade of this blog from XHTML 1.0-Transitional to XHTML 1.1. Among other things, that meant that <blockquote>Some Text</blockquote> was no longer valid XHTML. Instead, one needs to write (say) <blockquote><p>Some Text</p></blockquote>.
In the forthcoming text-filtering architecture, I would not have to muck with my blog entries at all. I would just have to edit some filter template somewhere to insert the extra <p>...</p> in the output. Under the current system, I had to go in and edit my blog entries by hand. Luckily, since I haven’t been blogging for very long, there was not that much to do. But if I had several years worth of content to modify …
Anyway, the contents of this blog may, over the years, grow intellectually-stale. But it will always be maintainable with the right text-filters.
## January 3, 2003
### Text Formatting in Movable Type
Ben and Mena Trott have announced that the next version of MT will have a pluggable text-filtering architecture.
Currently, the text filtering options are: 1) convert line breaks to <br /> [and double line breaks to </p><p>] or 2) do nothing.
This sorta sucks, because if you want to include structured text (eg, <blockquote>’s, etc.), you need to turn off the convert line breaks option and insert all of the HTML tags yourself. The whole idea of a CMS like MovableType is to separate the “content” from the structural HTML markup (just as CSS lets you separate the structural HTML markup from the visual formatting).
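To make the payoff of pluggable filters concrete, here is a toy Python sketch (my own illustration, not MovableType’s actual code; the function name is invented) of what a “convert line breaks” style filter does — turning blank-line-separated text into paragraphs and single newlines into <br />:

```python
import re

def convert_line_breaks(text):
    # Toy stand-in for a "convert line breaks" text filter:
    # blank lines separate paragraphs, single newlines become <br />.
    paragraphs = re.split(r"\n\s*\n", text.strip())
    return "\n".join(
        "<p>" + p.replace("\n", "<br />\n") + "</p>" for p in paragraphs
    )

print(convert_line_breaks("First line\nsecond line\n\nA new paragraph"))
```

A pluggable architecture would let a different filter (say, one that also ran itex through itex2mml) be swapped in at this point without touching the stored entries.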
The screen shot of the web entry interface looks cool. I hope this is supported in the XML-RPC API, so that blogging tools like Kung-Log can access the new features.
|
2017-01-18 07:58:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4296244978904724, "perplexity": 3470.629420896139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/11092-integration.html
|
# Math Help - integration
1. ## integration
need a little help with this problem:
$\int_{0}^{\pi/2} \dfrac{\sin\theta}{1+\cos^{2}\theta} d\theta$
2. Originally Posted by viet
need a little help with this problem:
$\int_{0}^{\pi/2} \dfrac{\sin\theta}{1+\cos^{2}\theta} d\theta$
The evaluation at the limits is simple once you find the anti-derivative. Thus, I will find the anti-derivative and let you do the evaluation.
We have,
$\int \frac{\sin x}{1+\cos^2 x} dx$
Let $u=\cos x$ then $u'=-\sin x$
Substitution rule,
$- \int \frac{1}{1+u^2} du$
This, is the arctangent integral,
$-\tan^{-1}u+C=-\tan^{-1} (\cos x)+C$
3. Originally Posted by viet
need a little help with this problem:
$\int_{0}^{\pi/2} \dfrac{\sin\theta}{1+\cos^{2}\theta} d\theta$
This will help:
int(1/(1 + x^2)) = arctan(x)
Let u = cos(x)
Then,
du = -sin(x) dx
-du = sin(x) dx
Factor out the -1:
-int(1/(1 + u^2))du
= -arctan(u) + C
Thus, substitute in for u:
-arctan(cos(x)) + C, but you're given the limits. And thus,
-arctan(cos(Pi/2)) - [-arctan(cos(0))]
0 - (-Pi/4)
= Pi/4
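As a quick sanity check (added here, not part of the original thread), the antiderivative and the value $\pi/4$ can be confirmed with SymPy, assuming it is installed:

```python
import sympy as sp

theta = sp.symbols('theta')
integrand = sp.sin(theta) / (1 + sp.cos(theta)**2)

print(sp.integrate(integrand, theta))                  # -atan(cos(theta))
print(sp.integrate(integrand, (theta, 0, sp.pi / 2)))  # pi/4
```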
|
2014-10-26 02:01:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7858783602714539, "perplexity": 4696.454860109233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119653672.23/warc/CC-MAIN-20141024030053-00203-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://wiki.math.ubc.ca/mathbook/M102/Final_exam_information
|
# Final exam information
## Contents
### Final exam date and time
The final exam will be held on Dec. 11 from 3:30-6 pm.
### What will the exam look like?
#### Content
• The material covered by the final exam includes material from Chapters 1-12 of LK notes and the corresponding chapters of PD notes. See the Course calendar for detailed readings.
• The best way to study is to do lots of problems. Questions similar to both the WeBWorK assignments and the OSH will appear on the final exam. For additional problems, look at the back of each chapter of LK notes and the review problem set on WeBWorK.
• The difficulty of the exam will be similar to that of the midterms although you will probably find that you are not as pressed for time.
#### Format
The final exam will consist of
• a number of multiple choice questions,
• some short answer problems (show work, enter answer in a box) and
• a few longer problems more like OSH.
Exams from previous years are available on the Math Department website. Keep in mind that the material varies from year to year so some problems appearing on old exams might not be relevant this year. Similarly, some topics covered this year might not appear on any of the old exams.
### Final exam room assignments
| Section | Building and room # |
|---|---|
| 101 | OSBO A |
| 102 | OSBO A |
| 103 | OSBO A |
| 104 | HEBB 100 |
| 105 | HEBB 100 |
| 106 | OSBO A |
### Exam formulae list
The following tables contain formulae that will be provided on the final exam should they be required.
Linear regression
Without intercept $y=ax$ with $a=\frac{\sum_{i=0}^nx_iy_i}{\sum_{i=0}^nx_i^2}$
With intercept $y=ax+b$ with
$b=\bar{y}-a\bar{x}$ and $a=\frac{P_{avg}-\bar{x}\bar{y}}{x_{avg}^2-\bar{x}^2}$,
where $\bar{x}=\frac{1}{n}\sum_{i=0}^n x_i$, $\bar{y}=\frac{1}{n}\sum_{i=0}^n y_i$,
$P_{avg}=\frac{1}{n}\sum_{i=0}^n x_iy_i$, $x_{avg}^2=\frac{1}{n}\sum_{i=0}^n x_i^2$.
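For illustration (not part of the formula sheet), a minimal NumPy sketch of the with-intercept fit computed from the averaged quantities defined above:

```python
import numpy as np

def fit_line(x, y):
    # Least-squares fit y = a*x + b using the averaged quantities above.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_bar, y_bar = x.mean(), y.mean()
    P_avg = (x * y).mean()      # average of x_i * y_i
    x2_avg = (x * x).mean()     # average of x_i^2
    a = (P_avg - x_bar * y_bar) / (x2_avg - x_bar ** 2)
    b = y_bar - a * x_bar
    return a, b

print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0) for y = 2x + 1
```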
Trig identities
$a^2=b^2+c^2-2bc\cos(\theta)$
$\sin(A+B) = \sin(A)\cos(B) + \cos(A)\sin(B)$
$\cos(A+B) = \cos(A)\cos(B) - \sin(A)\sin(B)$
Special triangles trig values
| $\theta$ | $\sin(\theta)$ | $\cos(\theta)$ |
|---|---|---|
| $\dfrac{\pi}{6}$ | $\dfrac{1}{2}$ | $\dfrac{\sqrt{3}}{2}$ |
| $\dfrac{\pi}{4}$ | $\dfrac{\sqrt{2}}{2}$ | $\dfrac{\sqrt{2}}{2}$ |
| $\dfrac{\pi}{3}$ | $\dfrac{\sqrt{3}}{2}$ | $\dfrac{1}{2}$ |
Geometric formulae (volume, area)
| Quantity | Formula |
|---|---|
| Volume of sphere | $\dfrac{4}{3} \pi r^3$ |
| Surface area of sphere | $4\pi r^2$ |
| Volume of cone | $\dfrac{1}{3} \pi r^2h$ |
| Surface area of cone | $\pi r s$ |
|
2014-04-25 06:23:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44530951976776123, "perplexity": 2675.9284915647527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://electronics.stackexchange.com/questions/97382/why-superposition-theorem-fails-here
|
# Why superposition theorem fails here?
I have a simple circuit consisting of 2 ideal voltages sources (each 5V) parallel to a resistor of 5 ohms. The current along the resistor is 1A, right? But by applying the superposition principle (i.e. considering individual sources), I am not getting this result. Am I doing some blunder?
Ideal circuits with two voltage sources in parallel lead to contradiction, unless they are equal and can be simply replaced with a single one. Note that potentials $\varphi_1$ and $\varphi_2$ in your circuit must be equal, since there is no impedance of any kind between them, nor do ideal voltage sources have any internal resistance:
In your case, luckily, these sources produce the same voltage, so the simplest thing to do is to simply remove one of them from the circuit. If you had two ideal sources of a different voltage in parallel, that would lead to contradiction.
In a real circuit, connecting two sources in parallel would lead to a circuit with a very small, but still non-zero resistance between them, which would result in one of the sources (the one with a slightly lower voltage) actually sinking current, but the current through the 5 ohm resistor would only depend on the voltage of the right source.
If you want to put some actual numbers, you can try something like this:
Note that if the sources are again ideal and have completely equal voltages, there will still be no current flowing through the tiny resistance between them, but you should be able to apply the superposition principle.
For a circuit like this, the mesh current method should provide the simplest solution and show that the current through the resistor only depends on the right source.
Why superposition theorem fails here?
To properly apply superposition, the circuit with all but one source zeroed must be consistent, i.e., a solution must exist. This isn't the case here. If you zero either voltage source, KVL gives:
$$5V = 0V$$
But, it is also important to realize that, no matter how many additional sources and circuit elements are attached in parallel with the resistor terminals, as long as the circuit is consistent, the resistor only 'sees' a 5V voltage source; the voltage source fixes the voltage across the resistor (and thus the current through it).
The point being that the left-most source can be removed from the circuit and, from the perspective of the resistor, there is no change.
You must solve this using non-ideal voltage sources. Here is the generalized solution to the problem using two non-ideal voltage sources of the same voltage V and with internal resistances r1 and r2 in parallel across a load R (see image (a) below). We solve for the partial current from the first voltage source, I1R (see image (b)), and for the partial current from the second voltage source, I2R (see image (c)), and add the partial currents together to get the total current, IR, through the the load.
Note that, as Alfred Centauri pointed out, this works because this method does not violate KVL.
Let's take a closer look at the solution we arrived at. We calculated that

$$I_R = I_{1R} + I_{2R} = \frac{V r_2}{r_1 r_2 + R(r_1 + r_2)} + \frac{V r_1}{r_1 r_2 + R(r_1 + r_2)} = \frac{V}{R + \dfrac{r_1 r_2}{r_1 + r_2}}.$$

Now, since r1 and r2 are positive, and noting that the parallel resistance $\dfrac{r_1 r_2}{r_1 + r_2}$ goes to zero as each of the resistors goes to zero, we can conclude that the current through the resistor approaches

$$I_R \to \frac{V}{R}$$

as the voltage sources approach being ideal.
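A small numeric sketch (my own check, with illustrative resistances) of the formulas above, showing the total current tending to $V/R = 1\,\mathrm{A}$ as the internal resistances shrink:

```python
V, R = 5.0, 5.0  # volts, ohms

for r1, r2 in [(0.1, 0.2), (1e-3, 2e-3), (1e-6, 1e-6)]:
    denom = r1 * r2 + R * (r1 + r2)
    i1 = V * r2 / denom  # partial current through R from the first source
    i2 = V * r1 / denom  # partial current through R from the second source
    print(r1, r2, i1 + i2)  # approaches 1.0 A
```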
In this circuit you can't apply the superposition theorem. The voltage across the resistor terminals is just 5V, even though you've connected 2 or n supplies of the same 5V potential. Also, a circuit consisting of two or more ideal sources (i.e. with zero internal resistance) in parallel with each other is unstable, since the voltage between the two terminals must be the same.

The superposition theorem isn't the ultimate tool for calculating the current through a circuit. It is just a standard procedure which follows the basic logic to find the response.
|
2020-02-27 18:15:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.774061381816864, "perplexity": 386.6875984279363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00129.warc.gz"}
|
https://chat.stackoverflow.com/transcript/68414/2022/1/16
|
6:59 AM
1
I am trying to create a basic money class that fits into a small console banking application I am making. Money.h #pragma once #include <string> class Money { private: long pounds; int pence; public: Money(); // Overloaded constructors explicit Money(long pounds); Money(...
4 hours later…
11:28 AM
6
I'm experimenting with Raku and trying to figure out how I might write a program with subcommands. When I run, ./this_program blah: #! /usr/bin/env raku use v6; sub MAIN($cmd, *@subcommands) {$cmd.EVAL; } sub blah() { say 'running blah'; }; I get running blah output. But this is as far as...
2 hours later…
12:58 PM
2
As suggested in the answer to another MO question, it seems possible to construct the E-M category of a monad $T:\mathcal{C}\to\mathcal{C}$ as an inserter followed by two equifiers as follows (I am having an issue at step 2, please see the diagram there and skip to the end if you would just like ...
9 hours later…
10:10 PM
6
I know Python // rounds towards negative infinity and in C++ / is truncating, rounding towards 0. And here's what I know so far: -12 / 10 = -1 - 2 // c++ -12 // 10 = -2 + 8 # python 12 / -10 = -1 + 2 // c++ 12 // -10 = -2 - 8 # python 12 / 10 = 1 + 2 //both 12 // 10 = 1 + 2 -12 / -10 ...
2 hours later…
11:40 PM
3
In the " A Primer on Mapping Class Groups Benson Farb and Dan Margalit" We have : Proposition 1.10 Let $\alpha$ and $\beta$ be two essential simple closed curves in a surface $S$. Then $\alpha$ is isotopic to $\beta$ if and only if $\alpha$ is homotopic to $\beta$. Proof. One direction is vacuous...
|
2022-05-26 11:56:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35110244154930115, "perplexity": 2000.6298079661067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00025.warc.gz"}
|
https://www.physicsforums.com/threads/work-conceptual.472816/
|
# Work (conceptual)
I have found that I only need to brush up on my conceptual grasp of work and electrical applications. I have found that I am getting negative answers when indeed the answer is positive. My question to you is, if I am following q(Vb-Va)=-W, I am assuming this is the work done by the field, and the work done by an external force (or mover) would be Wmover=-Wfield?
Andrew Mason
Homework Helper
I have found that I only need to brush up on my conceptual grasp of work and electrical applications. I have found that I am getting negative answers when indeed the answer is positive. My question to you is, if I am following q(Vb-Va)=-W, I am assuming this is the work done by the field, and the work done by an external force (or mover) would be Wmover=-Wfield?
Whether W is positive or negative depends upon convention for field direction and what you mean by W. The convention is to have the direction of the electric field in the direction which a positive charge will naturally move. You can blame Benjamin Franklin for that.
If one defines the work required to move the charge as W, then W will be positive if the charge is positive and it moves against the direction of the electric field (ie. in the direction of increasing positive potential). If the charge is negative and the charge moves in the direction of the electric field (in the direction of decreasing positive potential which is increasingly negative potential), W also will be positive.
It is opposite (ie. W<0) if the directions of motion are reversed: If the charge is positive, then negative work is required to move the charge in the direction of the electric field (positive work is done to the charge by the field). If the charge is negative and it moves against the direction of the field, W is negative (work done on the charge).
AM
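As a concrete numerical illustration (added here; the numbers are not from the thread): take $q = +2\,\mathrm{C}$ moved from $V_a = 1\,\mathrm{V}$ to $V_b = 5\,\mathrm{V}$. With the convention above,

$$W_{field} = -q(V_b - V_a) = -(2\,\mathrm{C})(4\,\mathrm{V}) = -8\,\mathrm{J}, \qquad W_{mover} = -W_{field} = +8\,\mathrm{J},$$

i.e. the field does negative work on a positive charge moved toward higher potential, while the external mover does positive work.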
|
2021-04-20 05:19:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8143672943115234, "perplexity": 190.82030267562087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00247.warc.gz"}
|
http://mathoverflow.net/questions/5885/does-the-non-commutative-chern-class-depend-on-the-choice-of-connection
|
# Does the non-commutative Chern class depend on the choice of connection?
In classical geometry the calculation of the Chern classes of a vector bundle using a connection is independent of the choice of connection. Does any such result hold for projective modules in non-commutative geometry?
-
You can see the construction in detail, for example, in Max Karoubi's ‘Homologie cyclique et $K$-théorie’ (Astérisque 149, SMF; you can get this from his web page), where he constructs the Chern classes $K_0(A)\to H(A)$ using connections, much à la Chern–Weil. (Here $H(A)$ is the non-commutative de Rham theory, or one of the various cyclic homologies of $A$.) He also constructs higher Chern classes on the higher algebraic $K$-theory by a similar procedure. The book by Loday on cyclic homology also covers this.
There are maps connecting non commutative de Rham cohomology and cyclic homology (in fact, there is an isomorphism between $H_{\mathrm{dR}}(A)$ and the kernel of the map $B:HC_\bullet(A)\to HH_{\bullet+1}(A)$ appearing in the Connes long exact sequence---here $HH$ is Hochschild homology), and constructions of the Chern character from $K_0(A)$ to both $H_{\mathrm{dR}}(A)$ and $HC(A)$ which are compatible with those maps. A non trivial part of Max's Asterisque is spent in checking lots of such compatibilities. – Mariano Suárez-Alvarez Nov 18 '09 at 1:08
Oh, when I say non commutative de Rham theory I mean something not really involving 'differential calculi' (or rather involving only the universal one): take $\Omega(A)$ to be the kernel of the multiplication map $A\otimes A\to A$, which is an $A$-bimodule, and let $\Omega^\bullet(A)$ be the tensor algebra over $A$ of $\Omega(A)$, which is a differential graded algebra. Now let $\Omega^\bullet(A)_{\mathrm{ab}}=\Omega^\bullet(A)/[\Omega^\bullet(A),\Omega^\bullet(A)]$ be the 'abelianization', which is a complex, whose cohomology is the non commutative de Rham cohomology I had in mind. – Mariano Suárez-Alvarez Nov 18 '09 at 1:15
|
2014-10-26 07:07:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461511969566345, "perplexity": 319.78553805487036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119658961.52/warc/CC-MAIN-20141024030058-00025-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://eprints.iisc.ernet.in/15244/
|
# Chemical synthesis of $\alpha$-cobalt hydroxide
Rajamathi, Michael and Kamath, Vishnu P and Seshadri, Ram (2000) Chemical synthesis of $\alpha$-cobalt hydroxide. In: Materials Research Bulletin, 35 (2). pp. 271-278.
## Abstract
Precipitation reactions using ammonia yield a novel cobalt hydroxide phase that is structurally and compositionally similar to $\alpha$-nickel hydroxide. The use of other synthetic methods yields the well-known $\beta$-Co(OH)$_2$. The slab composition, mode of anion inclusion, and thermal behavior of the hydroxides obtained by ammonia precipitation are similar to those of $\alpha$-nickel hydroxide; however, the materials are poorly ordered. A DIFFaX simulation of the powder X-ray diffraction patterns offers the best visual match with the observed patterns for a 50% stacking disorder and a disc radius between 100 and 1000 Å.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier.
Keywords: Layered compounds; Chemical synthesis.
Department/Centre: Division of Chemical Sciences > Solid State & Structural Chemistry Unit
Date Deposited: 25 Jul 2008
Last Modified: 19 Sep 2010 04:48
URI: http://eprints.iisc.ernet.in/id/eprint/15244
|
2015-03-30 10:00:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7140452861785889, "perplexity": 9685.854668532189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299236.74/warc/CC-MAIN-20150323172139-00015-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-5-section-5-1-verifying-trigonometric-identities-exercise-set-page-658/3
|
## Precalculus (6th Edition) Blitzer
$\tan (-x) \cos x=-\sin x$
We need to prove the identity $\tan (-x) \cos x=-\sin x$. Since tangent is an odd function, $\tan(-x)=-\tan x$, and $\tan x=\dfrac{\sin x}{\cos x}$. Then, starting from the left-hand side, we have $\tan (-x) \cos x=-\tan x \cos x = - \dfrac{\sin x}{\cos x} \times \cos x =-\sin x,$ which is the right-hand side. Thus, the identity has been proved.
|
2021-06-18 03:27:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358173370361328, "perplexity": 179.81440676470362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00062.warc.gz"}
|
https://derandomization.wordpress.com/2012/05/21/lecture-9/
|
Posted by: Gil Cohen | May 21, 2012
## Lecture 9
Today we proved Cheeger’s inequality, based on a proof by Luca Trevisan (here and here). Quite recently two papers were published that study what we can say when the $k$ largest eigenvalues of the Laplacian matrix are close to $1$. This generalizes the harder direction in Cheeger’s inequality, which is the question for $k=2$. The two papers are Multi-way spectral partitioning and higher-order Cheeger inequalities by James R. Lee, Shayan Oveis Gharan and Luca Trevisan, and Algorithmic Extensions of Cheeger’s Inequality to Higher Eigenvalues and Partitions by Anand Louis, Prasad Raghavendra, Prasad Tetali and Santosh Vempala. Both of these beautiful papers will be in the presentations list.
We started to discuss error reduction using expander graphs. In the next lecture we will follow the proof in Arora-Barak book (Theorem 21.12), and give an explicit construction of expanders. For that we will follow Chapter 9 of the book Expander Graphs and their Applications by Shlomo Hoory, Nathan Linial and Avi Wigderson.
|
2021-07-26 16:50:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6755390167236328, "perplexity": 1225.9117186591138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00042.warc.gz"}
|
http://www.solidot.org/translate/?nid=80071
|
## Instability of solitons - revisited, I: the critical generalized KdV equation. (arXiv:1711.03187v1 [math.AP])
We revisit the phenomenon of instability of solitons in the generalized Korteweg-de Vries equation, $u_t + \partial_x(u_{xx} + u^p) = 0$. It is known that solitons are unstable for nonlinearities $p \geq 5$, with the critical power $p=5$ being the most challenging case to handle. The critical case was proved by Martel-Merle in [11], where the authors crucially relied on the pointwise decay estimates of the linear KdV flow. In this paper, we show simplified approaches to obtain the instability of solitons via truncation and monotonicity, which can be also useful for other KdV-type equations.
|
2017-11-19 00:49:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8187919855117798, "perplexity": 620.4496557736245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805242.68/warc/CC-MAIN-20171119004302-20171119024302-00561.warc.gz"}
|
https://planetmath.org/genericmanifold
|
# generic manifold
###### Definition.
Let $M\subset{\mathbb{C}}^{N}$ be a real submanifold of real dimension $n$. We say that $M$ is a generic manifold if for every $x\in M$ we have
$T_{x}(M)+JT_{x}(M)=T_{x}({\mathbb{C}}^{N}),$
where $J$ denotes the operator of multiplication by the imaginary unit in $T_{x}({\mathbb{C}}^{N})$. That is every vector in $T_{x}({\mathbb{C}}^{N})$ can be written as $X+JY$ where $X,Y\in T_{x}(M)$.
For more details about the tangent spaces and the $J$ operator see the entry on CR manifolds (http://planetmath.org/CRSubmanifold). In fact every generic manifold is also a CR manifold (the converse is not true however). A basic and important result about generic submanifolds is the following.
###### Theorem.
Let $M\subset{\mathbb{C}}^{N}$ be a generic submanifold and let $f\colon U\subset{\mathbb{C}}^{N}\to{\mathbb{C}}$ be a holomorphic function where $U$ is a connected open set such that $M\cap U\not=\emptyset$, and further suppose that $f(M\cap U)=\{0\}$, that is $f$ is zero when restricted to $M$. Then in fact $f\equiv 0$ on $U$.
For example in ${\mathbb{C}}^{1}$ the real line is a generic submanifold, and any holomorphic function which is zero on the real line is zero everywhere (if the domain of the function is connected and intersects the real line of course). There are of course much stronger uniqueness results for the complex plane so the above is mostly useful for higher dimensions.
|
2021-01-19 05:47:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 21, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315429925918579, "perplexity": 326.3616900392975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00638.warc.gz"}
|
https://itectec.com/superuser/does-a-zip-file-appear-larger-than-the-source-file-especially-when-it-is-text/
|
# Why does a zip file appear larger than the source file especially when it is text
Tags: 7-zip, compression, zip
I have a text file that is 19 bytes in size and having compressed the file using zip and 7zip, it appears to be larger. I had a read of the question on Why is a 7zipped file larger than the raw file? as well as Why doesn't ZIP Compression compress anything? but considering the file is not already compressed I would have expected further compression. Attached is a screenshot.
EDIT0
I took the example further by creating a file that contained random data as follows dd if=/dev/urandom of=sample.log bs=1G count=1 and attempted to compress the file using both zip and 7zip however there were no compression gains. Why is that?
As @kinokijuf said, there is a file header. But to expand upon that there are a few other things to understand about file compression.
The zip header contains all the necessary info for identifying the file type (the magic number), zip version and finally a listing of all the files included in the archive.
Your file probably wasn't compressed anyways. If you run unzip -l example.zip you will probably see that the file size is unchanged. 19 bytes would probably generate more overhead than would be saved if it were compressible at all by DEFLATE (the main compression method used by zip).
In other cases, PNG images for example, they are already compressed so zip will just store them. DEFLATE won't bother compressing anything already compressed.
If on the other hand you had a lot of text files, and their size was more than a few kilobytes each, you would get great savings by putting them all into a single zip archive.
You will get your best savings when compressing very regular, formatted data, like a text file containing a SQL dump. For example, I once had a dump of a small SQL database at around 13MB. I ran zip -9 dump.zip dump.sql on it and ended up with around 1MB afterwards.
Another factor is your compression level. Many archivers by default will only compress at mid-level, going for speed over reduction. When compressing with zip, try the -9 flag for maximum compression (I think the 3.x manual says that compression levels are only supported by DEFLATE at this time).
### TL;DR
The overhead for the archive exceeded any gains you may have gotten for compressing the file. Try putting larger text files in there and see what you get. Use the -v flag when zipping to see your savings as you go.
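A short Python sketch (mine, not from the original answer) that reproduces the effect with the standard-library zipfile module (Python 3.7+ for the compresslevel argument) — the container overhead dominates for a 19-byte file:

```python
import os
import zipfile

# A 19-byte text file, like the one in the question.
with open("sample.txt", "w") as f:
    f.write("0123456789012345678")

# Even at maximum DEFLATE compression, the zip local file header,
# central directory and end-of-central-directory record add far more
# than compression could ever save on 19 bytes.
with zipfile.ZipFile("sample.zip", "w",
                     compression=zipfile.ZIP_DEFLATED,
                     compresslevel=9) as z:
    z.write("sample.txt")

print(os.path.getsize("sample.txt"))  # 19
print(os.path.getsize("sample.zip"))  # typically well over 100 bytes
```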
|
2022-05-17 18:08:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37435382604599, "perplexity": 1969.1441888614156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00098.warc.gz"}
|
https://plainmath.net/70512/use-squeeze-theorem-find-the-following-l
|
# Use the squeeze theorem to find the following limit as $(x,y)\to(0,0)$

2022-05-02

Use the squeeze theorem to find $\lim_{(x,y)\to(0,0)} \dfrac{y^{2}(1-\cos 2x)}{x^{4}+y^{2}}$.
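A worked sketch (my own — the page itself shows no solution): since $1-\cos 2x = 2\sin^{2}x \le 2x^{2}$ and, by AM–GM, $x^{4}+y^{2}\ge 2x^{2}|y|$, for $x\neq 0$ and $y\neq 0$ we have

$$0 \le \frac{y^{2}(1-\cos 2x)}{x^{4}+y^{2}} \le \frac{2x^{2}y^{2}}{x^{4}+y^{2}} \le \frac{2x^{2}y^{2}}{2x^{2}|y|} = |y|,$$

while the expression equals $0$ whenever $x=0$ or $y=0$. Since $|y|\to 0$ as $(x,y)\to(0,0)$, the squeeze theorem gives a limit of $0$.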
|
2022-05-25 03:05:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.879449188709259, "perplexity": 3149.972985708554}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00690.warc.gz"}
|
https://www.physicsforums.com/threads/fractional-polynomial-addition.957982/
|
# Fractional polynomial addition
## Homework Statement
Determine whether there exist $A$ and $B$ such that:
$$\frac{1}{3x^2-5x-2} = \frac{A}{3x+1} + \frac{B}{x-2}$$
## Homework Equations

None
## The Attempt at a Solution
First I divided the polynomial $3x^2-5x-2$ by $3x+1$ and got $x-2$ as a result without a remainder, which I interpret as meaning that $3x^2-5x-2$ is the lowest common denominator of $\frac{A}{3x+1}$ and $\frac{B}{x-2}$. Therefore what I'm looking for is:
$$\frac{A(x-2)}{(3x+1)(x-2)} + \frac{B(3x+1)}{(x-2)(3x+1)}$$
I am unsure as to how to proceed from here. Logically, it seems that we're looking for an $A$ and $B$ such that $A(x-2) + B(3x+1) = 1$, which results in $A = \frac{1-B(3x+1)}{x-2}$. However, I'm wondering if this is correct and/or if there's a much more obvious way to find values for A and B?
Thank you.
## Answers and Replies
andrewkirk
it seems that we're looking for an $A$ and $B$ such that $A(x-2) + B(3x+1) = 1$
That's correct, but note that that equals sign means that the equation must hold for all values of x, which can only happen if the coefficient of x on the LHS is zero. Similarly the constant term (the part that isn't multiplied by x) on the LHS must be 1. Those two requirements give you two equations, which you can solve to find the two unknown parameters A and B.
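Carrying this through (a sketch of the final step, added here — the thread stops at setting up the conditions): matching coefficients in $A(x-2) + B(3x+1) = 1$ gives

$$A + 3B = 0, \qquad -2A + B = 1,$$

so $A = -\tfrac{3}{7}$ and $B = \tfrac{1}{7}$. As a check, $-\tfrac{3}{7}(x-2) + \tfrac{1}{7}(3x+1) = \tfrac{-3x+6+3x+1}{7} = 1$, so such $A$ and $B$ do exist.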
|
2020-03-31 10:44:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7493561506271362, "perplexity": 153.49993145397178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00430.warc.gz"}
|
http://en.wikipedia.org/wiki/Mean_absolute_percentage_error
|
# Mean absolute percentage error
The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of accuracy of a method for constructing fitted time series values in statistics, specifically in trend estimation. It usually expresses accuracy as a percentage, and is defined by the formula:
$\mbox{M} = \frac{1}{n}\sum_{t=1}^n \left|\frac{A_t-F_t}{A_t}\right|,$
where $A_t$ is the actual value and $F_t$ is the forecast value.

The difference between $A_t$ and $F_t$ is divided by the actual value $A_t$ again. The absolute value in this calculation is summed for every fitted or forecasted point in time and divided again by the number of fitted points $n$. Multiplying by 100 makes it a percentage error.
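For illustration (not from the article), a minimal Python implementation of the formula, returning the result as a percentage:

```python
import numpy as np

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    # Assumes no actual value is zero (the formula divides by A_t).
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(mape([100, 200, 300], [110, 190, 330]))  # ~8.33
```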
Although the concept of MAPE sounds very simple and convincing, it has two major drawbacks in practical application:[citation needed]
• If there are zero values (which sometimes happens for example in demand series) there will be a division by zero
• When having a perfect fit, MAPE is zero. But in regard to its upper level the MAPE has no restriction.
When calculating the average MAPE for a number of time series there might be a problem: a few of the series that have a very high MAPE might distort a comparison between the average MAPE of time series fitted with one method compared to the average MAPE when using another method. In order to avoid this problem other measures have been defined, for example the sMAPE (symmetrical MAPE), weighted absolute percentage error (WAPE), real aggregated percentage error (RAPE),or a relative measure of accuracy (ROMA).[citation needed]
## Alternative MAPE definitions
Problems can occur when calculating the MAPE value with a series of small denominators. A singularity problem of the form 'one divided by zero' and/or the creation of very large changes in the Absolute Percentage Error, caused by a small deviation in error, can occur.
The difference with the original formula is that each Actual Value (At) of the series is replaced by the average Actual Value (Āt) of that series. Hence, the distortions are smoothed out. This alternative is still being used for measuring the performance of models that forecast spot electricity prices.[1]
|
2014-12-18 22:02:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566713929176331, "perplexity": 831.8205718252318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767878.79/warc/CC-MAIN-20141217075247-00058-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://walter.bislins.ch/bloge/index.asp?page=Determining+the+Shape+of+the+Earth+with+Zenith+Angle+Measurements
|
# Determining the Shape of the Earth with Zenith Angle Measurements
Wednesday, February 12, 2020 - 03:19 | Author: wabis | Topics: FlatEarth, Mathematics, Science, Calculator
Measuring the shape of the earth using a theodolite without assuming a globe is not trivial, because atmospheric refraction introduces measurement errors which change the appearance of the earth. I describe a method of how the shape of the earth can be determined taking refraction into account.
## Problem
If we assume that light travels in straight lines, we could easily recognize the real shape of the earth and calculate its size by using a theodolite to measure the drop of a target or the horizon. On a flat earth the drop is zero; on the globe it is about $d^2 / (2R)$, where $d$ = distance to the target and $R$ = radius of the earth.
But we have an atmosphere which refracts light up or down. What a theodolite sees is not the real geometry of the earth, but an up or down refracted image. Whatever the shape of the earth is, we will always see a distorted image of the earth due to refraction. So how can we measure how much curvature is due to the shape of the earth and how much is due to refraction?
If we can measure refraction somehow, we can correct the apparent curvature and get the real geometry of the earth.
Note: there are devices that can measure refraction directly, e.g. by using 2 lasers of different wavelengths. So we know the earth is a globe not least from such measurements. [1]
## Method
Refraction depends on the atmospheric conditions between observer and target. We can approximate the light ray between observer and target by an arc with a radius of curvature r.
We can use the method of simultaneous reciprocal zenith angle measurements, using 2 theodolites, to measure the apparent drop at which each theodolite sees the other. The drop is measured as the angle from the vertical to the target; this angle is called the Zenith Angle. The sum of the measured zenith angles depends on the shape of the earth and on the radius of curvature of the light ray. So if we know the shape and size of the earth, we can calculate the refraction from the zenith angle measurements.
Now we pretend not to know the shape of the earth. But we can calculate the curvature of the light ray for both models anyway and then use a physical model of the atmosphere to calculate the atmospheric parameters that result in this curvature. Then we can measure the real parameters like the temperature, pressure and temperature gradient and show for which earth model the real values can match the calculations.
If the real conditions can't produce the curvature of the light that is calculated for one or the other earth model, then that model is false.
So let's see how this works.
## Calculating the Curvature of a Light Ray
The curvature of light due to atmospheric refraction is very small. The radius of curvature is typically about 40,000 km. Because we often want the radius of curvature of light compared to the radius of the earth, it is practical to introduce a so called refraction coefficient k:
(1) $k = \kappa \cdot R = \frac{R}{r}$ (per definition)

where

- $k$ = refraction coefficient (e.g. 0.17 for standard refraction)
- $\kappa = 1/r$ = curvature of light ray
- $R$ = 6371 km = mean radius of the earth
- $r$ = radius of curvature of a light ray
Although a flat earth has no curvature, the refraction coefficient can still be used in refraction calculations. We can always get the radius of curvature of the light from k as follows: r = 6371 km / k.
Because refraction itself does not depend on the shape of the earth, we can derive all calculations without assuming the shape of the earth.
We first need the connection between the curvature of a light ray and the corresponding refraction angle. Then we can derive equations for the refraction angle from the zenith angles for the globe and flat earth models. Let's start:
## Light Ray Curvature and Refraction Angle
The light ray curvature is not dependent on the earth model and can be calculated from the refraction angle as follows.
The magenta line is the direct line of sight from the observer to a target. The orange arc is the corresponding refracted light ray with radius of curvature r. The refraction angle is ρ (greek r, spelled rho) and the distance to the target is d.
Although the sketch shows a globe model, the light ray geometry has nothing to do with the shape of the earth. The radius of the earth R does not appear in the equations below.
We have a right angle triangle r, d/2, b with an angle ρ. We can use trigonometry to calculate the sine of ρ:
(2) $\sin(\rho) = \frac{d/2}{r}$
The curvature κ of the bent light rays is very small, so the radius of curvature r = 1 / κ is very big. We have very small refraction angles ρ if d is much smaller than r, which is the case in surveying. For small angles ρ we can approximate sin(ρ) ≈ ρ:
(3) $\rho \approx \frac{d}{2 r}$
The curvature of the light ray is the inverse of the radius of curvature. So we can solve (3) for 1/r:
(4) $\kappa = \frac{1}{r} = \frac{2 \rho}{d}$

where

- $\kappa$ = curvature of the light ray, independent of the model
- $r$ = radius of curvature of the light ray
- $\rho$ = refraction angle in radian
- $d$ = distance between observer and target
## Refraction Angle for the Globe Model
Now we derive the equation for the refraction angle $\rho$ from the zenith angles $\zeta$ (greek z, spelled zeta) for the globe model:
Assuming we have measured both zenith angles in degrees and assuming the radius of the earth is R we can get the refraction angle $\rho$ from the geometry of the left sketch above. We have to be careful with the units, because the zenith angles are given in degrees but we need radian for the remaining calculations. I use the rad() function to convert an angle from degrees to radian:
(5) $\operatorname{rad}(\zeta_1) + \operatorname{rad}(\zeta_2) + 2 \rho = \pi + \theta$
Rearranging for $\rho$ we get:
(6) $\rho = \frac{\pi + \theta - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{2}$
The angle $\theta$ can be calculated:
(7) $\theta = \frac{d}{R}$

where

- $\theta$ = tilt angle between target and observer in radian
- $d$ = distance between observer and target
- $R$ = 6371 km = radius of the earth
Inserting this into (6) and solving for the refraction angle $\rho$ we get:
(8) $\rho = \frac{\pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{2} + \frac{d}{2 R}$

where

- $\rho$ = refraction angle in radian
- $\zeta$ = zenith angles in degrees
- $d$ = distance between observer and target
- $R$ = 6371 km = radius of the earth
We can now insert the refraction angle into the equation (4) for the curvature of a light ray:
(9) $\kappa = \frac{2 \rho}{d} = \frac{2}{d} \left( \frac{\pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{2} + \frac{d}{2 R} \right)$
This can be simplified to:
(10) $\kappa = \frac{1}{r} = \frac{\pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{d} + \frac{1}{R}$

where

- $\kappa$ = curvature of the light ray for the globe model
- $r$ = radius of curvature of the light ray
- $\zeta$ = zenith angles in degrees
- $d$ = distance between observer and target
- $R$ = 6371 km = radius of the earth
This is the equation to calculate the curvature of a light ray from the simultaneously measured zenith angles for the globe model.
We can get the commonly used refraction coefficient k as follows:
(11) $k = \kappa \cdot R = 1 + \frac{R \, \left( \pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2) \right)}{d}$
Note: if the light ray is straight, the right term above equals −1, so the refraction coefficient for unrefracted light is k = 0.
## Refraction Angle for the Flat Earth Model
We can use equation (11) for the flat earth model with a little correction. Because the verticals at the 2 locations on the flat earth are parallel, the tilt angle $\theta$ is zero. So we get for the flat earth model:
(12) $\rho = \frac{\pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{2}$

where

- $\rho$ = refraction angle in radian
- $\zeta$ = zenith angles in degrees
Similarly to the above, we can now calculate the equation for the curvature of a light ray from the simultaneously measured zenith angles for the flat earth model:
(13) $\kappa = \frac{1}{r} = \frac{\pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2)}{d}$

where

- $\kappa$ = curvature of the light ray for the flat earth model
- $r$ = radius of curvature of the light ray
- $\zeta$ = zenith angles in degrees
- $d$ = distance between observer and target

This is almost the same equation as for the globe model (10), but the 1 / R term does not appear.
We can even calculate a refraction coefficient for the flat earth with the same definition $k = \kappa \cdot R$ and get:
(14) $k_{FE} = \kappa \cdot R = \frac{R \, \left( \pi - \operatorname{rad}(\zeta_1) - \operatorname{rad}(\zeta_2) \right)}{d}$
which is the same as the refraction coefficient for the globe model (11) minus 1. So whenever we have a refraction coefficient for the globe, e.g. in the Advanced Earth Curvature Calculator, we can calculate the corresponding flat earth refraction coefficient as:
(15) $k_{FE} = k_{Globe} - 1$
Note: in reality the sum of the zenith angles is almost always measured greater than 180°. This means the curvature of the light ray on the flat earth model is negative, so the atmospheric conditions would have to be such that light gets bent upwards. This would require the air density to increase with increasing altitude, which is not the case in reality.
But let's investigate the atmospheric conditions necessary for a given zenith angle observation on the globe and the flat earth:
## Calculating the Temperature Gradient
The curvature of the light in the atmosphere is due to a gradient in the refractive index. The refractive index depends on air density, which via the ideal gas law depends on pressure and temperature, on the wavelength of light, and to a very small extent on humidity and CO2 concentration.
Because refraction fluctuates in practice all the time, for practical calculations humidity and CO2 concentration can be ignored and the wavelength of green light is used. Try my Calculator for Refractivity based on Ciddor Equation and play with the sliders to see how much influence each parameter has on the refractive index.
Knowing the connection between atmospheric conditions and the refractive index (Ciddor Equation), using calculus we can derive the curvature of a light ray passing through an atmosphere with a certain density gradient, see Deriving Equations for Atmospheric Refraction.
The following equation gives the local curvature of a light ray depending on the atmospheric conditions at the location. If the conditions are about the same from the observer to the target, we can use this curvature for the whole path. In geodesy the inclination angle of a light ray is always near zero, so the influence of this angle (factor cos(≈0°) = 1) can be ignored.
(16) $\kappa = \frac{1}{r} = \frac{P}{12666 \cdot T^2} \left( 0.0343 + \frac{\mathrm{d} T}{\mathrm{d} h} \right)$

where

- $\kappa$ = curvature of the light ray
- $r$ = radius of curvature of the light ray
- $P$ = air pressure at the observer in mbar or hPa (= 100 Pa), Standard = 1013.25 mbar
- $T$ = temperature at the observer in Kelvin, Standard = 288.15 K = 15°C
- $\mathrm{d} T/\mathrm{d} h$ = temperature gradient at the observer in K/m or °C/m, Standard = −0.0065°C/m
Note that this equation does not depend on the shape of the earth. Refraction is independent of the shape of the earth. So we can use this equation for the globe and flat earth model.
I have shown how we can calculate the curvature of light from measurements of the zenith angles for the globe and flat earth model.
If we solve equation (16) above for the temperature gradient, we can then insert the calculated light ray curvatures to compute the temperature gradient necessary to bend the light ray accordingly:
(17) $\frac{\mathrm{d} T}{\mathrm{d} h} = 12666 \cdot \frac{\kappa \cdot T^2}{P} - 0.0343$

where

- $\mathrm{d} T/\mathrm{d} h$ = temperature gradient in K/m = °C/m
- $\kappa$ = curvature of the light ray, see (10) and (13)
- $P$ = air pressure in mbar or hPa (= 100 Pa), Standard = 1013.25 mbar
- $T$ = temperature in Kelvin, Standard = 288.15 K = 15°C
This equation is independent of the shape of the earth as well.
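As a cross-check of equations (10), (13), (11), (14) and (17), here is a small Python sketch (a re-implementation, not the site's JavaScript calculator further below), evaluated for the first JK1 observation; small differences from the table come from the elevation-adjusted pressure and temperature used there.

```python
import math

R = 6371000.0  # mean earth radius [m]

def analyze(zeta1_deg, zeta2_deg, d, P=1013.25, T_C=15.0):
    """Refraction coefficient k and required temperature gradient dT/dh
    for the globe and flat earth models, from reciprocal zenith angles."""
    T = T_C + 273.15
    a = math.pi - math.radians(zeta1_deg) - math.radians(zeta2_deg)
    kappa_globe = 1.0 / R + a / d   # eq. (10)
    kappa_flat = a / d              # eq. (13)

    def report(kappa):
        k = kappa * R                                  # eq. (11) / (14)
        dTdh = 12666.0 * kappa * T * T / P - 0.0343    # eq. (17), K/m
        return k, dTdh * 1000.0                        # dT/dh in K/km
    return report(kappa_globe), report(kappa_flat)

# First JK1 row: zeta1 = 90deg 00'33", zeta2 = 90deg 00'34", d = 2228.4 m
globe, flat = analyze(90 + 33 / 3600, 90 + 34 / 3600, 2228.4)
print("globe: k = %.3f, dT/dh = %.1f K/km" % globe)   # ~0.071, ~-22.7
print("flat : k = %.3f, dT/dh = %.0f K/km" % flat)    # ~-0.929, ~-186
```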
## Real Observations
The Calculator below was used to compute the globe and flat earth results. The elevation (elev) is the mean elevation of the 2 stations; it is not the observer height above the ground. The Barometric Calculator was used to calculate pressure and temperature for this mean elevation. The Barometric Calculator assumes the International Standard Atmosphere.
Note: The observer height is very important to get out of the strong refraction layer, see Refraction Coefficient as a Function of Altitude. For observations over great distances that do not run over a flat landscape or water, the line of sight lies for the most part well above the ground layer, so we can expect standard refraction, as the values in the table below confirm. On short distance measurements with observer heights around 2 m, the light rays travel in most cases the whole distance in the ground layer, where refraction can be considerably different from standard in both directions, which is confirmed by the Problematic Observations.
Columns: measurement data, then the Globe Model values (ρ, r, k, dT/dh), then the Flat Earth Model values (ρ, r, k, dT/dh).

| Src | $\zeta_1$ | $\zeta_2$ | dist. [m] | elev. [m] | ρ (Globe) | r [km] (Globe) | k (Globe) | dT/dh [°C/km] (Globe) | ρ (FE) | r [km] (FE) | k (FE) | dT/dh [°C/km] (FE) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NGS | 91°08'09" | 89°07'44" | 32,139 | 2273 | 43.8" | 75,745 | 0.084 | −18.0 | -7'56.5" | −6956 | −0.916 | −211 |
| NGS | 91°03'44" | 89°15'34" | 39,476 | 2188 | 1'00.0" | 67,821 | 0.094 | −16.3 | -9'39.0" | −7032 | −0.906 | −208 |
| NGS | 90°40'36" | 89°56'32" | 76,957 | 2308 | 2'11.8" | 60,235 | 0.106 | −13.8 | -18'34.0" | −7125 | −0.894 | −208 |
| NGS | 90°32'21" | 90°12'12" | 92,882 | 2421 | 2'47.1" | 57,342 | 0.111 | −12.6 | -22'16.5" | −7167 | −0.889 | −208 |
| NGS | 90°49'49" | 89°45'41" | 74,362 | 2209 | 2'18.8" | 55,270 | 0.115 | −12.1 | -17'45.0" | −7201 | −0.885 | −205 |
| NGS | 90°50'01" | 89°35'13" | 52,518 | 2269 | 1'33.2" | 58,146 | 0.110 | −13.1 | -12'37.0" | −7155 | −0.890 | −206 |
| NGS | 91°09'00" | 89°22'35" | 66,128 | 2043 | 2'03.0" | 55,462 | 0.115 | −12.5 | -15'47.5" | −7198 | −0.885 | −202 |
| JK1 | 90°00'33" | 90°00'34" | 2228 | 0.84 | 2.6" | 89,324 | 0.071 | −22.6 | -33.5" | −6860 | −0.929 | −186 |
| JK1 | 90°00'29" | 90°00'27" | 2228 | 0.84 | 8.1" | 28,468 | 0.224 | 2.16 | -28.0" | −8208 | −0.776 | −161 |
| LS | 88°57'58.2" | 91°09'55" | 17,824 | 387 | 51.9" | 35,397 | 0.180 | −4.13 | -3'56.6" | −7769 | −0.820 | −172 |
These observations were done over great distances with a line of sight more than 30 m above the ground, so the temperature gradient was not influenced by the ground.
For the flat earth model we get temperature gradients of −161°C/km to −211°C/km. Such steep gradients are only possible for light rays traveling very low along a hot surface with cool air above. The apparent globular shape of the earth demands such a gradient at any height above the ground on the flat earth. This is physically impossible, because the temperature would reach absolute zero below 2 km altitude.
On the globe model however the calculated temperature gradients are as expected. According to a realistic atmospheric model the gradients will fade to about −6.5°C/km to −13°C/km of the standard atmosphere in the lower 30 m above the ground and maintain this gradient up to about 11 km, see Refraction Coefficient as a Function of Altitude. Due to decreasing air density with increasing altitude, refraction decreases with altitude even with a constant temperature gradient. The range of refraction of k = 0.07..0.2 with a mean value of 0.14 corresponds to standard refraction.
If we measure above 30 m ground level, we get consistent refraction with not much variation, see Refraction Coefficient as a Function of Altitude.
These measurements show that the earth cannot be flat. The temperature gradients necessary to produce these observations on a flat earth are physically impossible. The calculated temperature gradients for the globe, on the other hand, are in the neighborhood of the −6.5°C/km to −13°C/km gradient of the standard atmosphere.
## Problematic Observations
If the observer measures over relatively short distances from a height of about 2 m, the light travels in the ground layer of the atmosphere, where the ground exchanges heat with the air above, creating steep temperature gradients and hence strong refraction.
If the ground is cooler than the air, e.g. in the evening when the ground cools down faster than the air above, the gradient is always strong enough to bend light along the surface for hundreds of km. So it is no surprise that on the globe earth we can see lasers from behind the curvature at any distance. The nearer to the ground the observer and laser are, the stronger the refraction is.
This ground effect is very well supported by the measurements in the following table, where all observations were done over short distances (dist) from below 30 m observer height (ht). These measurements are inconclusive as to the shape of the earth, although the flat earth temperature gradients are much more pronounced.
Columns: measurement data, then the Globe Model values (ρ, k, dT/dh), then the Flat Earth Model values (ρ, k, dT/dh).

| Src | $\zeta_1$ | $\zeta_2$ | ht. [m] | dist. [m] | elev. [m] | ρ (Globe) | k (Globe) | dT/dh [°C/km] (Globe) | ρ (FE) | k (FE) | dT/dh [°C/km] (FE) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| JK2b | 90°00'11" | 90°01'00" | 1.7 | 1538 | 16.5 | -10.60" | −0.426 | −104 | -35.50" | −1.426 | −267 |
| JK2c | 89°40'31" | 90°19'58" | 1.6 | 1100 | 114 | 3.31" | 0.186 | −3.8 | -14.50" | −0.814 | −168 |
| JK2c | 90°22'18" | 89°37'51" | 1.6 | 150 | 110 | -2.07" | −0.853 | −174 | -4.50" | −1.853 | −339 |
| JK2c | 89°43'56" | 90°16'18" | 1.6 | 275 | 110 | -2.55" | −0.572 | −128 | -7.00" | −1.572 | −293 |
| JK2d | 90°14'24" | 89°45'43" | 1.6 | 143 | 11.8 | -1.19" | −0.512 | −118 | -3.50" | −1.512 | −281 |
| JK2d | 89°55'25" | 90°04'41" | 1.6 | 129 | 12 | -0.91" | −0.437 | −105 | -3.00" | −1.437 | −269 |
| JK2e | 90°04'32" | 89°55'33" | 1.6 | 61 | 5.6 | -1.51" | −1.536 | −285 | -2.50" | −2.536 | −448 |
| JK2e | 87°57'53" | 92°02'12" | 1.6 | 61 | 6.7 | -1.52" | −1.538 | −285 | -2.50" | −2.538 | −448 |
| JK2f | 90°21'16" | 89°38'48" | 1.6 | 150 | 28.4 | 0.43" | 0.176 | −5.33 | -2.00" | −0.824 | −169 |
| JK2f | 90°35'41" | 89°25'27" | 1.8 | 1000 | 23.7 | -17.81" | −1.10 | −234 | -34.00" | −2.10 | −377 |
| JK2f | 90°59'21" | 89°00'47" | 1.6 | 250 | 25.7 | 0.05" | 0.012 | −32.3 | -4.00" | −0.988 | −196 |
| JK2f | 90°33'15" | 89°27'04" | 1.6 | 500 | 21.1 | -1.40" | −0.173 | −62.6 | -9.50" | −1.17 | −226 |
| JK2f | 90°27'37" | 89°32'47" | 1.7 | 600 | 21.1 | -2.29" | −0.235 | −72.7 | -12.00" | −1.24 | −236 |
| JK2f | 89°59'04" | 90°01'03" | 1.6 | 100 | 18.8 | -1.88" | −1.16 | −224 | -3.50" | −2.16 | −387 |
| JK2i | 90°00'22" | 90°00'04" | 1.6 | 1538 | 16.5 | 11.9" | 0.478 | 43.7 | -13.0" | −0.522 | −119 |
| JK2i | 90°00'09" | 89°59'54" | 1.6 | 1538 | 16.5 | 23.4" | 0.940 | 119 | -1.5" | −0.060 | −441 |
Note: Because the refraction error increases with the square of the distance, for short enough distances the error $l$ is small enough not to be a problem.
(18) $l = k \cdot \frac{d^2}{2 R}$

where

- $l$ = apparent lift of the target at distance d due to refraction (refraction error)
- $k$ = refraction coefficient (about 0.17 for standard refraction)
- $d$ = distance between observer and target
- $R$ = 6371 km = radius of the earth
We can calculate the maximal distance that still guarantees a certain accuracy even with high refraction. Measuring multiple times at different times of day and averaging over the measurements can give very good results. To do so we solve (18) for d and set l to the maximal error we accept.
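Solving (18) for the distance gives (writing $l_{max}$ for the accepted refraction error, a label used only here):

$d_{max} = \sqrt{\frac{2 R \, l_{max}}{k}}$

For example, accepting $l_{max} = 1 \mathrm{mm}$ even under very strong refraction $k = 1$ gives $d_{max} = \sqrt{2 \cdot 6\,371\,000 \mathrm{m} \cdot 0.001 \mathrm{m}} \approx 113 \mathrm{m}$; the numbers are only an illustration.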
## Calculator
The calculator is preset for the first observation JK1 above. You can inspect the Calculator Code below.
## Data Sources
### National Geodetic Survey
| ID | Date | Project | Description |
|---|---|---|---|
| a | 1977 | NGS - McDonald Observatory | Long distance (32..93 km) observations at about 2000 m elevation, line of sight above 30 m |
Survey of the McDonald Observatory Radial Line Scheme by Relative Lateration Techniques
NOAA Technical Report (PDF), June 1978
The NOS/NGS performed a special survey in the vicinity of the University of Texas McDonald Observatory. This was the initial phase of an extensive geodetic-geophysical study to detect any motions of the observatory relative to prominent topographic features within a region extending as far as 100 km from the observatory.
Very Long Line EDM & Reciprocal Zenith Angle Observations – A Measure Of The Plumbline Tilt
Because that report included their reciprocal zenith angles (they call them zenith distances), I chose to include that set (see JK2a) and I personally knew Charlie Glover who performed that work. Here is a blog entry I made to share that information. ~ Jesse Kozlowski
### Jesse Kozlowski Dataset JK1
Observation Data Jesse Kozlowski
Flat Earth Proof? – Perfectly Flat Level Lake by Jesse Kozlowski
Detailed documentation of the measurements and all data of the observation
### Jesse Kozlowski Dataset JK2
| ID | Date | Project | Description |
|---|---|---|---|
| a | 1977 | NGS - McDonald Observatory | Long distance (32..93 km) observations at about 2000 m elevation, line of sight above 30 m |
| b | NOV 09 2015 | RTE 206 MP 27 & 28 | d = 1500 m, observer height 1.6 m |
| c | MARCH 08 2016 | Clarksville EDM CBL | d = 275..1000 m, observer height 1.6 m |
| d | MARCH 30 2016 | HAINESPORT "L" SHAPE | d = 130..142 m, observer height 1.6 m |
| e | APRIL 20 2016 | 200 FOOT "L" SHAPE | d = 60 m, observer height 1.6 m |
| f | OCT 07 2016 | Folsom EDM CBL S8 | d = 100..1000 m, observer height 1.6 m |
| g | OCT 20 2016 | Union Lake GNSS T2 | d = 760 m, observer height 1.4 m |
| h | NOV 01 2016 | Cooper River Lake | d = 2200 m, observer height 1.6 m |
| i | NOV 6-17 2016 | RTE 206 MP 27&28 LEVEL LINE | d = 1500 m, observer height 1.6 m |
### Larry Scott
Reciprocal Zenith Angle Measurements from Waynesboro, PA High Rock Lookout to Hagerstown, MD Airport
## Calculator Code
var Model = {
R: 6371000,
z1: 90.00916667,
z2: 90.00944444,
d: 2228.4,
P: 1013.25, // mbar
T: 15, // °C
rho_gl: 0,
rho_fe: 0,
r_gl: 0,
r_fe: 0,
kappa_gl: 0,
kappa_fe: 0,
k_gl: 0,
k_fe: 0,
dTdh_gl: 0,
dTdh_fe: 0,
Update: function() {
// globe model
var TK = this.T + 273.15;
var a = (180 - (this.z1 + this.z2)) * Math.PI / 180;
this.rho_gl = this.d / 2 / this.R + 0.5 * a;
this.kappa_gl = 2 * this.rho_gl / this.d;
this.r_gl = 1 / this.kappa_gl;
this.k_gl = this.kappa_gl * this.R;
this.dTdh_gl = 12666 * this.kappa_gl * TK*TK / this.P - 0.0343;
// flat earth model
this.rho_fe = 0.5 * a;
this.kappa_fe = a / this.d;
this.r_fe = 1 / this.kappa_fe;
this.k_fe = this.kappa_fe * this.R;
this.dTdh_fe = 12666 * this.kappa_fe * TK*TK / this.P - 0.0343;
ControlPanels.Update();
},
};
ControlPanels.NewPanel( {
Name: 'InputPanel',
ModelRef: 'Model',
NCols: 2,
OnModelChange: function(field) { Model.Update(field) },
Format: 'std',
Digits: 5,
PanelFormat: 'InputNormalWidth'
Text: 'Enter Zenith Angles, Distance, Pressure and Temperature',
ColSpan: 4,
Name: 'z1',
Label: 'ζ<sub>1</sub>',
Format: 'dms',
ConvToModelFunc: function(s){ return NumFormatter.DmsStrToNum(s); },
Digits: 5,
Name: 'z2',
Label: 'ζ<sub>2</sub>',
Format: 'dms',
ConvToModelFunc: function(s){ return NumFormatter.DmsStrToNum(s); },
Digits: 5,
Name: 'd',
Label: 'Distance',
Units: 'm',
Name: 'R',
Units: 'km',
Mult: 1000,
Name: 'P',
Label: 'Pressure',
Units: 'mbar',
Digits: 6,
Name: 'T',
Label: 'Temperature',
Units: '°C',
} ).Render();
ControlPanels.NewPanel( {
Name: 'OutputPanel',
ModelRef: 'Model',
NCols: 2,
OnModelChange: function(field) { Model.Update(field) },
Format: 'std',
Digits: 5,
PanelFormat: 'InputNormalWidth'
Text: 'Results',
Text: 'Globe',
Text: '',
Text: 'Flat Earth',
Name: 'rho_gl',
Label: 'Refr.Angle ρ',
Mult: Math.PI / 180,
Format: 'dms',
Digits: 6,
Name: 'rho_fe',
Label: 'ρ',
Mult: Math.PI / 180,
Format: 'dms',
Digits: 6,
Name: 'kappa_gl',
Label: 'Light Curve κ=1/r',
Format: 'sci',
Units: '/m',
Name: 'kappa_fe',
Label: 'κ',
Format: 'sci',
Units: '/m',
Name: 'r_gl',
Label: 'Light Curve r',
Units: 'km',
Mult: 1000,
Name: 'r_fe',
Label: 'r',
Units: 'km',
Mult: 1000,
Name: 'k_gl',
Label: 'Refr.Coeff k',
Name: 'k_fe',
Label: 'k',
Name: 'dTdh_gl',
Units: '°C/km',
Mult: 0.001,
Name: 'dTdh_fe',
Label: 'dT/dh',
Units: '°C/km',
Mult: 0.001,
} ).Render();
xOnLoad( function() { Model.Update(); } );
## References
Refraction Influence Analysis and Investigations on Automated Elimination of Refraction Effects on Geodetic Measurements
https://www.semanticscholar.org/paper/REFRACTION-INFLUENCE-ANALYSIS-AND-INVESTIGATIONS-ON-Boeckem-Flach/a53c08e2d9c2f2c8a4feb87002f6bc4646fe04e0
|
2022-12-09 03:41:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 51, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 51, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7024372220039368, "perplexity": 3452.7400136610213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00593.warc.gz"}
|
https://www.gamedev.net/forums/topic/673667-finding-the-angle-between-two-vectors-on-x-axis/
|
Finding the angle between two vectors on x axis
Recommended Posts
I am currently attempting to rotate an object in the game to face toward another object, but I am stuck on how to find the angle on the x axis between two vectors (if that makes sense).
The final transform for the object to be rotated( and moved) is
translation matrix * rotation matrix on y axis (0,1,0) * rotation matrix on x axis (1,0,0)
So it can be rotated left and right, and up and down to face its target when moving to it. I figured out in order to rotate on y axis, I need to make the y component of the object's target direction vector ( target position - object position ) zero, like vec3(targetDirection.x, 0, targetDirection.z), and find the angle between target direction vector and the object's default direction vector, which is always (0,0,-1). It works fine but I can't figure out how to find the angle on the x axis. Using the same trick, making the x component zero instead of y component, it doesn't work.
The two vectors are target direction vector ( target position - object position) and object's default
direction vector which I always set to vec3(0,0,-1). Thanks.
From what you're describing it sounds like you're basically implementing a lookAt.
Take a look toward the bottom of this page to see how such a matrix may be constructed:
https://msdn.microsoft.com/en-us/library/windows/desktop/bb204936%28v=vs.85%29.aspx
Edit:
The chirality of this was bothering me, so I made a test to check it out. It's left-handed, but invert the "eye" and "at" vectors when calculating the new z axis vector. This was the function I ended up with (in Unity, but the math is the relevant part):
Matrix4x4 makeMatrix(Vector3 position, Vector3 target)
{
Vector3 z = (position - target).normalized;
Vector3 x = Vector3.Cross(Vector3.up, z).normalized;
Vector3 y = Vector3.Cross(z, x);
Matrix4x4 mat = Matrix4x4.identity;
mat[0, 0] = x.x; mat[0, 1] = y.x; mat[0, 2] = z.x;
mat[1, 0] = x.y; mat[1, 1] = y.y; mat[1, 2] = z.y;
mat[2, 0] = x.z; mat[2, 1] = y.z; mat[2, 2] = z.z;
mat[3, 0] = -Vector3.Dot(x, position);
mat[3, 1] = -Vector3.Dot(y, position);
mat[3, 2] = -Vector3.Dot(z, position);
return mat;
}
Edited by Khatharr
To find axis / angle between 2 unit vectors:
vec axis = (from.Cross(to)).Unit(); // axis from normalised cross product
float angle = acos(from.Dot(to));
To find the angle on ANY other axis:
vec rotationVector = axis * angle;
float otherAngle = otherUnitAxis.Dot(rotationVector);
In your case (needing the x angle, so otherUnitAxis is (1,0,0)) you could simply use:
float xAngle = rotationVector.x; // or axis.x * angle
Let's make a function out of this
float AngleOverAxis (vec &from, vec &to, vec &axis)
{
float angle = acos(from.Dot(to));
vec rv = (from.Cross(to)).Unit() * angle; // todo: zero cross?
return axis.Dot(rv);
}
So, for your example you would need to first find the y angle:
xform = Identity; xform.pos = objectWorldPosition;
vec to = (targetPos - xform.pos).Unit(); // target dir
vec from = objectLookDir; // look dir
float yAngle = AngleOverAxis (from, to, vec(0,1,0));
Then rebuild the transform with this new y rotation to get an updated looking direction for the object:
matrix rotationFromYAngle = matrix::RotationY(yAngle);
xform *= rotationFromYAngle;
from = xform.Rotate(objectLookDir);
Finally find x angle with new looking direction.
float xAngle = AngleOverAxis (from, to, xform.Rotate(vec(1,0,0))); // <- bugfix here
matrix rotationFromXAngle = matrix::RotationX(xAngle);
xform *= rotationFromXAngle;
EDIT: oops, x vector must be rotated from local to global space Edited by JoeJ
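A rough NumPy transcription of the AngleOverAxis idea above, as a sketch only (the vec/matrix helpers are replaced by plain arrays; names and tolerances here are assumptions, not engine API):

```python
import numpy as np

def angle_over_axis(v_from, v_to, axis):
    """Rotation angle between v_from and v_to, projected onto the given axis."""
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    cross = np.cross(a, b)
    n = np.linalg.norm(cross)
    if n < 1e-9:                  # parallel or opposite vectors: no unique axis
        return 0.0
    rotation_vector = (cross / n) * angle
    return float(np.dot(axis / np.linalg.norm(axis), rotation_vector))

# e.g. default look direction (0,0,-1) against a target direction, angle about y:
look = np.array([0.0, 0.0, -1.0])
to_target = np.array([1.0, 0.0, -1.0])
print(angle_over_axis(look, to_target, np.array([0.0, 1.0, 0.0])))  # ~ -0.785 rad (45 deg about y)
```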
From what you're describing it sounds like you're basically implementing a right-handed (I think?) lookAt.
The lookAt might not do it, because the OP seems to want 2 rotations in strict y, x order.
A lookAt would result in a single rotation, so it would be necessary to extract y/x with a matrix-to-Euler-angles conversion with the correct order.
(might be available in DX too)
From what you're describing it sounds like you're basically implementing a right-handed (I think?) lookAt.
The lookAt might not do it, because the OP seems to want 2 rotations in strict y, x order.
A lookAt would result in a single rotation, so it would be necessary to extract y/x with a matrix-to-Euler-angles conversion with the correct order.
(might be available in DX too)
I didn't interpret his post that way. He began with
I currently attempt to rotate a object in the game to face toward another object...
and then went on to explain the method he was attempting to use.
|
2019-01-23 15:29:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18287649750709534, "perplexity": 4009.996218014484}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584334618.80/warc/CC-MAIN-20190123151455-20190123173455-00503.warc.gz"}
|
https://crypto.stackexchange.com/questions/52515/known-plaintext-attack-against-feistel-ciphers
|
# Known plaintext attack against Feistel ciphers
Assume we have a Feistel cipher with, let's say, 2 rounds, i.e.:
Plaintext $P=(L_{0},R_{0})$
$L_{1}=R_{0}$
$R_{1}=L_{0}\oplus f_{{K}_{1}}(R_{0})$
$L_{2}=R_{1}$
$R_{2}=L_{1}\oplus f_{{K}_{2}}(R_{1})$
With the keys $K_{1}, K_{2}$
With a known plaintext attack, assume we have some $x$ amount of pairs of $(P_{i}, C_{i})$ for the attack.
How would you proceed with the attack to find both keys, and how long would it take? In my limited understanding, if $k$ is the size of each key, finding the first key would take $2^{k}$ evaluations and the second key would take the same amount as well, given that they are independent keys. But assuming that's right, wouldn't that be the same cost as an exhaustive key search?
• You have computed the effort to find the key correctly. Have you tried to compute how many keys there are for the Feistel cipher? – K.G. Oct 24 '17 at 19:26
• Actually, if the two subkeys are independent, a straight-forward brute force search would take $2^{2k}$ time (as there are a total of $2k$ key bits...) – poncho Oct 24 '17 at 19:50
• @K.G Wouldn't that be just a total of $2k$ keys? – echoeida Oct 24 '17 at 20:34
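For concreteness, here is a toy sketch of the two-round structure and of the naive key search the comments discuss (the 8-bit half size and the round function are made up purely for illustration); with independent subkeys the search space is $2^{k} \cdot 2^{k} = 2^{2k}$:

```python
def f(half, key):
    # arbitrary toy round function on 8-bit values (illustration only)
    return (half * 0x9E + key) & 0xFF

def feistel2(L0, R0, K1, K2):
    L1, R1 = R0, L0 ^ f(R0, K1)   # round 1
    L2, R2 = R1, L1 ^ f(R1, K2)   # round 2
    return L2, R2

def brute_force(pairs):
    # try all 2^k * 2^k = 2^(2k) subkey combinations (here k = 8)
    for K1 in range(256):
        for K2 in range(256):
            if all(feistel2(l, r, K1, K2) == ct for (l, r), ct in pairs):
                return K1, K2
    return None

secret = (42, 199)
pairs = [((l, r), feistel2(l, r, *secret)) for l, r in [(1, 2), (3, 4), (250, 17)]]
print(brute_force(pairs))  # (42, 199), or another key pair consistent with the pairs
```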
|
2020-04-07 08:34:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7142223119735718, "perplexity": 676.2920626544278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371675859.64/warc/CC-MAIN-20200407054138-20200407084638-00479.warc.gz"}
|
https://www.ipu.ru/node/34034
|
34034
Author(s):
2
Publication details
Conference paper
Title:
Convex Inner Approximation of Attraction Domains of Linear Systems with Bounded Control
Yes
1474-6670
Conference:
• 8th IFAC Symposium on Robust Control Design (ROCOND 2015, Bratislava, Slovak Republic)
Source:
• Proceedings of the 8th IFAC Symposium on Robust Control Design (ROCOND 2015, Bratislava, Slovak Republic)
• Bratislava
• IFAC
2015
Pages:
106-111
Abstract
Using the LMI technique, we provide inner convex approximations to attraction domains of linear systems with given bounds on the magnitude of the control input chosen in the state feedback form. Disturbance-free systems as well as systems subjected to persistent exogenous disturbances are analyzed. Illustrative numerical examples are presented.
Citation:
Хлебников М.В., Щербаков П.С. Convex Inner Approximation of Attraction Domains of Linear Systems with Bounded Control / Proceedings of the 8th IFAC Symposium on Robust Control Design (ROCOND 2015, Bratislava, Slovak Republic). Bratislava, Slovak Republic: IFAC, 2015. pp. 106-111.
|
2020-09-18 08:15:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8131571412086487, "perplexity": 5620.183574198199}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187354.1/warc/CC-MAIN-20200918061627-20200918091627-00248.warc.gz"}
|
https://ir.cwi.nl/pub/13651
|
A *$K_l$-expansion* consists of $l$ vertex-disjoint trees, every two of which are joined by an edge. We call such an expansion *odd* if its vertices can be two-coloured so that the edges of the trees are bichromatic but the edges between trees are monochromatic. We show that, for every $l$, if a graph contains no odd $K_l$-expansion then its chromatic number is $O(l \sqrt{\log l})$. In doing so, we obtain a characterization of graphs which contain no odd $K_l$-expansion which is of independent interest. We also prove that, given a graph and a subset $S$ of its vertex set, either there are $k$ vertex-disjoint odd paths with endpoints in $S$, or there is a set $X$ of at most $2k - 2$ vertices such that every odd path with both ends in $S$ contains a vertex in $X$. Finally, we discuss the algorithmic implications of these results.
|
2021-09-18 14:20:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495669007301331, "perplexity": 131.73333183405305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00203.warc.gz"}
|
https://www.nature.com/articles/514292a?error=cookies_not_supported&code=6f011829-aeaf-491e-b6b9-5b7b8d185c07
|
A 'crater carpet' lines the floor of a lounge at Arizona State University, where engineers, biologists and Earth and space scientists all mingle. Credit: Andy DeLisle/ASU
Worlds both familiar and strange come together inside a large glass-walled room at Arizona State University in Tempe. Images of the Moon's surface fill giant screens as planetary geologist Jim Bell shows off panoramas from one of the university's cameras, which is currently flying on a lunar orbiter. Bell, tall and enthusiastic, gets even more animated when he talks about plans to visit an odder place: an asteroid named Psyche made almost entirely of iron. Researchers are keen to explore it because it is essentially a naked version of Earth's metallic core, something that scientists have never seen.
Designing a mission to study a rapidly spinning hunk of iron more than 255 million kilometres from Earth calls for close collaboration between scientists and engineers. Bell finds that kind of coordination easier at Arizona State University (ASU) than when he worked at Cornell University in Ithaca, New York, on the Mars rovers.
At Cornell, “the engineers were someplace else on campus”, he says. “So you'd come up with an idea for an instrument, kind of toss it over the wall, and then a year later they'd toss a design back to you that may or may not work, scientifically.” But at ASU, Bell works at the School of Earth and Space Exploration (SESE), which includes engineers and computer scientists. “They are people who are interested in the same science I'm interested in, and we get things done faster and, I think, better.”
The exploration school, formed in 2006 from the former departments of astronomy and geology, is the most striking embodiment of the ambitious vision of Michael Crow, who took over as president of ASU in 2002 with the goal of turning a public university with a middling reputation into something much greater. ASU was not known for exceptional scientific research, and attracted students mainly from within the state.
Crow has sought to transform ASU's research and education by tearing down walls between traditional academic departments and bringing together disparate disciplines to tackle large issues such as exploring the Solar System, finding alternative ways to attack cancer and solving problems that matter to Arizona as well as the rest of the world, such as severe water shortages. Crow has travelled extensively, talking up what he calls the “New American University” that is taking root in the desert. “We're going to best serve our students, and the world, by preparing them to tackle the big problems of the modern age,” he says.
More than a decade into his tenure, the results are mixed. On the positive side, ASU has more than doubled the amount of federal money it attracts for research. And the culture at the university has shifted to make research and education more interdisciplinary. “I think some of the things Arizona is doing could have a real impact,” says Daniel Fisher, a physicist at Bio X, a multidisciplinary institute at Stanford University in California.
But seen from another perspective, the changes at Arizona are modest shifts — layering new institutes on top of traditional departments, for example. And the reinvention effort may not have substantially improved the quality of ASU's research. An analysis of scholarly output conducted by Nature shows that ASU's record has improved by some measures, such as the number of papers published, but the university has gained little ground compared with similar institutions.
The results underscore how hard it is for large universities, which employ thousands of researchers, to alter their fundamental character by uprooting entrenched academic disciplines. Even Crow says that “the biggest challenge that we've had has been the strength of 'the invisible' colleges — the fact that people show more allegiance to their disciplines and the structure of those disciplines than to the institution they are a part of”.
Change agent
Still, the signs of change are all over the university — literally. Big placards in hallways announce “A New American University” with eight ambitious calls to action. “Fuse Intellectual Disciplines” is one, along with “Transform Society”, “Value Entrepreneurship”, “Enable Student Success”, and “Conduct Use-Inspired Research”. The campus itself has a modern, utilitarian look: large buildings with clean lines, many topped with solar panels. Construction cranes poke into the sky as they continue a building boom that has been under way ever since Crow arrived. Throngs of students thread their way around them — ASU has the largest undergraduate and graduate enrolment of any public university in the country, at about 76,000.
Nature special: The university experiment
There are a lot of new faculty faces as well. Nearly 500 of ASU's 1,700 or so tenure-track faculty have been hired in the past ten years — the turnover has largely resulted from normal retirements — and the university has deliberately sought people who work well with others and look beyond disciplinary walls.
“I've worked at places where we'd have pitched battles over lab space if room opened up,” says Cheryl Nickerson, a microbiologist at ASU's Biodesign Institute, a cross-disciplinary centre dedicated to understanding how organisms are built down to the molecular level, and how that differs between health and disease. Nickerson, who sends bacteria on NASA missions and works with many physicists and engineers, says, “Here, I'm not saying we're perfect, but several times I've seen people give up space to accommodate a colleague with an expanding project.”
All these changes are part of Crow's grand vision for reinventing the university, and his tireless promotion of that vision has brought him to prominence in the world of higher education. He chairs or participates in several national committees, including an advisory council on innovation and entrepreneurship for the US Department of Commerce. And he travels the world to lecture at World Bank meetings and other international gatherings. Much of what Crow talks about is how ASU has focused on replacing narrow academic divisions with big, bold structures. “Other leaders espouse this principle of interdisciplinarity, but Crow has gone the furthest in embracing it, and is the loudest voice,” says Jerry Jacobs, a sociologist at the University of Pennsylvania in Philadelphia and author of the book In Defense of Disciplines (University of Chicago Press, 2013).
Crow's manner can be blunt and aggressive, says Joshua LaBaer, who left his position as head of Harvard University's Institute of Proteomics in 2009 to work at the Biodesign Institute. But LaBaer says that the decisions by Crow and his team have generally been sound. “I don't see the faculty rankling under a loss of power,” he says. “The goals here are good ones, and you can take advantage of new opportunities.” And of resources, too: in 2013, the US National Institutes of Health (NIH) gave ASU researchers some US$48 million, about $22 million of which went to the Biodesign Institute. By comparison, the university pulled in just under $20 million from the NIH in 2003.
Credit: Source: Research project numbers from ASU; Publication data from Elsevier's SciVal.
A substantial share of those resources have helped to build LaBaer's unique facility for producing and analysing thousands of proteins, as part of efforts to understand their function and role in disease. In secure rooms full of automated machines, human cell cultures churn out full-length proteins in vials, then robotic arms whisk the molecules to machines that determine their sequence and structure. What sets LaBaer's operation apart is the ability to manufacture and probe thousands of proteins before they lose their natural folding patterns and function. The scientists then compare the proteins to see which shapes and folds are linked to particular diseases.
One priority for the university has been to boost biomedical research of this type — a tall order for an institution without a medical school. It has done so in part by forging close ties with the nearby Mayo Clinic in Scottsdale. That relationship helped ASU to attract LaBaer from Harvard.
There were a lot of worries when Crow and his administrators first started to reshape the university. In 2005, for example, the anthropology department was incorporated into a new School of Human Evolution and Social Change, and anthropologists fretted that their discipline was going to be diluted into non-existence. But by 2011, according to anthropologist Alexandra Brewis, the number of faculty members in the school had risen by 40%, and three-quarters of them were anthropologists. The other research slots were occupied by applied mathematicians, epidemiologists, political scientists and human geographers.
In 2010, Brewis and some colleagues surveyed all 54 tenured faculty in the school to find out who they collaborated with. The strongest partnerships, they learned, were still between traditional sub-disciplines such as archaeology and physical anthropology. Many non-anthropologists in the school often had stronger ties to anthropology than they did to one another. So diversity within the school had not led to fragmentation, the researchers concluded, and all the disciplines were contributing to anthropological research. For example, a team of researchers is studying the western Mediterranean, an area that has supported dense populations as well as productive agriculture for thousands of years. The team is developing computer models that show how population size, economic behaviour and vegetation change in the region have affected the sustainability of natural resources, and how those resources are likely to fare in the future.
ASU's funding numbers show that grant-givers find the cross-disciplinary approach attractive. From 2003 to 2012, the university's federally financed research portfolio grew by 162%, vastly outpacing the average increase seen at 15 similar public institutions, which were picked for comparison purposes by ASU's governing board. And the money that ASU gets is supporting more interdisciplinary work than ever before. The number of funded projects with principal investigators in two or more departments rose by 75% between 2003 and 2014, whereas projects led by one department climbed by just 8%.
The goals here are good ones, and you can take advantage of new opportunities.
A similar trend has occurred at Michigan State University in East Lansing, another institution that has pushed for greater collaboration between disciplines. Stephen Hsu, the university's vice-president for research and graduate studies, says that, like ASU, Michigan State has seen the value of shared projects. “Due to increased specialization, you have experts in specific techniques or types of analysis scattered among different departments,” he says. “To address many really big problems, for example, climate change, you need teams with multiple skills, and therefore must transcend departmental boundaries.”
But for all the changes, ASU has had limited success in raising its scientific profile relative to its peers — at least in terms of its publication record. Using Elsevier's SciVal analysis tools, Nature compared the publications of ASU researchers to those at some of the same peer institutions identified by the university's governing board. Over the past decade, ASU has more than doubled the number of articles it produces each year, the biggest percentage rise in its peer group. But because everyone increased their production substantially, and because ASU started near the bottom, the university moved up only slightly within the group. It climbed from fourteenth to twelfth place between 2003 and 2013 (see 'Raising Arizona').
Mixed numbers
Other metrics suggest that ASU researchers are having mixed success in generating scholarly impact. The university ranks in the middle of its peer group in getting papers into the most cited scientific journals and broke into the top five for a couple of years during the past decade. Yet it generally comes in last place in producing papers that attract the most citations.
George Raudenbush, ASU's executive director of research analytics, argues that citation data are not the best measure of research quality. And he counters that the relative increase in publications is truly dramatic. It shows that the university has come a long way in a short time, given that it did not emphasize research as much before Crow's arrival, he says.
Beyond metrics, there are also questions about how profound the organizational changes at ASU really are, and whether they represent a major departure in higher education. Few traditional academic departments have been eliminated; the university has simply established most of the new units on top of them. And most of the faculty members in the new schools and groups are actually tenured in traditional departments. (SESE is an exception.)
In fact, some of what ASU has accomplished in terms of promoting interdisciplinary research can be seen at other, more staid institutions. “Traditional universities have research centres, and that's where interdisciplinary ideas get addressed,” says Jacobs. When he studied the top 25 research universities in the United States, he found that they have about 100 research centres each, on average.
But ASU's administrators maintain that there is something unique happening there. By emphasizing new schools and institutes, rather than centres within disciplinary departments, the university has built conduits among very different specialities that encourage collaboration, says Crow. And hiring broad-thinking researchers and pairing them with practical technologists — engineers and computer scientists, for example — leads the way to addressing broad issues.
As an example of something the university is doing differently, Crow points to its broad-based approach to cancer research. The university's Center for Convergence of Physical Science and Cancer Biology, financed by the National Cancer Institute, brings astrobiologists and physicists together with oncologists and evolutionary biologists to explore how cancer starts and evolves (see Nature 474, 20–22; 2011 )
Some of the centre's researchers have developed a theory that as a cancer spreads, it activates a series of ancient genes that were key to the success of the first multicellular organisms (C. Lineweaver, P. C. W. Davies, & M. D. Vincent et al. Bioessays 36, 827–835; 2014 ). The deep roots and robust genes might explain why some tumours are so hard to get rid of, the researchers propose. The idea implies that cancer is an organized response, rather than a series of genetic accidents.
That line of enquiry, borne from an unusual marriage of disciplines, is unlikely to come from a typical university, says Crow. “We don't want to ask the same questions as other institutions do.”
|
2022-10-03 06:22:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2768092155456543, "perplexity": 3455.30247681069}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00534.warc.gz"}
|
https://pos.sissa.it/395/113/
|
Volume 395 - 37th International Cosmic Ray Conference (ICRC2021) - CRD - Cosmic Ray Direct
Precision Measurement of Cosmic Ray Deuterons with the Alpha Magnetic Spectrometer
E. Ferronato Bueno*, F. Barão, J. Berdugo, C. Delgado, F. Dimiccoli, D. Gómez-Coral, M. Vecchi, P. Zuccon and P. von Doetinchem
Full text: Not available
Abstract
The Alpha Magnetic Spectrometer (AMS-02) has been operating aboard the International Space Station (ISS) since May 2011. Deuterons represent about 1$\%$ of the singly-charged cosmic-ray nuclei. They are mainly produced by fragmentation reactions of primary cosmic 4He nuclei on the interstellar medium and represent a very sensitive tool to verify and constrain CR propagation models in the galaxy. Given the smaller cross-section for producing D from 4He compared with that for producing B from C, the deuteron flux provides additional information about the propagation of cosmic rays compared to the cosmic B/C ratio. Precise particle rigidity and velocity measurements and a large acceptance enable separating deuterons from the abundant protons in the rigidity range from 1.92 to 21.1 GV. Precision measurements of the deuteron flux obtained with a high-statistics data sample collected by AMS-02 during its 8.5 years of operation on the International Space Station will be presented.
Open Access
|
2021-10-17 15:57:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4043918550014496, "perplexity": 3937.515479318318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00259.warc.gz"}
|
https://fykos.org/year31/problems/series5
|
# 5. Series 31. Year
### (3 points)1. staircase on the Moon
If we ever colonize the Moon, would it be appropriate to use stairs on it? Imagine a descending staircase on the Moon. The height of one stair is $h=15 \mathrm{cm}$ and its length is $d=25 \mathrm{cm}$. Estimate the number $N$ of stairs that a person would fly over if they walked onto the staircase with a velocity $v=5{,}4 \mathrm{km\cdot h^{-1}}=1{,}5 \mathrm{m\cdot s^{-1}}$. The gravitational acceleration on the Moon's surface is six times weaker than on the Earth's surface.
Dodo read The Moon Is a Harsh Mistress.
### (3 points)2. death rays on the glass
A light ray falls on a glass plate with an absolute refractive index $n = 1{,}5$. Determine its angle of incidence $\alpha _1$ if the reflected ray forms an angle of $60 \dg$ with the refracted ray. The plate is surrounded by air.
Danka likes solving several problems simultaneously.
### (5 points)3. wedge
We have two wedges with the masses $m_1$, $m_2$ and the angle $\alpha$ (see figure). Calculate the acceleration of the left wedge. Assume that there is no friction anywhere.
Bonus: Consider friction with the $f$ coefficient.
Jáchym robbed the CTU scripts.
### (7 points)4. thermal losses
At what temperature does the indoor environment of the flat in a block of flats stabilise? Consider that our flat is adjacent to other apartments (except at its shorter walls), in which the temperature $22 \mathrm{\C}$ is maintained. The shorter walls adjoin the surroundings, where the temperature is $- 5 \mathrm{\C}$. The inside dimensions of the flat are height $h = 2{,}5 \mathrm{m}$, width $a = 6 \mathrm{m}$ and length $b = 10 \mathrm{m}$. The coefficient of specific thermal conductivity of the walls is $\lambda = 0{,}75 \mathrm{W\cdot K^{-1}\cdot m^{-1}}$. The thickness of the outer walls and the ceilings is $D\_{out} = 20 \mathrm{cm}$, and the thickness of the inner walls is $D\_{in} = 10 \mathrm{cm}$.
How will the result be changed if we add polystyrene insulation to the building? The thickness of the polystyrene is $d = 5 \mathrm{cm}$, and its specific heat conductivity is $\lambda '= 0{,}04 \mathrm{W\cdot K^{-1}\cdot m^{-1}}$.
### (8 points)5. sneaky dribblet
Let's take a spherical drop of radius $r_0$ made of water of density $\rho \_v$ which happens to be falling through mist in a homogeneous gravitational field $g$. Consider a suitable mist with special assumptions: it consists of air of density $\rho \_{vzd}$ and of water droplets with an average density of $\rho \_r$, and we consider that the droplets are dispersed evenly. If the drop falls through some volume of such mist, it collects all the water that is in that volume; only air is left in that place. What is the dependence of the mass of the drop on the distance traveled in such a fog?
Bonus: Solve the equations of motion.
Karel wanted to assign something with changing mass.
### (9 points)P. floating mercury
Try to invent as many "physics tricks" as possible thanks to which mercury would float on liquid water for at least a limited time. The more permanent a solution you find, the better.
### (12 points)E.
We are sorry, this type of problem has not been translated into English.
### (10 points)S. Differential equations are growing well
1. Solve the two-body problem using the Verlet algorithm and the fourth-order Runge-Kutta method (RK4) over several (many) periods. Use a step size large enough for the numerical errors to become significant. Observe the way the errors manifest themselves in the shape of the trajectories.
2. Solve for the time dependence of the position of a damped linear harmonic oscillator described by the equation $\ddot{x}+2\delta \omega \dot{x}+\omega^2 x=0$, where $\omega$ is the angular frequency and $\delta$ is the damping ratio. Change the parameters around and observe the changes in the oscillator’s motion. For which values of the parameters is damping the fastest?
3. Model sedimentation using the method of ballistic deposition
$$h_i(t+1) = \max\big(h_{i-1}(t),\, h_i(t)+1,\, h_{i+1}(t)\big)\,,$$
where $h_i$ is the height of the $i$-th column, and study the development of the roughness of the surface $W(t,L)$ (see this year’s series 4, problem S). Initially (for small values of $t$) the roughness is proportional to some power of $t$: $W(t,L) \sim t^{\beta}$. For large values of $t$, however, it is proportional to some (possibly different) power of the grid length $L$: $W(t,L) \sim L^{\alpha}$. Find the powers $\alpha$ and $\beta$. Choose an appropriate step size so that you can study both modes of sedimentation. The length of the surface should be at least $L = 256$. (Warning: the simulations may take several hours.) A minimal sketch of the update rule is given below, after the note.
4. Simulate on a square grid the growth of a tumor using the Eden growth model with the following variation: when a healthy and an infected cell come into contact, the probability of the healthy one becoming infected is $p_1$ and the probability of the infected one being healed is $p_2$. Initially, try out $p_1 \gg p_2$, then proceed with $p_1 > p_2$ and then with $p_1 < p_2$. At the beginning, let only 5 cells (arranged into the shape of a cross) be infected.
Describe qualitatively what you observe.
5. Rewrite the attached code for the growth of a fractal (diffusion limited aggregation model) on a hexagonal grid to the growth of a fractal on a square grid and calculate the dimension of the resultant fractal.
Note: Using the codes attached to this task is not mandatory, but it is recommended.
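The following is a minimal Python sketch of the deposition update in problem 3, meant only as an illustration; it is not the attached official code, and the grid length, the number of deposition attempts and the definition of the roughness $W$ as the standard deviation of the column heights are assumptions made here.

# Illustrative sketch of ballistic deposition on a 1D substrate with periodic
# boundaries; not the official FYKOS code. W(t, L) is taken here to be the
# standard deviation of the column heights (an assumption).
import numpy as np

def ballistic_deposition(L=256, n_attempts=200_000, seed=0):
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=int)
    roughness = []
    for t in range(n_attempts):
        i = rng.integers(L)                        # pick a random column
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        h[i] = max(left, h[i] + 1, right)          # the update rule from the assignment
        if t % L == 0:                             # record W once per L attempts
            roughness.append(h.std())
    return np.array(roughness)

W = ballistic_deposition()
print(W[:3], W[-3:])   # early growth ~ t^beta, late saturation ~ L^alpha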
Mirek and Lukáš have already grown their algebra, now they have different seeds.
|
2021-01-16 05:05:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7427470088005066, "perplexity": 866.4725464472684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703500028.5/warc/CC-MAIN-20210116044418-20210116074418-00746.warc.gz"}
|
https://tex.stackexchange.com/questions/334771/display-style-math-in-a-caption/334833#334833
|
display-style math in a caption
I am trying to insert display-style math in the caption of a float, but I get an error.
Here is a minimal example:
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{table}[t]
\caption[short caption]{long caption with math
\begin{equation*}
1 \neq 0
\end{equation*}
}
\label{tab:label}
\begin{tabular}{ccc}
1 & 2 & 3
\end{tabular}
\end{table}
\end{document}
In this case I get
Missing $ inserted, or, if I leave out the optional short version, Argument of \@caption has an extra }. I understand that the problem could be that the equation* environment is "fragile" and I need to protect it before using it in the argument of a caption, but I haven't been able to do it. Is it possible to obtain this somehow? Thanks for your help.
• What if you replace \begin{equation*} 1 \neq 0 \end{equation*} by $1 \neq 0$? Oct 19 '16 at 10:54
• More or less the same underlying issue as tex.stackexchange.com/questions/334752/…: you can't use display maths here as there is a box construct. $\displaystyle ...$ should work Oct 19 '16 at 10:56
• @JosephWright Indeed, using inline math and \displaystyle could be a possible workaround. But I would like to have it on a newline. I tried with \\, \newline or even with an empty line, but I always end up with the equation on the same line of the text. Oct 19 '16 at 14:22
Normally \caption first tries to fit the caption into one line by putting it into an \hbox; even if it doesn't fit, you still get the error messages from that attempt, because a display environment cannot be typeset inside an \hbox. Loading the caption package with the singlelinecheck=false option disables that single-line check. (If you also want to reduce the width of the caption, put it inside a minipage.)
\documentclass{article}
\usepackage{mathtools}
\usepackage[singlelinecheck=false]{caption}
\begin{document}
\begin{table}[t]
\caption[short caption]{
long caption with math
\begin{equation*}
1 \neq 0
\end{equation*}
}
\label{tab:label}
\begin{tabular}{ccc}
1 & 2 & 3
\end{tabular}
\end{table}
\end{document}
• Thanks! Indeed, the problem was the check performed by \caption to fit the caption in a single line. singlelinecheck=false solves this. Oct 20 '16 at 23:31
|
2021-10-24 12:34:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758242964744568, "perplexity": 1446.816947915787}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00492.warc.gz"}
|
http://editorialstoday.com/we-have-extended-our-collaboration-with-brook-cable-for-manufacturing-400-kv-cable-anil-gupta-kei-industries/
|
# We have extended our collaboration with Brook Cable for manufacturing 400 KV cable: Anil Gupta, KEI Industries
“We have already established the manufacturing facility which has been become operational at end of December in our Chopanki plant.”
Source : We have extended our collaboration with Brook Cable for manufacturing 400 KV cable: Anil Gupta, KEI Industries
Courtesy : Economic Times – Opinions
|
2018-08-19 23:08:05
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689645528793335, "perplexity": 11137.544821757916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215404.70/warc/CC-MAIN-20180819224020-20180820004020-00070.warc.gz"}
|
https://skeptric.com/suitcase-of-money/index.html
|
# How much Money is in a Suitcase?
insight
Published
September 22, 2020
This is from Sanjoy Mahajan’s The Art of Insight Problem 1.3
In the movies, and perhaps in reality, cocaine and elections are bought with a suitcase of $100 bills. Estimate the dollar value in such a suitcase.
# Size of a $100 note
Let’s assume a banknote is about the same thickness as paper; Australian notes are probably a little bit thicker. A 500 page ream of paper is about 5cm tall, so each sheet is about 0.01cm thick.
A note is around 15cm long and about 6cm wide.
So the total volume is about $$15 \rm{cm} \times 6 \rm{cm} \times 0.01 \rm{cm} \approx 1 \rm{cm}^3$$.
# Volume of a Suitcase
Suitcases come in a lot of sizes; let's consider a small wheeled travel suitcase. Standing up it's around 50 cm high, 30 cm wide and 30 cm deep. This gives a total volume of 45 L; call it 50 L.
A quick search online shows this is in a reasonable range; typically they’re in the range of 30L to 110L.
# Number of notes in a suitcase.
A litre is $$(10 \rm{cm})^3 = 1000 \rm{cm}^3$$.
So the total value of money in the suitcase would be $100 per note × 1000 notes per litre × 50 litres in the suitcase, which is $5 million.
Around $5 million sounds like a reasonable estimate.
# Weight of the suitcase
To go a little further, how much does the suitcase weigh? An Australian banknote is probably a little thicker than printer paper, which weighs about 100 grams per square metre. So an Australian banknote weighs around 0.15 m × 0.06 m × 100 grams per square metre, which is about 1 gram. So the 50,000 banknotes in $5 million weigh about 50 kg. This is a reasonably heavy suitcase of money to wheel around, and a bit too heavy to bring on a flight (which typically has limits of 20-30 kg per bag).
Checking the data for Australian Banknotes shows that 1 gram of weight for a note is a reasonable estimate.
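As a quick check, the same arithmetic in a few lines of Python; every input below is the same order-of-magnitude assumption already used in the text (note size, paper weight, suitcase volume), not measured data.

# Re-statement of the rough estimate above; all inputs are assumptions from the text.
note_volume_cm3 = 15 * 6 * 0.01          # ~0.9 cm^3 per $100 note
suitcase_volume_cm3 = 50 * 30 * 30       # ~45 L small wheeled suitcase
n_notes = suitcase_volume_cm3 / note_volume_cm3

value_usd = 100 * n_notes                # $100 per note
note_mass_g = 0.15 * 0.06 * 100          # 100 g/m^2 paper, 15 cm x 6 cm note
total_mass_kg = n_notes * note_mass_g / 1000

print(f"{n_notes:,.0f} notes, ${value_usd/1e6:.1f} million, {total_mass_kg:.0f} kg")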
|
2023-02-03 07:42:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45016705989837646, "perplexity": 2647.1618431455922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00157.warc.gz"}
|
https://statacumen.com/teach/ADA1/worksheet/ADA1_CL_26_SimpleLinearRegression.html
|
# ADA1: Class 26, Simple linear regression
Advanced Data Analysis 1, Stat 427/527, Fall 2022, Prof. Erik Erhardt, UNM
Author
Published
August 13, 2022
Include your answers in this document in the sections below the rubric.
# Rubric
Answer the questions with the data example.
# Height vs Hand Span
In a previous year, this was the procedure for collecting data:
1. Record your height in inches. For example 5’0” is 60 inches.
2. Use a ruler to measure your hand span in centimeters: the distance from the tip of your thumb to pinky finger with your hand splayed as wide as possible.
4. Analysis.
library(erikmisc)
── Attaching packages ─────────────────────────────────────── erikmisc 0.1.16 ──
✔ tibble 3.1.8 ✔ dplyr 1.0.9
── Conflicts ─────────────────────────────────────────── erikmisc_conflicts() ──
✖ dplyr::lag() masks stats::lag()
erikmisc, solving common complex data analysis workflows
by Dr. Erik Barry Erhardt <erik@StatAcumen.com>
library(tidyverse)
── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ──
✔ ggplot2 3.3.6 ✔ purrr 0.3.4
✔ tidyr 1.2.0 ✔ stringr 1.4.0
✔ readr 2.1.2 ✔ forcats 0.5.1
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::lag() masks stats::lag()
# Height vs Hand Span
dat_hand <-
na.omit() %>%
mutate(
Gender_M_F = factor(Gender_M_F, levels = c("F", "M"))
)
Rows: 378 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): Semester, Gender_M_F
dbl (4): Table, Person, Height_in, HandSpan_cm
ℹ Use spec() to retrieve the full column specification for this data.
ℹ Specify the column types or set show_col_types = FALSE to quiet this message.
str(dat_hand)
tibble [237 × 6] (S3: tbl_df/tbl/data.frame)
 $ Semester   : chr [1:237] "F15" "F15" "F15" "F15" ...
 $ Table      : num [1:237] 1 1 1 1 1 1 1 1 2 2 ...
 $ Person     : num [1:237] 1 2 3 4 5 6 7 8 1 2 ...
 $ Gender_M_F : Factor w/ 2 levels "F","M": 2 1 1 1 2 2 1 2 2 1 ...
 $ Height_in  : num [1:237] 69 66 65 62 67 67 65 70 67 63 ...
 $ HandSpan_cm: num [1:237] 21.5 20 20 18 19.8 23 22 21 21.2 16.5 ...
- attr(*, "na.action")= 'omit' Named int [1:141] 9 13 14 15 16 17 18 22 23 24 ...
..- attr(*, "names")= chr [1:141] "9" "13" "14" "15" ...
Plot data for Height_in vs HandSpan_cm for Females and Males.
library(ggplot2)
p <- ggplot(dat_hand, aes(x = HandSpan_cm, y = Height_in))
p <- p + theme_bw()
# linear regression fit and confidence bands
p <- p + geom_smooth(method = lm, se = TRUE)
# jitter a little to uncover duplicate points
p <- p + geom_jitter(position = position_jitter(.1), alpha = 0.75)
# separate for Females and Males
p <- p + facet_wrap(~ Gender_M_F, nrow = 1)
print(p)
geom_smooth() using formula 'y ~ x'
Choose either Females or Males for the remaining analysis.
Uncomment one of the Gender_M_F == lines below to choose Females or Males.
# choose one:
dat_use <-
dat_hand %>%
filter(
# Gender_M_F == "F"
# Gender_M_F == "M"
)
Plan:
1. Center the explanatory variable HandSpan_cm,
2. fit a simple linear regression model,
3. check model assumptions,
4. interpret the parameter estimate table, and
5. interpret a confidence and prediction interval.
## Center the explanatory variable HandSpan_cm
Recentering the $$x$$-variable doesn’t change the model, but it does provide an interpretation for the intercept of the model. For example, if you interpret the intercept for the regression lines above, it’s the “expected height for a person with a hand span of zero”, but that’s not meaningful.
Choose a value to center your data on. A good choice is a nice round number near the mean (or center) of your data. This becomes the value for the interpretation of your intercept.
val_center <- 20
dat_use <-
dat_use %>%
mutate(
HandSpan_cm_centered = HandSpan_cm - val_center
)
## Fit a simple linear regression model
# fit model
lm_fit <-
lm(
Height_in ~ HandSpan_cm_centered
, data = dat_use
)
Here’s the data you’re using for the linear regression, with the regression line and confidence and prediction intervals.
library(ggplot2)
p <- ggplot(dat_use, aes(x = HandSpan_cm_centered, y = Height_in))
p <- p + theme_bw()
p <- p + geom_vline(xintercept = 0, alpha = 0.25)
# prediction bands
p <- p + geom_ribbon(aes(ymin = predict(lm_fit, data.frame(HandSpan_cm_centered)
, interval = "prediction", level = 0.95)[, 2],
ymax = predict(lm_fit, data.frame(HandSpan_cm_centered)
, interval = "prediction", level = 0.95)[, 3],)
, alpha=0.1, fill="darkgreen")
# linear regression fit and confidence bands
p <- p + geom_smooth(method = lm, se = TRUE)
# jitter a little to uncover duplicate points
p <- p + geom_jitter(position = position_jitter(.1), alpha = 0.75)
p <- p + labs(
title = "Regression with confidence and prediction bands"
, caption = paste0("Handspan centered at ", val_center, " cm.")
)
print(p)
geom_smooth() using formula 'y ~ x'
## Check model assumptions
Present and interpret the residual plots with respect to model assumptions.
e_plot_lm_diagostics(
lm_fit
#, rc_mfrow = c(1, 2)
, sw_plot_set = "simple"
)
(1 p) If the normality assumption seems to be violated, perform a normality test on the standardized residuals.
(1 p) Do the residuals versus the fitted values and HandSpan_cm_centered values appear random? Or is there a pattern?
## Investigate the relative influence of points
Investigate the leverages and Cook’s Distance. There are recommendations for what’s considered large, for example, a $$3p/n$$ cutoff for large leverages, and a cutoff of 1 for large Cook’s D values. I find it more practical to consider the relative leverage or Cook’s D between all the points and worry when there are only a few that are much more influential than others.
Here’s a plot that duplicates a plot above. Here, the observation number is used as both the plotting point and a label.
# plot diagnostics
par(mfrow=c(1,2))
plot(influence(lm_fit)$hat, main="Leverages", type = "n")
text(1:nrow(dat_use), influence(lm_fit)$hat, label=paste(1:nrow(dat_use)))
# horizontal reference line at the 3p/n leverage cutoff (p = 2 coefficients)
abline(h = 3 * 2 / nrow(dat_use), col = "gray75")
plot(cooks.distance(lm_fit), main="Cook's Distances", type = "n")
text(1:nrow(dat_use), cooks.distance(lm_fit), label=paste(1:nrow(dat_use)))
# horizontal reference line at a Cook's distance cutoff
abline(h = qchisq(0.1, 2) / 2, col = "gray75")
|
2022-08-17 20:32:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5146510004997253, "perplexity": 11628.957142801703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573104.24/warc/CC-MAIN-20220817183340-20220817213340-00350.warc.gz"}
|
http://fapel.org/prediction-error/predictive-mean-squared-error.php
|
# Predictive Mean Squared Error
This lets you factor for more spread as well as keeping the units constant. TL;DR: squaring gets rid of the negative errors affecting the mean.
## Mean Squared Prediction Error
This also is a known, computed quantity, and it varies by sample and by out-of-sample test space.
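For reference, the quantity discussed here is the mean squared prediction error; a standard textbook definition (not quoted from the fragments above) is

$$\operatorname{MSPE} = \operatorname{E}\!\big[(\hat{Y}-Y)^2\big], \qquad \widehat{\operatorname{MSPE}} = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}_i - y_i\big)^2 ,$$

where the second expression is the sample version computed over $n$ held-out (out-of-sample) observations, which is why its value varies by sample and by test set.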
|
2021-11-30 20:35:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3327147662639618, "perplexity": 2970.5984910391244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00251.warc.gz"}
|
https://uphillrecmedia.com/o4dhk/f38245-exponential-smoothing-statsmodels
|
exponential smoothing statsmodels
Exponential smoothing is a rule-of-thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. Single exponential smoothing weights past observations with exponentially decreasing weights to forecast future values; the prediction is just the weighted sum of past observations. It requires a single parameter, called alpha (α), also known as the smoothing factor, and it does not consider trend or seasonality in the input data. Double exponential smoothing (Holt's linear trend method) is used when there is a trend in the time series; here beta is the trend smoothing factor, and it takes values between 0 and 1. The method can be extended further to data with a systematic trend and a seasonal component (Holt-Winters), and it is common practice to use an optimization process to find the model hyperparameters that give the best performance for a given time series dataset. Along with ARIMA models and their derivatives, exponential smoothing is among the most widely used tools for time series forecasting.

The implementations of exponential smoothing in Python are provided in the statsmodels library. They are based on the description of the method in Rob Hyndman and George Athanasopoulos' excellent book "Forecasting: Principles and Practice" and their R implementations in the "forecast" package. Let us consider chapter 7 of that treatise [1]; we will work through the examples in the chapter as they unfold, and the R data is included in the notebook for expedience.

Let's use simple exponential smoothing to forecast the oil data ("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007"). Here we run three variants of simple exponential smoothing:

1. In fit1 we do not use the auto optimization but instead choose to explicitly provide the model with the \(\alpha=0.2\) parameter.
2. In fit2 as above we choose an \(\alpha=0.6\).
3. In fit3 we allow statsmodels to automatically find an optimized \(\alpha\) value for us. This is the recommended approach.

For Holt's method we use air pollution data and fit five Holt's models. In fit1 we again choose not to use the optimizer and provide explicit values for \(\alpha=0.8\) and \(\beta=0.2\). In fit2 we do the same as in fit1 but choose to use an exponential model rather than a Holt's additive model. In fit3 we use a damped version of the Holt's additive model but allow the dampening parameter \(\phi\) to be optimized while fixing the values for \(\alpha=0.8\) and \(\beta=0.2\). Here we plot a comparison of simple exponential smoothing and Holt's methods for various additive, exponential and damped combinations ("Figure 7.4: Level and slope components for Holt's linear trend method and the additive damped trend method"; "Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods"). For the seasonally adjusted livestock data, the table allows us to compare results when we use exponential versus additive and damped versus non-damped models. Note: fit4 does not allow the parameter \(\phi\) to be optimized, instead providing a fixed value of \(\phi=0.98\).

Finally we are able to run full Holt's Winters seasonal exponential smoothing, including a trend component and a seasonal component ("Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality"). statsmodels allows for all the combinations, for example:

1. fit1: additive trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
2. fit2: additive trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.
3. fit3: additive damped trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
4. fit4: additive damped trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.

The table allows us to compare the results and parameterizations, and the plots allow us to evaluate the level and slope/trend components of the fits. It is possible to get at the internals of the exponential smoothing models: tables show side by side the original values \(y_t\), the level \(l_t\), the trend \(b_t\), the season \(s_t\) and the fitted values \(\hat{y}_t\). Note that these values are only meaningful in the space of your original data if the fit is performed without a Box-Cox transformation.

By using a state space formulation, we can perform simulations of future values. Similar to the example in [2], we use the model with additive trend, multiplicative seasonality, and multiplicative error; we simulate up to 8 steps into the future and perform 1000 simulations. Simulations can also be started at different points in time, and there are multiple options for choosing the random noise. As can be seen in the figure "Forecasts and simulations from Holt-Winters' multiplicative method", the simulations match the forecast values quite well. The mathematical details are described in Hyndman and Athanasopoulos [2] and in the documentation of HoltWintersResults.simulate.

There is also a linear exponential smoothing model implemented with a state space approach, in which the parameters and states are estimated by setting up the exponential smoothing equations as a special case of a linear Gaussian state space model and applying the Kalman filter. Note: that model is available at sm.tsa.statespace.ExponentialSmoothing; it is not the same as the model available at sm.tsa.ExponentialSmoothing, it has slightly worse performance than the dedicated exponential smoothing model, and it does not support multiplicative (nonlinear) models. ExponentialSmoothingResults.append(endog, exog=None, refit=False, fit_kwargs=None, **kwargs) recreates the results object with new data appended to the original data. Statsmodels will now calculate prediction intervals for exponential smoothing models; as of now, direct prediction intervals are only available for additive models, while multiplicative models can still be calculated via the regular ExponentialSmoothing class. (Development notes that also appear on the page: "WIP: Exponential smoothing #1489", jseabold wants to merge 39 commits into statsmodels:master from jseabold:exponential-smoothing; this includes #1484 and will need to be rebased on master; this is not close to merging and the only thing that's tested is the ses model; the exponential model was started off of code from dfrusdn and heavily modified; handles 15 different models. A separate bug report describes ExponentialSmoothing returning NaNs from the forecast method, the expected output being values from forecast/predict or an exception raised in case the model should return NaNs, ideally already in fit.)

The class signature of the main model is:

class statsmodels.tsa.holtwinters.ExponentialSmoothing(endog, trend=None, damped_trend=False, seasonal=None, *, seasonal_periods=None, initialization_method=None, initial_level=None, initial_trend=None, initial_seasonal=None, use_boxcox=None, bounds=None, dates=None, freq=None, missing='none')

Its methods include fit (statsmodels.tsa.holtwinters.ExponentialSmoothing.fit), initialize (initialize, possibly re-initialize, a Model instance), loglike(params) (log-likelihood of the model), predict(params[, start, end]) (in-sample and out-of-sample prediction), and score(params) (score vector of the model). The fit method accepts the smoothing values directly, for example smoothing_level (the alpha value of simple exponential smoothing), the beta value of Holt's trend method and the gamma value of the Holt-Winters seasonal method: if a value is set then that value will be used, and an optimized flag controls whether the values that have not been set should be optimized automatically. The results are returned as a HoltWintersResults object (see statsmodels.tsa.holtwinters.HoltWintersResults).

A fragment of a helper function from "Holt-Winters Exponential Smoothing using Python and statsmodels - holt_winters.py" also appears on the page:

from statsmodels.tsa.holtwinters import ExponentialSmoothing
def exp_smoothing_forecast(data, config, periods):
    ''' Perform Holt Winter's Exponential Smoothing forecast for periods of time. '''
    t, d, s, p, b, r = config
    # define model
    model = ExponentialSmoothing(np.array(data), trend=t, damped=d, seasonal=s, seasonal_periods=p)
    # fit model
    model_fit = model.fit(use_boxcox=b, remove_bias=r)
    # make one step …

[1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: Principles and Practice. OTexts, 2014. (https://www.otexts.org/fpp/7)
[2] Hyndman, Rob J., and George Athanasopoulos. Forecasting: Principles and Practice, 2nd edition. OTexts, 2018. (https://otexts.com/fpp2/ets.html)

A complete, runnable usage sketch is given below.
|
2021-12-02 10:16:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5072750449180603, "perplexity": 2421.003659453742}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00556.warc.gz"}
|
https://rdrr.io/cran/iheiddown/f/inst/rmarkdown/templates/problemset/skeleton/skeleton.Rmd
|
# In iheiddown: For Writing Geneva Graduate Institute Documents
# Set initial knitr options
knitr::opts_chunk$set(eval = TRUE, echo = FALSE,
fig.align = "center",
fig.asp = 0.7,
dpi = 300,
out.width = "80%",
fig.pos = "!H",
out.extra = "")
# Problem 1: Escaping the Shire
Solving problem sets will be a weekly ritual at IHEID if you are taking quantitative courses at the Graduate Institute. {iheiddown}'s problem set template lets you focus on solving the problem set rather than wasting time on formatting [@iheiddown2020]. Furthermore, since your documents will be written in RMarkdown, you won't need to learn the more complex LaTeX syntax. Finally, it allows you to code and interpret your results at the same time, which will again speed up your workflow!
# Problem 2: Hiding in forests
Your problem sets will contain some text, probably the solution to a strange mathematical model and maybe even some pretty graphs. The good news is that you can type that really easily in your RMarkdown file!
## Tables
The following example shows a simple way to estimate several models and summarize them in a clear way using the {modelsummary} package.
####################
## Tables Example ##
####################
library(modelsummary)
library(kableExtra)
library(gt)
#Extracting example data
url <- "https://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv"
# read the example data into the object used by the models below
dat <- read.csv(url)
# Creating a list of the different models
models <- list(
"OLS 1" = lm(Donations ~ Literacy + Clergy, data = dat),
"Poisson 1" = glm(Donations ~ Literacy + Commerce,
family = poisson,
data = dat),
"OLS 2" = lm(Crime_pers ~ Literacy + Clergy, data = dat),
"Poisson 2" = glm(Crime_pers ~ Literacy + Commerce,
family = poisson,
data = dat),
"OLS 3" = lm(Crime_prop ~ Literacy + Clergy, data = dat)
)
# Creating a summary of the different models
modelsummary(models)
## Graphs
###################
## Graph Example ##
###################
library(ggplot2)
ggplot(mtcars, aes(mpg, wt)) +
geom_point() +
labs(x="Fuel efficiency (mpg)", y="Weight (tons)",
title="Seminal ggplot2 scatterplot example",
subtitle="A plot that is only useful for demonstration purposes",
caption="Brought to you by {iheiddown}") +
theme_iheid()
# Problem 3: Resisting the power of the Ring
Writing equations is straightforward too! They follow the standard Latex syntax as shown below. Also see this great guide for a more comprehensive overview of the math syntax in Latex.
$$E(\text{Escaping} \mid \text{Magic}) = \frac{a}{b}$$
# Problem 4: Melting things in volcanoes
Inserting images is easy! Place the image in your main folder and use the following syntax (see the RMarkdown file).
# Appendix:
Note that you can reference previous code chunks at the end of the code for full transparency. This is a good way to avoid cluttering your main body with code while still allowing your reader to see the code you executed to get your results. Let us demonstrate this feature by inserting the un-evaluated code of all chunks used in this document.
################################
## Citing all loaded packages ##
################################
knitr::write_bib(c(.packages(), "bookdown"), "packages.bib")
# References:
|
2022-05-29 03:28:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31353840231895447, "perplexity": 5074.6659992403465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00384.warc.gz"}
|
http://bsdupdates.com/error-propagation/propagation-of-error-rules.php
|
# Propagation Of Error Rules
Then $\sigma_f^2 \approx b^2\sigma_a^2 + a^2\sigma_b^2 + 2ab\,\sigma_{ab}$. The indeterminate error equations may be constructed from the determinate error equations by algebraically rearranging the final result into standard form: $\Delta R = (\ )\Delta x + (\ )\Delta y + (\ )\Delta z$. The coefficients in parentheses ( ), and/or the errors themselves, may be negative, so some of the terms may be negative.
This is equivalent to expanding $\Delta R$ as a Taylor series, then neglecting all terms of higher order than 1. If R is a function of X and Y, written as R(X,Y), then the uncertainty in R is obtained by taking the partial derivatives of R with respect to each variable.
The uncertainty should be rounded to 0.06, which means that the slope must be rounded to the hundredths place as well: m = 0.90 ± 0.06. If the above values have units, the units are reported along with the value and its uncertainty.
RULES FOR ELEMENTARY OPERATIONS (INDETERMINATE ERRORS)
SUM OR DIFFERENCE: When R = A + B then ΔR = ΔA + ΔB.
PRODUCT OR QUOTIENT: When R = AB then (ΔR)/R = (ΔA)/A + (ΔB)/B.
Propagation of error is a calculus-derived statistical calculation designed to combine uncertainties from multiple variables, in order to provide an accurate measurement of uncertainty. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, the positive square root of the variance, σ². The ratio of the uncertainty to the measured value itself (the relative uncertainty) is also very important because it relates the uncertainty to the measured value.
Table 1: Arithmetic calculations of error propagation.
Addition or subtraction: $$x = a + b - c$$, $$\sigma_x= \sqrt{ {\sigma_a}^2+{\sigma_b}^2+{\sigma_c}^2}$$ (10)
Multiplication or division: $$x = \dfrac{ab}{c}$$, $$\dfrac{\sigma_x}{x}= \sqrt{ \left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}$$ (11)
Given the measured variables with uncertainties, I ± σI and V ± σV, and neglecting their possible correlation, the uncertainty in the computed quantity R = V/I is σR ≈ R·√((σV/V)² + (σI/I)²). Similarly, if f(x) = arctan(x), where σx is the absolute uncertainty on our measurement of x, then σf ≈ σx/(1 + x²). You will sometimes encounter calculations with trig functions, logarithms, square roots, and other operations, for which the elementary rules above are not sufficient. In lab, graphs are often used where LoggerPro software calculates uncertainties in slope and intercept values for you.
The results of each instrument are given as: a, b, c, d... (For simplification purposes, only the variables a, b, and c will be used throughout this derivation.) Taking the partial derivative of $$x = \dfrac{ab}{c}$$ with respect to each experimental variable, $$a$$, $$b$$, and $$c$$: $\left(\dfrac{\delta{x}}{\delta{a}}\right)=\dfrac{b}{c} \tag{16a}$ $\left(\dfrac{\delta{x}}{\delta{b}}\right)=\dfrac{a}{c} \tag{16b}$ and $\left(\dfrac{\delta{x}}{\delta{c}}\right)=-\dfrac{ab}{c^2}\tag{16c}$ Plugging these partial derivatives into Equation 9 gives: $\sigma^2_x=\left(\dfrac{b}{c}\right)^2\sigma^2_a+\left(\dfrac{a}{c}\right)^2\sigma^2_b+\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c\tag{17}$ Dividing Equation 17 by $$x^2 = (ab/c)^2$$ recovers the relative form given in Equation 11.
Since f0 is a constant it does not contribute to the error on f. The general result is formed in two steps: i) by squaring Equation 3, and ii) taking the total sum from $$i = 1$$ to $$i = N$$, where $$N$$ is the total number of measurements. For a simple scaled variable: if f = aA, then the variance is σf² = a²σA² and the standard deviation is σf = |a|·σA.
But when quantities are multiplied (or divided), their relative fractional errors add (or subtract). SOLUTION: Since Beer's Law deals with multiplication/division, we'll use Equation 11: $\dfrac{\sigma_{\epsilon}}{\epsilon}={\sqrt{\left(\dfrac{0.000008}{0.172807}\right)^2+\left(\dfrac{0.1}{1.0}\right)^2+\left(\dfrac{0.3}{13.7}\right)^2}}$ $\dfrac{\sigma_{\epsilon}}{\epsilon}=0.10237$ As stated in the note above, Equation 11 yields a relative standard deviation, that is, a percentage of the calculated value.
In this example, the 1.72 cm/s is rounded to 1.7 cm/s. First, the measurement errors may be correlated. It will be interesting to see how this additional uncertainty will affect the result!
The problem might state that there is a 5% uncertainty when measuring this radius.
RULES FOR ELEMENTARY FUNCTIONS (DETERMINATE ERRORS)
R = sin q: ΔR = (dq) cos q
R = cos q: ΔR = -(dq) sin q
R = tan q: ΔR = (dq) sec² q
By contrast, cross terms may cancel each other out, due to the possibility that each term may be positive or negative. Uncertainty never decreases with calculations, only with better measurements.
Constants: If an expression contains a constant, B, such that q = Bx, then Δq = |B|·Δx. You can see that the constant B only enters the equation in that it scales the absolute uncertainty; the fractional uncertainty in q is the same as the fractional uncertainty in x. Sometimes, these terms are omitted from the formula. Example: An angle is measured to be 30°: ±0.5°.
If q is the sum of x, y, and z, then the uncertainty associated with q can be found mathematically as $\Delta q = \sqrt{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}$. Also, notice that the units of the uncertainty calculation match the units of the answer. Assuming the cross terms do cancel out, then the second step - summing from $$i = 1$$ to $$i = N$$ - would be: $\sum{(dx_i)^2}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sum(db_i)^2\tag{6}$ Dividing both sides by $$N - 1$$ converts each sum of squared deviations into a variance and yields the general error-propagation formula (Equation 9).
The sine of 30° is 0.5; the sine of 30.5° is 0.508; the sine of 29.5° is 0.492. The derivative with respect to x is dv/dx = 1/t. The fractional error in x is: f_x = (ΔR)_x / x, where (ΔR)_x is the absolute error in x.
Anytime a calculation requires more than one variable to solve, propagation of error is necessary to properly determine the uncertainty. For example, v = x / t = 5.1 m / 0.4 s = 12.75 m/s, and the uncertainty in the velocity is: dv = |v| [ (dx/x)² + (dt/t)² ]^{1/2}. This is a valid approximation when (ΔR)/R, (Δx)/x, etc. are small.
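To make the quotient rule concrete, here is a small R sketch of the same calculation; the uncertainties dx and dt are made-up placeholders, since the fragment above does not give them.
# Propagate uncertainty through v = x / t using the quotient rule:
# dv = |v| * sqrt( (dx/x)^2 + (dt/t)^2 )
x  <- 5.1    # measured distance, m
t  <- 0.4    # measured time, s
dx <- 0.1    # assumed absolute uncertainty in x (placeholder)
dt <- 0.02   # assumed absolute uncertainty in t (placeholder)
v  <- x / t
dv <- abs(v) * sqrt((dx / x)^2 + (dt / t)^2)
c(velocity = v, uncertainty = dv)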
Therefore, the ability to properly combine uncertainties from different measurements is crucial. How can you state your answer for the combined result of these measurements and their uncertainties scientifically? See Ku (1966) for guidance on what constitutes sufficient data.
Accounting for significant figures, the final answer would be: ε = 0.013 ± 0.001 L mol⁻¹ cm⁻¹. Derivation of exact formula: Suppose a certain experiment requires multiple instruments to carry out. What is the error in the sine of this angle?
The measured track length is now 50.0 ± 0.5 cm, but time is still 1.32 ± 0.06 s as before. When propagating error through an operation, the maximum error in a result is found by determining how much change occurs in the result when the maximum errors in the data combine in the least favorable way.
Reference: Ku, H. H. (October 1966), "Notes on the Use of Propagation of Error Formulas", Journal of Research of the National Bureau of Standards, Section C.
|
2018-04-24 06:53:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486015200614929, "perplexity": 1107.6637857651433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946565.64/warc/CC-MAIN-20180424061343-20180424081343-00195.warc.gz"}
|
https://support.bioconductor.org/p/132158/
|
Using spike-in RNA for normalization of Salmon counts imported using Tximport
@hinaabbasbandukwala-23752
Hello,
I have counts from Salmon that I have imported using tximport for WT and KO samples. I want to use ERCC spike-in data for normalizing these counts. To do this, my strategy involves estimating size factors using only the spike-in data (function: DESeq2::estimateSizeFactors) and using those to normalize my counts (Mus musculus transcripts).
What I have done so far: step1: I used Salmon for quantification of both Mus musculus and ERCC RNA simultaneously (using a concatenated cDNA file). I imported these counts into R using the package "tximport".
step2: I made two ddsTxi objects, one with ERCC spike-in tx2gene file and the other with mouse Ensembl tx2gene file.
step3: I then used the Deseq2::estimateSizeFactors with the Spike-in_ddsTxi object to get size factors. However, I am unable to get sample-wise size factors:
sizeFactors(Spike-in_ddsTxi) returns NULL
I am aware that normalizationFactors(Spike-in_ddsTxi) gives me a matrix but I am not sure how to use this for normalization.
Can I please get advice on the following:
Question 1: Is my method above the correct way of going about normalization with spike-in data?
Question 2: If the answer to question 1 is no, then what method should I use?
Question 3: What is the difference between using estimateSizeFactors with Salmon data imported using tximport vs some other count data, e.g. from featureCounts?
Question 4: Lastly, what is the "controlGenes" parameter in the estimateSizeFactors function? Is that what I am supposed to use?
Thank you.
@mikelove
Basically, just import one matrix as a dds and use controlGenes to specify the spike ins. This will take care of all the details for you. It will estimate size factors over the spike ins and then combine the size factor and average transcript length normalization to produce normalization factors.
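A minimal sketch of that suggestion in R; the colData, the design, and the assumption that spike-in gene IDs start with "ERCC-" are mine, not from this thread.
library(tximport)
library(DESeq2)
# One combined import: mouse transcripts and ERCC spike-ins in a single tx2gene
txi <- tximport(files, type = "salmon", tx2gene = Combined_tx2gene,
                ignoreTxVersion = TRUE)
dds <- DESeqDataSetFromTximport(txi, colData = coldata, design = ~ condition)
# Estimate size factors from the spike-in rows only; with tximport input,
# DESeq2 folds these together with the average-transcript-length offsets
# into normalizationFactors(dds)
spikes <- grepl("^ERCC-", rownames(dds))
dds <- estimateSizeFactors(dds, controlGenes = spikes)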
Hi Michael,
To make the dds object, I am having problems with making a tx2gene file that contains information for both my mouse Ensembl transcripts (EnsDb.Mmusculus.v79) and the spike-in RNA (GTF file for the spike-in transcripts). My current strategy was to rbind the two tx2gene objects into one and use that to make my txi object with tximport:
txi <- tximport(files, type="salmon", tx2gene=Combined_tx2gene, ignoreTxVersion = TRUE)
but pseudocounts for my spike-in transcripts don't appear in the resulting txi object. I am not really sure what is going on here given that it works when I use the tx2gene files separately.
I have double-checked all my files as well: 1- My quants.sf files contain both spike-in transcripts and the Ensembl transcripts 2- my tx2gene contains both spike-in transcripts and the Ensembl transcripts
What does Combined_tx2gene look like for the spike ins?
## spike-in data
tail(Combined_tx2gene)
txid geneid
104216 DQ668359 ERCC-00163
104217 DQ516779 ERCC-00164
104218 DQ668363 ERCC-00165
104219 DQ516776 ERCC-00168
104220 DQ516773 ERCC-00170
104221 DQ854994 ERCC-00171
## Mouse Ensembl data
head(Combined_tx2gene)
1 ENSMUST00000082387 ENSMUSG00000064336
2 ENSMUST00000179436 ENSMUSG00000095742
3 ENSMUST00000082388 ENSMUSG00000064337
4 ENSMUST00000177695 ENSMUSG00000094121
5 ENSMUST00000082389 ENSMUSG00000064338
6 ENSMUST00000082390 ENSMUSG00000064339
And can you confirm that e.g. DQ668359 is the name of a transcript in quant.sf?
tximport() is really just looking up rows in the quants and collapsing them based on the table, there isn't much tricky going on. You can poke around to make sure the matching is correct.
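For example, a quick check along those lines (the file path is a placeholder; the column names are Salmon's defaults and the poster's tx2gene columns):
# Do the spike-in transcript IDs in the combined tx2gene appear in Salmon's output?
quants  <- read.delim("sample1/quant.sf")      # placeholder path to one quant file
matched <- Combined_tx2gene$txid %in% quants$Name
table(matched)                                 # how many IDs match at all
head(Combined_tx2gene$txid[!matched])          # inspect the IDs that fail to match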
You are correct.
Basically the quants.sf file is using gene_id for spike-in data and tx_id for Ensembl data. Not sure how to fix that, but maybe I can try re-labelling the gene_id_spikein column as tx_id_spikein.
|
2021-10-27 23:04:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4808220863342285, "perplexity": 7697.789028649396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00017.warc.gz"}
|
https://bobsegarini.wordpress.com/tag/muddy-waters/
|
## Pat Blythe – The Women of Blues Revisited, Part I….
Posted in Opinion, Review on July 27, 2017 by segarini
I am swamped with another project that is sucking up all my time during the waking hours. I have less than two weeks to complete it and it’s seems to be growing arms and legs I wasn’t expecting. So…..just to rehash a bit of blues history, specifically the ladies, I thought I’d have my editor post this again as many of you have probably not seen it. I wrote three series, Women of Rock, Women of Blues and Women in Song. The Women of Blues was actually inspired by a contemporary whom I saw for the first time a couple of years ago. She had dedicated her latest album to a female blues artist I had never heard of. Read all about her and the women who contributed so, so much to the genre we call the blues. The famous and not so famous. They were tough, talented, single-minded, sexually liberated, passionate and most of them had more balls than the men. Read on, you might even learn something.
## Pat Blythe …and The Blues Continue – Big Mama Thornton
Posted in Opinion on August 19, 2015 by segarini
Who pops into your mind when you hear the song title “Hound Dog”? How about “Ball and Chain”? Big Mama Thornton? Probably not. However, “Hound Dog” was her biggest hit, selling more than two million copies when it was first released in 1953. “Hound Dog” reached number one on the R&B charts and made Thornton a star. However, her total compensation was the paltry sum of $500. Elvis Presley recorded it three years later and with it (for Presley) came fame and great financial reward. After meeting Big Mama, Janis Joplin recorded “Ball and Chain” with her band Big Brother and the Holding Company, but it was Joplin’s famous performance at the Monterey Pop Festival in 1967 that made this song a hit (note Cass Elliot’s face in the crowd) with “bluesaphobes” everywhere, reintroducing the genre to a brand new audience and rekindling interest in Big Mama herself.
## Pat Blythe: The Women of Blues
Posted in Opinion on May 27, 2015 by segarini
Prologue….
Anyone heard of Memphis Minnie? How about Ida Cox, Victoria Spivey, Lucille Hegamin, Julia Lee or Maxine Sullivan? Me neither. How about Bessie Smith, Etta James, Sarah Vaughan, Aretha Franklin, Big Mama Thornton, Dinah Washington or even Janis Joplin? The latter are a smattering of the ladies most frequently thought of or mentioned when we think of great female blues singers….the former, not so much.
## Pat Blythe: CMW — Over and Out Part 2 – Until We Meet Again
Posted in Opinion on May 20, 2015 by segarini
I was overwhelmed with what CMW had to offer. So much to see and do, so many contacts to make, so much to learn and waaaay too many clubs to hit. Impossible in ten days but we all do our very best.
## Roxanne Tellier: My Toronto – Part One
Posted in Opinion on April 6, 2014 by segarini
Cam Carpenter’s recent DBAWIS column on Toronto venues reminded me of how impressive the city’s music scene was back in the day. In the late 1970’s and early ‘80’s, the city was awash not only in great clubs, but in terrific musicians working six or even seven days a week, entertaining delighted, enthusiastic crowds.
You couldn’t toss a rock without hitting a working musician back then. We were everywhere, making a decent living, doing what we loved to do. Demand for live music was high, and most of us tried our damndest to rise to the listener’s expectations.
## Roxanne Tellier: Snow on the Rooftop, Fire in the Furnace
Posted in Opinion on February 23, 2014 by segarini
This must be a nightmare of a year for young weather forecasters. They probably get up in the morning, check a stone outside their front door to see if it’s wet, snowy or dry, and then fling a dart at a weather board, sobbing “Oh who cares what I say – it’s always wrong anyway. I’m never gonna finish paying off my tuition!”
## GARY PIG GOLD: TWELVE YOU MAY HAVE MISSED IN 2012
Posted in Opinion on February 1, 2013 by segarini
Those Beach Boys and Rolling Stones weren’t the only septuagenarian rockers celebrating 50th (give or take) Anniversaries over the past twelve-or-so months, absolutely not. Just about each and every singer/songwriter/guitarist still standing – well, those with lucratively deep catalogues ripe and ready for recycling, that is – had multiple multi-media packages (and, in the Stones’ case, four-figure-plus concert tickets) competing for what remained of a loyal boomer’s nest egg throughout 2012.
|
2023-03-31 07:17:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021139144897461, "perplexity": 1312.5120508534912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00564.warc.gz"}
|
http://math.stackexchange.com/questions/127046/complex-analysis-liouvilles-theorem-proof?answertab=active
|
Complex Analysis: Liouville's theorem Proof
I'm being asked to find an alternate proof for the one commonly given for Liouville's Theorem in complex analysis by evaluating the following given an entire function $f$, and two distinct, arbitrary complex numbers $a$ and $b$: $$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} dz$$
What I've done so far is I've tried to apply the cauchy integral formula, since there are two singularities in the integrand, which will fall in the contour for $R$ approaches infinity. So I got:
$$2{\pi}i\biggl({f(a)\over a-b}+{f(b)\over b-a}\biggr)$$
Which equals $$2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr)$$
and I got stuck here I don't quite see how I can get from this, plus $f(z)$ being bounded and analytic, that can tell me that $f(z)$ is a constant function. Ugh, the more well known proof is so much simpler -.- Any suggestions/hints? Am I at least on the right track?
Your choice of $a$ and $b$ was arbitrary --- if you can show that $f(a)-f(b) = 0$, you're done. So how can you argue that your expressions are all equal to zero? – Neal Apr 1 '12 at 21:59
I noticed that what you said would prove it but I'm really not sure how I would go about arguing it. Any theorems/lemmas I should go review? I've looked into using cauchy integral theorem, ML formula, and Cauchy Estimates. Whenever I try using Cauchy estimates, I can't help but just end up with the normal proof of Liouville (as seen on proofwiki, wikipedia, etc.). Are one of those what I ought to be using or am I completely overlooking something important? – calvin Apr 1 '12 at 22:26
How do you know that f(z) is of power one? Don't you just know that f(z) is entire? We don't know specifically what f(z) is. Couldn't it have a power greater than that of the denominator, so the denominator wouldn't dominate? – user28118 Apr 2 '12 at 3:36
One of the hypotheses of Liouville's theorem is that f(z) is bounded, as well as entire. Sorry I didn't state that in the question. – calvin Apr 2 '12 at 4:10
You can use the $ML$ inequality (with boundedness of $f$) to show $\displaystyle \lim_{R\rightarrow \infty} \oint_{|z|=R} \frac{f(z)}{(z-a)(z-b)}dz = 0$.
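Explicitly, if $|f(z)|\le M$ for all $z$ and $R>\max(|a|,|b|)$, then on $|z|=R$ we have $|z-a|\ge R-|a|$ and $|z-b|\ge R-|b|$, so $$\left|\oint_{|z|=R}\frac{f(z)}{(z-a)(z-b)}\,dz\right|\le \frac{2\pi R\, M}{(R-|a|)(R-|b|)}\to 0 \quad\text{as } R\to\infty.$$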
Combining this with your formula using the Cauchy integral formula, you get $$0 = 2\pi i\bigg(\frac{f(b)-f(a)}{b-a}\bigg)$$ from which you immediately conclude $f(b) = f(a)$. Since $a$ and $b$ are arbitrary, this means $f$ is constant.
I've never heard it called "the ML inequality". What does that stand for - "Maximum of function times Length of contour," perhaps? – Gerry Myerson Apr 2 '12 at 0:37
@Gerry: Exactly. I don't remember where I read it. A quick google search shows it's not an uncommon thing to call it. What do you call it? – Jason DeVito Apr 2 '12 at 0:38
I don't quite understand this method. Wouldn't the L in this case be approaching infinity since the contour is |z|=R? How would you use ML? How would you use ML in this case? – calvin Apr 2 '12 at 2:03
ahh, I see. is it because f(z) is always finite, but the (z-a)(z-b) is gonna put an R^2 on the bottom no matter what, So the ML is gonna have a first power of R in the numerator, and an R^2 in the denominator, and so it's all controlled by the R^2, making the ML approach zero (all stated informally, of course. – calvin Apr 2 '12 at 2:57
@JasonDeVito, its modulus must be $\leq 0$, but a modulus cannot be negative, so it must be $0$. I see... – Jessy Cat Apr 5 at 2:04
$$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} \; dz=2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr) \to 2\pi if'(b)\text{ as }a\to b.$$
If one could somehow use boundedness of $f$ to show that $$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} \;dz \to 0\text{ as }a\to b,$$ then one would have shown that $f'(b)=0$. Since $b$ was arbitrary, one would have $f'=0$ everywhere.
To put it another way: calvin, you have evaluated the integral; now estimate the integral. – Gerry Myerson Apr 2 '12 at 0:33
@MichaelHardy, no. Actually, $b$ is not completely arbitrary, $a \neq b$, so you have to show that $f(a)=f(b)$. I would have liked to have seen more steps in the solution above where they showed that, though. – Jessy Cat Apr 5 at 1:36
|
2016-05-03 06:51:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8780144453048706, "perplexity": 366.3043084407449}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118807.54/warc/CC-MAIN-20160428161518-00044-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=1303115
|
MathSciNet bibliographic data MR1303115 (96b:17009) 17B37 (17B67) Berman, Stephen; Gao, Yun; Krylyuk, Yaroslav; Neher, Erhard. The alternative torus and the structure of elliptic quasi-simple Lie algebras of type $A_2$. Trans. Amer. Math. Soc. 347 (1995), no. 11, 4315–4363.
|
2014-08-23 19:33:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926533102989197, "perplexity": 8729.157210415942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826343.66/warc/CC-MAIN-20140820021346-00453-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.transtutors.com/questions/ou-have-two-lenses-at-your-disposal-one-with-a-focal-length-f1-38-0-cm-the-other-wit-1494481.htm
|
# You have two lenses at your disposal, one with a focal length f1 = 38.0 cm, the other with a focal length f2 = −38.0 cm
You have two lenses at your disposal, one with a focal length f1 = 38.0 cm, the other with a focal length f2 = −38.0 cm.
|
2019-12-12 23:49:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600831270217896, "perplexity": 2097.0586988255322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00507.warc.gz"}
|
https://chem.libretexts.org/Ancillary_Materials/Laboratory_Experiments/Wet_Lab_Experiments/MIT_Labs/Lab_3%3A_Chemical_Kinetics/1_-_The_Iodine_Clock_Reaction
|
# 1 - The Iodine Clock Reaction
Introduction
In this experiment, you will study a reaction that proceeds at an easily measured rate at room temperature:
$\rm \underset{persulfate}{S_2O_8^{2-}} + \underset{iodide}{2I^-} \rightarrow \underset{sulfate}{2SO_4^{2-}} + \underset{iodine}{I_2}$
In the first part of the experiment, the rate equation will be determined by investigating the effect of the concentration of the reactants on the rate of the persulfate-iodide reaction. In the second part, the activation energy will be calculated by studying the effects of temperature change and addition of a catalyst on the reaction system.
Background
Given the equation for a general reaction:
$aA + bB \rightarrow Products$
The dependence of the rate of the reaction on the concentration of the reactants may be expressed by a rate equation of the form:
$\rm rate = k[A]^l[B]^m$
where, k is the rate constant (or rate coefficient); l and m are the orders of the reaction with respect to the reactants A and B, respectively; and the sum l + m is the overall reaction order. Unlike the stoichiometric coefficients determined by calculation, the orders of the reaction are based on the kinetics of the reaction. The orders of the reaction are defined by the mechanism of the reaction, which is an account of the actual steps by which the molecules combine. Orders can only be determined experimentally.
The effect of temperature on reaction rate is given by the Arrhenius equation:
$k = A e^{-E_a/RT}$
where A is the Arrhenius constant, Ea is the activation energy of the reaction, T is the absolute temperature, and R is the universal constant of gases.
Description of the Experiment
In this experiment, we study the kinetics of the reaction between persulfate S2O8 2- and iodide I- ions:
$\rm \underset{persulfate}{S_2O_8^{2-}} + \underset{iodide}{2I^-} \rightarrow \underset{sulfate}{2SO_4^{2-}} + \underset{iodine}{I_2}$
Rates of reaction are measured by either following the appearance of a product or the disappearance of a reactant. In this experiment, the rate of consumption of the iodine will be measured to determine the rate of the reaction. As reaction (5) runs, the amount of iodine (I2) produced from it will be followed using reaction (6):
$\rm \underset{thiosulfate}{2S_2O_3^{2-}} + \underset{iodine}{I_2} \rightarrow \underset{tetrathionate}{S_4O_6^{2-}} + \underset{iodide}{2I^-}$
The iodine produced from the persulfate-iodide reaction (5) is immediately reduced back to iodide by thiosulfate ions via reaction (6). A known amount of thiosulfate ions will be added to the reaction vessel, which will in turn consume iodine as it is produced. This continues until all the thiosulfate has been converted to tetrathionate, whereupon free iodine will start to form in the solution via reaction (5). Because we know the amount of thiosulfate we added, we can determine, from the stoichiometry of reaction (6), the amount of iodine that was produced by reaction (5) and consumed. When all the thiosulfate is consumed, free iodine starts to form in solution. By measuring the time taken for the known amount of thiosulfate to be consumed, the rate of production of iodine during that time can be calculated.
The color of the iodine formed might be intense enough that it can act as its own indicator; however, for better results, you will add starch, which produces a deep blue starch–iodine complex:
$\rm{ \underset{iodine}{I_2} + \underset{starch}{(C_6H_{10}O_5)_n}\cdot H_2O} \rightarrow blue \space complex$
In summary, iodide (I- ) and persulfate ions (S2O8 2-) react to produce iodine (I2) and sulfate (SO4 2-) in reaction (5). This iodine is immediately consumed by the thiosulfate ions (S2O3 2-) in a pathway described by reaction (6). As soon as all of the S2O3 2- ions are consumed, the excess iodine produced in (5) is free to react with starch, turning the solution blue (7). The amount of thiosulfate ions added tells us how much iodine had been produced in the time taken for the reaction to turn blue.
Rate equation
The rate of the reaction at constant temperature and ionic strength can be expressed as the change in concentration of a reagent or product over the change in time and can be equated to the rate law expression:
$rate = - \dfrac{\Delta [S_2O_8^{2-}]}{\Delta t} = \dfrac{\Delta [I_2]}{\Delta t} = k[S_2O_8^{2-}]^m[I^-]^n$
The variation in concentration of persulfate (a minus sign denotes consumption) and the variation in concentration of iodine (production) are given by:
$\Delta [S_2O_8^{2-}] = [S_2O_8^{2-}]_{final} - [S_2O_8^{2-}]_{initial} = 0 - [S_2O_8^{2-}]_{initial} = -[S_2O_8^{2-}]_{added}$
$\Delta [I_2] = [I_2]_{final} - [I_2]_{initial}$, but at the beginning of the reaction $[I_2]_{initial} = 0$, so:
$\Delta[I_2] = [I_2]_{final}$
Then:
$rate = \dfrac{[I_2]}{t} = \dfrac{moles \space I_2}{volume \space solution(\rm{L}) \space x \space time (\rm{sec})}$
The number of moles of iodine produced is given by the amount of thiosulfate added to the reaction vessel:
$moles \space S_2O_3^{2-} = {volume \space of \space S_2O_3^{2-} added(\rm{L})} \times {concentration \space of \space S_2O_3^{2-}}$
The stoichiometry of reaction (5) gives:
$rate = \dfrac{[I_2]}{t} = \dfrac{moles S_2O_3^{2-}}{2 \times volume \space solution(\rm{L}) \times time(\rm{sec})}$
Thus we can calculate the rate by:
$rate = \dfrac{vol. S_2O_3^{2-} added (\rm{L}) \times conc. S_2O_3^{2-} \rm{(moles/L)}}{2 \times volume \space solution(\rm{L}) \times time(\rm{sec})}$
This reaction rate is a measure of how much iodine was produced in the time it took for the reaction to turn blue (i.e., time taken to react with all of the thiosulfate present).
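For example, a short R sketch of this calculation; the 75 s blue time is a made-up placeholder, while the volumes and the diluted thiosulfate concentration are the ones used in this procedure.
# Rate of iodine production for one run, computed from the measured "blue time"
conc_thio <- 4.0e-3    # mol/L, diluted thiosulfate
vol_thio  <- 5e-3      # L of thiosulfate added to the reaction beaker
vol_total <- 25e-3     # L, total reaction volume
blue_time <- 75        # s, placeholder for the measured blue time
rate <- (conc_thio * vol_thio) / (2 * vol_total * blue_time)   # mol L^-1 s^-1
rate                                                           # equals 4.0e-4 / blue_time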
Reaction Orders
In this experiment we use the initial rate method to find the order of the reaction with respect to persulfate (m) and the order of the reaction with respect to iodide (n). The method is based on the measurement of the rate of the reaction over a period of time. This time period is short enough for the reaction not to have proceeded significantly, but long enough to be unaffected by the time which the solutions take to mix at the start of the reaction.
The rate law equation can be written as:
$rate = k[S_2O_8^{2-}]^m[I^-]^n$
By taking the natural log of both sides, the equation becomes:
$\rm{ln} \space rate = \rm{ln} \space k + m \space \rm{ln}[S_2O_8^{2-}] + n \space \rm{ln}[I^-]$
For runs with different concentrations of persulfate and a constant concentration of iodide at a constant temperature,
$\rm{ln} \space rate = m \space \rm{ln}[S_2O_8^{2-}] + \rm{constant}$
The constant term in this equation is lnk + n ln [I-]. The slope of the best fit line of a plot of ln rate versus ln[S2O8 2-] will be equal to m, the order of reaction with respect to persulfate.
Similarly, for runs where persulfate concentration and temperature are kept constant and the amount of iodide is varied,
$\rm{ln} \space rate = n \space \rm{ln}[I^-] + \rm{constant}$
The constant term is lnk + m ln[S2O8 2-]. The slope of the best fit line of a plot of ln rate versus ln[I-] will be equal to n, the order of reaction with respect to iodide.
Activation energy (Ea)
Recall the Arrhenius equation:
$k = A e^{-E_a / RT}$
Taking natural logarithm of both sides of this equation we obtain:
$\rm{ln} \space k = - \dfrac{E_a}{R} \dfrac{1}{T} + \rm{ln} \space A$
A plot of ln k versus 1/T yields a straight line whose slope is -Ea/R and whose y-intercept is ln A, the natural logarithm of the Arrhenius constant.
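A minimal R sketch of that fit; the rate constants and temperatures below are placeholders for your own measured values.
# Arrhenius analysis: ln k = ln A - (Ea/R) * (1/T)
k_vals <- c(1.2e-3, 2.9e-3, 6.5e-3)   # placeholder rate constants
temp_K <- c(288, 298, 308)            # temperatures in kelvin
fit    <- lm(log(k_vals) ~ I(1 / temp_K))
R_gas  <- 8.314                       # J mol^-1 K^-1
Ea     <- -coef(fit)[2] * R_gas       # activation energy, J/mol
A      <- exp(coef(fit)[1])           # Arrhenius pre-exponential factor
c(Ea = unname(Ea), A = unname(A))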
## Procedure
Effect of Persulfate and Iodide Concentrations on Rate
You will be provided with the following solutions:
1. Standardized $$\ce{Na2S2O3}$$ solution (about 0.1 M, BE SURE TO RECORD EXACT VALUE);
2. 0.1M potassium persulfate, K2S2O8;
3. 0.2M potassium iodide, KI;
4. 0.2M potassium chloride, KCl;
5. 0.1M potassium sulfate, K2SO4.
The rate coefficient (k) of ionic reactions depends on the ionic strength or salinity of the solution. Potassium chloride (KCl) and potassium sulfate (K2SO4) are used to maintain the ionic strength of the solutions.
1. Prepare a 4.0x10-3 M solution of sodium thiosulfate as follows: Rinse a clean 250mL volumetric flask with distilled water. Pipette an aliquot of 10 mL of the standardized thiosulfate solution into the volumetric flask and add distilled water to the mark on the neck of the flask. Stopper and invert the flask a few times to mix its contents. Transfer the diluted thiosulfate solution into a clean labeled plastic bottle. This diluted solution will be used along the experiment.
2. Label a 50mL Erlenmeyer flask "A" and a 50mL beaker "R", the reaction beaker. For each run of the reaction, make up glassware as shown in the chart below. Between runs, rinse the flasks THOROUGHLY with distilled water.
| Run | Erlenmeyer "A": 0.2M KI (mL) | Erlenmeyer "A": 0.2M KCl (mL) | Beaker "R": 0.1M K2S2O8 (mL) | Beaker "R": 0.1M K2SO4 (mL) | Beaker "R": 4.0x10-3 M*** S2O32- (mL) |
|---|---|---|---|---|---|
| 1 | 10 | 0 | 5 | 5 | 5 |
| 2 | 5 | 5 | 5 | 5 | 5 |
| 3 | 2.5 | 7.5 | 5 | 5 | 5 |
| 4 | 5 | 5 | 7.5 | 2.5 | 5 |
| 5 | 5 | 5 | 10 | 0 | 5 |
The reaction beaker "R" also receives 2 drops of fresh starch solution and a magnetic stir bar.
3. For each run, start stirring the reaction beaker. Then, dump the contents of flask "A" into it and immediately begin timing. Record the "Blue Time" (the time in seconds needed for the solution to turn blue) for each run. Deposit all waste in the liquid waste container. ***Do not add the S2O3 2- solution until you are ready to mix mixtures A and R together.
Discussion and Calculations
Prepare the following graphs:
1. ln rate versus ln[S2O8 2-], for runs where [I-] is constant (runs 2, 4 and 5).
2. ln rate versus ln[I-], for runs where [S2O8 2-] is constant (runs 1, 2 and 3).
3. ln k versus 1/T for runs at constant concentrations but variable temperature.
For these graphs draw a best-fit line. The slopes of graph 1 and graph 2 will give you m and n, respectively (round them to their nearest integer values). The slope of graph 3 will give you -Ea/R and the intercept lnA.
In your calculations, you should keep in mind that the starting concentration in the reaction vessel for each reagent is not simply what was printed on the bottle. For instance, for the first run in Part One, you used 10 mL of the 0.2M KI solution; however, when the reaction is run, the actual concentration of iodide, at the start, is not 0.2M. Find the concentration of the two reagents (iodide and persulfate) used in each run (Hint: what dilutions have occurred?).
The rate for every run in this experiment can be calculated by:
$rate = \dfrac{[S_2O_3^{2-}]_{diluted} \times 5}{2 \times 25 \times time} = \dfrac{4.0 \times 10^{-4}}{time}\ \rm{M\,s^{-1}}$
Since the total volume in every reaction is 25 mL and 5 mL of the dilute thiosulfate solution is used in every reaction, the only quantity in this equation that will change is the time. Be sure to account for all dilutions in the sodium thiosulfate solution concentration.
Once you have determined m and n , the rate constant k is calculated from:
$k = \dfrac{rate}{[S_2O_8^{2-}]^m[I^-]^n}$
Your final k value (at room temperature) should be the average of the k values obtained for runs 1 through 5 in Part One. Make sure to give the units for k.
Goals
1. Determine the experimental rate law
2. Propose a mechanism consistent with the experimental rate law. (Do not worry if it is the correct mechanism, only that the experimental rate law can be derived from it)
3. Determine the activation energy and Arrhenius constant for the reaction.
|
2021-09-17 15:41:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.783688485622406, "perplexity": 1567.0849699265311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00660.warc.gz"}
|
https://web2.0calc.com/questions/dimensions-of-cylinder
|
# Dimensions of cylinder
hi good people!,
I am given a cylinder which has a capacity of 2000 cubic cm. First they ask to show that the height of the cylinder can be expressed as $$h={2000 \over \pi x^2}$$
this was easy to do...
the second question is to determine a formula for the outer surface of the cylinder in terms of x...
the third question is to determine the measurements which would minimize the material needed to make this cylinder..
Jul 17, 2019
#1
X is the radius of the cylinder.
Surface area = 2πx·h (if we do not include the end caps)
Surface area = 2πx·h + 2πx² (if we include the ends)
Jul 17, 2019
#2
Juriemagic, it is good to see you here
$$S=2\pi x*h+2*\pi x^2\\ S=2\pi x*\frac{2000}{\pi x^2}+2*\pi x^2\\ S= \frac{4000}{x}+2\pi x^2\\ S= 4000x^{-1}+2\pi x^2\\ \frac{dS}{dx}= -4000x^{-2}+4\pi x\\ \frac{d^2S}{dx^2}= 8000x^{-3}+4\pi\\ \frac{d^2S}{dx^2}>0 \qquad \text{since x>0, concave up} \\ \text{So any turning point where x>0 will be a minimum}\\ \text{find minimum}\\ \frac{dS}{dx}= -4000x^{-2}+4\pi x=0\qquad x>0\\ -4000+4\pi x^3=0\\ \pi x^3=1000\\ x=\frac{10}{\sqrt[3]{\pi}}$$
so for minimum surface area the radius is $$\frac{10}{\sqrt[3]{\pi}}\;\;cm$$ and the height is
$$h= \frac{2000}{\pi x^2}\\ h= 2000 \div( \pi x^2)\\ h= 2000 \div[ \pi (\frac{10}{\pi ^{1/3}})^2]\\ h= 2000 \div[ \frac{100\pi}{\pi ^{2/3}}]\\ h= 2000 \div[ 100\pi^{1/3}]\\ h=\frac{20}{\sqrt[3]{\pi}}$$
$$min SA=4000x^{-1}+2\pi x^2\\ min SA=4000(\frac{10}{\pi^{1/3}})^{-1}+2\pi (\frac{10}{\pi^{1/3}})^2\\ min SA=4000(\frac{\pi^{1/3}}{10})+2\pi (\frac{100}{\pi^{2/3}})\\ min SA=400\pi^{1/3}+(\frac{200\pi}{\pi^{2/3}})\\ min SA=600\pi^{1/3}\\$$
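As a quick numerical sanity check of the calculus above (a sketch in R):
# Outer surface of a 2000 cm^3 cylinder as a function of its radius x
surface <- function(x) 4000 / x + 2 * pi * x^2
opt <- optimize(surface, interval = c(0.1, 50))
opt$minimum        # numeric minimiser, about 6.83 cm
10 / pi^(1/3)      # analytic radius for comparison
600 * pi^(1/3)     # analytic minimum surface area, matches opt$objective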
Jul 17, 2019
#3
Melody!!!!!!...thank you for this very complicated answer!!.......I will spend some time with it for sure, trust all is well with you?
juriemagic Jul 19, 2019
#4
Hi Juriemagic,
Yes thanks I am fine thanks :)
This technique takes a while to assimilate (to fully understand)
But once you get it there are many questions that will use this technique.
If you do not understand it from some particular line please tell me and I (or someone else) will try to explain better.
If i do no respond send me a private message because I may just not have seen your question.
Melody Jul 19, 2019
|
2020-01-24 16:16:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111805558204651, "perplexity": 1926.4041951119075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250624328.55/warc/CC-MAIN-20200124161014-20200124190014-00101.warc.gz"}
|
http://gasstationwithoutpumps.wordpress.com/tag/teaching/
|
# Gas station without pumps
## 2014 May 7
### Quiz corrections
Filed under: Circuits course — gasstationwithoutpumps @ 20:36
As I reported last week, students did poorly on the first quiz, which came as no surprise to me. I had the students redo the quizzes as homework, allowing collaborative work (as long as they acknowledged the collaboration in writing). They turned in the homework on Monday, a week after the quiz, and I returned them today. No one aced the redo, with the top score being still only 25/33 (which would have been an A on the first pass, on a redo maybe a B+).
A lot of the students still seem to be having trouble with complex numbers—they got the formulas right when working symbolically, but then the exact same question with numbers instead of letters (which could be done by just plugging into the formulas) came out with real numbers when complex impedances were asked for. Also, a lot of sanity checks were skipped (several people reported a battery as doubling in voltage when hooked up to a resistor, for example).
These students are not major mathphobes (they’ve all passed a couple of calculus classes and most have done more math past that), but they don’t seem to have any sense for reasoning with or about math—they just want to plug in and grind, even on simple problems like ratios in voltage dividers. This class has almost no memory work (I gave them a one-page handout at the beginning of the year with all the math and physics I was expecting them to memorize), but relies heavily on their being able to recognize how to apply those few facts. This often requires subdividing a problem, like recognizing that a Wheatstone bridge is the difference between two voltage dividers, or that a 10× oscilloscope probe is a voltage divider with R||C circuits for each of the two impedances.
I spent the entire class today working through each problem in the quiz, to make sure that everyone in the class could understand the solution, and (more importantly) see that they did actually have enough knowledge and math skill to do the questions. Some of the students were feeling overwhelmed on the quiz, because they are not used to doing anything more than 1-step pattern matching for problems, and some of the quiz problems required two steps. None of the quiz problems were as hard as the prelab they had to do this week, which involved 8 or more steps to get the resistor values to set the gain of the amplifier (a rough numeric sketch follows the list):
1. Determine the pressure level of 60dB sound in Pa.
2. Determine the sensitivity of the microphone in A/Pa:
1. Convert -44dB from spec sheet to a ratio
2. Get V/Pa sensitivity for microphone for circuit on spec sheet
3. Convert to A/Pa given resistance of I-to-V conversion resistor on spec sheet.
3. Determine voltages needed for op amp power supply.
4. Determine I-to-V resistor needed to bias microphone in saturation region.
5. Convert A/Pa sensitivity, RMS pressure level, and I-to-V resistor to RMS voltage out of microphone.
6. Determine corner frequency and R, C values for DC-blocking filter.
7. Determine maximum output voltage range of the amplifier as the most limiting of
1. Voltage range of op amp outputs
2. Power limits of loudspeaker (10W)
3. Current limit of op amp (which is a function of the power-supply voltage) into 8Ω loudspeaker
8. Determine max gain as ratio of RMS voltage into op amp and RMS voltage out of op amp (I’m allowing them to be a bit sloppy about RMS voltage vs amplitude, since we are not looking just at sine waves—the amplitude of a symmetric square wave is the same as the RMS voltage.)
9. Choose resistor values to give the desired gain.
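A rough numeric sketch of steps 1, 2, 5, and 8 in R. Every value marked as a placeholder is my own assumption rather than a number from the course handout, and the -44 dB sensitivity is assumed to be referenced to 1 V/Pa.
# Step 1: 60 dB SPL relative to the 20 micropascal reference
p_rms <- 20e-6 * 10^(60 / 20)             # = 0.02 Pa RMS
# Step 2: microphone sensitivity, -44 dB (assumed re 1 V/Pa)
sens_V_per_Pa <- 10^(-44 / 20)            # about 6.3e-3 V/Pa
R_spec        <- 2.2e3                    # ohms, placeholder spec-sheet I-to-V resistor
sens_A_per_Pa <- sens_V_per_Pa / R_spec
# Steps 4-5: RMS voltage out of the mic stage with a chosen I-to-V resistor
R_chosen <- 5.1e3                         # ohms, placeholder bias resistor
v_in_rms <- sens_A_per_Pa * p_rms * R_chosen
# Steps 7-8: gain needed to reach the most limiting output swing
v_out_max <- 3.0                          # V RMS, placeholder output limit
gain <- v_out_max / v_in_rms
gain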
I’m hoping that pushing them to go through these multi-step designs in the lab will give them more practice at decomposing problems into smaller pieces, so that two-step problems on a quiz no longer seem daunting, but routine.
I’m going to be giving them another quiz in about a week, covering op-amp basics and the amplitude response of RC filters. I’ve got to figure out the best time to do this—possibly a week from Friday, after they’ve done another op-amp lab (using a phototransistor to make a pulse monitor, using this handout). I think I’ll reorder the labs after that, doing the pressure sensor instrumentation amp lab, then the class D power amp, then the EKG.
## 2014 April 26
### As expected, students did poorly on the quiz
Filed under: Circuits course — gasstationwithoutpumps @ 15:43
I gave about the same quiz as I did last year, changing the numbers, removing one of the harder questions, and making sure that some of the other questions reflected worked examples we had done in class. The quiz was again on the 12th day of instruction. I had intended to move it to the 10th day, but one of the students was called out of town, so I rescheduled it so that everyone could take it at the same time.
I expected a similar distribution to last year's (last year the range was 3/32 to 12/32), but was hoping for slightly better. I saw a distinct bimodal distribution this year, with half the class getting scores from 0/33 to 6/33 and the other half getting 11/33 or 12/33. This is a little clearer distribution than last year's, which spread the students out more uniformly. I was still hoping that some of the better students would get over half the points on the quiz, but they seemed to top out at 36%.
I worked this year’s quiz myself in about 24 minutes (which means the quiz was a little too long still—I want about a 3:1 ratio on time, and the students had only 70 minutes).
I was really depressed after last year’s quiz, because I had not been expecting such dismal performance. This year I was braced for it, but still hoping for better. Still there were some surprises:
• There were a few questions that should have been free points (like asking for the impedance of a resistor with resistance R)—I was disappointed that some students missed even the trivial questions.
• I had a pair of questions which were identical, except that one asked for algebraic formulas for impedance and the other gave component values and asked for numbers. I put the algebraic ones first this year, so the numeric ones were just a matter of plugging the numbers into the algebraic ones (and doing a sanity check). The algebraic ones had a mean score of 2/4 with a standard deviation of 1.2, while the numeric ones had a mean of 1.22/4 and a standard deviation of 1.2. I had not expected a drop in performance on the numeric ones, since the received wisdom in the physics education community is that students do better with numeric examples than algebraic ones.
• No one got any points on the oscilloscope probe example, even though it was identical to an example we had worked in class.
• The average score on a load-line problem was 1/6 with a standard deviation of 1.3. This did not look like a normal distribution, but an exponential one, with half the class getting no points.
• I had two low-pass RC filter questions. One asked for algebraic formulas; the other used the same circuit but asked for numeric answers using specific component values, voltages, and frequencies. The algebraic one was bimodal, with 2/3 of the class getting 0 and 1/3 getting the answers completely right. The numeric one was significantly worse, with only 2 out of 9 students getting any points (1/6 and 3/6).
• I asked a couple of voltage divider questions that required applying the voltage divider formula to circuits in which the voltmeter was connected between two nodes, neither of which was ground. One asked for an algebraic result (a Wheatstone bridge), the other for a numeric result (voltage across the middle resistor of three in series). Students did very poorly on both, with only one person getting the voltage for the middle resistor (one got half credit for setting it up right, but computing wrong), and no one getting more than 1/5 for the Wheatstone bridge.
Last year I suggested several ways to handle the poor performance on the first quiz:
1. I could tell them to study and give them another quiz. That would be totally useless, as it would just repeat the problems on this quiz. They don’t know what it is that they need to know, and vague exhortations to study are pointless. I don’t think the problem is lack of effort on their part, and that’s the only problem for which pep talks are a potential solution.
2. I could go over the quiz question by question, explaining how I expected students to solve them. This is classic lecture mode and the approach I used to use. It would be easy to do, but I doubt that it would help much. I already did an interactive lecture on the material, and another approach is now needed.
3. The students could get the quiz back and be told to go home and look up in their notes and on-line anything they did not get right. They would find and write down the right answers, as if this were homework. (This “quiz correction” is a standard strategy in high school teaching, but not common in college teaching.) One difficulty here is that they might be able to find answers (say by copying from other students in the class) without understanding how to do the problems. It is probably a better approach than yet another lecture, but I’m not sure it will work well enough. If the students were trying to get from 80% understanding to 95%, it might be fine, but to get from 30% to 80%, something more directed is needed. More time and open notes would help, but maybe not enough.
4. I could break them into groups and give each group a couple of the problems to work on together in class. This peer instruction technique would be a good one if about 1/2 the students were getting the problems right, but with the top of the class getting only 1/3 right, I may need to give them more guidance than just setting them loose. For example, on some of the problems there was a fundamental misreading of the circuit schematics that was very common. I could clear up that misunderstanding in a minute or so and have them rework the problems that depended on it. Then I could send them home to write correct solutions.
5. I could give out lots of problem sets to drill them on the material. Of course, since it took me more than all day Sunday to make an 8-question quiz, it would take me forever to generate enough drill problems to be of any use.
I feel the same way this year about the possible teaching strategies, but this year I’m going to try a mix of methods 3 and 4, asking them to redo the quizzes at home, working with others until they are satisfied that they can now do the problems and other similar problems when asked. I’ll have them hand it in this year as a homework, but not go over it in class until after they turn it in. They need to take a more active role in trying to master the material, and not rely so much on my telling them what to do.
Monday we’ll cover inductors and loudspeakers, in preparation for the Tuesday measurement lab.
On Wednesday I was planning to do gnuplot analysis of the loudspeaker data, but I think I’ll keep that fairly short, so that we can get an intro to sampling and aliasing also before Thursday’s lab. I have to decide whether to bring in my son’s stroboscope and a moving object to demonstrate aliasing.
Friday, I’ll introduce op amps, with the intent of developing the block diagram in class on Monday for a simple op amp microphone circuit for the Tuesday lab. This weekend I need to rewrite that lab from last year—I decided last year to use the dual power supply with a center ground for their first op-amp design, rather than having them build a virtual ground (we’ll get that in the next lab assignment).
## 2014 April 7
### Feedback on first lab report
Filed under: Circuits course,Printed Circuit Boards — gasstationwithoutpumps @ 17:11
Most of today’s class was taken up with feedback on the design reports that students turned in by e-mail on Saturday. Overall the reports were not bad (better than the first reports last year), but I think that the students could do better. Here are the main points:
• Anyone can redo the report to get it re-evaluated (and probably get a higher grade).
• No one attempted the V1 & V2 problem, so I reassigned it for Wednesday.
The circuit I had given as an exercise, asking them to determine the output voltage V_out.
• A lot of reports mixed together two different problems: the 1kΩ–3.3kΩ problem and the optimization to maximize sensitivity of the thermistor temperature sensor. I encouraged students to use more section headers and avoid mixing different problems together.
• Figures should be numbered and have paragraph-long captions below each figure. I reminded students that most engineering reports are not read in detail—readers flip through looking at the pictures and reading the picture captions. If the pictures and captions don’t have most of the content, then most readers will miss it. I also pointed out that many faculty, when creating new journal articles, don’t ask for an outline, but ask for the figures. Once the figures tell the right story, the rest of the writing is fairly straightforward.
• A lot of the students misused “would” in their writing, treating it as some formal form of “to be”. The main use in technical writing is for contrary-to-fact statements: “the temperature would go down, if dissipating power cooled things instead of heating them”. Whenever I see “would” in technical writing, I want to know why whatever is being talked about didn’t happen.
• A number of the students had the correct answer for the optimization problem, but had not set up or explained the optimization. Right answers are not enough—there must be a rational justification for them. In some cases, the math was incomprehensible, with things that weren’t even well-formed equations. I suspect that in many cases, the students had copied down the answer without really understanding how it was derived and without copying down the intermediate steps in their lab notebooks, so they could not redo the derivation for the report.
• A number of the plots showed incomplete understanding of gnuplot: improperly labeled axes, improperly scaled axes, plots that only included data and not the models that the data was supposed to match, and so forth. I pointed out the importance of sanity checks—there was no way that anyone ran their recording for 1E10 seconds! I was particularly bothered that no one had plotted the theoretical temperature vs. voltage calibration based on the parameters from their temperature vs. resistance measurements, so I could not tell whether the voltage divider was doing what they expected it to.
• No one really got the solution for the 1kΩ–3.3kΩ problem perfectly. A number of them set up the equations right and solved for R (getting 2.538kΩ), but then did not figure out what Vin had to be. It turns out that Vin depends strongly on R, so rounding R to 2.2kΩ or 2.7kΩ results in different good values for Vin, and the 2.2kΩ choice gives a more desirable voltage (around 3.3v, which we have available from the KL25Z boards, as it is a standard power-supply voltage).
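One way to set that problem up (a sketch, assuming the sensor forms the bottom leg of a divider driven by $V_{in}$ with the output taken across the sensor, which is consistent with the numbers above): the two required operating points give $V_{in}\frac{1k\Omega}{1k\Omega+R} = 1v$ and $V_{in}\frac{3.3k\Omega}{3.3k\Omega+R} = 2v$. Dividing one condition by the other eliminates $V_{in}$ and leaves $2(3.3+R) = 3.3(1+R)$ with $R$ in kΩ, so $R = 3.3/1.3 \approx 2.538k\Omega$ and $V_{in} \approx 3.54v$. After rounding $R$ to a standard value, $V_{in}$ has to be re-solved to best fit both endpoints, which is where the difference between the 2.2kΩ and 2.7kΩ choices comes from.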
I also showed the students how I had expected them to set up and explain the optimization to maximize sensitivity at a particular operating temperature.
After that feedback, I started on new material, getting through the explanation of amplitude, peak-to-peak, and RMS voltage. I think that the RMS voltage explanation was a bit rough. I was deriving it from the explanation that we wanted a measurement that represented the same power dissipation in a resistor as the DC voltage, and I got everything set up with the appropriate integrals, but I forgot the trig identity $\cos^{2}(\omega t) = \frac{1}{2}(1+\cos(2\omega t))$, and ran out of time before I could get it right. I did suggest that they look up the trig identity and finish the integration.
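For the record, a sketch of the derivation I was aiming for, with the missing step filled in: pick $V_{RMS}$ so that a DC voltage of that size dissipates the same average power in a resistor $R$ as the sinusoid $A\cos(\omega t)$, i.e. $\frac{V_{RMS}^{2}}{R} = \frac{1}{T}\int_{0}^{T} \frac{A^{2}\cos^{2}(\omega t)}{R}\,dt = \frac{A^{2}}{2R}$, since the $\cos(2\omega t)$ half of the identity integrates to zero over a full period $T = 2\pi/\omega$. Hence $V_{RMS} = A/\sqrt{2}$ for a sinusoid of amplitude $A$.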
I had hoped to get at least partway into Euler’s formula, complex sinusoids, and phasors, but the feedback took longer than I had expected. Those topics will have to wait until Wednesday or even Friday, since Wednesday we’ll want to do the modeling of the DC characteristics of the electret mic, and talk about how the mic works.
## 2014 April 5
### Hysteresis lab on KL25Z
Relaxation oscillator used in the hysteresis lab. The “variable capacitor” in this schematic is a person’s finger and a touch plate made from aluminum foil and packing tape.
I spent today writing code for the KL25Z board to act as a period or frequency detector for the hysteresis lab, where they build a relaxation oscillator using a 74HC14N Schmitt trigger inverter and use it to make a capacitance touch sensor (pictures of last year’s setup in Weekend work). I had written code for the Arduino boards last year, and I started by trying to do the same thing on the KL25Z, using the MBED online development system. The Arduino code used “PulseIn()” to measure pulse duration, and the MBED system does not have an equivalent function. I could have implemented PulseIn() with a couple of busy waits and a microsecond-resolution timer, but I decided to try using “InterruptIn” to get interrupts on each rising edge instead.
The basic idea of last year’s code (and the first couple versions I wrote today) was to determine the pulse duration or period when the board is reset, finding the maximum over a few hundred cycles, and using that as a set point to create two thresholds for switching an LED on or off. I got the code working, but I was not happy with it as a tool for the students to use.
The biggest problem is that the touch plate couples in 60Hz noise from the user’s finger, so the oscillator output signal is frequency modulated. This frequency modulation can be large compared with the change in frequency from touching or not touching the plate (depending on how big C1 is), so setting the resistor and capacitor values for the oscillator got rather tricky, and the results were unreliable.
I then changed from reading instantaneous period to measuring frequency by counting edges in a 1/60th-second window. That way the 60Hz frequency modulation of the oscillator gets averaged out, and we can get a fairly stable frequency reading. The elimination of the 60Hz noise allows me to use less hysteresis in the on/off decision for the LED, making the touch sensor more sensitive without getting flicker on transitions. The code worked fairly well, but I was not happy with the maximum frequency that it could handle—the touch sensor gets more sensitive if C1 is small, which tends to result in high frequency oscillations. The problem with the code was that MBED’s InterruptIn implementation seems to have a lot of overhead, and the code missed the edge interrupts if they came more often than about every 12µsec. Because I was interrupting on both rising and falling edges, the effective maximum frequency was about 40kHz, which was much lower than I wanted.
To fix the frequency limitation, I replaced MBED’s InterruptIn with my own interrupt service routine for PortD (I was using pin PTD4 as the interrupt input). With this change, I could go to about 800kHz (1.6e6 interrupts per second), which is plenty for this lab. If I wanted to go to higher frequencies, I’d look at only rising edges, rather than rising+falling edges, to get another factor of two at the high end. I didn’t make that change, because doing so would reduce the resolution of the frequency measurement at the low end, and I didn’t think that the tradeoff was worth it here.
The code is now robust to fairly large variations in the oscillator design. It needs a 20% drop in frequency to turn on the green LED, but the initial frequency can be anywhere in the range 400Hz–800kHz.
To make it easier for students to debug their circuits, I took advantage of having an RGB LED on the board to indicate the state of the program: on reset, the LED is yellow, turning blue once a proper oscillator input has been detected, or red if the oscillator frequency is not in range. When the frequency drops sufficiently, the LED turns from blue to green, turning back to blue when the frequency goes up again.
For even more debugging help, I output the frequency that the board sees through the USB serial connection every 1/60th second, so that a program like the Arduino serial monitor can be used to see how much the frequency is changing. I took advantage of that feature to make a plot of the frequency as the touch sensor was touched.
Plot of frequency of hysteresis oscillator, as the touch pad is touched three times. Note that the thresholds are very conservatively set relative to the noise, but that the sensitivity is still much higher than needed to detect the finger touches.
Overall, I think that the code for the KL25Z is better than what I wrote last year for the Arduino—now I have to rewrite the lab handout to match! I actually need to update two lab handouts this weekend, since week 3 will have both the hysteresis lab and the sampling and aliasing lab. Unfortunately, the features needed for those labs (trigger on rising and falling edges and downsampling) are not working in PteroDAQ yet.
Here is the code that I wrote for the frequency detector:
// freq_detector_own_isr
// Kevin Karplus
// 2014 Apr 5
// This program is intended to be used as a "capacitive touch sensor"
// with an external relaxation oscillator whose frequency
// varies with the capacitance of a touch.
// The program expects a periodic square wave on pin PTD4 with a frequency between
// about 400Hz and 800kHz. (LOW_FREQ_LIMIT and HIGH_FREQ_LIMIT).
// On reset, it displays a yellow light, then measures the frequency to store as the "off" frequency.
//
// If the frequency is out of range (say for a disconnected input), then the light is set to red,
// and the off frequency checked again.
// Otherwise the LED is turned blue.
//
// After initialization, if the program detects a frequency 20% less than the initial freq,
// it turns the light green,
// turning it blue again when the frequency increases to 90% of the original frequency.
//
// No floating-point is used, just integer arithmetic.
//
// Frequency measurements are made by counting the number of rising and falling edges
// in one cycle of the mains frequency (1/60 sec), giving somewhat poor resolution at lower
// frequencies.
// The counting time is chosen so that frequency modulation by the mains voltage is averaged out.
//
// This version of the code uses my own setup for the interrupt service routine, because InterruptIn has
// too much overhead. I can go to over 800kHz (1.6e6 interrupts/second) with this setup,
// but only about 40kHz (80e3) interrupts/sec with mbed's InterruptIn.
#include "mbed.h"
#define PCR_PORT_TO_USE (PORTD->PCR[4]) // pin PTD4 is the pin to use
#define MAINS_FREQ (60) // frequency of electrical mains in Hz
#define COUNTING_TIME (1000000/MAINS_FREQ) // duration in usec of one period of electrical mains
// off_frequency must be between LOW_FREQ_LIMIT and HIGH_FREQ_LIMIT for program to accept it
#define LOW_FREQ_LIMIT (400)
#define HIGH_FREQ_LIMIT (800000)
// on-board RGB LED
PwmOut rled(LED_RED);
PwmOut gled(LED_GREEN);
PwmOut bled(LED_BLUE);
#define PWM_PERIOD (255) // for the on-board LEDs in microseconds
// Set the RGB led color to R,G,B with 0 being off and PWM_PERIOD being full-on
void set_RGB_color(uint8_t R, uint8_t G, uint8_t B)
{
rled.pulsewidth_us(PWM_PERIOD-R);
gled.pulsewidth_us(PWM_PERIOD-G);
bled.pulsewidth_us(PWM_PERIOD-B);
}
// InterruptIn square_in(PTD4);
volatile uint32_t edges_counted;
uint32_t low_freq_threshold, high_freq_threshold; // thresholds for detecting frequency changes
extern "C"{
// interrupt routine that counts edges into edges_counted
void PORTD_IRQHandler(void)
{
edges_counted++;
}
}
// return the frequency for the square_in input in Hz
uint32_t frequency(void)
{
PCR_PORT_TO_USE &= ~PORT_PCR_IRQC_MASK; // disable interrupts on pin PTD4
edges_counted=0;
PCR_PORT_TO_USE |= PORT_PCR_ISF_MASK | PORT_PCR_IRQC(11); // clear interrupt for PTD4, and enable interrupt on either edge
wait_us(COUNTING_TIME);
PCR_PORT_TO_USE &= ~PORT_PCR_IRQC_MASK; // disable interrupts on pin PTD4
uint32_t freq=edges_counted*MAINS_FREQ/2;
return freq;
}
int main()
{
rled.period_us(PWM_PERIOD);
gled.period_us(PWM_PERIOD);
bled.period_us(PWM_PERIOD);
set_RGB_color(255,255,0); // set light to yellow
SIM->SCGC5 |= SIM_SCGC5_PORTD_MASK; // make sure port D has clocks on
PCR_PORT_TO_USE &= ~PORT_PCR_MUX_MASK; // clearing the MUX field
PCR_PORT_TO_USE |= PORT_PCR_MUX(1); // Setting pin as GPIO
FPTD->PDDR &= ~ (1<<4); // make sure pin is input pin
NVIC_EnableIRQ(PORTD_IRQn); // enable interrupts for port D
__enable_irq();
uint32_t off_frequency= frequency();
while ( off_frequency<LOW_FREQ_LIMIT || off_frequency>HIGH_FREQ_LIMIT)
{ // timed out. set color to red and keep trying
set_RGB_color(255,0,0);
printf("FREQ out of range: %luHz\n", off_frequency);
off_frequency= frequency();
}
uint32_t low_freq= 8*off_frequency/10; // 80% of off_frequency
uint32_t high_freq= 9*off_frequency/10; // 90% of off_frequency
printf("off= %luHz lo_thresh=%luHz hi_thresh=%luHz\n",off_frequency, low_freq, high_freq);
while(1)
{ uint32_t freq=frequency();
printf("%lu Hz\n",freq);
if (freq < low_freq)
{ // low frequency found, turn LED green
set_RGB_color(0,255,0);
}
else if (freq >= high_freq)
{ // high frequency found, turn LED blue again
set_RGB_color(0,0,255);
}
}
}
## 2014 April 4
### Third day of circuits class was low key
Filed under: Circuits course,Printed Circuit Boards — gasstationwithoutpumps @ 21:34
Today’s class was not content-rich, but a low-key decompression after yesterday’s too-long lab.
I started out taking some questions from the class, which were mainly about what to do in the design report.
I then discussed my ideas about what had gone wrong with yesterday’s lab that made it take so long, and both how I planned to fix the problem next year, and what we could do as a class to keep it from happening again this year. I particularly stressed the importance of doing the pre-lab work early, so that they could ask questions in the lecture portion of the class, rather than taking up valuable lab time. I also suggested that they do the writeups for the Tuesday lab before Wednesday’s class, so that they would have much less to write up after the Thursday lab—making the Friday deadline for the writeup feasible.
I asked the students for their ideas about what were problems with the lab, and they agreed that the soldering and installing PteroDAQ software took up almost 2 hours, so it would be best to separate that into its own lab period. They also brought up their frustration with the design problems I had given them: not so much the optimization for the lab, but the design exercise I had added: Design a circuit to convert a 1kΩ–3.3kΩ variable resistance sensor to a 1v–2v voltage output, with 1v for the 1kΩ resistance and 2v for the 3.3kΩ resistance. Use standard resistor values that you have in your kit. They were frustrated because they did not know how they were supposed to approach the problem.
This gave me an opportunity to explain what I was trying to do with problems like that. It was indeed entirely appropriate that they should have been uncomfortable with the problem, because I was trying to push them to think in new ways—to handle problems that were not completely laid out for them ahead of time, but where they had to struggle a bit to figure out how to formulate the problem. This is precisely what engineers have to do—to take problem statements that may be unclear or not precisely solvable, figure out how to formulate them more precisely, set up equations, solve them, check that the design they come up with makes sense, and (often) adjust the problem statement to reflect what is actually doable. (I didn’t say it, but in this case you have to accept a few percent error in the output voltage or the resistances in order to use standard values.) I promised them more uncomfortable problems in future, in an attempt to stretch them. They seemed a little more at ease with the difficulty they’d been having, once they realized that this was expected—I think some had been afraid that they were in over their heads and were panicking.
Another student mentioned having heard of an analogy between programming and engineering. I pointed out that programming was a form of engineering, and that all engineering required identifying problems, breaking them into subproblems, and solving the subproblems. Programming tends to involve many, many subproblems, with formal interfaces between them, but even the simple hardware we’d do in this course involves breaking problems into subproblems, using block diagrams (which I promised to talk more about later in the course).
Somewhere in the discussion of what engineers do, I brought up the example of the student who had presorted his resistors and taped them into a booklet. What the student had done was to identify a problem (that it would be hard to find the resistors he needed), come up with a solution, and implemented it. I pointed out that the technology he used (scotch tape) was available to them all, as was the notion of sorting. The engineering thinking comes in looking at something unpleasant (finding a sheet of resistors in a pile of 64 different sizes) as a design problem to solve, rather than something to get irked about or try to avoid. I’m hoping to get them thinking more like engineers during the course of the quarter—anticipating problems and looking for ways to solve them.
Generic voltage divider circuit.
I took more questions from the class—there were a few about voltage dividers, which I explained again in a different way, using analogies to similar triangles and giving them the voltage divider formula in the form $I = \frac{V_{out}}{R_{1}} = \frac{V_{in}}{R_{1}+R_{2}}$. I did not give the voltage divider in the “ground-reference” format shown to the left, but drew lines out horizontally from the nodes, and had the voltages indicated as distances between the lines (like in a mechanical drawing), to give them a more visual representation of voltages as differences. I also had them figure out what the voltage across R2 would be and how it would relate to the other voltages.
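(One way to see the answer, using the same labeling: the same current $I$ flows through both resistors, so $V_{R_2} = I R_2 = V_{in}\frac{R_2}{R_1+R_2} = V_{in} - V_{out}$; the two resistor voltages simply add up to $V_{in}$.)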
The more different ways they work with voltage dividers, the better they will internalize the concepts and be able to use them in designs.
The circuit I had given as an exercise, asking them to determine the output voltage V_out.
The circuit with explicit sources and voltmeter implied by the circuit I had given them.
A question also came up about what it meant to have 2 input voltages with no ground shown in a circuit (as I had given them as an exercise in the first lab handout). That is an excellent question—one that uncovered an assumption I had been making that I had never explained to them! I explained that what an “input voltage” meant was a shorthand way of drawing two voltage sources.
I’ll have to fix the handout next year to include this explicit explanation of a common shorthand—I’ve used it for so many decades that it simply hadn’t occurred to me that it wasn’t obvious. I apologized to the class for having skipped the explanation, and pointed out the importance of them asking questions, because otherwise I would never know where some omission like this was confusing them unnecessarily.
When they ran out of questions, I got in some new material, explaining the difference between “precision”, “repeatability”, and “accuracy”. The digital thermometers they used in lab were a good example—they had a precision of 0.1°C, were repeatable within a single thermometer to about ±0.2°C, but between thermometers were repeatable only to about ±2°C. The accuracy is unknown, since we did not have anything traceable to a temperature standard, but the ice water baths should have been close to 0°C, so the thermometers we used on Thursday were probably less than 1°C off, but the larger set we used on Tuesday included some that were 3°C or 4°C off. In the repeatability part of the talk, I managed to bring in the biologists’ notion of technical replicates (different measurements of the same sample) and biological replicates (different cultures or tissue samples), and why biological replicates show less repeatability.
I also used this as a chance to talk about the uselessness of ±0.1°C or error bars without an explanation of what the range means (standard deviation, 90% confidence interval, 3σ, 5σ, observed range, …), and the even greater uselessness of “significant figures” as a way of expressing uncertainty. I told them that I’d rather see 1.031±0.2 than 1.0 as a way of expressing the uncertainty in a measurement.
Towards the end of the 70-minute period, I got in a little discussion of AC voltages as time-varying voltages, and that we usually did analysis in terms of simple sinusoids, rather than the complex waveforms that we’d actually be measuring. I did assure them that, though there was a lot of mathematical machinery (Fourier analysis) that justified this way of doing things, the math was outside the scope of the class and I’d only be giving them intuitive ways of working with AC. I only got as far as giving them amplitude—telling them that I’d start with RMS voltage next time. (I’d originally hoped to explain RMS voltage today, but that would have taken another five minutes, and we were already 3 minutes over.)
Overall, I was fairly pleased with how today’s class went—the students are getting more comfortable asking questions and I’m getting a better sense of what they already know and what they need explained. Undoubtedly I’ll make more mistakes like not explaining the “hidden voltage source at inputs” convention, but I think we’ll recover from such mistakes a little quicker each time, as the students get more confident in asking for clarification.
https://math.stackexchange.com/questions/2792398/integer-solutions-to-xy-yx-1
# Integer solutions to $x^y - y^x = 1$
The equation $x^y - y^x = 1$ has integer solutions $(2,1)$, $(3,2)$ and $(k, 0)$ (for any $k > 0$). Are there any others? Based on the graph (https://www.desmos.com/calculator/qyxoemixli, see below) it doesn't look like it, but is there a simple way to prove it?
https://stats.stackexchange.com/questions/342462/hyperparameter-tuning-in-neural-networks
# hyperparameter tuning in neural networks
I was trying to fine-tune a neural network model for a multilabel classification problem. I was reading Jason Brownlee's article on the same. As per the article, there are a number of parameters to optimize, which are:
1. batch size and training epochs
2. optimization algorithm
3. learning rate and momentum
4. network weight initialization
5. activation function in the hidden layer
6. dropout regularization
7. the number of neurons in the hidden layer
The code snippet is as below.
# Imports assumed from the article's setup (Keras scikit-learn wrapper + scikit-learn grid search);
# create_model is the model-building function from the article (not shown here) and must
# accept neurons, learn_rate, momentum, dropout_rate, and weight_constraint as arguments.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
model = KerasClassifier(build_fn=create_model, verbose=1)
# define the grid search parameters
batch_size = [10, 20, 40, 60, 80, 100]
epochs = [10, 50, 100]
learn_rate = [0.001, 0.01, 0.1, 0.2, 0.3]
momentum = [0.0, 0.2, 0.4, 0.6, 0.8, 0.9]
weight_constraint = [1, 2, 3, 4, 5]
dropout_rate = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
neurons = [1, 5, 10, 15, 20, 25, 30]
param_grid = dict(neurons=neurons, batch_size=batch_size, epochs=epochs, learn_rate=learn_rate,
momentum=momentum, dropout_rate=dropout_rate, weight_constraint=weight_constraint)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X_train, y_train, validation_split=0.2)
Along with these, the number of hidden layers in the network is another parameter.
I was doing hold-out partitioning of the data and a grid search for fine tuning, but it takes a huge amount of time to compute, even on a GPU machine.
Here I specified all these parameters in the same grid. I was wondering whether we can simplify this, perhaps by tuning each parameter separately: for example, finding the optimal number of neurons first, then the batch size, etc. What other approaches could be followed to reduce the search time?
I was also reading Bengio's paper Practical Recommendations for Gradient-Based Training of Deep Architectures but could not get much out of it.
• Closely related: stats.stackexchange.com/questions/193306/… – Sycorax Apr 24 '18 at 14:34
• As you've discovered, a grid search over all possible configuration choices is usually too expensive to be exhaustive. Most people treat some aspects of the network as fixed (such as the activation function, initializer and optimizer) and only sequentially tune the others (for example, starting with a very small number of hidden neurons and then finding a good combination of learning rate, batch size and momentum before increasing the number of neurons and re-tuning the learning rate). – Sycorax Apr 24 '18 at 14:36
The link provided in @itdxer's comment is great. Based on that link, I am writing this answer. Hyperparameter optimization in neural networks is a tedious job, as there are many parameters to tune.
The possible approaches for finding the optimal parameters are:
1. Hand tuning (trial and error) - @Sycorax's comment provides an example of hand tuning. Here, parameters are chosen based on trial-and-error experiments and the experience of the user.
2. Grid Search - Here a grid is created from the parameter values, then all possible parameter combinations are tried and the best one is selected.
3. Random Search - Here, instead of trying all possible combinations as in Grid Search, only a randomly selected subset of the parameter combinations is tried and the best is chosen (see the sketch after this list).
4. Bayesian Optimization (Gaussian Process) - A Gaussian Process uses a set of previously evaluated parameters and the resulting accuracies to make an assumption about unobserved parameters. An acquisition function then uses this information to suggest the next set of parameters. (I do not understand this deeply; taken from this link.)
5. Tree-structured Parzen Estimators (TPE) - Each iteration, TPE collects new observations, and at the end of the iteration the algorithm decides which set of parameters it should try next. (I do not understand this deeply; taken from this link.)
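As a concrete illustration of the Random Search option, here is a minimal sketch. It reuses the parameter lists already defined in the question and assumes the same create_model, X_train and y_train; n_iter and random_state are arbitrary choices of mine, not values from the article.
# Random search: sample n_iter parameter combinations instead of trying the full grid
# (the grid in the question has 7*6*3*5*6*10*5 = 189,000 combinations).
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV
model = KerasClassifier(build_fn=create_model, verbose=1)  # create_model as in the question
param_distributions = dict(neurons=neurons, batch_size=batch_size, epochs=epochs, learn_rate=learn_rate,
                           momentum=momentum, dropout_rate=dropout_rate, weight_constraint=weight_constraint)
search = RandomizedSearchCV(estimator=model, param_distributions=param_distributions,
                            n_iter=20, n_jobs=1, random_state=42)
search_result = search.fit(X_train, y_train, validation_split=0.2)
print(search_result.best_score_, search_result.best_params_)
This evaluates only 20 sampled combinations (times the cross-validation folds) instead of the whole grid, which is where most of the time saving comes from.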
https://blender.stackexchange.com/questions/130142/my-mesh-crushes-after-apply-dyno-topology-when-going-back-to-object-mode
# My mesh crushes after applying dyno topology when going back to object mode
I have a mesh on which I did an extensive retopology using a brush with almost zero effect and dynamic topology turned on. That worked out great and produced a wonderful mesh (see first image). Then I left sculpt mode and applied the solidify modifier to this mesh, because I need to thicken it for 3D printing.
Unfortunately, this ruined my complete mesh again and I have no idea why (see second image). The major issue here is that my 3D printer just prints sh$ when I feed in this model. The resulting print has cracks upon cracks.
What am I doing wrong? Do I have to apply the retopology somehow explicitly to get it applied in object mode as well?
Edit: I just learned that the crushing of the mesh has nothing to do with the modifier. I removed it but still have the same problem. As soon as I leave the sculpt mode, the mesh looks horrible.
• I'd like to answer my own question; maybe it will help somebody in the future. I played around with the mesh for another few hours, but in the end I exported it to an OBJ file and then re-imported it after deleting it in Blender. It now looks good in all modes. – Norbert Jan 29 at 19:13
https://motls.blogspot.com/2005/08/what-string-theory-forbids.html?m=1
## Monday, August 15, 2005
### What string theory forbids
In this article, I would like to collect some of your examples of phenomena or situations that are completely compatible with low-energy effective field theory but do not arise from any known - and perhaps not even any unknown - stringy compactification. In other words, predictions of string theory that are independent of the vacuum selection mechanisms.
For example, pure N=2 supergravity is anomaly free and OK as an effective field theory, but string theory always predicts new multiplets, does it not? Also, the U(1)^{496} supersymmetric Yang-Mills theory coupled to type I supergravity in ten dimensions is anomaly free and nice, but it does not seem to describe any stringy background.
There are other similar properties related to the axion decay constants and the strength of various gauge couplings that apparently can't be anything you want; work in progress. However, I would love to hear some completely different examples of universal predictions of string/M-theory. Thanks.
1. Lubos:
Would you like to speculate on some New Physics here?
This CNN news should be interesting to you, because the B737 was flying from Cyprus to Athens and then on to the Czech Republic. It crashed near Athens.
The puzzling thing is that the regular flight from Cyprus to Athens is only 1.5 hours. The distance is so short that taking off, climbing to a proper altitude, descending, and preparing for landing probably take the bulk of the trip time; normally airplanes traveling such a short distance never even bother to climb to the regular 35,000-foot altitude, they just stay at 5,000 to 6,000 feet.
The odd thing is that most of the passengers were reportedly frozen SOLID when found on the ground. What kind of physical process can freeze a 150-pound human body from body temperature to "frozen solid" in less than one hour, given the expected temperature at 5,000 feet above ground during the summer? The air temperature outside the cabin is probably -5C to 0C, and not much lower. And presumably all these people were dressed in SOME clothes, and there were blankets, etc.
If you buy a piece of dead meat of one or two pounds and put it into the refrigerator, it will take more than 5 or 6 hours before it gets really "frozen solid". So what could have killed the passengers instantly and fast-frozen them "frozen solid"? And it took at least half an hour, on the summer ground, before rescuers reached some of the bodies, and they were still found to be "frozen solid", after thawing under the summer sun and in the fire for half an hour.
Moreover, it seems the F-16s reported seeing someone in the cockpit, probably a passenger still alive, desperately attempting to gain control of the airplane right before the crash. How could some be frozen solid already while others were still alive?
Absolutely amazing! They must have fallen into some wormhole, or there may be some new physics to explain it :-)
2. Curiously, the "frozen solid" part disappeared from the news web site. So it was probably just a rumor to start with.
Did anyone else read the "frozen solid" words before it was corrected?
More mysteries surfaced, but they are most likely all explainable.
https://www.mail-archive.com/ntg-context@ntg.nl/msg87629.html
# Re: [NTG-context] Font fallback scaling
Well that's even more odd. Is it visible in the attachment? Tested
with updated Acrobat Reader and Sumatra pdf viewer without any visible
\prime or \doubleprime, except in the footnote (where there is no
\wedge on the other hand)... Note that \prime is visible as a footnote
symbol, which is also in math mode. If I comment *all* of the
\definefallbackfamily rows, it shows in the math example (but then
it's Termes of course). With any of the \definefallbackfamily
uncommented, it won't show.
Should I disable the fallback for a specific character, i.e., remove
the characters from one of the fallback ranges? And how is that done?
I don't care if Termes is used for most of the symbols, but I want to
keep as much Garamond as possible in math mode, especially for letters
and numbers.
/MJO
> On 4/13/2018 12:34 PM, Magnus J wrote:
>
> > $D\prime = 0.98$ % <-- no \prime in output
> >
> > $D\doubleprime = 0.98$ % <-- no \doubleprime in output
> hm, i see primes here
>
>
> -----------------------------------------------------------------
> Hans Hagen | PRAGMA ADE
> Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
> tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
> -----------------------------------------------------------------
http://math.stackexchange.com/questions/266570/an-intuitive-understanding-for-m%c3%b6bius-equivalent-of-totient-function
# An intuitive understanding for Möbius equivalent of totient function?
$$\varphi(n)=\sum_{d\mid n}d\cdot \mu\left(\frac{n}{d}\right)=n\sum_{d\mid n}\frac{\mu(d)}{d}$$
This describes the totient function in terms of the Möbius function. I understand what the Möbius function does but I don't understand this derivation at all. Is there an easy way to understand why this is so?
I'm not looking for some lengthy mathematical proof, but rather an intuitive understanding. I've seen plenty of papers showing the proof, but I just don't get what's happening.
You may want to look at the Principle of Inclusion-Exclusion. – André Nicolas Dec 28 '12 at 17:33
@user51819: While I'm glad my answer was helpful, you don't have to accept my answer so quickly - there may be other perspectives on this identity (or simply better explanations of the one I described) that will be useful to you! – Zev Chonoles Dec 28 '12 at 17:33
@ZevChonoles Sorry, I figured it was a sufficient explanation. Would you perhaps be willing to contribute an example of what you mean to make things even clearer? Say phi(24)? – user51819 Dec 28 '12 at 17:49
"Möbius", or equivalently "Moebius", is the correct spelling. "Mobius" is different. – Michael Hardy Dec 28 '12 at 17:59
$\varphi(n)$ counts the number of integers $k$ between $1$ and $n$ that are relatively prime to $n$.
The term $d=n$ gives us our starting value of $n\cdot\mu(\frac{n}{n})=n\cdot 1=n$, i.e. all of the integers $1\leq k\leq n$.
For each prime $p$ dividing $n$, the term $d=\frac{n}{p}$ throws out the multiples of $p$ from among the numbers between $1$ and $n$; specifically, there are $\frac{n}{p}$ multiples of $p$ between $1$ and $n$, and the $d=\frac{n}{p}$ term in the sum contributes $\frac{n}{p}\cdot\mu(p)=-\frac{n}{p}$.
Unfortunately, the result is still not quite right - for any distinct prime divisors $p_1$ and $p_2$ of $n$, we counted as if we had to remove multiples of $p_1p_2$ twice, when of course we only remove them once. The terms $d=\frac{n}{p_1p_2}$ correct for this, because they contribute $\frac{n}{p_1p_2}\cdot \mu(p_1p_2)=\frac{n}{p_1p_2}$, the number of multiples of $p_1p_2$ between $1$ and $n$.
But then we must account for the numbers between $1$ and $n$ that we originally triple-counted, and which our correction above overcorrects for, etc. - the alternating nature of the Möbius function is what makes it do this the way we need.
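To make this concrete with the example requested in the comments: for $n=24$, only the divisors $d$ for which $24/d$ is squarefree contribute, namely $d = 24, 12, 8, 4$, so $$\varphi(24)=24\cdot\mu(1)+12\cdot\mu(2)+8\cdot\mu(3)+4\cdot\mu(6)=24-12-8+4=8.$$ The $-12$ throws out the multiples of $2$, the $-8$ throws out the multiples of $3$, and the $+4$ adds back the multiples of $6$ that were removed twice; indeed the eight integers $1,5,7,11,13,17,19,23$ are exactly those relatively prime to $24$.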
https://math.stackexchange.com/questions/431331/does-this-morphism-necessarily-give-rise-to-a-finite-extension-of-residue-fields
# Does this morphism necessarily give rise to a finite extension of residue fields?
Let $f:X\rightarrow Y$ be a morphism of finite type of locally Noetherian schemes. Let $x\in X$ and $y=f(x)$. Recall that $f$ is said to be unramified if the map of stalks $g:\mathcal O_{Y,y} \rightarrow \mathcal O_{X,x}$ satisfies $g(m_y)\mathcal O_{X,x}=m_x$, where $m_x$ denotes the maximal ideal of the local ring $\mathcal O_{X,x}$ (and $m_y$ that of $\mathcal O_{Y,y}$).
The condition on the maximal ideal shows that this map descends to a map $k(y)\rightarrow k(x)$. Is this necessarily a finite extension? I believe so, because taking appropriate affine neighborhoods, we can reduce to the case of a map of affine schemes with associated ring map $A\rightarrow A[t_1,\dots,t_n]$ for some elements $t_i$ of the ring of global sections of $X$. It seems to me after the localization and quotienting happens to get the map of residue fields, the second ring (now a field) should still be finitely generated over the first. Unfortunately, I don't know enough commutative algebra to do this rigorously. Is this correct? If so, what is the proof?
My motivation comes from studying étale morphisms, where the finiteness of the extension above is usually presupposed. However, in Qing Liu's book, on page 139, he seems to imply that the finiteness condition is redundant.
• After the quotienting you can ignore the localization (you localize by things that become invertible after you quotient anyway). – Qiaochu Yuan Jun 28 '13 at 3:08
• You are right, the phrasing is confusing. This was corrected in the errata. The finiteness is part of the definition. – user18119 Jul 18 '13 at 8:52
• @QiL'8 Thanks for the comment, and thanks for the wonderful book. – Potato Jul 18 '13 at 15:11
Consider the map $X =$ Spec $k[x] \to Y =$ Spec $k$, where $k$ is a field. The generic point of $X$ maps to the unique point of $Y$, and the map on local rings is just the inclusion of fields $k \subset k(x)$, so the map is unramified at $x$ according to your definition. But the map on residue fields is evidently not finite.
• Dear Professor, I actually forgot to say that the field extension should be separable. But now I'm very confused. In Qing Liu's book (page 139), he gives the hypotheses I gave above and requires "the (finite) extension of residue fields $k(y)\rightarrow k(x)$" to be separable. I assumed the parentheses implied that the finite assumption was redundant. You are saying this is not true? Thank you for your help. – Potato Jun 28 '13 at 5:16
• Dear Professor, Here is the exact passage. "Let $f:X\rightarrow Y$ be a morphism of finite type of locally Noetherian schemes. Let $x\in X$ and $y=f(x)$. We say that $f$ is unramified at $x$ if the homomorphism $\mathcal O_{Y,y}\rightarrow \mathcal O_{X,x}$ verifies $\mathfrak m_y \mathcal O_{X,x} = \mathfrak m_x$ (in other words, $\mathcal O_{X,x}/\mathfrak m_y \mathcal O_{X,x} = k(x)$), and if the (finite) extension of residue fields $k(y)\rightarrow k(x)$ is separable." – Potato Jun 28 '13 at 5:25
https://gigaom.com/2007/11/05/forrester-71b-in-online-video-ads-by-2012/
# Forrester: $7.1B in Online Video Ads by 2012
Forrester has the most bullish estimate yet for online video advertising spending in the U.S.: $7.1 billion by 2012. That’s after a 72 percent compound annual growth rate for the next five years, building on total spending of $471 million this year. That tops eMarketer’s July estimate that online video advertising would be worth $4.3 billion in 2011 — by Forrester’s growth curve (see chart above), revenues would already have passed the $5 billion mark by then. Another contrast: eMarketer’s estimate depended on monster growth early on — an 89 percent growth rate to $775 million in 2007 — followed by 40 percent growth through the next five years.
Forrester analyst Shar VanBoskirk praised the emergence of “customer-centric” ad formats like the overlays used by VideoEgg, YouTube and others, which, rather than forcing an ad in their video streams, allow viewers to decide if and when to pause a video to watch an ad. She wrote,
Vanguard marketers and brand mainstreamers, including Nike, Disney, and Nestle, are already advertising with VideoEgg — and with good reason. YouTube tests show that video overlays generate five to 10 times more clicks than banner ads.
Forrester breaks down its forecast by where the revenue’s coming from — retail and wholesale trade will be the largest category, says the firm — but what about where the revenue goes? At the Daily Reel conference we attended in Hollywood last week, YouTube’s head of monetization, Shashi Seth, estimated that $5 million to $10 million in revenue shares were paid out this year to users for creating video content. He said he expected that number to grow significantly:
That is where I think Internet video is headed — catering to the long tail, not only from a consumption perspective, but from an advertising and marketing perspective.
Others are less enthusiastic about users and/or non-professionals getting a piece of the action. Screen Digest, which hasn’t been particularly enthusiastic about video revenues in general, thinks user-generated video will be worth $956 million in 2011. iSuppli is putting its money on professionally produced video, predicting revenues will hit $5.9 billion in 2011.
Thanks to Contentinople for the link to the Forrester report, which we had missed when it came out last month. The chart and quotes above are taken from a courtesy copy of the report sent to us by Forrester.
http://mathhelpforum.com/calculus/36602-integration.html
# Math Help - Integration
1. ## Integration
integrate sin3x/cos3x+sin3x from -pi by two to +
2. Originally Posted by sareeta
integrate sin3x/cos3x+sin3x from -pi by two to +
$\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}\frac{\sin(3x)}{\cos(3x)}+ \sin(3x)dx$
Rewrite this as $\int\tan(3x)+\sin(3x)dx$
we must have the derivative of the inner quantity outside before we can do anything
so we have $\frac{1}{3}\int{3\tan(3x)}dx+\frac{1}{3}\int{3\sin (3x)}dx$
then by standard integration techniques we get
$\int\tan(3x)+\sin(3x)dx=\frac{-1}{3}\ln|\cos(3x)|-\frac{1}{3}\cos(3x)$
now just evaluate this from -pi/2 to pi/2
3. Originally Posted by sareeta
integrate sin3x/cos3x+sin3x from -pi by two to +
First I need to give thanks to the inte-killer for this trick
$\int\frac{\sin(3x)}{\cos(3x)+\sin(3x)}dx=\frac{1}{2}\int\frac{2\sin(3x)}{\cos(3x)+\sin(3x)}dx=$
$\frac{1}{2}\int\frac{2\sin(3x)+\cos(3x)-\cos(3x)}{\cos(3x)+\sin(3x)}dx=$
$\frac{1}{2}\int\frac{\sin(3x)+\cos(3x)}{\sin(3x)+\cos(3x)}dx+\frac{1}{2}\int \frac{\sin(3x)-\cos(3x)}{\sin(3x)+\cos(3x)}dx=$
$\frac{1}{2}\int dx -\frac{1}{2}\int \frac{-\sin(3x)+\cos(3x)}{\sin(3x)+\cos(3x)}dx$
making a u sub on gives this as an antiderivative
$\frac{1}{2}x-\frac{1}{6}\ln|\sin(3x)+\cos(3x)|$
I'm not sure what you mean by your limits of integration.
Good luck.
4. Originally Posted by sareeta
integrate sin3x/cos3x+sin3x from -pi by two to +
There is a slicker solution since its a definite integral
Let $f(x) = \frac{\sin(x)}{\cos(x)+\sin(x)}$
So we want $I = \int_{-\frac{\pi}2}^{\frac{\pi}2} f(3x) \, dx$
Lets u-sub $u = 3x$ and break the integral into two pieces,
$I_1 = \frac13 \int_{-\frac{3\pi}2}^{0} f(u) \, du$ and $I_2 = \frac13 \int_{0}^{\frac{3\pi}2} f(u) \, du$
$I_1 = \frac13 \int_{-\frac{3\pi}2}^{0} f(u) \, du = \frac13 \int_{-\frac{3\pi}2}^{0} f\left(-\frac{3\pi}2 - u\right) \, du = \frac13 \int_{-\frac{3\pi}2}^{0} (1 - f(u)) \, du$
$\Rightarrow 2I_1 = \frac13 \int_{-\frac{3\pi}2}^{0} du = \frac13 \cdot \frac{3\pi}2 \Rightarrow I_1 = \frac{\pi}4$
Clearly $I_2 - I_1 = 0 \Rightarrow I_1 = I_2 = \frac{\pi}4$
So $I = I_1 + I_2 = \frac{\pi}2$
5. Hello,
Originally Posted by TheEmptySet
First I need to give thanks to the inte-killer for this trick
$\int\frac{\sin(3x)}{\cos(3x)+\sin(3x)}dx=\frac{1}{2}\int\frac{2\sin(3x)}{\cos(3x)+\sin(3x)}dx=$
$\frac{1}{2}\int\frac{2\sin(3x)+\cos(3x)-\cos(3x)}{\cos(3x)+\sin(3x)}dx=$
$\frac{1}{2}\int\frac{\sin(3x)+\cos(3x)}{\sin(3x)+\cos(3x)}dx+\frac{1}{2}\int \frac{\sin(3x)-\cos(3x)}{\sin(3x)+\cos(3x)}dx=$
$\frac{1}{2}\int dx -\frac{1}{2}\int \frac{-\sin(3x)+\cos(3x)}{\sin(3x)+\cos(3x)}dx$
making a u sub on gives this as an antiderivative
$\frac{1}{2}x-\frac{1}{\color{red}6}\ln|\sin(3x)+\cos(3x)|$
I'm not sure what you mean by your limits of integration.
Good luck.
There's a 3 missing (red)
https://d12frosted.io/posts/2017-06-11-Git-conditional-configurations.html
# Git: conditional configurations
June 11, 2017
More than five years ago, Jeff King added the include directive to the git config subprogram. This directive allows splitting a configuration file across multiple files. A nice feature by itself, it was boosted in Git 2.13.0 by the addition of conditional includes - one of my favourite features of this release, and also something that makes me sad.
Consider a simple example of ~/.gitconfig file.
# content of ~/.gitconfig
[core]
editor = emacsclient
[user]
useconfigonly = true
[include]
path = ~/.gitconfig_machine_specific
path = ~/.gitconfig_sensitive
# include ~/.gitconfig_personal only when active repository is under
# ~/Projects/personal/
[includeIf "gitdir:~/Projects/personal/"]
path = ~/.gitconfig_personal
# include ~/.gitconfig_work only when active repository is under
# ~/Projects/work/
[includeIf "gitdir:~/Projects/work/"]
path = ~/.gitconfig_work
As you can see, using the include and includeIf directives allows you to achieve the following things.
1. File ~/.gitconfig can be shared among different computers as machine specific configurations are included as a separate file (~/.gitconfig_machine_specific).
2. File ~/.gitconfig can be made public as all sensitive configurations are included as a separate file (~/.gitconfig_sensitive).
3. You can set some configurations based on git repository location (e. g. personal or work).
Please note that these directives can be used at any git configuration level (system level, user level, repository level, or individual command invocation), which makes them even more powerful.
Also, note that the only condition implemented now is gitdir, which matches against the repository path.
# The same but different
Almost a year ago I created a tool named git-config-manager. The idea is simple: you have several configuration schemes which you may apply on a repository level. For example, you may use one name, email address and commit sign key when working on open source projects and a totally different identity when committing to your day job repository. You just ask the tool to set one or several schemes in the current repository and it loops through key-values and sets them using git config. Nothing fancy.
With git-config-manager you define schemes in a single json file.
{
"personal": {
"user": {
"name": "Drunk Monkey",
"email": "drunk.monkey@protonmail.ch",
"signingkey": "A1B2C3D4"
},
"pull": {
"rebase": true
}
},
"work": {
"user": {
"name": "Mr. Tie",
"email": "tie@corp.me",
"signingkey": "E5F6G7H8"
},
"pull": {
"rebase": null
}
}
}
When you ask git-config-manager to set configurations from the personal scheme ($ git-config-manager set personal), it takes the definition and calls git config for every configuration key-value pair. This is really straightforward.
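To make that loop concrete, here is a minimal Python sketch - not the actual git-config-manager source, just an illustration of reading one scheme from a JSON file like the one above and shelling out to git config. Treating null as "unset this key" is my assumption, and the file name schemes.json is made up:

import json
import subprocess

def apply_scheme(schemes_path, scheme_name):
    """Apply one configuration scheme to the current repository.

    Reads the scheme from the JSON file and shells out to `git config`
    once per key, mirroring the loop described above. A null value is
    treated here as "unset this key".
    """
    with open(schemes_path) as f:
        scheme = json.load(f)[scheme_name]
    for section, options in scheme.items():
        for key, value in options.items():
            name = "{}.{}".format(section, key)
            if value is None:
                # The key may not exist yet, so don't fail on a non-zero exit.
                subprocess.run(["git", "config", "--unset", name], check=False)
            elif isinstance(value, bool):
                # JSON true/false must become the lowercase strings git expects.
                subprocess.run(["git", "config", name, "true" if value else "false"], check=True)
            else:
                subprocess.run(["git", "config", name, str(value)], check=True)

apply_scheme("schemes.json", "personal")   # roughly what `git-config-manager set personal` does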
But the simple approach has several drawbacks. First of all, you have to manually call git-config-manager set scheme (from the terminal or using the magit extension), while with the includeIf directive everything is handled automatically. Secondly, if you wish to change a configuration value in all affected repositories (e.g. update the signing key in all work projects), the git configuration system allows you to make the change in one place (in the ~/.gitconfig_work file) and it will be automatically propagated to all affected repositories. With git-config-manager it's not that simple, because it only sets values when it's asked to.
# What’s the point?
When I learned about the includeIf directive I thought that the time had come to deprecate git-config-manager. But then I realised that while they overlap in use cases, I might implement several new features I've been thinking about lately.
|
2020-04-09 17:03:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28728270530700684, "perplexity": 5842.714299635113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371861991.79/warc/CC-MAIN-20200409154025-20200409184525-00128.warc.gz"}
|
https://kobi.one/2021/10/31/random-oracles.html
|
### Random Oracles in Cryptography
31 Oct 2021 - Kobi
This article describes the role and sources of randomness in cryptographic protocols
The post is co-authored with Tom.
# Background
Randomness is ubiquitous across the cryptography underpinning blockchain infrastructures — in the Merkle trees securing state, in the random-selection mechanism in proof of work, in the selection of committees in proof of stake, and in the zero knowledge proofs underpinning both scaling and privacy.
Hash functions are the most important source of randomness in these protocols.
# What’s a Hash Function?
A hash function takes in a ‘message’ (this could be any data — the state of a blockchain, the contents of a transaction, etc) and ‘spits out’ a digest of a fixed length, in some predefined ‘space’ of numbers. This could be all numbers up to $$2^{256}$$, or it could be elements of a prime field (common in zero knowledge proofs), or even sometimes elliptic curve points, which are numbers of a different kind.
The point is, the ‘magnitude’ of the output given the input should be completely unpredictable, until the hash is actually computed.
Hashes are typically seen in two settings:
1. As a short ‘unique fingerprint’ on data (typically the ‘state’ of the blockchain, or the contents of a transaction), and
2. A way of replacing questions that should be supplied by the verifier in certain ‘interactive protocols’. This is a common feature of zero knowledge proofs. The security of interactive protocols depends on a sequence of questions and answers between the ‘verifier’ and the ‘prover’. Hashes can be used to remove the need for a direct back-and-forth dialogue between the prover (usually the spender of money) and the verifier (the blockchain nodes checking all transactions are correct).
# Property #1 for Hashes: Randomness
A core property of hash functions is therefore they scramble input data in a seemingly random fashion. When new hash functions are proposed, they must be subjected to rigorous statistical tests to satisfy the cryptography community that they do indeed exhibit their claimed randomness. Any failure to do so would produce features that expose protocols relying on them to potential attack.
This randomness is useful in many contexts.
Pseudorandom Function Families (PRFs) are functions that, similar to hash functions, map a message to an output, but additionally have a key as an input. PRFs can be built from these random-looking hashes, and, for example, can be used to build message authentication codes, to protect message authenticity and integrity. These hashes can also help when converting interactive proofs to non-interactive. If during the interactive proof a verifier sends uniformly random elements to the prover as challenges, the Fiat-Shamir heuristic can use a hash to replace the interactive random challenge with a hash of the proving process messages so far, as this should produce a sufficient random element for this purpose.
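As a rough illustration of the Fiat-Shamir step (a sketch only - real implementations need careful domain separation and encoding, and this is not any particular library's API), the verifier's random challenge gets replaced by a hash of the prover's messages so far:

import hashlib

def fiat_shamir_challenge(transcript_parts, modulus):
    """Derive a challenge from the prover's messages so far (a toy sketch).

    transcript_parts: list of bytes objects (the transcript so far).
    modulus: size of the challenge space, e.g. a prime field order.
    """
    h = hashlib.sha256()
    for part in transcript_parts:
        # Length-prefix each message so the concatenation is unambiguous.
        h.update(len(part).to_bytes(8, "big"))
        h.update(part)
    return int.from_bytes(h.digest(), "big") % modulus   # small modulo bias ignored here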
# Property #2 for Hashes: Collision Resistance
Additionally, we want it to be the following:
- Collision-resistant - hard to find two different messages that produce the same result
- Pre-image resistant - given an output, hard to find an input that produces it
- Second pre-image resistant - given an input and output, hard to find another input that produces the same output
The desire for collision-resistance is direct - if we digest two different messages into the same hash, we can’t distinguish them anymore. Therefore, we want the short hash digest to be as unique as possible for different messages, in the sense that we can’t maliciously craft messages that have the same hash. As a rule of thumb, collision resistance is usually easier to attack than pre-image resistance. This is because in collision resistance you have the freedom to choose any two inputs, whereas in both variants of pre-image resistance you are restricted to a specific output. Birthday attacks immediately reduce the complexity of finding a collision to the square root of the number of possible outputs. Generally, when a collision is found, it’s a strong signal that the hash function should be phased out, as it suggests worse attacks may follow.
Another common place where it appears is some signature schemes: in Schnorr signatures we require a “hash to field” and in BLS signatures we require a “hash to group”. In both cases, this hash provides us with a unique element from the target set that is unrelated to other results of the hash. For example, we don’t want it to be the case that if you call the hash function $$H$$ on $$m_1$$ and $$m_2$$, then you can also somehow deduce $$H(m_3)$$.
# Provable Security and the “Standard Model”
Hashes, then, sound super useful! What’s the problem?
While useful for protocol designers, introducing hash functions to protocols makes life hard for cryptographers, when they approach proving security for the protocols. Cryptographers like to work with reductions - you formulate a security property for your protocol, and show that if that security property is broken by an adversary, then it implies that we also solved an unsolvable problem, an assumption.
Proving security of protocols using computational hardness assumptions is considered working within the Standard Model. This means that you assume some problems are hard, and try to prove that your protocol’s security properties can be explained to be as hard as this problem. One such problem that is widely known is the Discrete Logarithm problem in a group: given $$g$$ and $$h$$, find $$x$$ such that $$g^x = h$$. If someone wants to break your protocol’s security properties, and you prove that in order to break it you would have to find a Discrete Logarithm, then you’re in a really good shape.
A concrete example is the case of ElGamal encryption, where we have the IND-CPA property that roughly states that even if you know a ciphertext is an encryption of one of a set of plaintexts you’ve chosen, you will not be able to know which of the plaintexts was chosen. The assumption that is solved if that property is broken is DDH - Decisional Diffie Hellman, which states that $$(g^x, g^y, g^{xy})$$ is indistinguishable from $$(g^x, g^y, g^z)$$, given random $$x,y,z$$ and a group generator $$g$$.
Hashes throw a wrench into that and constrain the ability of cryptographers to make such reductions, and without an alternative modeling of the protocol, we would be left with “we’ve tried to break this protocol for some time and couldn’t”, which feels unsatisfactory.
# Random Oracles to the Rescue
Random oracles can solve this.
They provide a way to do a security-reduction proof, to gain confidence in the security of the protocol. When working within the Random Oracle Model, we replace each hash function with an idealized Random Oracle:
- When a query is made on a new message, a random response from the target set is returned, chosen uniformly from it.
- When a query is made on a message that has been queried before, the same response as before is returned.
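On paper this oracle is just a book-keeping device, but it is easy to picture as lazily sampled code (a sketch; the class name is made up, and a real proof only works with this object abstractly):

import os

class RandomOracle:
    """Lazily sampled random oracle from byte strings to integers in [0, 2**256)."""

    def __init__(self):
        self._table = {}

    def query(self, message: bytes) -> int:
        if message not in self._table:
            # First query on this message: sample a fresh uniform response.
            self._table[message] = int.from_bytes(os.urandom(32), "big")
        # Repeated queries return the cached response.
        return self._table[message]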
# “Programmable” Random Oracles
This introduces the concept of programmable random oracles, in which we allow some of the participants to program the oracle - to return values of their choosing, as long as those values appear random to the receiver. It may sound a bit weird, since the functions we would use in reality don’t have that feature. But given that it allows cryptographers to provide positive results on the security of some protocols, and that the consumer of the oracle doesn’t see the difference - they still see random values - it seems like a reasonable trade-off to make.
Specifically, we can allow an adversary that wants to break an assumption, let’s say DDH, to simulate the environment of another adversary that claims to know how to break a security property of the protocol.
By programming the random oracle, we can use the responses from the adversary whose environment we simulate to then break the assumption.
There are a few other different variants, providing different levels of confidence - for example, non-programmable random oracles.
# It’s not all Roses
Alright, this sounds really powerful. What’s the catch? The catch is that random oracles don’t really exist. Good hashes only approximately behave like them.
Take one of the most famous hashes - SHA-256. Try inputting a few numbers: SHA(1), SHA(2), etc. Run a few statistical tests to see in which ‘regions’ of the number grid up to $$2^{256}$$ the outputs land - they look random.
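For instance, with Python's standard hashlib module:

import hashlib

for i in range(1, 4):
    # Nearby inputs (1, 2, 3) land in wildly different parts of the 2**256 output space.
    print("SHA-256({}) = {}".format(i, hashlib.sha256(str(i).encode()).hexdigest()))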
But true random functions don’t exist, almost by definition — a function is algorithmic, deterministic. Even worse, since random oracles don’t really exist, there are pathological examples where a scheme is proven to be secure in the random oracle model but is insecure when instantiated with any hash function.
So is this at all useful? It gives a bit of confidence and it helps catch bad design mistakes in other parts of the protocol. The intuition is that if your hash function behaves like a random oracle, then the proof somewhat applies to it. If it doesn’t, then the proof doesn’t even matter. It’s also sometimes the best we can get for some protocols, so it’s much better than nothing.
You may also wonder - what are good instantiations of random oracles? When the source and target spaces are just sequences of bytes, they can more easily be built directly from known hash functions, like SHA256. When we’re talking about prime fields and elliptic curve groups it’s harder and there are many pitfalls on the way.
# What’s Next?
If you’re curious to see how programmable random oracles can be used to prove security for the famous BLS signature scheme, read the appendix!
Lastly, if you want to try to break a protocol which uses a subtly bad instantiation of a random oracle, check out the first puzzle of zkhack :)
# Appendix
## BLS security proof
Let’s look at a shortened version of a concrete example, BLS signatures (if you need some background on BLS signatures, I recommend reading up on them first).
Assumption - co-CDH: Given $$g_1^a \in \mathbb{G}_1$$, $$g_2^a \in \mathbb{G}_2$$ and $$h \in \mathbb{G}_2$$, computing $$h^a \in \mathbb{G}_2$$ is hard. We are given $$g_1^a, g_2^a, h$$ as the challenge.
Security property - Existential Unforgeability, roughly stating that given a public key $$pk$$, even if you see a bunch of signatures on different messages, you will not be able to forge a signature on another message without access to the secret key.
Proof method sketch:
We build an adversary $$A$$ that breaks co-CDH.
The adversary $$F$$ who claims they can break Existential Unforgeability is given a public key $$pk=g_1^a \in \mathbb{G}_1$$, the same one that is received in the challenge (neither $$A$$ nor $$F$$ knows $$a$$), and makes a few random oracle queries while they’re still in the first phase of getting signatures on different messages. They also make signature queries. We know that they will make more random oracle queries than signature queries, since they have a phase where they request signatures, which implies both a random oracle query and a signature query, and a phase where they forge a signature, which means they query the random oracle but don’t request a signature.
We guess beforehand an index of a message where $$F$$ queries the random oracle but does not request a signature. This would also be the message for which $$F$$ forges a signature.
In those queries where both the random oracle is queried and a signature is requested, we have to provide answers that would pass the BLS verification check $$e(pk, H(m)) = e(g_1, \sigma)$$ and also look random. We choose a random $$r$$ and compute $$H(m) = g_2^r, \sigma = (g_2^a)^r$$. Note that we can do that without knowing $$a$$ and this still passes the verification check.
In the query where $$F$$ only queries the random oracle, we embed the rest of the challenge that we got: $$H(m) = g_2^r \cdot h$$. Note that in this case we couldn’t compute $$\sigma$$ if one were requested - since we don’t know the entire exponent, it’s not only $$r$$ anymore. Also note that this still appears random, since $$g_2^r$$ is random.
Since $$F$$ knows how to forge a signature $$\sigma$$ that satisfies $$e(pk,H(m)) = e(g_1, \sigma) \Rightarrow e(g_1^a, g_2^r \cdot h) = e(g_1, \sigma)$$, we have that $$\sigma = (g_2^r \cdot h)^a$$. Taking $$\frac{\sigma}{(g_2^a)^r}$$, we get $$h^a$$ as desired!
## Broken hash-to-curve
Recall that we’re looking for a hash that takes a message and gives a group element. One particular example I’ve seen in the wild which is broken is doing it in two steps: hash to field, $$H(m)$$, and then use it as an exponent on the generator: $$g_2^{H(m)}$$. This is broken since given a signature, being $$g_2^{H(m)x}$$, where $$x$$ is the secret key, we can transform it into a signature for $$m'$$ without knowing $$x$$, by computing $$(g_2^{H(m)x})^{H(m)^{-1}H(m')} = g_2^{H(m')x}$$.
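To see the attack in action without real pairings, here is a toy Python sketch in a small prime-order subgroup of the integers modulo a prime (all numbers, names and the hash-to-field construction are made up for illustration; the algebra is the same as on the actual curve group):

import hashlib

q = 1019            # prime order of the toy subgroup (illustration only)
p = 2 * q + 1       # 2039, also prime, so Z_p* has a subgroup of order q
g = 4               # 2^2 mod p generates that order-q subgroup

def hash_to_field(message: bytes) -> int:
    """A toy 'hash to field': an integer in [1, q-1], always invertible mod q."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (q - 1) + 1

def broken_hash_to_group(message: bytes) -> int:
    """The broken construction: hash to field, then exponentiate the generator."""
    return pow(g, hash_to_field(message), p)

x = 777             # the victim's secret key

def sign(message: bytes) -> int:
    return pow(broken_hash_to_group(message), x, p)   # g^(H(m) * x)

m1, m2 = b"pay mallory 1 coin", b"pay mallory 1000 coins"
sig1 = sign(m1)     # the only signature the attacker ever sees

exponent = (pow(hash_to_field(m1), -1, q) * hash_to_field(m2)) % q   # needs Python 3.8+
forged = pow(sig1, exponent, p)
assert forged == sign(m2)   # a valid signature on m2, produced without knowing x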
|
2021-11-26 23:42:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5643662214279175, "perplexity": 793.8614402315566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358074.14/warc/CC-MAIN-20211126224056-20211127014056-00599.warc.gz"}
|
https://scholarworks.alaska.edu/handle/11122/4408
|
### Recent Submissions
• #### Hydrodynamics of downstream pointed guidevanes: a case study of the Hess Creek meander bend realignment
The hydrodynamics of downstream pointed guidevanes installed to realign an eroding meander bend upstream of the Trans-Alaska Pipeline bridge is studied. The bridge is located at Hess Creek, 137 km north of Fairbanks, Alaska. Effects of the downstream pointed vanes on bed form, erosion, longitudinal and transverse slopes, three dimensional velocity profiles, flow patterns, and other hydraulic parameters for high and low flows are compared and analyzed. Six years after installation of the vanes the realigned thalweg remains in its original design location. The longitudinal bed profile changed from a dominant continuous pool typical of natural meander bends on gravel stream beds to a series of pool riffles. However, there is minimal change in maximum scour depth between post- and pre-installation of the vanes. Secondary or transverse current patterns which cause scour or erosion on the outer bank are severely disrupted due to interference caused by the vanes. There is a consistent weak counter current in reaches between the vane stems due to flow separation caused by expansion of flow area. This condition was more dominant during low flows when the vanes were not completely submerged. From the tip of the vanes to the inner bank a more dominant transverse and streamwise current was measured. Location of the original eroding outer bank remains unchanged since installation of the vanes. This indicates that the vanes have to this point effectively realigned the meander bend and arrested additional lateral movement of the meander.
• #### Creep of grouted anchors in ice-rich silt
Creep is a critical consideration for designing anchors in ice-rich silt. In this study, creep was evaluated for grouted anchors in ice-rich silt by laboratory tests. A total of nineteen staged-load pullout tests were conducted on smooth grouted anchors. The anchors were loaded until either a tertiary creep stage or the capacity of the load system was reached. Soil temperatures evaluated in this study ranged from 32 °F to 26.6 °F. It was found that the onset of tertiary creep for smooth anchors was around 0.03 inches, which was much smaller than that suggested in the literature for rough anchors (1.0 inch). Given the same shear stress and soil temperature, the observed creep displacement rates for smooth anchors were greater than those given by the existing design guidelines for rough anchors. A new creep model was proposed in which soil temperature was included as an additional variable. Model parameters were developed as a function of soil temperature and moisture contents by using the test data. The model predictions were compared with the laboratory tests. It was found that the creep displacement rates decreased with the decreasing of soil moisture contents and temperature. Based on the analysis of laboratory test data, design charts were provided to give the allowable pullout capacity for smooth anchors in ice-rich silt.
• #### Attenuation of the herbicide glyphosate along railroad corridors in Alaska
Following the application of glyphosate in the formulation of AquaMaster® at two contrasting sub-arctic zones along the railroad corridor in Alaska, attenuation of the herbicide glyphosate was investigated. Study sites were established in continental and coastal zones. Glyphosate soil attenuation was similar to temperate regions during the growing season but exhibited an extended persistence during the winter months. Although glyphosate microbial degradation likely slowed during winter, both sites showed evidence of slight glyphosate degradation during the winter months. The coastal site attenuated more rapidly than the continental site which is presumably due to increased rainfall relative to the continental site. Glyphosate attenuation at the coastal site was likely driven by dispersion while microbial degradation was responsible for the attenuation of glyphosate at the continental site. Movement to subsurface soils (10-25 cm) at low concentrations was observed at both sites with slightly more transport at the coastal site than the continental site. Glyphosate transport to groundwater along railroad corridors was not conclusive. Vegetation cover reduction was reduced at the continental site but could not be determined at the coastal site.
Western Alaska lacks gravel suitable for construction of roads and airports. As a result, gravel is imported, at a cost of between $200 and $600 per cubic yard, to fill transportation construction needs. In an effort to reduce these costs, the Alaska University Transportation Center (AUTC) began searching for methods to use local materials in lieu of imported gravel. The approach discussed in this thesis uses geofibers and chemical additives to achieve soil stabilization. Geofibers and chemical additives are commercially available products. The goal of the research presented in this thesis is to test the impact of addition of two geofiber types, six chemical additives, and combinations of geofibers with chemical additives on a wide variety of soil types. California Bearing Ratio (CBR) testing was used to measure the effectiveness of the treatments. Soils ranging from poorly graded sand (SP) to low plasticity silt (ML) were all effectively stabilized using geofibers, chemical additives, or a combination of the two. Through the research conducted a new method of soil stabilization was developed which makes use of curing accelerators in combination with chemical additives. This method produced CBR values above 300 for poorly graded sand after a seven day cure.
• #### Characterizing the berthing load environment of the Seattle ferry teminal, Bremerton slip
This manuscript characterizes and presents design recommendations for berthing demands on ferry landing structures. There is a lack of research focused on the berthing load demand imparted by ferry class vessels, therefore the load criteria used for design is often based on a number of assumptions. This study involved a one-year field study of the structural load environment of wingwalls at the Bremerton Slip of the Seattle Ferry Terminal, located in Elliott Bay adjacent to Seattle, Washington. Measurements of marine fender displacement, vessel approach distance with respect to time, and. pile strain were used to determine berthing demands. Berthing event parameters were characterized using the Python programming language, compiled, and analyzed statistically. Probability theory was used to provide design value recommendations for berthing energy, force, approach velocity, berthing factor, and berthing coefficient. This study presents a number of engineering design aids intended to quantify the berthing load environment of wingwalls in the Washington State Ferry System.
• #### Evaluating dust palliative performance and longevity using the UAF-DUSTM
Fugitive dust emissions from gravel surfaces such as unpaved roads and airport runways are a major source of particulate matter pollution in the environment. Fugitive dust emissions impact community health, decrease visibility and contribute to surface degradation. Chemical additives, also known as dust palliatives, are often used to reduce these dust emissions. Although these products have been widely used, little is known about their effectiveness and longevity. There is currently no standard test method to quantify the reduction in fugitive dust emissions provided by dust palliatives. The UAF-DUSTM was developed to provide a consistent test method for determining the effectiveness and longevity of dust palliative applications. Dust palliatives applications throughout Alaska were monitored for several years. The results show that dust palliatives can significantly reduce particulate matter emissions and be effective for several years.
• #### The impact of a fluctuating freezing front on ice formation in freezing soil
Frost heave is typically associated with the formation of segregation ice in fine-grained soil. Coarse-grained soil is generally considered to be non-frost susceptible. Field observations and laboratory experiments show that coarse-grained soil can be extremely ice-rich in specific conditions. Previous studies have shown that oscillation of the frozen-unfrozen boundary can lead to the formation of ice by a mechanism different from the segregation ice mechanism. Conditions related to the formation of ice in coarse-grained soil were investigated using modern laboratory techniques. Fourteen tests were conducted on five soil types. The thickness of soil subjected to freeze-thaw cycles was varied and controlled by the magnitude and duration of applied soil temperatures. The thickness of the ice formed increased when the sample drainage was limited or prevented during cooling. Under specific conditions, the formation of a discrete ice layer was observed in coarse-grained soils. Seven samples were scanned with the µCT scanner at the completion of the warming and cooling tests. The sub-samples scanned were analyzed in 2D cross-sections, and characterized as 3D reconstructions. Frost heave induced by the formation of ice was observed in both fine- and coarse-grained soils, including soils that were found to be traditionally non-frost susceptible.
• #### The role of tundra vegetation in the Arctic water cycle
Vegetation plays many roles in Arctic ecosystems, and the role of vegetation in linking the terrestrial system to the atmosphere through evapotranspiration is likely important. Through the acquisition and use of water, vegetation cycles water back to the atmosphere and modifies the local environment. Evapotranspiration is the collective term used to describe the transfer of water from vascular plants (transpiration) and non-vascular plants and surfaces (evaporation) to the atmosphere. Evapotranspiration is known to return large portions of the annual precipitation back to the atmosphere, and it is thus a major component of the terrestrial Arctic hydrologic budget. However, the relative contributions of dominant Arctic vegetation types to total evapotranspiration is unknown. This dissertation addresses the role of vegetation in the tundra water cycle in three chapters: (1) woody shrub stem water content and storage, (2) woody shrub transpiration, and (3) partitioning ecosystem evapotranspiration into major vegetation components. In Chapter 1 I present a method to continuously monitor Arctic shrub water content. The water content of three species (Salix alaxensis, Salix pulchra, Betula nana) was measured over two years to quantify seasonal patterns of stem water content. I found that spring uptake of snowmelt water and stem water storage was minimal relative to the precipitation and evapotranspiration water fluxes. In Chapter 2, I focused on water fluxes by measuring shrub transpiration at two contrasting sites in the arctic tundra of northern Alaska to provide a fundamental understanding of water and energy fluxes. The two sites contrasted moist acidic shrub tundra with a riparian tall shrub community having greater shrub density and biomass. The much greater total shrub transpiration at the riparian site reflected the 12-fold difference in leaf area between the sites. I developed a statistical model using vapor pressure deficit, net radiation, and leaf area, which explained >80% of the variation in hourly shrub transpiration. Transpiration was approximately 10% of summer evapotranspiration in the tundra shrub community and a possible majority of summer evapotranspiration in the riparian shrub community. At the tundra shrub site, the other plant species in that watershed apparently accounted for a much larger proportion of evapotranspiration than the measured shrubs. In Chapter 3, I therefore measured partitioned evapotranspiration from dominant vegetation types in a small Arctic watershed. I used weighing micro-lysimeters to isolate evapotranspiration contributions from moss, sedge tussocks, and mixed vascular plant assemblages. I found that mosses and sedge tussocks are the major constituents of overall evapotranspiration, with the mixed vascular plants making up a minor component. The potential shrub transpiration contribution to overall evapotranspiration covers a huge range and depends on leaf area. Predicted increases in shrub abundance and biomass due to climate change are likely to alter components of the Arctic hydrologic budget. The thermal and hydraulic properties of the moss and organic layer regulate energy fluxes, permafrost stability, and future hydrologic function in the Arctic tundra. 
Shifts in the composition and cover of mosses and vascular plants will not only alter tundra evapotranspiration dynamics, but will also affect the significant role that mosses, their thick organic layers, and vascular plants play in the thermodynamics of Arctic soils and in the resilience of permafrost.
• #### Establishing and testing detection methods for anti-icing and deicing chemicals using spectral data
Snow and ice accumulation on pavement reduce roadway surface friction and consequently result in diminished vehicle maneuverability, slower travel speeds, reduced roadway capacity, and increased crash risk. Though the use of chlorides and other freeze-inhibiting substances have been shown to reduce these negative factors, methods to quantify and analyze snow and ice remediation methods as well as the imposed loss of material are needed to allow state and municipal agencies to better allocate winter maintenance resources and funding. The use and application of chlorides, sand, and their related mixtures have proven to be highly effective for controlling or removing the development of ice on the roadway surface. However, if the amount of salt in solution becomes too dilute, then it no longer retains the capacity to control the development of, or to melt, ice on the roadway and may prove to be more detrimental by allowing the previously melted material to refreeze with a smoother (i.e., more slippery) surface state. The goal of this project was to determine to what extent winter roadway surfaces can be analyzed using spectrometry to determine the longevity and coverage of various types of applications. Using a systematically paired analysis of changes in spectrometric curves as solution concentrations change, relationships were generated which detected change in deicing and anti-icing compounds reliably in a lab setting. Field results were less reliable, suggesting that further comparisons and a more in-depth spectral library are needed.
• #### Snowmelt hydrology in the upper Kuparuk watershed, Alaska: observations and modeling
The Fourth National Climate Assessment Report (2018) indicates that Alaska has been warming at a rate two times greater than the global average with the Arctic continuing to be experiencing higher rates of warming. Snowmelt driven runoff is the largest hydrologic event of the year in many Alaska Arctic river systems. Changes to air temperature, permafrost, and snow cover impact the timing and magnitude of snowmelt runoff. This thesis examines the variability in hydrometeorological variables associated with snowmelt to better understand the timing and magnitude of snowmelt runoff in headwater streams of Arctic Alaska. The objectives of this thesis are to: (1) use observational data to evaluate trends in air temperature, precipitation, snow accumulation, and snowmelt runoff data; (2) relate precipitation, snow cover, and air temperature to snowmelt runoff using the physically-based Snowmelt Runoff Model (SRM) to test the applicability of the model for headwater streams in the Arctic. The focus of this study is the Upper Kuparuk watershed area, located in Alaska on the north side of the Brooks Range, where several monitoring programs have operated long enough to generate a 20-year climate record, 1993-2017. Long-term air temperature, precipitation, and streamflow data collected by the University of Alaska Fairbanks at the Water and Environmental Research Center and other agencies were used for statistical analysis and modeling. While no statistically significant trends in snow accumulation and snowmelt runoff were identified during 1993-2017, observations highlight large year-to-year variability and include extreme years. Snow water equivalent ranges from 5.4 to 17.6 cm (average 11.0 cm), peak snowmelt runoff ranges from 3.84 to 50.0 cms (average 22.4 cms), and snowmelt peak occurrence date ranges from May 13 to June 5 for the Upper Kuparuk period of record. The spring of 2015 stands out as the warmest, snowiest year on record in the Upper Kuparuk. To further investigate the runoff response to snowmelt in 2015, remote sensing snow data was analyzed and recommended parameters were developed for SRM use in the Upper Kuparuk watershed. Recommended parameters were then applied to 2013 snowmelt runoff as a test year. Model results varied between the two years and provide good first-order approximation of snowmelt runoff for headwater rivers in the Alaska Arctic.
• #### Pre-stress loss due to creep in precast concrete decked bulb-tee girders under cold climate conditions
This report presents guidelines for estimating pre-stress loss in high-strength precast pretensioned concrete Decked Bulb-Tee (DBT) bridge girders in cold climate regions. The guidelines incorporate procedures yielding more accurate predictions of shrinkage and concrete creep than current 2017 American Association of State Highway and Transportation Officials (AASHTO) specifications. The results of this report will be of particular interest to researchers and cold climate bridge design engineers in improved predictions of design life and durability. The use of high-strength concrete in pre-tensioned bridge girders has increased in popularity among many state highway agencies. This fact is due to its many beneficial economic and constructability aspects. The overall cost of longer girders with increased girder spacing in a bridge that is precast with high strength concrete can be significantly reduced through the proper estimating factors. Recent research indicates that the current provisions used for calculating prestress losses in cold regions for high-strength concrete bridge girders may not provide reliable estimates. Therefore, additional research is needed to evaluate the applicability of the current provisions for estimating pre-stress losses in high-strength concrete DBT girders. Accurate estimations of pre-stress losses in design of pre-tensioned concrete girders are affected by factors such as mix design, curing, concrete strength, and service exposure conditions. The development of improved guidelines for better estimating these losses assists bridge design engineers for such girders and provide a sense of security in terms of safety and longevity. The research includes field measurements of an environmentally exposed apparatus set up to measure shrinkage, creep and strain in cylinders loaded under constant pressure for a full calendar year.
• #### Numerical simulation of thermo-mechanical behavior of gypsum board wall assembly
Fire safety has become a significant concern to public safety; especially in the aftermath of 9/11 attack where, according to official reports, three World Trade Center buildings collapsed because of fire. Therefore, the level of thermal insulation required from building material and structural elements has increased. In recent years, gypsum board wall assemblies have been increasingly used as compartmentation for high-rise residential and commercial buildings. The increasing popularity of gypsum board wall assemblies is due to their relatively high strength-to-weight ratio, ease of prefabrication, fast erection and good thermal insulation. Before implementation of any building material or structural element, its Fire Resistance Rating must be determined by subjecting the material or element to a standard furnace fire test. Over the years, a large database has been collected for the Fire Resistance Rating of building materials and structural elements. However, due to the expensive and time-consuming nature of the standard fire tests, determining an accurate Fire Resistance Rating can be a difficult task. In this study, the author numerically evaluated the Fire Resistance Rating of a new gypsum board wall assembly. Composite steel-EPS (Expanded Polystyrene) insulation is added to a traditional gypsum board wall assembly. The author first did numerical simulation of an experiment on the thermal response of a non-load-bearing gypsum board wall assembly to verify the thermal modeling methodology. The author then did numerical simulation of an experiment on the mechanical response of a load-bearing gypsum board wall assembly to verify the mechanical modeling methodology. Finally, the author used the verified thermal and structural modeling methodology to simulate the new composite steel-EPS gypsum board wall assembly and obtained its numerical Fire Resistance Rating. This Fire Resistance Rating should be compared with future experimental results of the new wall assembly. All modeling was done with ABAQUS V6.14.
• #### 2-D bed sediment transport modeling of a reach on the Sagavanirktok River, Alaska
Conducting a 2-D sediment transport modeling study on the Sagavanirktok River has offered great insight to bed sediment movement. During the summer of 2017, sediment excavation of two parallel trenches began in the Sagavanirktok River, in an effort to raise the road elevation of the Dalton Highway to remediate against future floods. To predict the time in which the trenches refill with upstream sediment a 2-D numerical model was used. Three scenarios: (1) a normal cumulative volumetric flow, (2) a max discharge event, and (3) a max cumulative volumetric flow, were coupled with three sediment transport equations: Parker, Wilcock-Crowe and Meyer Peter and Müller for a total of 9 simulations. Results indicated that scenario (1) predicted the longest time to fill, ranging from 1-6 years followed by scenario (2), an even shorter time, and scenario (3) showing sustained high flows have the capability to nearly refill the trenches in one year. Because the nature of this research is predictive, limitations exist as a function of assumptions made and the numerical model. Therefore, caution should be taken in analyzing the results. However, it is important to note that this is the first time estimates have been calculated for an extraction site to be refilled on the Sagavanirktok River. Such a model could be transformed into a tool to project filling of future material sites. Ultimately, this could expedite the permitting process, eliminating the need to move to a new site by returning to a site that has been refilled from upstream sediment.
• #### An evaluation of the use of moderate resolution imaging spectroradiometer (MODIS)-derived aerosol optical depth to estimate ground level PM2.5 in Alaska
Under repeated external loads, engineering structures or objects may fail by large plastic deformation or fatigue. Shakedown will occur when the accumulation of plastic deformation ceases under repeated loads; the response of the system is then purely elastic. Fatigue and shakedown have been individually studied for decades and no attempt has been made to couple these two mechanisms in the mechanics analysis. In this study, an attempt is made to couple shakedown and fatigue in pavement mechanics analysis using numerical simulation. The study covers three main areas: fatigue, static shakedown, and kinematic shakedown analysis. A numerical approach to fatigue analysis is proposed based on elastic-plastic fracture mechanics. The amount of crack growth during each load cycle is determined by using the J-integral curve and the R-curve. Crack propagation is simulated by shifting the R-curve along the crack growth direction. Fatigue life is predicted based on a numerically established fatigue equation. The numerical results indicate that the algorithm can be applied to fatigue analyses of different materials. A numerical algorithm based on the finite element method coupled with nonlinear programming is proposed in static shakedown analysis. In this algorithm, both the inequality and equality constraints are included in the pseudo-objective function. These constraints are normalized by the material yield stress and the reference load, respectively. A multidirectional search algorithm is used in the optimization process. The influence of finite element mesh on shakedown loads is investigated. An algorithm that utilizes eigen-modes to construct an arbitrary admissible plastic deformation path is proposed in kinematic shakedown analysis. This algorithm converts the shakedown theorem into a convex optimization problem and can be solved by using a multidirectional search algorithm. Fatigue behavior of a two-layer full-depth pavement system of asphalt concrete is analyzed using the proposed numerical algorithm. Fatigue crack growth rate is estimated and fatigue life is predicted for the system. Shakedown analyses are also carried out for the same pavement system. The comparison between the shakedown load and the fatigue failure load with respect to the same crack length indicates that the shakedown dominates the response of the pavement system under traffic load.
|
2022-08-10 23:11:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3910098075866699, "perplexity": 4044.570677873758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00699.warc.gz"}
|
https://gamedev.stackexchange.com/questions/124760/texture-coordinates-do-not-map-correctly-in-direct3d11-game-engine
|
# Texture coordinates do not map correctly in Direct3D11 game engine
I beg your pardon if this question has been already answered elsewhere or if this is the wrong site, but I have a serious issue with rendering textures with Direct3D 11.
Using Cinema 4D R17, I created a simple cube, triangulated all polygons, and UV mapped a texture, as you can see here:
and it renders correctly.
Next, I exported the file to .x because I created a simpler mesh format and .x is ideal for getting vertices, indices, normals, texture coordinates, etc. This conversion is perfect as all data from the source file is successfully transferred to my file (I checked and double checked it).
However, when I upload the file in my game engine, I get the following result:
The cube is being rendered incorrectly where that strange pattern forms.
This is the original .x file (I included only relevant parts):
Mesh CINEMA4D_Mesh {
8;
// Cube. These are vertices.
-0.01;-0.01;-0.01;,
-0.01;0.01;-0.01;,
0.01;-0.01;-0.01;,
0.01;0.01;-0.01;,
0.01;-0.01;0.01;,
0.01;0.01;0.01;,
-0.01;-0.01;0.01;,
-0.01;0.01;0.01;;
12;
// Cube. These are indices.
3;0,1,3;, // '3;' means that this face contains 3 vertices.
3;2,3,5;,
3;4,5,7;,
3;6,7,1;,
3;1,7,5;,
3;6,0,2;,
3;0,3,2;,
3;2,5,4;,
3;4,7,6;,
3;6,1,0;,
3;1,5,3;,
3;6,2,4;;
MeshNormals {
8;
// Cube
-0.408;-0.408;-0.816;,
-0.667;0.667;-0.333;,
0.667;-0.667;-0.333;,
0.408;0.408;-0.816;,
0.408;-0.408;0.816;,
0.667;0.667;0.333;,
-0.667;-0.667;0.333;,
-0.408;0.408;0.816;;
12;
// Cube
3;0,1,3;,
3;2,3,5;,
3;4,5,7;,
3;6,7,1;,
3;1,7,5;,
3;6,0,2;,
3;0,3,2;,
3;2,5,4;,
3;4,7,6;,
3;6,1,0;,
3;1,5,3;,
3;6,2,4;;
}
MeshTextureCoords {
8;
// Cube
1.0;1.0;,
0.0;1.0;,
1.0;0.0;,
1.0;1.0;,
1.0;1.0;,
1.0;0.0;,
0.0;1.0;,
1.0;0.0;;
}
MeshMaterialList {
2;
12;
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1;
Material C4DMAT_NONE {
1.0;1.0;1.0;1.0;;
1.0;
0.0;0.0;0.0;;
0.0;0.0;0.0;;
}
Material C4DMAT_Mat {
1.0;1.0;1.0;1.0;;
1.0;
0.0;0.0;0.0;;
0.0;0.0;0.0;;
TextureFilename {
"tex.bmp";
}
}
{C4DMAT_Mat}
}
}
I also invert the v texture coordinate (v = 1 - v). This is the vertex data I extract:
8 // Number of vertices.
-0.01 -0.01 -0.01 -0.01 0.01 -0.01 0.01 -0.01 -0.01 0.01 0.01 -0.01 0.01 -0.01 0.01 0.01 0.01 0.01 -0.01 -0.01 0.01 -0.01 0.01 0.01 // Vertices stored in X, Y, Z.
1 0 0 0 1 1 1 0 1 0 1 1 0 0 1 1 // Texture coordinates stored in U, V.
36 // Indices count.
0 1 3 2 3 5 4 5 7 6 7 1 1 7 5 6 0 2 0 3 2 2 5 4 4 7 6 6 1 0 1 5 3 6 2 4 // Indices.
tex.bmp // Texture file name.
I read that there might be a conflict between texture coordinates of vertices that are shared among different faces and that resolving this would imply not using index buffers.
EDIT: I did what @wondra said (replaced the texture coordinates with 0 0 0 1 1 0 1 1 0 1 0 0 1 1 1 0) and it looks better indeed:
I am not sure what is wrong here. Could anybody help me? Thank you.
UPDATE: The Cinema 4D .obj is broken. Use Blender or anything else.
• I cannot see any [0,1] texture coordinate in your file(unlesss it is not u v u v u v). Are you sure the data are correct? – wondra Jul 1 '16 at 17:58
• Yes they are U, V. – featherless biped Jul 1 '16 at 18:34
• In that case, you need to triple-check your importing/converting code. One of the corners of the texture, upper left, vanquished somewhere in the process(the image even looks like it). The u,v data with just 3 corners cannot be possibly correct. – wondra Jul 1 '16 at 18:39
• I don't know what's wrong, I uploaded more information. @wondra May you check it, please? – featherless biped Jul 2 '16 at 5:43
• What if you try to replace the u,v in the processed file with 0 0 0 1 1 0 1 1 0 1 0 0 1 1 1 0 does it look "better", at least one of the faces? Note: those are probably not entirely correct but should the cube look any better it would prove my theory. Also looking at the original file - there actually are just 3 corners exported - should it be like that? Is it mapped in the editor with just 3 of them(could be)? – wondra Jul 2 '16 at 9:25
If you look carefully at your cube in Cinema4D, the topmost corner shows different texture coordinates for the side and the top (probably the same between the front and right side, but the texture doesn't let me state it for sure).
And in your final cube indices, you only have a range [0..7]. On the GPU, a vertex is a full tuple of values with a single index. It means that your cube is not made of 8 vertices but 24, because each corner has to be split for its own normal and texture coordinate.
Once you split each face properly, you should get the correct visual. A cube is not the best example of an index buffer because the vertex reuse rate is quite low, but you still get the idea.
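For illustration, here is a minimal Python sketch of that splitting/welding pass (the function name and input layout are assumptions; it expects the exporter to hand you position, normal and UV per face corner as tuples):

def build_buffers(face_corners):
    """Weld per-face-corner attributes into GPU-style vertex/index buffers.

    face_corners: list of triangles, each a list of three
    (position, normal, uv) tuples. Corners that share all three
    attributes collapse into one vertex; corners that share only a
    position do not, which is why a textured cube needs 24 vertices.
    """
    vertices = []   # unique (position, normal, uv) tuples
    lookup = {}     # tuple -> index into vertices
    indices = []
    for triangle in face_corners:
        for corner in triangle:
            if corner not in lookup:
                lookup[corner] = len(vertices)
                vertices.append(corner)
            indices.append(lookup[corner])
    return vertices, indices

For a textured cube with per-face UVs this should produce 24 vertices and 36 indices instead of the 8-entry arrays stored in the .x file.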
It turns out that the .obj file exported by Cinema 4D was broken.
Using Blender or anything else to create the .obj file seems to solve the issue.
• I was thinking this as an answer. I will do as you say. – featherless biped Jul 4 '16 at 17:11
• Oh, hmm. Then make it clear that it's an answer. The term "UPDATE" is generally used to add additional information to the question, not to introduce a solution. – Vaillancourt Jul 4 '16 at 17:14
• You can rephrased if you feel like it, but now it makes it look more like a good answer :) – Vaillancourt Jul 4 '16 at 17:18
|
2021-01-27 09:19:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2769297659397125, "perplexity": 854.1693303811275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821381.83/warc/CC-MAIN-20210127090152-20210127120152-00022.warc.gz"}
|
https://www.cuemath.com/ncert-solutions/q-5-exercise-8-1-comparing-quantities-class-8-maths/
|
# Ex.8.1 Q5 Comparing Quantities - NCERT Maths Class 8
## Question
If Chameli had $$\rm{Rs}\, 600$$ left after spending $$75\%$$ of her money, how much did she have in the beginning?
## Text Solution
What is known?
Percentage of amount Chameli spent $$= 75\%$$
Amount left with her $$= \rm{Rs} \,600$$
What is unknown?
Amount Chameli had in the beginning
Reasoning:
Since the whole is considered as $$100\%,$$ Percentage of amount left with Chameli is
$\left( {100 - 75} \right)\%= 25\%$
Assuming the total amount in the beginning as $$x,$$ and equating $$25\%$$ of $$x$$ to $$600,$$ the value of $$x$$ can be found.
Steps:
Let the total amount Chameli had with her in the beginning $$= x$$
Percentage of amount left with Chameli $$= (100-75)\% = 25\%$$
\begin{align}25\% {\rm\;{of}}\;x &= 600\\\frac{{{25}}}{{{100}}}{ \times }x&= 600\\ x &= \frac{{{600 \times 100}}}{{{25}}}\\&= 2,400\end{align}
The amount that Chameli had in the beginning is $$\rm{Rs} \,2,400$$
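As a quick check (not part of the given solution), spending $$75\%$$ of $$\rm{Rs}\,2,400$$ should leave $$\rm{Rs}\,600:$$
$\frac{{75}}{{100}} \times 2400 = 1800, \qquad 2400 - 1800 = 600$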
|
2021-05-14 23:44:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967324137687683, "perplexity": 8999.74764293734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00045.warc.gz"}
|
https://www.chemeurope.com/en/encyclopedia/Acid_value.html
|
Acid value
In chemistry, acid value (or "neutralization number" or "acid number" or "acidity") is the mass of potassium hydroxide (KOH) in milligrams that is required to neutralize one gram of chemical substance. The acid number is a measure of the amount of carboxylic acid groups in a chemical compound such as a fatty acid. In a typical procedure, a known amount of sample dissolved in organic solvent is titrated with a solution of potassium hydroxide with known concentration and with phenolphthalein as a color indicator.
The acid number is used to quantify the amount of acid present, for example in a sample of biodiesel. It is the quantity of base, expressed in milligrams of potassium hydroxide, that is required to neutralize the acidic constituents in 1 g of sample.
$AN=(V_{eq}-b_{eq})N\frac{56.1}{W_{oil}}$
$V_{eq}$ is the amount of titrant (ml) consumed by the crude oil sample and 1 ml of spiking solution at the equivalence point, $b_{eq}$ is the amount of titrant (ml) consumed by 1 ml of spiking solution at the equivalence point, and 56.1 is the molecular weight of KOH.
The molar concentration of the titrant (N) is calculated as follows:
$N=\frac{1000W_{KHP}}{204.23V_{eq}}$
In which $W_{KHP}$ is the amount (g) of KHP in 50 ml of KHP standard solution, $V_{eq}$ is the amount of titrant (ml) consumed by 50 ml of KHP standard solution at the equivalence point, and 204.23 is the molecular weight of KHP.
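As an illustration only, the two formulas translate directly into code (the function and parameter names here are arbitrary, not taken from any standard):

def titrant_molarity(w_khp_g, v_eq_ml):
    """N: molar concentration of the KOH titrant from the KHP standardization.

    w_khp_g: grams of KHP in 50 ml of standard solution.
    v_eq_ml: ml of titrant consumed by that standard at the equivalence point.
    204.23 g/mol is the molecular weight of KHP.
    """
    return 1000.0 * w_khp_g / (204.23 * v_eq_ml)

def acid_number(v_eq_ml, b_eq_ml, n, w_oil_g):
    """AN in mg KOH per g of sample; 56.1 g/mol is the molecular weight of KOH."""
    return (v_eq_ml - b_eq_ml) * n * 56.1 / w_oil_g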
Acid number (mg KOH/g oil) for biodiesel is preferred to be lower than 3.
There are standard methods for determining the acid number, such as ASTM D 974 and DIN 51558 (for mineral oils, biodiesel).
As fats rancidify, triglycerides are converted into fatty acids and glycerol, causing an increase in acid number.
|
2021-09-20 20:38:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645809650421143, "perplexity": 4145.372367285414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00411.warc.gz"}
|
https://maker.pro/forums/threads/modulateable-laser-diode-drivers.56041/page-2
|
# modulateable laser diode drivers
#### [email protected]
Mike said:
Videos are of the robot's first steps. There were countless bugs in the
code at that point, and motion was sloppy. If you look carefully in the
video you can actually see that one of the legs turns off during
certain movements - just becomes a complete gimp leg. Was a major
software bug that I became aware of after taking that video.
Anyways - I was planning on having the robot stop, and then scan, then
start again. What clock jitter are you referring to?
Thanks,
-Mike
I figured that your timing was based on a clock, but maybe with the
pulse width to voltage scheme, jitter won't be a problem. That is,
whatever initiates the timing pulse will have jitter, especially if it
is based on a crystal clock (and what isn't). However, the exact start
time of the pulse shouldn't matter, just the pulse width.
#### Joerg
Hello Mike,
Analog's offerings look quite good. I especially like the ADN2525, with
a 24ps rise time! Unfortunately, I can't find it for sale in single
quantities, nor do they seem to sample it. I plan on contacting them on
Monday - but being a student that probably won't go too far. Some of
the others look pretty good too - I especially like the ADN2870, which
is available for about $10 in single quantities which is completely reasonable.
When I was a student I often rolled my own (and then sometimes spent the saved money in the pub...). If you have the time this provides a learning opportunity far beyond what a university can offer. The design, that is, not the pub ;-) However, be careful. Saving $10 and then frying a $300 laser diode
because of a slight circuit bug would not be such a good experience.
Happens easily. Something bursting into oscillation, loop instability etc.
I have been thinking about going the TEC route. I think initially I
will skip that as it is added cost and complexity, and I don't think it
will be terribly important as I don't particularly care about having the
same power. Does that sound like a logical approach?
Study the diode's data sheet carefully. Many laser diodes cannot be run
without cooling, at least not for too many milliseconds. Plus it will
complicate the diode current regulation if your temperature is all over
the place.
#### Joerg
Jan 1, 1970
0
Hello Win,
... Using small dimensions, I have been able to
modulate low-cost CD-ROM lasers up to 200MHz in this fashion,
and done a bit differently in a quick lash up, up to 1.5GHz
(I may have modulated the laser successfully well above that,
but the photodiode I was using wimped out).
Even with a common base transistor up front from the transimpedance amp?
From my Jan 10 2005 s.e.d. post,
"The 664nm red DVD laser was an Hitachi HL6504FM and my detector
was an Optek OPF480 PIN diode, biased at -100V, measured into a
25-ohm load (double-end termination) with an HP network analyzer.
"Components were 1206 SMT hand-soldered with zero-distance
spacing. Bias-tees were 12GHz-bandwidth Picosecond Pulse Labs.
A first attempt at a PCB stripline replacement only goes to
600MHz so far."
. bias-tee laser
. ________| |______,---------,
. ________|-||-+--|______(-50R-|>|-'
. 50-ohm |____X__| \\ mirror PD bias
. coax | \\| optics 100V
. 50R //| & etc |
. | // __|____
. laser // ________| X |___50-ohm
. current ,-|>|----+-)_______|--+-||-|____ coax
. supply +-||-50R-' | coax |_______| term.
. '----------' bias-tee
Maybe the photodiode would benefit from a strict current coupling for
the amp (like into the emitter of an RF transistor) instead of a 50ohm
link. Guess if that was more than 10 years ago the whole amp would have
to have been discrete as well.
#### joseph2k
Jan 1, 1970
0
Mike said:
I am currently planning on pulsing it with a 1-2ns pulse, every 50ns or
so. These numbers can be changed if necessary, but the important thing
is the rising edge time, as it will be starting a time to digital
converter and then the returning pulse will be stopping it.
The project is for a robot I've been working on for the last 15 months
or so. I want to give it the ability to avoid obstacles and map out
areas. If you're interested, there are some pictures, schematics
(outdated and with some mistakes), and some really old videos of it
here: https://netfiles.uiuc.edu/mnoone/www/.
Thanks,
-Mike
Mike, important question, are you going to turn the laser diode on and off
or just modulate it within the laser emission region? The diode properties
make a difference in the required properties of the drive circuit.
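As an aside on the timing numbers discussed in this thread (nanosecond pulses feeding a time-to-digital converter), here is a small back-of-envelope sketch; the values are illustrative and not from any poster's design.

```python
C = 299_792_458.0  # speed of light in m/s

def round_trip_time(distance_m):
    """Time for a laser pulse to reach a target and return."""
    return 2.0 * distance_m / C

def range_resolution(timing_error_s):
    """Distance uncertainty implied by a given timing error in the TDC."""
    return C * timing_error_s / 2.0

print(round_trip_time(10.0) * 1e9)      # ~66.7 ns round trip for a 10 m target
print(range_resolution(100e-12) * 100)  # 100 ps of timing error is ~1.5 cm of range
# A pulse every 50 ns limits the unambiguous range to C * 50e-9 / 2, roughly 7.5 m.
```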
|
2023-01-28 13:29:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44477295875549316, "perplexity": 6609.138672011184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00513.warc.gz"}
|
https://www.jobilize.com/algebra/course/3-2-domain-and-range-functions-by-openstax?qcr=www.quizover.com&page=6
|
3.2 Domain and range (Page 7/11)
Page 7 / 11
Writing a piecewise function
A museum charges $5 per person for a guided tour with a group of 1 to 9 people, or a fixed $50 fee for a group of 10 or more people. Write a function relating the number of people, $n$, to the cost, $C$.
Two different formulas will be needed. For $n$-values under 10, $C=5n$. For values of $n$ that are 10 or greater, $C=50$.
$$C(n)=\begin{cases}5n & \text{if } 0<n<10\\ 50 & \text{if } n\ge 10\end{cases}$$
Working with a piecewise function
A cell phone company uses the function below to determine the cost, $C$, in dollars for $g$ gigabytes of data transfer.
$$C(g)=\begin{cases}25 & \text{if } 0<g<2\\ 25+10(g-2) & \text{if } g\ge 2\end{cases}$$
Find the cost of using 1.5 gigabytes of data and the cost of using 4 gigabytes of data.
To find the cost of using 1.5 gigabytes of data, $C(1.5)$, we first look to see which part of the domain our input falls in. Because 1.5 is less than 2, we use the first formula.
$C(1.5)=\$25$
To find the cost of using 4 gigabytes of data, $C(4)$, we see that our input of 4 is greater than 2, so we use the second formula.
$C(4)=25+10(4-2)=\$45$
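As an illustration only (not part of the original lesson), the data-plan cost function above maps directly to code:

```python
def data_cost(g):
    """Piecewise cost C(g), in dollars, for g > 0 gigabytes of data transfer."""
    if g < 2:
        return 25
    return 25 + 10 * (g - 2)

print(data_cost(1.5))  # 25
print(data_cost(4))    # 45
```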
Given a piecewise function, sketch a graph.
1. Indicate on the x -axis the boundaries defined by the intervals on each piece of the domain.
2. For each piece of the domain, graph on that interval using the corresponding equation pertaining to that piece. Do not graph two functions over one interval because it would violate the criteria of a function.
Graphing a piecewise function
Sketch a graph of the function.
$$f(x)=\begin{cases}x^2 & \text{if } x\le 1\\ 3 & \text{if } 1<x\le 2\\ x & \text{if } x>2\end{cases}$$
Each of the component functions is from our library of toolkit functions, so we know their shapes. We can imagine graphing each function and then limiting the graph to the indicated domain. At the endpoints of the domain, we draw open circles to indicate where the endpoint is not included because of a less-than or greater-than inequality; we draw a closed circle where the endpoint is included because of a less-than-or-equal-to or greater-than-or-equal-to inequality.
[link] shows the three components of the piecewise function graphed on separate coordinate systems.
Now that we have sketched each piece individually, we combine them in the same coordinate plane. See [link] .
Graph the following piecewise function.
$$f(x)=\begin{cases}x^3 & \text{if } x<-1\\ -2 & \text{if } -1<x<4\end{cases}$$
Can more than one formula from a piecewise function be applied to a value in the domain?
No. Each value corresponds to one equation in a piecewise formula.
Access these online resources for additional instruction and practice with domain and range.
Key concepts
• The domain of a function includes all real input values that would not cause us to attempt an undefined mathematical operation, such as dividing by zero or taking the square root of a negative number.
• The domain of a function can be determined by listing the input values of a set of ordered pairs. See [link] .
• The domain of a function can also be determined by identifying the input values of a function written as an equation. See [link] , [link] , and [link] .
• Interval values represented on a number line can be described using inequality notation, set-builder notation, and interval notation. See [link] .
• For many functions, the domain and range can be determined from a graph. See [link] and [link] .
• An understanding of toolkit functions can be used to find the domain and range of related functions. See [link] , [link] , and [link] .
• A piecewise function is described by more than one formula. See [link] and [link] .
• A piecewise function can be graphed using each algebraic formula on its assigned subdomain. See [link] .
|
2020-07-09 02:25:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8323103785514832, "perplexity": 1070.2354491933418}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00551.warc.gz"}
|
https://fizzbuzzer.com/common-child-challenge/
|
# Common Child challenge
Problem statement
Given two strings $\textit{a}$ and $\textit{b}$ of equal length, what’s the longest string ($\textit{S}$) that can be constructed such that it is a child of both?
A string $\textit{x}$ is said to be a child of a string $\textit{y}$ if $\textit{x}$ can be formed by deleting $\textit{0}$ or more characters from $\textit{y}$.
For example, ABCD and ABDC have two children with maximum length $\textit{3}$: ABC and ABD. Note that we will not consider ABCD as a common child because C doesn’t occur before D in the second string.
Input format
Two strings, $\textit{a}$ and $\textit{b}$, with a newline separating them.
Constraints
All characters are upper cased and lie between ASCII values 65-90. The maximum length of the strings is 5000.
Output format
Length of string S.
Sample Input 0
HARRY
SALLY
Sample Output 0
2
The longest possible subset of characters that is possible by deleting zero or more characters from HARRY and SALLY is AY, whose length is 2.
Sample Input 1
AA
BB
Sample Output 1
0
AA and BB has no characters in common and hence the output is 0.
Sample Input 2
SHINCHAN
NOHARAAA
Sample Output 2
3
The largest set of characters, in order, between SHINCHAN and NOHARAAA is NHA.
Sample Input 3
ABCDEF
FBDAMN
Sample Output 3
2
BD is the longest child of these strings.
#### Solution using Dynamic Programming
We define a 2-dimensional matrix lcs = int[n][m], where n and m are the lengths of the strings $\textit{a}$ and $\textit{b}$ respectively. lcs[i][j] will hold the length of the $\textit{Longest Common Subsequence (lcs)}$ for a[:i] and b[:j].
The algorithm looks as follows:
1. Iterate over the strings $\textit{a}$ and $\textit{b}$.
2. Let $\textit{i}<=n$ and $\textit{j}<=m$ be the current indices for $\textit{a}$ and $\textit{b}$ respectively.
3. Compare a[i] and b[j]:
1. If the characters at index $\textit{i}$ and $\textit{j}$ match then the length of the $\textit{lcs}$ is equal to: $1 + lcs[i-1][j-1]$
2. If the characters at index $\textit{i}$ and $\textit{j}$ do not match then the length of the $\textit{lcs}$ is equal to: $max(lcs[i-1][j],lcs[i][j-1])$
Solution
import sys
f = open("in.txt")
# assumed input format: the two strings on the first two lines of in.txt
a = f.readline().strip()
b = f.readline().strip()
n = len(a) + 1
m = len(b) + 1
lcs = [[0] * m for i in range(n)]  # lcs[i][j] holds the length of the LCS between a[:i] and b[:j]
maximumLength = 0
for i in range(1, n):
    for j in range(1, m):
        if (a[i-1] == b[j-1]):
            lcs[i][j] = lcs[i-1][j-1] + 1
        else:
            lcs[i][j] = max(lcs[i][j-1], lcs[i-1][j])
        maximumLength = max(maximumLength, lcs[i][j])
print(maximumLength)
Complexity:
Time complexity: $\mathcal{O}(nm)$
Space complexity: $\mathcal{O}(nm)$
#### Optimizing the DP Solution
Optimization 1
Although the program can be written without defining a function to do the work (as it only needs to solve one case), accessing file scope variables is slower than function scope variables. – Thanks go out to rdn32 from the HackerRank discussion boards.
Optimization 2
We don't need an $n \times m$ matrix to store our calculations for the previous subproblems. Since we know that both strings have the same length, $n$, it suffices to use a matrix of size $2 \times n$.
Optimization 3
We are also using enumerate() instead of range() as it tends to be slightly faster.
Optimization 4
We omitted the second max() computation after each outer iteration.
import sys

def solve(a, b):
    n = len(a) + 1
    lcs = [[0] * n for i in range(2)]
    index = 0
    for i, x in enumerate(a):
        index = abs(index - 1)
        for j, y in enumerate(b):
            if (x == y):
                lcs[index][j+1] = lcs[index-1][j] + 1
            else:
                lcs[index][j+1] = max(lcs[index][j], lcs[index-1][j+1])
    return max(lcs[0][-1], lcs[1][-1])

def main():
    f = open("in.txt")
    # the rest of main() is truncated in the source; a minimal completion:
    a = f.readline().strip()
    b = f.readline().strip()
    print(solve(a, b))

main()
|
2019-12-16 09:40:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 33, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5486788749694824, "perplexity": 2359.9391987386025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541319511.97/warc/CC-MAIN-20191216093448-20191216121448-00406.warc.gz"}
|
https://pacific.com.vn/archive/co-molar-mass-186e77
|
Molar mass of an element and of a compound. The compound CO is also known as carbon monoxide; the molar mass of Co (cobalt) is 58.9331950 ± 0.0000050 g/mol. The molar mass is an average over many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The average molar mass of a mixture is the weighted mean of the molar masses of its components; as an example, the average molar mass of dry air is 28.97 g/mol.[7] The isotopic distributions of the different elements in a sample are not necessarily independent of one another: for example, a sample which has been distilled will be enriched in the lighter isotopes of all the elements present.
Molar mass is closely related to the relative molar mass (Mr) of a compound, to the older term formula weight (F.W.), and to the standard atomic masses of its constituent elements. Multiplying by the molar mass constant ensures that the calculation is dimensionally correct: standard relative atomic masses are dimensionless quantities (i.e., pure numbers) whereas molar masses have units (in this case, grams/mole). The dalton, symbol Da, is also sometimes used as a unit of molar mass, especially in biochemistry, with the definition 1 Da = 1 g/mol, despite the fact that it is strictly a unit of mass (1 Da = 1 u = 1.66053906660(50)×10−27 kg, as of 2018 CODATA recommended values). The unit in which molecular mass is measured is amu; molecular mass is distinct from, but related to, the molar mass, which is a measure of the average molecular mass of all the molecules in a sample and is usually the more appropriate measure when dealing with macroscopic (weighable) quantities of a substance.
For bulk stoichiometric calculations, we are usually determining molar mass, which may also be called standard atomic weight or average atomic mass. If the formula used in calculating molar mass is the molecular formula, the formula weight computed is the molecular weight; these relative weights computed from the chemical equation are sometimes called equation weights. The formula weight is a synonym of molar mass that is frequently used for non-molecular compounds, such as ionic salts. The atomic weights used on this site come from NIST, the National Institute of Standards and Technology. A useful convention for normal laboratory work is to quote molar masses to two decimal places for all calculations. For sodium, for example, you can look up the molar mass from the table: 22.99 g. You may be wondering why it isn't just twice its atomic number, the sum of the protons and neutrons in the atom, which would give 22. Measured molecular masses can be accurate enough to directly determine the chemical formula of a molecule.
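A minimal sketch of the molar-mass bookkeeping described above, using rounded standard atomic weights; the helper function and its small table are illustrative, not a feature of the site.

```python
# Rounded standard atomic weights in g/mol (values as commonly tabulated).
ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999, "Co": 58.933}

def molar_mass(formula):
    """Molar mass of a compound given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * count for el, count in formula.items())

print(molar_mass({"C": 1, "O": 1}))   # CO (carbon monoxide): ~28.01 g/mol
print(molar_mass({"Co": 1}))          # Co (cobalt): ~58.93 g/mol
```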
|
2022-06-28 05:54:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686960935592651, "perplexity": 846.1018660257312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00542.warc.gz"}
|
https://codegolf.stackexchange.com/questions/175364/all-different-functions
|
# All different functions
For functions $f, g: \{0,1\}^n \rightarrow \{0,1\}$, we say $f \sim g$ if there's a permutation of $1,2,3,...,n$ called $i_1,i_2,i_3,...,i_n$ so that $f(x_1,x_2,x_3,...,x_n) = g(x_{i_1},x_{i_2},x_{i_3},...,x_{i_n})$. Therefore, all such functions are divided into several sets such that, for any two functions $f, g$ in the same set, $f \sim g$; for any two functions $f, g$ in different sets, $f \not\sim g$. (Equivalence relation) Given $n$, output these sets or one of each set.
Samples:
0 -> {0}, {1}
1 -> {0}, {1}, {a}, {!a}
2 -> {0}, {1}, {a, b}, {!a, !b}, {a & b}, {a | b}, {a & !b, b & !a}, {a | !b, b | !a}, {a ^ b}, {a ^ !b}, {!a & !b}, {!a | !b}
You can output the function as a possible expression (like what's done in the example, but it should theoretically support $n>26$), a table marking outputs for all possible inputs (truth table), or a set containing the inputs that make the output $1$.
Shortest code win.
• @JoKing Anything, so there are $2^{2^n}$ functions for given $n$. – l4m2 Nov 6 at 4:42
• @JoKing Also, all boolean functions can be represented in term of AND and OR. – user202729 Nov 6 at 14:33
• @user202729 AND, OR, and NOT at the very least (not that this is a problem since you have used all three). Just AND and OR won't work. – Misha Lavrov Nov 6 at 14:48
# J, 62 bytes
f=:3 :'~.(2#~2^y)([:<@~.@/:~((i.@!A.i.)y)|:(y#2)$])@#:i.2^2^y'
Try it online!
-3 bytes for anonymous function (removing f=:)
For each boolean function (truth table), generate its equivalence class, then remove duplicates.
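For readers who don't speak J, here is a rough Python sketch of the same brute-force idea (enumerate truth tables, generate each one's orbit under input permutations, deduplicate); it is an illustration, not the golfed answer.

```python
from itertools import permutations, product

def classes(n):
    """Group all boolean functions {0,1}^n -> {0,1} into equivalence classes
    under permutation of the input variables, by brute force over truth tables."""
    inputs = list(product((0, 1), repeat=n))        # all 2^n input tuples
    index = {x: i for i, x in enumerate(inputs)}    # input tuple -> truth-table position
    seen, result = set(), []
    for table in product((0, 1), repeat=2 ** n):    # every function as a truth table
        if table in seen:
            continue
        orbit = set()                               # all permuted versions of this table
        for perm in permutations(range(n)):
            permuted = tuple(table[index[tuple(x[p] for p in perm)]] for x in inputs)
            orbit.add(permuted)
        seen |= orbit
        result.append(orbit)
    return result

print(len(classes(2)))   # 12 classes, matching the n = 2 sample above
```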
• Does it return 3D array truth table and can't output with echo? – l4m2 Nov 6 at 15:13
• @l4m2 I don't understand your question. J can print array of any dimensions. – user202729 Nov 6 at 17:13
• or maybe I can't read it? I think f(3) outputs {0,0},{0,a&b},{0,a&!b,0,b&!a,a&b,0},... – l4m2 Nov 6 at 18:48
|
2018-12-17 09:32:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4062685966491699, "perplexity": 3335.924367193989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828501.85/warc/CC-MAIN-20181217091227-20181217113227-00304.warc.gz"}
|
https://socratic.org/questions/how-do-you-factor-21n-2-3n-24
|
# How do you factor 21n ^ { 2} + 3n - 24?
Jun 21, 2018
$3(n-1)(7n+8)$
#### Explanation:
Take out a common factor of $3$:
$=3(7n^2+n-8)$
Factor the quadratic using the a-c method.
The factors of the product $7\times(-8)=-56$
which sum to $+1$ are $-7$ and $+8$.
Split the middle term using these factors:
$7n^2-7n+8n-8\quad\leftarrow$ factor by grouping
$=7n(n-1)+8(n-1)$
Take out the common factor $(n-1)$:
$=(n-1)(7n+8)$
$21n^2+3n-24=3(n-1)(7n+8)$
Jun 21, 2018
$21(n-1)\left(n+\frac{8}{7}\right)$
#### Explanation:
Solving the equation $21n^2+3n-24=0$
by the quadratic formula we get
$n_{1,2}=-\frac{1}{14}\pm\frac{15}{14}$
so
$n_1=1$
$n_2=-\frac{8}{7}$
and we get
$21(n-1)\left(n+\frac{8}{7}\right)$
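Either factoring route is easy to double-check with a computer algebra system; a small SymPy sketch (not part of either answer):

```python
from sympy import symbols, factor, expand

n = symbols('n')
print(factor(21*n**2 + 3*n - 24))    # 3*(n - 1)*(7*n + 8)
print(expand(3*(n - 1)*(7*n + 8)))   # 21*n**2 + 3*n - 24
```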
|
2019-10-14 03:27:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991632699966431, "perplexity": 8919.344000494371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00541.warc.gz"}
|
http://www.physicsforums.com/printthread.php?t=421161
|
Physics Forums (http://www.physicsforums.com/index.php)
- Calculus & Beyond Homework (http://www.physicsforums.com/forumdisplay.php?f=156)
- - finding T(v) relative to B and B' (http://www.physicsforums.com/showthread.php?t=421161)
dzimitry Aug9-10 04:00 AM
finding T(v) relative to B and B'
1. The problem statement, all variables and given/known data
find T(v) using the matrix relative to B and B'
T(x, y, z) = (2x, x + y, y + z, x + z)
v = (1, -5, 2)
B = { (2, 0, 1), (0, 2, 1), (1, 2, 1) }
B' = { (1, 0, 0, 1), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0) }
2. Relevant equations
3. The attempt at a solution
T(2, 0, 1) = (4, 2, 1, 3)
= 4(1, 0, 0, 1) + 2(0, 1, 0, 1) + 1(1, 0, 1, 0) + 3(1, 1, 0, 0)
= (8, 5, 1, 6)
T(0, 2, 1) = (0, 2, 3, 1)
= (4, 3, 3, 2)
T(1, 2, 1) = (2, 3, 3, 2)
= (7, 5, 3, 5)
A =
 8 4 7
 5 3 5
 1 3 3
 6 2 5
Av = (2, 0, -8, 6)
but if the person I am checking against is right, the answer should be (2, -4, -3, 3)
I am confused as to if I can even use the method I am using in this case.
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
dzimitry Aug9-10 04:01 AM
Re: finding T(v) relative to B and B'
that's a matrix A btw, everything that was indented got shifted.
HallsofIvy Aug9-10 09:13 AM
Re: finding T(v) relative to B and B'
Quote:
Quote by dzimitry (Post 2832607) 1. The problem statement, all variables and given/known data find T(v) using the matrix relative to B and B' T(x, y, z) = (2x, x + y, y + z, x + z) v = (1, -5, 2) B = { (2, 0, 1), (0, 2, 1), (1, 2, 1) } B' = { (1, 0, 0, 1), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0) } 2. Relevant equations 3. The attempt at a solution T(2, 0, 1) = (4, 2, 1, 3) = 4(1, 0, 0, 1) + 2(0, 1, 0, 1) + 1(1, 0, 1, 0) + 3(1, 1, 0, 0)
No, (4, 2, 1, 3) is NOT equal to (8, 5, 1, 6)! You are doing this backwards. You want to find numbers a, b, c, d such that (4, 2, 1, 3) = a(1, 0, 0, 1) + b(0, 1, 0, 1) + c(1, 0, 1, 0) + d(1, 1, 0, 0). That is, you need to solve a + c + d = 4, b + d = 2, c = 1, and a + b = 3.
Then
$$\begin{bmatrix}a \\ b\\ c\\ d\end{bmatrix}$$
will be the first column of the matrix.
Quote:
= (8, 5, 1, 6) T(0, 2, 1) = (0, 2, 3, 1) = (4, 3, 3, 2) T(1, 2, 1) = (2, 3, 3, 2) = (7, 5, 3, 5) A = 8 4 7 5 3 5 1 3 3 6 2 5 Av = (2, 0, -8, 6) but if the person I am checking against is right, the answer should be (2, -4, -3, 3) I am confused as to if I can even use the method I am using in this case. Thanks in advance 1. The problem statement, all variables and given/known data 2. Relevant equations 3. The attempt at a solution
dzimitry Aug9-10 12:28 PM
Re: finding T(v) relative to B and B'
ok that makes sense...and for the vector v = (1, -5, 2), do I need to solve a system like
(1, -5, 2) = a(2, 0, 1) + b(0, 2, 1) + c(1, 2, 1) and use (a, b, c) as my v and multiply that by A?
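For illustration (not part of the thread), both coordinate computations can be checked numerically; in the matrices below the basis vectors are placed as columns.

```python
import numpy as np

# Coordinates of T(2,0,1) = (4,2,1,3) relative to B': the columns of Bp are the
# B' basis vectors, so this solves a + c + d = 4, b + d = 2, c = 1, a + b = 3.
Bp = np.array([[1, 0, 1, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 0],
               [1, 1, 0, 0]], dtype=float)
print(np.linalg.solve(Bp, [4, 2, 1, 3]))   # [2. 1. 1. 1.] -> first column of the matrix

# Coordinates of v = (1,-5,2) relative to B (columns are the B basis vectors),
# i.e. the (a, b, c) asked about in the last post.
B = np.array([[2, 0, 1],
              [0, 2, 2],
              [1, 1, 1]], dtype=float)
print(np.linalg.solve(B, [1, -5, 2]))      # [ 4.5  5.5 -8. ]
```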
|
2013-12-07 13:31:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5590877532958984, "perplexity": 316.04568536369715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054548/warc/CC-MAIN-20131204131734-00059-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/tags/electric-circuits/new
|
# Tag Info
0
As you stated, we can think of a real battery as an ideal one with an internal resistance $R_i$. This battery is then connected to an external circuit with resistance $R$. Those 2 resistors form a voltage divider. If the EMF has a value of $V$ then the voltage measured across the external resistance is $V*R/(R+R_i)$. This voltage is equal to the EMF of the ...
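A tiny numeric illustration of that divider relation, with made-up component values:

```python
def terminal_voltage(emf, r_load, r_internal):
    """Voltage across the external resistance of a real battery (voltage divider)."""
    return emf * r_load / (r_load + r_internal)

print(terminal_voltage(emf=9.0, r_load=100.0, r_internal=1.0))  # ~8.91 V
```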
0
Diode laser with changeable wavelength, fan side with a prism and an array of sensors: wavelength -> amplitude Laser and mirror to position the beam, fan side with array of sensors: position -> amplitude Laser with variable amplitude, fan side with electric thermometer: amplitude -> heat -> current -> amplitude Broken laser, fan side with regulator (work ...
0
Sure, have multiple linear sensors, indicating amplitude.
0
When we derive Ohm's Law using the Drude Model, we assume at one point that E=V/L, when in fact E=dV/dL, unless E is constant, in which case the assumption E=V/L is true. But I don't understand why the electric field in a conductor must be constant as current flows. Generally, the electric field in a conductor does not have to be constant (in ...
1
An easy way to prove Ohm's law for electric fields that aren't constant is to first assume that the electric field is approximately constant over short lengths, just like $E=dV/dL$ suggests. Using that, you can derive Ohm's law for short lengths of material, $dV=IdR$. We'll assume that "current in = current out", which is true at steady-state. This allows us ...
0
The current $I$ has a value in one point of the circuit, in contrast to the voltage $U$ which is always measured between 2 points. The definition of $I$ is the amount of charge $\Delta q$ that passes through a particular point in the circuit in the time $\Delta t$ (it's a quantity mathematically similar to the simple velocity in kinematics). So when you ...
0
What is voltage? The way I see it, and this is only my concept, voltage represents electron pressure. Because electrons repel each other, the more electrons you pack into a unit mass of a conductor, the higher the voltage, such as in a metal foil capacitor. If you force electrons onto a non-conductor, the room for electrons is very limited and the static ...
2
Two capacitors in parallel have the same voltage. Two capacitors in series have the same charge. Simplify the problem to two capacitors in series (each started life as two capacitors in parallel) - what is the ratio of their voltages. Then use $Q=CV$ to figure the charge on each pair; finally distribute the charge on the elements of each pair according to ...
1
Assuming for now that this is homework, I'll provide this hint: the voltage on the 8.73 $\mu$C capacitor is not 21.9 V. Don't forget that that voltage has to be distributed among all of the components.
1
In principle, it is possible, using, e.g., high-current relativistic electron beams - please see, e.g., the review http://arxiv.org/abs/physics/0409157 . @John Rennie offers reasonable arguments, but the very real problems he mentions can be overcome - I don't have time to describe the specific mechanisms (see the review). In experiments, propagation length ...
2
The cathode ray tube has had the air pumped out. Electrons scatter off oxygen and nitrogen molecules so if you fired an electron beam in air it would be scattered in a short distance. The distance would depend on the beam energy, but it's a lot shorter than 100m. The range of electrons from beta radiation in air is around a metre. You could argue that ...
1
To start with, one could have an AC current never grounded anywhere, for a household generator for example. The reason one grounds at the generator is for safety, so the ground can pick up any mischance, as it is a practically infinite sink for electrons. Only one of the two lines can be grounded of course :). It was found though that due to capacitances ...
0
There is a LOT of capacitive coupling between the neutral wire and ground even if a DC current cannot flow. And we are talking about AC here.
0
In my opinion the contribution of hysteresis losses in an iron wire can be important when comparing with conductive losses, but this is an issue strongly dependent of the iron alloy used according to hysteresis loop, conductivity and permeability. The analysis of the induced current distribution in conducting wires subjected to a harmonic axial voltage is ...
2
There is a commonly used analogy for electric circuits called the hydraulic analogy. This imagines the electrons as water and the wires as pipes. The voltage is equivalent to the water pressure and the current is equivalent to the water flow rate. Start with a DC current and imagine the water is doing work by flowing through a water wheel: This is all ...
0
At a beach the waves carry energy and momentum from the sea to the shore, even though the water in the waves moves back and forth. It is the same way with alternating current: what matters is the energy flow carried by the electric and magnetic fields, not the movements of the charges.
0
If the voltage reverses doesn't the flow of electrons reverse? It depends. If the alternating voltage is across a diode then, no, the current through the diode doesn't (effectively) reverse but is instead unidirectional. However, a genuine alternating current periodically reverses direction - the electric charge 'sloshes' back and forth within the ...
-2
A metallic wire is electrostatically neutral: the mobile negative charges equal the strongly bound positive charges, so the resultant electric field is zero everywhere.
0
There is a way to solve it by assuming both ammeters to be identical, and thus from your equations: $$0.2R+6=1.7R \implies R=4$$ because there's no other relation between them, or if you find one it'll be the same information disguised in another form. Or it can be solved if we have any bit of extra information; as to solve a linear equation in two variables ...
1
From a circuit theory perspective, an inductor is effectively a short-circuit (wire) at DC while a capacitor is an open-circuit. Thus, any parasitic capacitance must be in parallel since, if it were in series, an inductor would be an open-circuit at DC. From an AC perspective, if the parasitic capacitance were in series, the inductor would appear ...
2
The missing piece here is that the temperature of the resistor is a function of the current. Your equation should perhaps read $V = I\,R(T(I))$. Does that help?
1
You have a function: $V(T, i) = i \cdot R(T)$ and you should get $\dfrac{dV}{di}$. $T$ doesn't change when you vary $i$ and $R(T)$ doesn't too, so it can be considered as a constant comparing to variable $i$. Fix $T$ at some generic value, for example $a$, doing this you get $R(T = a) = R_a$ So your function is reduced to $V(i) = i \cdot R_a$. Now you ...
0
The question has already been answered but I would like to provide another from a slightly different perspective. Imagine that the unconnected battery terminals were connected together via another resistor: Now, there is an electric current circulating counter-clockwise and given by $$I = \frac{V_{BAT A} + V_{BAT B}}{R_1 + R_2}$$ Clearly, if we ...
3
No, there won't be any persistent current going through the resistor. There will only be some current for a tiny period of time. This will bring the poles of the left battery to potentials $0,V_1$, and those of the right battery to $V_1,V_1+V_2$. Note that the electrostatic potential of both poles of the two batteries that are connected to the resistor will ...
0
No, there will be no current through the resistor. There is no potential difference between it's ends. There is a diference between the poles of each battery, but I see no reason why threre should be a voltage difference and current on the resistor. Both batteries create a potential difference relative to the resistor. It could be changed to have a ...
0
The total voltage difference across the resistors ($V_3$) is, by design, a constant 5.4 volts1 (because it's being supplied by a power supply that pretty well approximates a constant voltage source). This total voltage difference must be dropped across the two resistors $R_1$ and $R_2$ in series: that is, $$V_1 + V_2 = V_3.$$ As the resistors are in ...
2
It depends what you're trying to do. The integral will give you the net current. So if you did the integral for the AC current going to the computer I'm typing this on you'd get the value zero. This is quite correct because no net current flows for kit powered by AC. On the other hand, if you're trying to work out how much power my computer consumes you ...
0
I think is that on the outer diameter for a distance tending to zero, the electric field will be same as inside but when you move further outside of the cable towards larger distance, the field will be reducing.
|
2014-07-23 03:50:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6783455610275269, "perplexity": 403.21230441370056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997873839.53/warc/CC-MAIN-20140722025753-00226-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an%3A1163.30301
|
## Differential calculus on the Faber polynomials. (English) Zbl 1163.30301
From the introduction: We show how the methods introduced in [Bull. Sci. Math. 126, No. 5, 343–367 (2002; Zbl 1010.33006)] and [A. Bouali, ibid. 130, No. 1, 49-70 (2006; Zbl 1094.30010)] allow to do differential calculus on the manifold of coefficients of univalent functions. The Faber polynomials $$(F_k)_{k\geq 1}$$ are given by the identity
$1+b_1w+ b_2w^2+\cdots+ b_kw^k+\cdots= \exp\Bigg(-\sum_{k=1}^{+\infty} \frac{F_k(b_1,b_2,\dots,b_k)}{k} w^k\Bigg).$
The polynomials $$(G_m)_{m\geq 1}$$ and $$(K_n^p)_{n\geq 1}$$, $$p\in\mathbb Z$$, are given by
\begin{aligned} \frac{1}{1+b_1w+b_2w^2+\cdots+b_kw^k+\cdots} &= 1+ \sum_{m=1}^{+\infty} G_m(b_1,b_2,\dots,b_m)w^m,\\ (1+b_1w+b_2w^2+\cdots+b_kw^k+\cdots)^p &= 1+ \sum_{n\geq 1} K_n^p(b_1,b_2,\dots,b_n)w^n, \end{aligned}
then $$G_m= K_m^{-1}$$ and $$K_m^1=b_m$$.
The object of this note is to prove that the polynomials $$(K_n^p)$$ are all obtained as partial derivatives of the Faber polynomials and show how some of the recursion formulae on the polynomials are related to elementary differential calculus on $${\mathcal M}$$. This is a step towards the classification of Faber type polynomials. In the last section, we give the example of the conformal map from the exterior of the unit disk onto the exterior of $$[-2,+2]$$. This shows how to introduce nontrivial second-order differential operators on the manifold $${\mathcal M}$$.
### MSC:
30B50 Dirichlet series, exponential series and other series in one complex variable
33C45 Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.)
### Citations:
Zbl 1010.33006; Zbl 1094.30010
Full Text:
### References:
[1] Airault, H.; Malliavin, P., Unitarizing probability measures for representations of Virasoro algebra, J. Math. Pures Appl., 80, 6, 627-667 (2001) · Zbl 1032.58021
[2] Airault, H.; Ren, J., An algebra of differential operators and generating functions on the set of univalent functions, Bull. Sci. Math., 126, 5, 343-367 (2002) · Zbl 1010.33006
[3] A. Bouali, Faber polynomials, Cayley-Hamilton equation and Newton symmetric functions, Bull. Sci. Math. (2005) · Zbl 1094.30010
[4] A. Bouali, On the Faber polynomials of a rectangle, preprint, 2005
[5] Faber, G., Über polynomische Entwicklungen, Math. Ann., 57, 385-408 (1903) · JFM 34.0430.01
[6] Feller, W., An introduction to probability theory and its applications, vol. 1 (1968), John Wiley · Zbl 0155.23101
[7] Kirillov, A.A., Geometric approach to discrete series of unireps for Virasoro, J. Math. Pures Appl., 77, 735-746 (1998) · Zbl 0922.58078
[8] Montel, P., Leçons sur les séries de polynômes à une variable complexe, Collection de monographies sur la théorie des fonctions (1910), Gauthier-Villars, Paris · JFM 41.0277.01
[9] Neretin, Y.A., Representations of Virasoro and affine Lie algebras, 157-225 · Zbl 0805.17018
[10] Pritsker, I.E., Derivatives of Faber polynomials and Markov inequalities, J. Approx. Theory, 118, 163-174 (2002) · Zbl 1374.30012
[11] Schaeffer, A.C.; Spencer, D.C., Coefficient regions for schlicht functions, Colloquium Publications, vol. 35 (1950), American Math. Soc. · Zbl 0066.05701
[12] Schiffer, M., Faber polynomials in the theory of univalent functions, Bull. Amer. Math. Soc., 54, 503-517 (1948) · Zbl 0033.36301
[13] Schur, I., Identities in the theory of power series, Amer. J. Math., 69, 14-26 (1947) · Zbl 0034.01103
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2022-08-09 13:26:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226744174957275, "perplexity": 1611.4175842507664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00346.warc.gz"}
|
https://brilliant.org/problems/spot-future-whats-your-trade/
|
You calculate that the spot value of the SPY Index is currently $2050. The SPY Futures contract that expires in 6 months is trading at$2050. What would you want to do?
|
2017-05-23 18:52:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.408609002828598, "perplexity": 1108.0697735649373}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607649.26/warc/CC-MAIN-20170523183134-20170523203134-00308.warc.gz"}
|
https://pandaanku.com/degenerate-boundaries-for-multiple-alternative-decisions-panda-anku/
|
# Degenerate boundaries for multiple-alternative decisions | Panda Anku
### Problem setup and aim
To investigate the form of the optimal decision boundary for multiple choices, we follow the usual convention that choice evidence is modeled by overlapping normal distributions23. Each choice (hypothesis) Hi is represented by a normal distribution with vector mean μi and standard deviation σi. These parameters μi, σi are defined by the inter-choice discriminability, which is the amount of overlap between choice distributions: the less overlap between distributions i and j, the more discriminable the choices and easier the task24. We assume equal discriminability between all choices, and so all choice distributions are equivariant with equidistant means, which is achieved by using vector-valued evidence (see Methods). This means that for each decision episode, the “true” hypothesis is equally indiscriminable from all other hypotheses, giving a consistent n-alternative forced choice (nAFC) paradigm regardless of which hypothesis is chosen.
The integration-to-threshold model samples evidence from the ‘true’ hypothesis until a decision boundary is reached. Each choice distribution represents possible evidence for that hypothesis, originating from the environment, memory, or noisy sensory processes18. At each time step, a sample is taken and inference is performed on the evidence accumulated thus far, generating a decision trajectory. The decision time T is when this trajectory crosses a boundary for a particular choice. If the boundary crossed represents the “true” hypothesis, then zero error e = 0 is generated, whereas crossing any other boundary generates a unit error e = 1. Usually, integration-to-threshold models rely on scalar evidence with a scalar decision boundary. In our case, the evidence will be a vector with boundaries that are hyper-surfaces in a vector space, which is detailed in the next section.
At the end of a decision episode, when a choice is made, the decision time and error are combined into a single reward. Here, we formulate reward as a linear combination of error and decision time weighted by their associated costs Wi and c:
$$r=\begin{cases}-W_i-cT, & \text{incorrect decision}\\ -cT, & \text{correct decision.}\end{cases}$$
(1)
This is a standard reward function used in a wide range of past work, for example, ref. 7,25. Unequal error costs Wi ≠ Wj induce choice-dependent reward, where hypothesis-dependent error costs are relative to the “true” hypothesis and to each other. For tractability, we will assume all error costs are equal, with the expectation that similar results hold in the unequal cost case but that the analysis will be more complicated. We also consider a constant (time-independent) cost c per time step, assuming stationarity of evidence distributions and evidence accumulation in a free-response task. A challenging aspect of this framework is that reward is highly stochastic due to the random nature of evidence sampling. How then do we define optimality?
In this paper, we come from the view that humans and animals maximize expected reward19,26,27,28. Then the optimum decision boundary maximizes the average reward for a given ratio of costs c/W. Monte Carlo simulations of decision trajectories of independent trials, using the formalism outlined above and the evidence inference method derived in the next section, yield reward values for a set of candidate decision boundaries. In general, we find a set of high-dimensional nonlinear, complex boundaries. We will show that these boundaries are consistent with a range of behavioral and phenomenological results along with testable neurophysiological predictions.
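To make this procedure concrete, here is a rough Monte Carlo sketch of estimating the expected reward of a flat boundary in a simplified n-alternative Bayesian accumulator. It uses 1-D Gaussian evidence as a stand-in for the paper's vector-valued evidence model, and all parameter values are illustrative rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_reward(theta, n=3, mu=0.5, sigma=1.0, W=1.0, c=0.01, trials=1000):
    """Monte Carlo estimate of the mean reward (eq. (1)) for a flat boundary
    P_i(t) > theta in an n-alternative Bayesian accumulator.  The evidence model
    is simplified to 1-D Gaussians, so the numbers are only illustrative."""
    means = np.zeros(n)
    means[0] = mu                                # hypothesis 0 is the "true" one
    rewards = []
    for _ in range(trials):
        log_post = np.full(n, -np.log(n))        # uniform prior over hypotheses
        t = 0
        while True:
            t += 1
            x = rng.normal(means[0], sigma)      # sample evidence from the true hypothesis
            log_post += -0.5 * ((x - means) / sigma) ** 2   # Gaussian log-likelihoods
            post = np.exp(log_post - log_post.max())
            post /= post.sum()                   # posterior P_i(t)
            if post.max() > theta:               # flat decision boundary crossed
                choice = int(post.argmax())
                rewards.append(-c * t - (W if choice != 0 else 0.0))
                break
    return float(np.mean(rewards))

# Sweep the flat threshold to locate the reward-maximizing boundary for one c/W ratio.
for theta in (0.7, 0.8, 0.9, 0.95, 0.99):
    print(theta, round(expected_reward(theta), 3))
```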
### Multi-alternative decision-making as a particle diffusing in n-dimensions
In this section, we show that n-alternative decision-making can be viewed as a diffusion process in an (n − 1) dimensional subspace of the belief space. This is a perspective that has previously been established (for example, see refs. 7, 14), but we cover this material here to help the reader build intuition and to detail the implications for multi-alternative decisions.
For 2AFC tasks, integration-to-threshold models such as the sequential probability ratio test (SPRT, see Methods), represent the decision trajectory as a particle diffusing in 1D. If we define this dimension on the y-axis with the origin corresponding to the point of equal belief for each hypothesis, then positive y-values represent greater belief in choice H0 and negative values greater belief in choice H1. If time is represented along the x-axis, the decision trajectory takes the form of a random walk over a range of y-values, with decision boundaries y ≥ ± θ0,1 for the two decisions 0, 1. This bounded random walk model can be extended to nAFC tasks, the walk taking place in (n − 1)-dimensions. However, this extension is not straightforward. Firstly, belief in hypotheses H0,1 are defined over the positive and negative real numbers of a single dimension, which raises the question of how belief in another hypothesis H2 should be represented. Secondly, the decision boundaries in 1D are well defined as a pair of single bounds (θ0 < y < θ1), but as the belief space extends to n-choices, how should the decision boundaries be represented?
To examine these questions, we take a Bayesian sequential inference perspective in which 1D decision variables in models like the SPRT are deconstructed into two decision variables that represent the degree of belief in two hypotheses H0 and H1. By using the sequential Bayesian inference beliefs directly, the positive/negative range for the 1D decision variable is split into two independent axes that represent normalized belief over each hypothesis as a separate decision trajectory, given by the posterior probability Pi(t) = P(Hi ∣ x(1:t)) where x(1:t) is the accumulated evidence at time t (see Fig. 1).
The decision variable transformation between the SPRT and sequential Bayesian inference is straightforward (Fig. 1c, d). A decision variable (DV) represents the accrual of all sources of priors and evidence into a quantity that is interpreted by the decision rule to produce a choice4. The DV of the SPRT is the log posterior probability ratio29 and the DV of sequential Bayesian inference is simply the posterior probability. Because sequential Bayesian inference is constrained by P0(t) + P1(t) = 1, it has the same number of unconstrained degrees of freedom as SPRT. Moreover, the boundary values are equivalent under the DV transformation from SPRT boundaries θ0,1 to boundaries on the posteriors Θ0,1; specifically, the SPRT thresholds are given by the log-odds of the corresponding posterior thresholds in sequential Bayesian inference4,
$$\theta=\log\left(\Theta/(1-\Theta)\right).$$
(2)
Now, the key point is that sequential Bayesian inference applies to an arbitrary number of choices and so holds for general nAFC decision-making14. Figure 1c, d illustrate this for 3- and 4-choice tasks, respectively, with the dashed lines representing flat decision boundaries Pi(t) > Θi. Individual probability trajectories for each choice correspond to the coordinates of the overall decision trajectories (Fig. 1a, b), interpreted as a particle diffusing in nD. Sequential Bayesian inference forms an orthogonal coordinate system for each probability trajectory (Fig. 1c, d) as components of the n-dimensional decision trajectory (Fig. 1a, b).
There are geometric implications of using sequential Bayesian inference as coordinates for n-dimensional decision trajectories (Fig. 1a, b). Although the decision trajectories have n probability-coordinates, Pn, they are constrained such that ∑iPi(t) = 1; therefore, the decision trajectories populate (n − 1)D simplices. For example, 2AFC decision dynamics are represented as a particle constrained to have P0(t) + P1(t) = 1, which is the 1D line P1 = 1 − P0 on a 2D (P0, P1) plot of the beliefs. It follows that 3AFC dynamics take place on a 2D plane (Fig. 1a) and 4AFC dynamics in a 3D tetrahedron (Fig. 1b) and so forth. Note that if any hypothesis has zero probability Pi(t) = 0, then the space in which the decision trajectory evolves collapses to the remaining non-zero directions. For example, each face of the 4-choice tetrahedron in Fig. 1b is a combination of three choices with non-zero probabilities and each edge a combination of two such choices.
As a result, decision boundaries are (n − 2)-dimensional objects in n-dimensional probability space. So for n > 2 choices, boundaries can have spatial dependence with respect to the decision space visualized in Fig. 1a, b. For 2AFC tasks, decision boundaries are points on a 2D line, which are simply the transformed boundaries (equation 2) of the standard two-choice integration-to-threshold model. Likewise, for 3AFC, the decision boundaries are lines on a plane (Fig. 1a, dashed lines) and for 4AFC are planes within a tetrahedron (Fig. 1a, dark gray planes). The example boundaries shown are flat with a constant decision threshold in each dimension. An interesting consequence is that high-dimensional boundaries can have a nonlinear structure as a function of the n-dimensional beliefs P. Then, the linear 3AFC boundaries (Fig. 1a, dashed lines) generalize to curves and the planar 4AFC boundaries (Fig. 1b, dark gray planes) generalize to curved surfaces.
Curved decision boundaries have been shown to perform optimally on 3AFC tasks for free-response, mixed-difficulty trials7; however, it is not known how important the precise shape of that boundary is for maximizing reward. Here we ask whether there are other complex boundary shapes that improve performance over the flat boundary case and whether the greater freedom to choose nonlinear boundaries has other consequences for decision-making.
### Multi-dimensional decision boundaries can be complex
To investigate the importance of boundary shape for reward maximization, we define a subset of possible boundaries using some specific spatial parameterizations that provide diverse sets of nonlinear boundaries. These parameterizations are constrained such that: (I) each boundary θi intersects with each edge leading away from the point Pi = 1, and likewise intersects with each (hyper)plane leading away from the said point (e.g., each colored boundary intersects with two edges in Fig. 2); and (II) assuming symmetric error costs Wi = Wj for simplicity in equation (1), the boundaries remain symmetric under permutations Pi ↔ Pj (e.g., all boundaries have the reflectional symmetries of the outer equilateral triangle in Fig. 2).
These constraints can be used to derive a general boundary parameterization comprising a shape function and tuning parameters (Fig. 2). A general boundary parameterization F(P(t); θ, α, …) takes the probability vector P(t) as an input, along with an edge-intersection parameter θ and shape parameters (α, …) to give a decision rule:
$$P_i(t) \, > \, F(\mathbf{P}(t);\, \theta, \alpha, \ldots) \qquad (3)$$

The resulting complex decision boundary has an amplitude parameter α and some additional shape parameters. To make our investigation tractable, we limit our parameterization to one additional parameter, β (e.g., a frequency in the oscillating case). For simplicity, we select four distinct forms of F that we call flat(θ), curve(θ, α), power(θ, α, β), and oscil(θ, α, β), examples of which are shown in Fig. 2 and all of which contain the flat boundary as a particular instance (see Methods for the full forms and a mathematical derivation). Within these parametric subsets, the optimal decision boundaries are determined by optimal values of θ, α, and β.

Some example boundary parameterizations illustrate the range of possible boundary features and how they extend to multiple-choice decision tasks (Figs. 2 and 3). Each parameterization is a scaling of a flat boundary (Fig. 2, left column), denoted as the flat(θ) function, such that: (I) the function curve(θ, α) is the simplest parameterization of interest, with α the amplitude of the curve (Fig. 2, second column); (II) the function power(θ, α, β) has an additional parameter β that modulates a double curve or forms a central peak (Fig. 2, third column); and (III) the oscillatory function oscil(θ, α, β) is a cosine with amplitude α and frequency β (Fig. 2, fourth column). We have chosen these parameterizations so that if β = 0, we recover the curve parameterization, and if α = 0, we recover the flat parameterization. These parameterizations apply to any number of choices n ≥ 2, with examples of the curve and oscil functions for 4AFCs shown in Fig. 3a, b, respectively. Note how these decision boundaries intersect with each 3AFC plane: each face in Fig. 3a matches a panel in Fig. 2. Overall, we have constructed a set of permutation-invariant, nonlinear decision boundaries that we will use as candidate functions to explore optimal decision rules. This raises the question of which parameter values give optimal boundaries within these parametrized subsets.

### Complex decision boundaries are consistent with the speed-accuracy curve

It is well established that humans and animals generate speed-accuracy trade-off (SAT) curves during decision-making experiments6,9, showing the mean error against mean decision time, where each point on the curve can be accorded a cost ratio c/W of time to errors (equation (1)). For 2AFC tasks, the trend is that as the value of c/W increases, speed is favored over accuracy, and so the mean decision time decreases with a compensatory increase in mean decision error. This trade-off is instantiated by the decision rule (learned for each c/W value), with the SAT curve implicitly parameterized by the decision boundary parameters. For 2AFCs, this is a single parameter: the flat boundary threshold θ30,31. For nAFCs with n > 2, these are complex boundary functions (equation (3)) with sets of parameters (α, β, …). If the SAT curve is truly a curve, rather than a region, then multiple parameter combinations would give the same SAT, since a curve requires just one implicit parameter.

To examine the SAT curves for each parameterization, we optimized the boundary parameters for a range of cost ratios c/W and then plotted the resulting accuracies and decision times (Fig. 4). Optimal parameters θ, α, β were found by stochastic optimization over the reward landscapes, using Monte Carlo simulation over a grid search of parameters to generate a visualization of the reward landscapes (Fig. 5a–c).
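To make the construction above concrete, here is a hedged sketch of permutation-symmetric boundary functions together with the threshold-crossing rule of equation (3). The functional forms below are illustrative stand-ins, not the paper's flat/curve/power/oscil definitions, which are given in its Methods.

```python
# Hedged sketch of permutation-symmetric boundary parameterisations for a 3AFC
# task. These functional forms are illustrative stand-ins, NOT the paper's
# exact flat/curve/power/oscil definitions.
import numpy as np

def flat(P, theta):
    # constant threshold, independent of the competing beliefs
    return theta

def curve(P, theta, alpha):
    # bends the flat boundary by alpha, symmetrically in the competing beliefs
    others = 1.0 - P            # total belief in the competing options
    return theta + alpha * others * (1.0 - others)

def oscil(P, theta, alpha, beta):
    # cosine modulation with amplitude alpha and frequency beta
    others = 1.0 - P
    return theta + alpha * np.cos(beta * np.pi * others)

def decide(P, boundary, **kw):
    """Return the chosen option index, or None if no belief has yet crossed
    its boundary value (decision rule of the form P_i > F)."""
    P = np.asarray(P, float)
    crossed = P > np.array([boundary(p, **kw) for p in P])
    return int(np.argmax(P)) if crossed.any() else None

print(decide([0.7, 0.2, 0.1], flat, theta=0.6))                 # -> 0
print(decide([0.4, 0.35, 0.25], curve, theta=0.6, alpha=0.3))   # -> None (no crossing yet)
```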
As rewards are highly stochastic, smoothed landscapes were averaged over 100,000 decision trajectories. Optimal parameters were extracted by taking all points with a mean reward for which $r > r_{\max} - \delta\sigma$, where $r_{\max}$ is the maximum mean reward of the noisy landscape, σ is the spread of rewards, and δ = 0.02 is a small parameter (see Methods for details). Since the variation in expected reward is so small as to be negligible for a real decision-maker with limited capacity for sampling rewards from the environment, we define these boundaries as constituting a set of “good enough” boundaries that are in practice as effective as a true optimum within each parameterization.

The boundary functions curve, power, and oscil produce smooth, well-defined SAT curves (Fig. 4) that resemble the relationship between speed and accuracy for optimal rewards found experimentally2,31. Despite the wide range of boundary shapes they describe, all three parameterizations produce nearly identical SAT curves, as confirmed by overlaying the average over all three cases (Fig. 4, black dashed curve, all panels). Flat boundaries (α = 0) are contained within these parameterizations, and so the SAT curves for all three boundary functions closely resemble the flat-boundary case. One difference between the three cases is their relative spreads: the curve(θ, α) parameterization yields the tightest SAT curve, followed by the oscil(θ, α, β) SAT curve, and finally, the power(θ, α, β) SAT curve is the thickest. One might therefore consider that the curve parameterization qualifies as the ‘best’ SAT curve, which is consistent with previous work since it describes the shape of the optimal boundary found for the 3AFC case in refs. 7,14. However, all parameterizations closely follow a single mean SAT curve (black dashed lines), so a wide range of boundary characteristics give near-optimal decisions. Moreover, each value of c/W has multiple points spread along the same curve, so the SAT can be satisfied by multiple optimized boundaries even within the same parameterization.

### A degenerate set of decision boundaries yields close-to-optimal expected reward

So far, we have seen that optimized nonlinear decision boundaries generate well-defined SAT curves that remain similar across the three parameterizations curve, oscil, and power. Next, we analyze the optimized boundaries by direct inspection of the reward landscapes and their position on the SAT curve. Inspection of the 3AFC reward landscapes for the curve parameterization reveals that the region with mean rewards within δ of the maximum $r_{\max}$ extends across the parameter space (Fig. 5, black lines). As c/W increases and the maximum reward $r_{\max}$ decreases (panels a–c), this acceptance region sweeps towards θ = 1/3 and the flat boundary α > 0 and becomes more dependent on α in addition to θ. These acceptance regions correspond to sets of optimized parameter combinations and so specify families of decision boundaries that all maximize reward within a small variance.

For closer scrutiny of the optimal region, five sections of the reward landscapes for the 3AFC curve(θ, α) parameterization are shown in Fig. 6. These sections correspond to values α = {−20, −10, 0, 10, 20} (including the flat boundary α = 0) and provide a detailed look at the reward landscape peaks with 100-fold more samples of θ than in Fig. 5. Evidently, the peak changes with cost ratio c/W and θ (Fig. 6, right column).
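As a toy illustration of this selection procedure (and explicitly not the paper's simulation code), the sketch below runs a Monte Carlo grid search over (θ, α), estimates a mean reward per cell from a made-up noisy landscape, and keeps every cell whose mean reward exceeds r_max − δσ.

```python
# Sketch with stated assumptions: a grid search over (theta, alpha) in which
# each cell's mean reward is estimated by Monte Carlo, and the "good enough"
# set is every cell with mean reward above r_max - delta * sigma, mirroring
# the selection rule in the text. The reward simulator is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)

def simulate_mean_reward(theta, alpha, trials=2000):
    # toy stand-in: a broad, shallow ridge plus sampling noise
    base = -abs(theta - 0.7) - 0.01 * abs(alpha)
    return base + rng.normal(0, 0.05, trials).mean()

thetas = np.linspace(0.34, 0.99, 30)    # theta in (1/3, 1) for a 3AFC task
alphas = np.linspace(-20, 20, 21)
R = np.array([[simulate_mean_reward(t, a) for a in alphas] for t in thetas])

r_max, sigma, delta = R.max(), R.std(), 0.02
good = np.argwhere(R > r_max - delta * sigma)   # indices of near-optimal cells
print(f"{len(good)} of {R.size} (theta, alpha) cells are 'good enough'")
```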
This analysis of the reward-landscape sections in Fig. 6 leads to the following observations: (I) as c/W increases, the reward landscape maximum covers a broader range of θ values (section peaks separate) and appears to acquire slope (peaks diverge in height, with a higher-to-lower pattern); (II) the spread of the average rewards at the peak of each section overlaps (red dashed lines), decreasing as c/W increases but not diminishing to zero; (III) extracting near-to-optimal parameters by taking all points within a small δ = 0.02 range of the peak average reward yields a set of near-optimal decision boundaries over a broad range of θ and α values (black points in Fig. 6).

These three observations all support the effective degeneracy of optimized decision boundaries within the parameterizations. Observation (I) shows that for small c/W, the underlying structure of the reward landscape appears to degenerate, with sections almost entirely overlapping (Fig. 6, top right). As c/W increases, a shallow structural maximum becomes apparent (Fig. 6, bottom right). Observation (II) shows significant overlap in the close-to-optimal region even across the apparent structural maximum (Fig. 6, bottom right). Observation (III) shows directly that there is an effective degeneracy.

One could question whether different sections through the reward landscape would change these observations, as Fig. 6 depends on the range and discretization of α. Our range of α covers the entire range of boundary shapes shown in Fig. 2, including flat boundaries, and because of the gradual variation across sections, we would not expect further structure from a finer discretization. We also expect that using more Monte Carlo samples for each cross-section would not change the results, as the means and spreads shown in the sections in Fig. 6 appear to be good estimates of the distributions of average rewards (e.g., by their smooth variation with θ and single maxima).

The close-to-optimal set of boundaries produces a range of speed-accuracy trade-offs (Fig. 5d–f). In 2AFC decision-making, each point on the SAT curve is a unique optimal boundary specifying a unique trade-off for a given value of c/W. This raises the question: for nAFCs with n > 2, what is the range of points on the SAT curve given by the set of close-to-optimal boundaries? Fig. 5d shows the mean decision errors against mean decision times for all close-to-optimal boundaries found in the 3AFC landscapes from Fig. 5a–c. In each case, an effectively-optimal reward is achieved by a broad range of SATs (Fig. 5d) rather than a tight group around a single SAT.

How can a small range of reward values produce a broad range of speed-accuracy trade-offs? The breadth of speed-accuracy trade-off values produced by complex decision boundaries is explained by a range of near-optimal threshold parameter values. Then, given cost values W and c, the rewards

$$\mathbb{E}(r_{\max}) = -W\,\mathbb{E}(e) - c\,\mathbb{E}[T], \qquad e \in \{0, 1\} \;\; \text{(correct/incorrect decision)} \qquad (4)$$

have two degrees of freedom for each value of $\mathbb{E}(r_{\max})$ in trading off the expected error $\mathbb{E}(e)$ and expected decision time $\mathbb{E}[T]$. Thus, the same expected reward may be attained by boundaries with different combinations of $\mathbb{E}(e)$ and $\mathbb{E}[T]$. Hence, the set of close-to-optimal decision boundaries yields the range of $(\mathbb{E}(e), \mathbb{E}[T])$ solutions shown in Fig. 5d–f.
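A tiny worked example of this degeneracy under equation (4), with made-up costs and outcome statistics: two different speed-accuracy combinations give exactly the same expected reward, so the reward alone does not pin down a unique boundary.

```python
# Tiny numerical illustration of equation (4): with W = 1 and c = 0.1, two
# different (mean error, mean decision time) combinations yield the same
# expected reward. All values are made up.
W, c = 1.0, 0.1
combos = [(0.10, 2.0),   # (mean error, mean decision time)
          (0.05, 2.5)]
for e, T in combos:
    print(f"E(e)={e:.2f}, E(T)={T:.1f}  ->  reward = {-W*e - c*T:.2f}")
# both combinations print reward = -0.30
```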
### Effectively-optimal decisions with different boundary shapes

What are the characteristics of the nonlinear decision boundaries in the near-optimal set? If the decision boundaries were qualitatively similar in shape, then learning the particular boundary shape would be important for maximizing reward. Conversely, if the boundary shapes are qualitatively different, then learning the precise boundary shape would not be critical, and the emphasis on optimality would shift to the inference and normalization processes for multiple-choice decision-making.

Figure 5e shows that for each cost ratio c/W, the edge-intersection parameter θ seems to be the main determinant of optimality, as is visible in the homogeneous values of θ within each region (a–c). In contrast, the shape parameter α takes heterogeneous values within each region (a–c) in Fig. 5f. For every cost ratio c/W, the entire explored range of α is represented in the optimal set, whereas there is a narrow range of θ that varies with c/W. The flat-boundary case (α = 0) is optimal for each of the degenerate sets, with the optimal θ then a single value that lies within the broadened range when α is non-zero. Thus, for all c/W, there are many SATs near each parameter value, and in turn, each SAT instance is close to many different parameter values (Fig. 5e, f). Therefore, it appears that close-to-optimal multi-alternative boundaries are possible with significant modulation of the flat-boundary case. The close-to-optimal set contains a broad range of parameters, giving a diverse set of boundary shapes (examples in Fig. 7). This supports the notion that learning the precise boundary shape is less important for making effectively optimal decisions.

### Mean error and decision time vary along the optimal decision boundaries

Every point on an extended boundary for nAFC decision-making with n > 2 has an associated error and decision time distribution (see Fig. 7 for a color map of the mean decision time for flat boundaries). In contrast, the 2AFC decision thresholds are points on a line in the space of 2D belief vectors P from (0, 1) to (1, 0). Each choice on an extended boundary is a single point with an error and decision time distribution. Spatially-dependent error and decision time distributions that vary along the decision boundary are a consequence of having a multi-dimensional belief vector that can vary over a decision boundary embedded as a curve or surface in the higher-dimensional belief space. Hence, the close-to-optimal set of boundaries has different ranges of mean decision times and mean errors but practically indistinguishable expected rewards. If the under-determination in the reward structure were eliminated (e.g., by also minimizing mean error or mean decision time), then the set of reward-maximizing decision boundaries would contract and the SATs would narrow. Interestingly, decision times vary even along flat boundaries (Fig. 7a, yellow lines; decision times shown by background color), and so even the simplest case of crossing a single threshold is complicated for multiple alternatives.

### Implicit dynamics of optimal decision boundaries support both static and collapsing thresholds

The point where a decision trajectory crosses a high-dimensional decision boundary is a belief vector that has a corresponding mean error and mean decision time.
Because all of these quantities can vary along a nonlinear boundary, there can appear to be non-trivial dynamics in the decision ‘threshold’ if it is instead interpreted as a unitary value rather than as a boundary function. Here we refer to this property as implicit dynamics because it originates in the boundary shape rather than from an explicit time-dependence of the threshold. In Fig. 7, the boundary overlays a gradient coloring representing decision time, making explicit the non-trivial relation between decision time and the belief at which a decision is made. In this sense, we uncover temporal ‘dynamics’ implicit in the static, complex, and nonlinear decision boundaries for multiple choices.

There has been much debate over whether temporally-dynamic decision thresholds give a better account of 2AFC experimental data than the static thresholds of SPRT15,16,18. The assumption of fixed (constant-valued) thresholds has been called into question, with collapsing thresholds gaining popularity; these are sometimes interpreted as urgency signals9,19,30,32. From an optimality perspective, collapsing thresholds are more appropriate for repeated free-response trials of mixed difficulty and for those with deadlines19,21, whereas static thresholds are appropriate for single free-response trials without deadlines and repeated free-response trials of known difficulty; however, static thresholds are not adequate for single free-response trials of mixed, a priori unknown difficulty25. Models with collapsing thresholds have been shown to reduce the skew of error and decision time distributions in some experimental tasks16, and urgency signals have been proposed to account for increased firing rates in the LIP brain region of macaques during trials in which accumulated evidence (encoded as neuronal firing rates) is unchanging10,11,30. How multiple-choice decision boundaries relate to this debate is thus of interest.

To investigate the relationship between the implicit decision threshold dynamics discussed above and spatial nonlinear decision boundaries, we transform the decision variables to a form that gives a temporal structure in the threshold as a consequence of the extended static boundaries. The boundary beliefs are sorted by decision time, averaging over boundary values with identical decision times. These then appear as time-dependent decision boundaries applied to evidence (Fig. 8), which, for display purposes, we represent using the log-odds (equation 2). These dynamic decision thresholds have a range of temporal structures for each cost ratio c/W, separating naturally into three categories: increasing (Fig. 8, top row), collapsing (middle row), and static thresholds (bottom row). These categories appear to correlate with the shape of the decision boundary: increasing thresholds with convex boundaries (e.g., Fig. 7a, dark blue curve), decreasing thresholds with concave boundaries (e.g., Fig. 7a, red curve), and static thresholds with flat boundaries (e.g., Fig. 7a, yellow line). This correlation seems to originate in an increase in decision time as the belief moves away from equality between choices (Fig. 7a–c, shading). All things being equal, decisions that terminate with higher beliefs tend to be more accurate, whereas decisions terminating with a low belief in the chosen option tend to be less accurate. Therefore, experimental observation of implicitly dynamic decision boundaries (including increasing thresholds) could be due to time-dependent accuracy plots of data from individual subjects33.
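The transformation described above can be sketched as follows; the boundary-crossing beliefs and decision times here are fabricated purely to show the sorting, averaging, and log-odds steps.

```python
# Sketch of the transformation described in the text, under stated assumptions:
# given sampled boundary-crossing beliefs and their decision times (fabricated
# here for illustration), sort by decision time, average beliefs that share a
# decision time, and express the result in log-odds, so a static spatial
# boundary reads like a time-dependent threshold.
import numpy as np

rng = np.random.default_rng(2)

# fabricated (belief at crossing, decision time) pairs for one choice
belief = rng.uniform(0.55, 0.95, 500)
dtime = np.round(1.0 + 3.0 * (belief - 0.55) + rng.normal(0, 0.1, 500), 1)

def log_odds(p):
    return np.log(p / (1.0 - p))

times = np.unique(dtime)
threshold = np.array([log_odds(belief[dtime == t].mean()) for t in times])
# 'threshold' versus 'times' now reads like a dynamic (here: increasing) threshold
print(times[:5], threshold[:5])
```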
We emphasize that while these implicit threshold dynamics look like temporal dependence, they are, in fact, due to the spatial structure of the decision boundary. Flat decision boundaries can therefore be interpreted as a special case in which the boundary that the belief in one choice must cross is independent of the beliefs in the other choices; however, even then, the mean decision time and mean error have a spatial structure along these boundaries (Fig. 7).

### Comparison to current models, interpretations, and predictions

There are few normative approaches to modeling multiple-choice decision-making, with the recent study of ref. 7 being the state-of-the-art in using an evidence accumulation vector accumulated in a race model with a boundary in n-dimensional space. Using dynamic programming, they find that optimal decision boundaries for free-response, mixed-difficulty trials are nonlinear and collapse over time. Clearly, their nDRM is closely linked to the model presented here, but there are key differences: the models have a different evidence structure, trial structure, and optimization process, which leads to diverging perspectives on the nature of multiple-choice decision boundaries. In the following, we describe how these perspectives can be reconciled, and in doing so gain a broader understanding of normalization and inference in multiple-choice decision-making.

One difference between our model and the nDRM of ref. 7 is in how they use Bayesian inference to accumulate evidence. Although the nDRM is derived from Bayes’ rule, evidence is accumulated linearly and inference values (the posterior) are used indirectly to calculate the expected reward7. Conversely, our model, which focuses more on the details of nonlinear decision boundaries, uses the posterior from Bayes’ rule as the accumulated evidence. In this respect, our model has less biophysical realism than the nDRM because there is little supporting evidence for the brain representing posterior beliefs directly as probabilities. Instead, studies point towards the brain employing indirect representations from which beliefs can be inferred33,34. However, we describe below how either view of evidence representation is compatible with two of the main results of the present paper.

Firstly, evidence accumulation takes place on an (n − 1)-dimensional simplex in our model, as in Fig. 1a, b (n = 2, a line; n = 3, a plane). Similarly, reward maximization in the nDRM results in evidence accumulation perpendicular to a diagonal equidistant from all evidence-component axes, and so the space collapses to n − 1 dimensions. This subspace appears to be a scaling of the posterior simplices shown in Fig. 1a, b, which is simply a normalization of the evidence accumulation (see Methods). In consequence, both models agree that normalization of the evidence is a key component of multiple-choice decision-making, which is imposed by using probability values directly here and emergent in the nDRM. Further, the decision boundaries in both models exist in that same subspace, and so the models agree as to the expected dimensionality of neural population activity during evidence accumulation (see the later section on Predictions).

Secondly, our model results in a set of close-to-optimal decision boundaries that contains drastically different boundary shapes, which we now argue is compatible with the nDRM. Tajima et al.
use a network approximation to the optimal decision rule to separate the boundary components into a race model with nonlinear and dynamic (collapsing) boundaries, both with normalization, and evaluate their performance under reward maximization7. Their model performs best in the presence of internal variability (added noise), where omitting boundary dynamics performs similarly to, or slightly better than, the combination of nonlinearity, dynamics, and normalization together. From our perspective, these distinctly different types of boundaries could be interpreted as members of a set of close-to-optimal boundaries, since both types result in effectively maximal rewards.

The effect of evidence normalization is also important. Both the model presented here and the nDRM rely on evidence normalization as key to model performance. In the model here, the normalization of evidence follows from using posteriors, whereas in the nDRM it follows from a projection of the accumulated evidence onto the (n − 1)-dimensional subspace described above. This suggests a major influence on optimality for multiple-choice tasks, yet an assessment of the contribution of normalization alone to performance is absent in ref. 7: in the nDRM, normalization is not separated from the nonlinearity of the decision boundaries. For this reason, and because ref. 7 uses mixed difficulties, which demand collapsing boundaries, the nDRM's performance with flat boundaries is not directly comparable to the flat boundaries examined here. Normalization appears integral to optimal multiple-choice decision-making, and the magnitude of its influence may offer an explanation for degenerate optima: it appears that boundary shape has a lesser influence on optimality when the evidence is normalized, which would benefit learning and generalization as a general neural mechanism for context-dependent decision-making35.

Evidence normalization is also a mechanism that satisfies a number of physiological constraints. Represented by the range of neural activity, normalization satisfies: (I) biophysical constraints on the range of activity of neural populations – the firing rate of biological neurons cannot be negative and cannot exceed a certain level due to their refractory period13; (II) the observation from neural recordings of decision-making tasks that a decision is triggered when neural activity reaches a stereotyped level of activity9; and (III) the need, as the number of options increases, to process and represent multivariate evidence accumulation and the relative belief over these options. This, together with the wide-ranging influence of normalization on optimality discussed previously, leads us to consider normalization as an integral part of the evidence accumulation process.

We now show that normalization, here in the form of a posterior representation of the evidence, can explain some ‘irrational’ behaviors: (a) the decrease in offset activities in multi-alternative tasks; (b) violation of the independence of irrelevant alternatives; and (c) violation of the regularity principle9,36,37,38,39.
These behaviors are outcomes of three properties of normalization when adding (for example) a third option P2, where the currently-held beliefs are P0 and P1 such that P0 + P1 = 1; the new normalized probabilities are then

$$\tilde{P}_i = \frac{1}{1 + P_2}\,P_i. \qquad (5)$$

This has the effect of increasing the minimum distance $d_T = T - P_i/(1 + P_2)$ from each choice belief to a boundary T, decreasing the distance $d = P_i/(1 + P_2) - 1/3$ from the flat prior, and reducing the difference in belief values supporting each alternative

$$\Delta_{i,j} = \frac{1}{1 + P_2}\,\lvert P_i - P_j \rvert, \qquad i \neq j. \qquad (6)$$
All of these quantities decrease as the belief value of option P2 increases, which has the following consequences.
First, we consider (a) a decrease in offset activities in multi-alternative tasks. Multiple studies show that the initial average neural activity (“offset”) encoding evidence accumulation decreases as the number of options increases9,36. Support for this offset behavior is given in ref. 7 for the network model of the nDRM by introducing it directly into the reward maximization as the number of options is increased. However, if evidence accumulation uses the posterior directly, as in our model, then for each unit increase in the number of choices the average (flat-prior) belief per choice decreases from 1/n to 1/(n + 1), i.e., by 1/(n(n + 1)); the decrease in offset activity is thus a direct consequence of evidence normalization.
Next, we consider (b) the violation of IIA. The independence of irrelevant alternatives (IIA) recurs in many traditional rational theories of choice40,41. It asserts that the presence of an ‘irrelevant’ option should not affect the choice between ‘relevant’ options (e.g., adding a low-value choice to existing higher-value choices)42,43,44. Violation of this principle has been shown across behavioral studies in both animals and humans37. This behavior is replicated in ref. 7 using a network approximation of the nDRM with added noise during evidence accumulation and is attributed to their use of divisive normalization. In our model, the violation of IIA is explained by the representation of evidence accumulation as the belief vector (equations (5, 6)): as the belief of the third option (P2) increases, the belief values of the other options are reduced, which necessitates more evidence accumulation before a decision than would otherwise have been required. Further, the difference in belief values supporting both high-valued options is decreased (equation (6)), so more evidence is needed to choose between these options also. Requiring additional evidence accumulation, and so increased difficulty in choosing between the two high-valued options, is exhibited by longer decision times and/or higher error rates, as in behavioral studies.
Last, we consider (c) the violation of the regularity principle. A counterpart of IIA, the regularity principle says that adding extra options cannot increase the probability of selecting an existing option; it, too, has been found to be violated in behavioral studies38,39. This is simulated in ref. 7 using the same network approximation of the nDRM as for the violation of IIA. We also find that violation of regularity is congruent with IIA violation: adding a third option reduces the belief value of the original options (equation (5)) while also reducing the difference in belief values (equation (6)).
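A hedged numerical illustration of equations (5) and (6), and of the offset argument above, using made-up belief values:

```python
# Numerical illustration of equations (5) and (6) and of the offset argument:
# adding a third option with belief P2 rescales the existing beliefs, shrinks
# their differences, and the flat prior per choice drops from 1/n to 1/(n+1),
# i.e. by 1/(n*(n+1)). Belief values are made up.
P0, P1 = 0.7, 0.3           # current two-choice beliefs, P0 + P1 = 1
for P2 in (0.0, 0.2, 0.5):  # belief assigned to the added option
    scale = 1.0 / (1.0 + P2)
    print(f"P2={P2:.1f}: P0->{scale * P0:.3f}, P1->{scale * P1:.3f}, "
          f"|P0-P1| -> {scale * abs(P0 - P1):.3f}")

n = 2
print(f"flat-prior drop going from {n} to {n+1} choices: "
      f"{1/n - 1/(n+1):.4f} = 1/(n(n+1)) = {1/(n*(n+1)):.4f}")
```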
Our next consideration is related to network models of decision-making. In the nDRM, evidence accumulation is implemented in a straightforward manner, but the optimal boundaries are complex and nonlinear7. The optimal decision policy is approximated by a recurrent neural circuit that implements a nonlinear transformation of the accumulators, which reduces the decision rule to a winner-takes-all rule when an accumulator reaches a single threshold. The decision rule is then simple, local, single-valued, and applies to each population independently. An interesting question is whether a similar approximation and neural implementation can be constructed for the nonlinear boundaries presented here. In principle, our model could be approximated by applying normalization (as in ref. 7) and a nonlinear transformation of the decision variable to remove the nonlinear component of the boundary. However, it is unclear whether a recurrent network (as in ref. 7) exists to transform the decision variables for the more complex power and oscil boundaries. If it is possible to represent the nonlinearities within a larger recurrent network, one would obtain a local, single-valued decision rule applied independently to each accumulator.
### Reproduction of other experimental findings
Hick’s law in choice RTs: Hick’s law is a benchmark result relating decision time to the number of choice alternatives. The relationship is classically log-linear, of the form $\overline{\mathrm{RT}} = a + b\log(n)$. This relationship is found for both perceptual and value-based implementations of the nDRM for a set cost ratio45,46,47. Our model also replicates this relationship for a number of cost ratios (Fig. 9a, colored lines; a brief illustrative fit is sketched after these findings). Interestingly, both the slope and intercept vary with the cost ratio c/W.
SAT offset and slope: It has been reported that increasing the number of choices n results in a steeper slope and larger values for decision time versus coherence, along with a steeper slope and larger values of mean error9. For our model, we find that the SAT curves move away from the origin with increasing choices (Fig. 9b). Notice that the slope of the curves also decreases with the number of choices, which is not inconsistent with ref. 9.
Dynamic thresholds: Urgency signals have been observed in neural recordings from area LIP during multiple-choice tasks8,9 and are often interpreted as implementing collapsing decision thresholds, although this is still under debate15,16. For multiple-choice boundaries, temporal dynamics of this type have been found for the optimized nDRM7. Our model shows that the appearance of urgency signals could, in part, be explained by the change of decision times along nonlinear decision boundaries (Figs. 7 and 8), giving rise to what looks like temporally-dynamic thresholds. However, boundaries that are nonlinear in evidence (but linear in time) do not apply to 2AFC tasks. Therefore, they cannot act as a catch-all explanation for the urgency signal since this signal has also been observed during 2AFC tasks. Additionally, our results also reproduce dynamic boundaries with increasing or mixed gradients, as found in ref. 17, although not as a result of mixed-difficulty trials.
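The fitting sketch referred to above, with hypothetical mean reaction times, recovers the intercept a and slope b of the log-linear Hick's-law form by least squares.

```python
# Sketch, assuming made-up mean RTs: fitting the log-linear Hick's-law form
# RT = a + b*log(n) by least squares to recover slope and intercept.
import numpy as np

n_choices = np.array([2, 3, 4, 6, 8])
mean_rt = np.array([0.45, 0.58, 0.66, 0.78, 0.86])   # hypothetical data (seconds)

b, a = np.polyfit(np.log(n_choices), mean_rt, 1)     # polyfit returns [slope, intercept]
print(f"intercept a = {a:.3f}, slope b = {b:.3f}")
```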
### Predictions
The nDRM makes several predictions pertaining to both behavior and neural implementation7. These also apply to the model presented here but stem directly from the mechanism of posterior probabilities as evidence accumulation, with the consequent boundary degeneracy. Here we offer a variation of these predictions, and so a means of distinguishing empirically between the nDRM and a distinct class of models, encompassing the one presented here, in which the integrators are normalized to represent probabilities48,49.
Firstly, the depression of neural activity prior to evidence accumulation (the offset) increases with the number of alternatives, as found in neural recordings of area LIP9. We find that this can be attributed directly to the decreasing value of the priors. We therefore predict that the mean offset is independent of modulations of the task, such as changes in reward rate or the learnt SAT. This is contrary to the predictions made in ref. 7, which attribute the offset to a mechanism for reward maximization of the network approximation, and so predict that the offset has a dependency on reward rate, encompassing the reward values and inter-trial interval.
Secondly, the neural population activity should be near an (n − 1)-dimensional subspace during evidence accumulation. We found that evidence accumulation takes place on an (n − 1)-dimensional simplex with nonlinear decision boundaries. In this, we agree with ref. 7 that the neural population activity should be constrained to an (n − 1)-dimensional subspace during evidence accumulation, and that this could be tested with standard dimensionality-reduction techniques using multi-electrode recordings. However, there may be further subtlety in our case due to the effective degeneracy of the decision boundaries. The section “Implicit dynamics of optimal decision boundaries support both static and collapsing thresholds” showed that the decision variable can be transformed into a form that gives apparent temporal structure in the threshold (Fig. 8), dependent on the particular nonlinear boundary within the degenerate set. Our expectation is that this subtlety in the threshold dynamics may also manifest as apparent variation of the accumulation manifold from subject to subject or trial to trial, while near-optimal performance is maintained (given that any point in the accumulation manifold could also lie on some decision boundary).
|
2022-12-05 13:54:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6521029472351074, "perplexity": 1053.5448576877418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00367.warc.gz"}
|
https://gh.bmj.com/content/6/12/e007248.full
|
Durability of effects from short-term economic incentives for clinic attendance among HIV positive adults in Tanzania: long-term follow-up of a randomised controlled trial
1. Carolyn A Fahey1,2,
2. Prosper F Njau3,
3. Nicole K Kelly1,4,
4. Rashid S Mfaume3,
5. Patrick T Bradshaw1,
6. William H Dow5,
7. Sandra I McCoy1
1. 1Epidemiology, University of California, Berkeley, California, USA
2. 2Epidemiology, University of Washington, Seattle, Washington, USA
3. 3Ministry of Health, Community Development, Gender, Elderly and Children, Dodoma, Tanzania, United Republic of
4. 4Epidemiology, University of North Carolina, Chapel Hill, North Carolina, USA
5. 5Health Policy and Management, University of California, Berkeley, California, USA
1. Correspondence to Dr Carolyn A Fahey; cfahey{at}berkeley.edu
Abstract
Introduction Conditional economic incentives have been shown to promote medication adherence across a range of health conditions and settings; however, any long-term harms or benefits from these time-limited interventions remain largely unevaluated. We assessed 2–3-year outcomes from a 6-month incentive programme in Tanzania that originally improved short-term retention in HIV care and medication possession.
Procedures
All participants received standard clinical care as provided by the health facilities, both during the 6-month intervention and thereafter. Following national guidelines for individuals starting ART, participants were directed to visit the clinic on a monthly basis for clinical evaluation and medication dispensing, including antiretroviral drugs to treat HIV.17 Along with this usual care, participants in the two intervention arms could receive food or cash transfers once per month, conditional on timely attendance at a scheduled clinic visit (within ±4 days), during the six consecutive months following trial enrolment. Participants could receive up to six transfers, for a maximum total value of TZS135 000 (≈US$66). This value was selected to prevent undue coercion and be comparable to the Tanzania Social Action Fund national social protection programme; it was somewhat larger than the amount offered in a previous trial in the Democratic Republic of Congo, which provided US$5 cash incentives (increasing in value by US$1 on each visit) to pregnant women living with HIV and found improved retention at 6 weeks post partum but not improved ART adherence or viral suppression.18 A more recent trial in Tanzania has also demonstrated the effectiveness of a monthly TZS22 500 cash incentive at improving 6-month retention and viral suppression compared with the standard of care.10
For the present follow-up study, after 36 months had elapsed since original enrolment in the trial, research assistants worked with clinic staff to trace all former participants, re-enrol them in the study, and measure long-term HIV care outcomes. Tracing procedures followed the US President’s Emergency Plan for AIDS Relief (PEPFAR) guidelines,19 using phone calls, engagement with community health workers who conduct routine tracing, and triangulation with other facilities (online supplemental figure 1). First, the last known status for each participant was abstracted from medical records at the original facilities. Individuals confirmed to be in care at the original facility or another facility within Shinyanga region were approached by research assistants during a scheduled clinic visit, after clinic staff obtained the individual’s permission for study contact. If an individual’s medical record specified an out-of-region transfer or that the most recent scheduled appointment was missed, clinic staff attempted to call or text; those who could not be reached by phone were referred to community health workers for tracing. Given successful tracing and agreement to be contacted by study staff, research assistants scheduled either an in-person meeting at a location preferred by the individual (home, community location or clinic) or a phone call if the individual no longer lived within the region.
Efforts to trace each participant continued until successful location or the conclusion of ‘exhaustive’ tracing efforts as defined by PEPFAR (≥3 attempts using at least two tracing methods). After contacting former participants, research assistants confirmed each individual’s identity and participation in the original trial, obtained consent to participate in the follow-up study, completed an interview, and abstracted all available medical records from facilities attended since starting ART. In-person written informed consent was obtained for contacted individuals who were still living within Shinyanga region; verbal consent was obtained over the phone for individuals who had moved out of the region; and a waiver of informed consent to access medical records applied to individuals found to be deceased or who could not be located after exhaustive tracing (as defined above).
All interviews were conducted in Kiswahili and collected information about experiences and preferences regarding HIV care along with socio-demographic characteristics. Clinical visit and appointment dates, medication dispensing and other routinely collected HIV care data were abstracted from all available facility and individual-held medical records, including both electronic and paper-based records.
Outcomes
The primary prespecified outcome was retention in care, measured by clinic attendance records at 24 and 36 months after enrolment. Following the same outcome definition used in the original study, individuals considered lost to care included those who died, disengaged from care or had no evidence of care for ≥90 days after a missed appointment as of 24 and 36 months.9 19 As recommended by PEPFAR, participants who could not be found after exhaustive tracing efforts were classified as lost to care.19 However, we conservatively considered retention to be missing for former participants whose last known status indicated a transfer that we were unable to verify (ie, complete medical records from the facility could not be accessed, primarily in the case of out-of-region facilities as these were not visited in-person).
All-cause mortality, a component of retention, was assessed as a secondary outcome at 24 and 36 months and in a time-to-event analysis. Mortality status and date of death were obtained through medical records or contact with family members during tracing. While we also intended to evaluate adherence to ART using the medication possession ratio over 12–36 months, this was deemed infeasible due to often incomplete medication dispensing records obtained at follow-up.
Statistical analysis
Sample size was determined for the original trial.9 For the follow-up study, we first conducted descriptive analyses regarding participant mobility and continuity of care over time, as crucial and often poorly understood aspects of retention in HIV care. This included summarising survey responses about moving and distance-related clinic preferences, the proportions attending their original facility versus new facilities at follow-up, and the frequency of out-of-region transfers.
The primary analysis followed the same methods as in the pre-specified analysis of the original trial.9 14 We conducted an intention-to-treat analysis to evaluate the primary outcome of retention in care and the secondary outcome of mortality at 24 and 36 months. Using predicted probabilities from logistic regression models, outcomes were expressed as marginal mean differences between the combined incentive and control groups with 95% CIs.20 21 We controlled only for enrolment site in the primary analyses to account for stratified randomisation.
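A hedged Python sketch of this type of estimate (the authors report using Stata, so this is an illustrative translation on fabricated data): fit a logistic model adjusted for enrolment site, then average the predicted probabilities with the incentive indicator set to 1 and to 0.

```python
# Hedged sketch (not the authors' Stata code): a marginal risk difference
# between incentive and control arms from a logistic model adjusted for
# enrolment site, via predicted probabilities averaged over the sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "retained": rng.integers(0, 2, 800),
    "incentive": rng.integers(0, 2, 800),
    "site": rng.integers(0, 4, 800).astype(str),
})  # fabricated data mimicking the trial's structure, for illustration only

fit = smf.logit("retained ~ incentive + C(site)", data=df).fit(disp=0)
p1 = fit.predict(df.assign(incentive=1)).mean()   # everyone set to incentive
p0 = fit.predict(df.assign(incentive=0)).mean()   # everyone set to control
print(f"marginal risk difference: {100 * (p1 - p0):.1f} percentage points")
```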
In secondary analyses, we additionally adjusted for baseline characteristics including age, sex, and characteristics that were imbalanced at baseline of the original trial (WHO clinical stage, occupation and language).9 We also examined effects disaggregated by incentive type (cash vs control, food vs control and food vs cash). In addition, we explored effect heterogeneity across the same subgroups as in an analysis of the original trial results (sex, age, wealth index and treatment delay between HIV diagnosis and ART initiation) with a Wald test of the interaction term at alpha=0.20, while expecting these analyses to be underpowered.22
Lastly, we examined the effect of incentives on time to all-cause mortality. We used an unadjusted Kaplan-Meier plot stratified by study group and a Cox proportional hazards model to compare the relative mortality rates by group. The Cox model was adjusted for clinic, as in the primary analysis. We also added a time interaction after detecting evidence of a proportional-hazards violation (Schoenfeld residuals p=0.006), using two equal 18-month time periods (0–18 months, 18–36 months) to satisfy the proportional-hazards assumption. As such, we reported HRs by time interval.
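For readers unfamiliar with the estimator behind a plot like figure 2, here is a minimal Kaplan-Meier calculation by hand on fabricated follow-up times; the actual analysis used Stata and a Cox model as described above.

```python
# Sketch of a Kaplan-Meier survival estimate computed by hand on fabricated
# follow-up times and events, of the kind plotted in figure 2.
import numpy as np

time = np.array([3, 5, 5, 8, 12, 12, 15, 20, 24, 30])   # months, made up
event = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])        # 1 = death, 0 = censored

surv = 1.0
for t in np.unique(time[event == 1]):        # step down only at event times
    at_risk = np.sum(time >= t)
    deaths = np.sum((time == t) & (event == 1))
    surv *= 1 - deaths / at_risk
    print(f"t={t:>2} months: S(t) = {surv:.3f}")
```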
The primary intention-to-treat analyses of effect estimates at 24 and 36 months included all randomised participants. We used multiple imputation to estimate retention in care and mortality for participants who were missing values for each outcome. This approach incorporates uncertainty about the missing data by creating numerous plausible imputed datasets and then combining results from each. We implemented sequential multiple imputation with 20 iterations separately for intervention and control arms using logistic models, including the same fully observed predictors used for this purpose in a similar trial (clinic, age, sex and WHO clinical stage).10 Parameter estimates were combined according to Rubin’s rules.23 24 As a sensitivity analysis, we also report complete-case estimates for all outcomes (excluding participants with missing data). For the survival analysis, individuals missing 36-month mortality status were instead censored at the date last known to be alive. All statistical analyses were conducted using Stata V.14.
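The pooling step under Rubin's rules can be sketched in a few lines; the per-imputation estimates and variances below are made up.

```python
# Minimal sketch of Rubin's rules for pooling estimates across m imputed
# datasets: the pooled estimate is the mean of per-imputation estimates, and
# the total variance adds the between-imputation component inflated by
# (1 + 1/m). Numbers are made up.
import numpy as np

estimates = np.array([0.055, 0.049, 0.061, 0.052, 0.058])   # e.g. risk differences
variances = np.array([0.0016, 0.0015, 0.0017, 0.0016, 0.0015])
m = len(estimates)

pooled = estimates.mean()
within = variances.mean()
between = estimates.var(ddof=1)
total_var = within + (1 + 1 / m) * between
print(f"pooled estimate {pooled:.3f}, standard error {np.sqrt(total_var):.3f}")
```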
Patient and public involvement
We worked closely with government, health facility and community stakeholders during the trial design process to ensure that the intervention amount was appropriate, unlikely to be coercive and yet likely to be effective to change the desired health outcomes.14 Postintervention feedback was also collected from patients and used in designing subsequent trials.10 25 26
Results
Descriptive findings
Follow-up to ascertain 24-month and 36-month outcomes occurred between 3 March 2018 and 19 September 2019, with all 800 original participants (509 (64%) women (table 1)) included in the primary analysis (figure 1). Medical records were abstracted for all 800 participants and a total of 530 (66%) participants completed the follow-up interview.
Figure 1
Trial profile, adult HIV treatment initiates in Tanzania, 2013–2018. ART, antiretroviral therapy. *Four screened patients were excluded for unknown reasons (missing screening data).
Table 1
Participant characteristics at baseline, HIV treatment initiates in Tanzania, 2013–2015
Most interviews (408 (77%)) were conducted at the original health facilities, along with 94 (18%) at another facility or location within Shinyanga region and 28 (5%) over the phone with participants who had relocated to another region. The median follow-up period at the time of the interview was 48 months since enrolment (IQR 43–52). Participant responses indicated high mobility: one in three (162 (31%)) reported moving to a different place of residence since enrolment in the original trial. Additionally, a quarter of those in care at the time of the interview (135 of 521 (26%)) reported currently attending a facility other than the one nearest to their home, with the most cited reasons including continuing care at the same facility where they started treatment (35 (26%)), fear of stigma or HIV status disclosure at their local facility (28 (21%)) and quality of services (16 (12%)).
According to medical records at 24 and 36 months, respectively, 556 (70%) and 492 (62%) participants were engaged in care at the same health facility as at trial enrolment, with no differences between intervention and control groups. We verified receipt of care at another facility for 84 participants (26 out-of-region) at 24 months and 115 participants (35 out-of-region) at 36 months. Records also indicated potential transfers for an additional 56 (7%) participants before 24 months and 63 (8%) before 36 months, but records could not be obtained to verify these transfers (because the transfers were out-of-region or long past); retention in care was therefore estimated using multiple imputation for these participants, whose baseline characteristics did not generally differ from those of participants with observed retention in care status (online supplemental table 1).
A total of 57 participants were found to be deceased by the end of follow-up activities, including 24 and 39 deaths that respectively occurred by 24 and 36 months (table 2); only 18 of these deaths had been recorded in original clinical records when follow-up study activities began, while the remainder were documented through study-initiated tracing procedures involving community health workers. In total, mortality status was confirmed through medical records or tracing activities for 710 (89%) participants at 24 months and 700 (88%) at 36 months.
Table 2
Observed outcomes over time by randomisation group, HIV treatment initiates in Tanzania, 2014–2018
Effect estimates
In primary intention-to-treat analyses (table 3), retention in care did not differ between the incentive and control groups at 24 months (86.5% vs 84.4%; risk difference (RD) 2.1, 95% CI −5.2 to 9.3) or at 36 months (83.3% vs 77.8%; RD 5.6, 95% CI −2.7 to 13.8). Likewise, there was no difference in all-cause mortality at 24 months (2.5% vs 7.7%, RD −5.2, 95% CI −10.5 to 0.1) or 36 months (4.7% vs 9.0%, RD −4.3, 95% CI −10.2 to 1.6).
Table 3
Durability of intention-to-treat effects from short-term conditional economic incentives for clinic attendance provided to HIV treatment initiates for 6 months, Tanzania, 2015–2018
Adjusted analyses yielded similar results (table 3), although with a reduction in 24-month mortality among the incentive group compared with the control (RD −5.7, 95% CI −11.3 to −0.1). Estimates from complete-case analyses were also similar (online supplemental table 2). Analyses disaggregated by incentive type also yielded similar results, with no differences in outcomes between the cash and food arms (online supplemental table 3).
In subgroup analyses, estimated incentive effects on 24-month retention varied by age, wealth and clinic, with larger point estimates among younger individuals, those with low relative wealth at baseline, and those enrolled at the regional hospital or health centre as opposed to the large district hospital (online supplemental table 4; P-interaction <0.20). There was no evidence of effect heterogeneity for retention at 36 months or death at either time point.
Lastly, analysis of time to all-cause mortality did not reveal a difference in survival over 36 months (figure 2). The incentive group had a lower mortality rate during the first half of follow-up, including 12 months after the 6-month incentive period (0–18 months: HR 0.27, 95% CI 0.10 to 0.74); thereafter, mortality rates did not vary by intervention group (18–36 months: HR 1.13, 95% CI 0.33 to 3.79).
Figure 2
Kaplan-Meier survival plot of time to all-cause mortality among adult HIV treatment initiates in Tanzania, 2013–2018.
Discussion
We used gold-standard tracing procedures in a rare long-term follow-up study of short-term conditional economic incentives for treatment adherence to understand the durability of effects and assess any long-term harms or benefits. After following up with participants 3 years after the original study enrolment, our results show that immediate improvements in retention and mortality from a 6-month cash and food incentive programme for HIV treatment initiates9 were not sustained in the long term. Importantly, outcomes in the intervention group did not drop below those of the comparison group, as might have been expected if external rewards had ‘crowded out’ intrinsic motivation to attend the clinic.27 These findings help to address a dearth of information across the medication adherence literature regarding long-term impacts of short-term incentives. To our knowledge, this is the first study to assess post-intervention effects beyond a year after the withdrawal of incentives for engagement in HIV treatment.
We did not find strong evidence that time-limited incentives produced lasting improvements in ART adherence. However, nor did we find evidence for long-term harm, an often-cited hypothetical concern.27 On the contrary, our results suggest that adherence gains during the incentive period may have averted early deaths at the start of HIV treatment, with lower mortality still perceptible at 24 months in adjusted analyses. One possible explanation of these findings comes from a livelihood framework, whereby cash and food incentives are a ‘provision-type’ intervention that is recommended to meet basic needs of those most vulnerable; alternate interventions aimed at protecting and promoting livelihoods are recommended after providing this temporary stabilising support.28 Another plausible mechanism for these findings is through the price pathway, whereby incentives lowered the cost associated with clinic attendance and triggered the initial adherence effect, then once removed the behaviour gradually reverted towards that of the control group as any habit formation effects wore off.
These findings support the use of short-term incentives as a simple, effective and low-cost intervention for bolstering retention and adherence through the difficult first months of ART initiation, a time commonly defined by stigma, illness, and loss of economic productivity29 as well as peak loss to follow-up from clinical care.30 However, complementary (eg, ‘cash plus’31) interventions may be necessary after ART initiation to address ongoing social and structural barriers to lifelong ART, along with behavioural challenges such as treatment fatigue32; linking additional incentives to clinic attendance may also be important if the price pathway is key. Additionally, the effectiveness of incentives may vary by setting. In our results, there was some evidence of stronger effects at the two clinics with relatively small patient populations (including a regional referral hospital and a health centre, located in/near a medium-sized town) compared with the busy, urban district hospital where implementation may have been more challenging.
This study had several limitations. First, the trial was powered to detect effects at the end of the 6-month incentive period,9 not the smaller effects anticipated at longer follow-up intervals. Next, although exhaustive efforts were made to trace every original study participant, some individuals could not be located due to challenges including high mobility and frequently changed phone numbers. Although there were no meaningful differences in baseline characteristics of participants who could not be traced at 36 months, it is possible these participants differed in other ways that could be associated with the outcomes of interest. For example, participants who remained in care at the original clinics were more likely traced, which could perpetuate differences observed in the original trial’s analysis. Additionally, potentially incomplete HIV care attendance records obtained at follow-up may have resulted in an underestimation of retention in care at intermediate time points, although we do not anticipate any such misclassification to vary by study group. Further, this analysis measured outcomes at discrete points in time, thus it is possible that outcome patterns differed at intermediate time points. Lastly, participants in the original trial were recruited during a period when ART availability was limited to individuals with advanced disease progression, before recent policy changes extended universal access to ART immediately after HIV diagnosis; further long-term research on incentives for individuals starting ART in the current era may be warranted. This study also had key strengths, including the original randomised design and unique focus on ART initiates, along with a rigorous tracing strategy to reduce outcome misclassification for participants no longer attending the original participating health facilities.
In conclusion, findings from this study suggest that small conditional economic incentives are a safe and effective strategy to promote retention and adherence at the critical time of HIV treatment initiation; however, these effects diminish over time. Complementary, longer-term strategies focused on sustaining lifelong retention and adherence are recommended after ART initiation to encompass a comprehensive approach to ending the HIV epidemic.
Data availability statement
Data are available on reasonable request. Deidentified participant data, which were collected for the study, will be made available after obtaining relevant institutional research ethics board approval of a proposal and providing a signed data access agreement that can be obtained from SIM.
Ethics statements
Ethics approval
This study was approved by the National Institute for Medical Research, Tanzania (Ref. NIMR/HQ/R.8a/Vol. IX/1631) and the Committee for Protection of Human Subjects at the University of California, Berkeley (Protocol ID: 2017-11-10508).
Acknowledgments
The authors are grateful to the local research team, clinic staff, and study participants. Publication made possible in part by support from the Berkeley Research Impact Initiative (BRII) sponsored by the UC Berkeley Library.
• Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
• Handling editor Lei Si
• Contributors The study was designed by SIM and PFN. SIM, PFN, CAF and NKK implemented the study and collected data with input from RSM. CAF did the statistical analysis with input from SIM, WHD and PTB. CAF drafted the initial manuscript and all authors participated in reviewing the draft for intellectual content and assisting with revisions. All authors approved the final version of the manuscript. CAF had full access to all the data in the study and the final responsibility for the decision to submit for publication.
• Funding This work was supported by the National Institute of Mental Health at the National Institutes of Health (grant number R21MH115802 to SIM).
• Disclaimer The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Mental Health.
• Competing interests None declared.
• Provenance and peer review Not commissioned; externally peer reviewed.
• Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
|
2023-03-30 08:11:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17925113439559937, "perplexity": 6067.999689208426}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949107.48/warc/CC-MAIN-20230330070451-20230330100451-00318.warc.gz"}
|
https://www.physicsforums.com/threads/proof-the-internal-pressure-is-0-for-an-ideal-gas-and-n-2-a-v-2-for-a-van-der-wa.218595/
|
# Prove the internal pressure is 0 for an ideal gas and (n^2 a)/V^2 for a Van der Waals gas
1. Feb 28, 2008
### Jennifer Lyn
Like in the other problem I posted, this is the other question that I missed and just can't find a solution for.
1. The problem statement, all variables and given/known data
Prove the internal pressure is 0 for an ideal gas and (n^2 a)/V^2 for a Van der Waals gas.
2. Relevant equations
1. Van der Waals equation: $$p=\frac{nRT}{V-nb}-\frac{n^2a}{V^2}$$
2. Maxwell relation: $$\left(\frac{\partial S}{\partial V}\right)_T=\left(\frac{\partial p}{\partial T}\right)_V$$
3. $$dU=T\,dS-p\,dV$$
4. Internal pressure: $$\pi_T=\left(\frac{\partial U}{\partial V}\right)_T$$
3. The attempt at a solution
a) Ideal Gas
$$0=\left(\frac{\partial U}{\partial V}\right)_T$$
$$\int 0\,dV=\int dU$$
$$0=\int\left(T\,dS-p\,dV\right)$$
$$\int p\,dV=\int T\,dS$$
$$\int\frac{nRT}{V}\,dV=\int\frac{pV}{nR}\,dS$$
$$nRT\int\frac{dV}{V}=\frac{pV}{nR}\int dS$$
... and I get kind of lost here, though I know that what I've already done is wrong. :(
b) VdW gas
I actually have to get going to school, but I'll come back and type up what I've done (incorrectly :( ) for this part afterwards.
2. Feb 28, 2008
### афк
Hello Jennifer,
I suppose you are given the so-called thermal equation of state $p=p(T,V,n)$ for both
the ideal gas
$$p=\frac{nRT}{V}$$
and the Van der Waals gas
$$p=\frac{nRT}{V-nb}-\frac{n^2a}{V^2}$$
The internal pressure $\left(\frac{\partial U}{\partial V}\right)_T$ can be calculated after finding the so-called caloric equation of state $U=U(T,V,n)$ for both cases.
Another straightforward method would be to use the following identity, which shows that the caloric and thermal equations of state are not independent of each other:
$$\left(\frac{\partial U}{\partial V}\right)_T=T\left(\frac{\partial p}{\partial T}\right)_V-p$$
Do you know how to derive this identity?
3. Feb 28, 2008
### Jennifer Lyn
I think so..
$$\pi_T=\left(\frac{\partial U}{\partial V}\right)_T=T\left(\frac{\partial S}{\partial V}\right)_T-p=T\left(\frac{\partial p}{\partial T}\right)_V-p$$
(dividing $dU=T\,dS-p\,dV$ by $dV$ at constant $T$, then applying the Maxwell relation from my relevant equations)
I think that's right. I still don't know how to get from that Maxwell relation to the ideal gas and Van der Waals eqn, though.
Last edited: Feb 28, 2008
4. Feb 28, 2008
### Jennifer Lyn
Ok, I think I figured it out. Starting from my previous post (sorry, I'm still getting used to using the tools for math on this board), I substitute the van der Waals equation for p in the partial derivative and then just solve from there.
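Working that substitution through explicitly (just a sketch, using the identity and the molar-constant van der Waals form quoted earlier in the thread):
$$\text{Ideal gas: }\left(\frac{\partial p}{\partial T}\right)_V=\frac{nR}{V}\quad\Rightarrow\quad\left(\frac{\partial U}{\partial V}\right)_T=T\cdot\frac{nR}{V}-\frac{nRT}{V}=0$$
$$\text{Van der Waals gas: }\left(\frac{\partial p}{\partial T}\right)_V=\frac{nR}{V-nb}\quad\Rightarrow\quad\left(\frac{\partial U}{\partial V}\right)_T=\frac{nRT}{V-nb}-\left(\frac{nRT}{V-nb}-\frac{n^2a}{V^2}\right)=\frac{n^2a}{V^2}$$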
5. Feb 28, 2008
### Jennifer Lyn
Thanks everyone!
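For anyone who wants to double-check the algebra, here is a short symbolic sketch (assuming SymPy is available; the internal_pressure helper below is just an illustrative name, not part of any library) that evaluates the identity $\left(\frac{\partial U}{\partial V}\right)_T=T\left(\frac{\partial p}{\partial T}\right)_V-p$ for both equations of state:

```python
import sympy as sp

# Symbols: amount of substance n, gas constant R, temperature T, volume V,
# and van der Waals constants a, b (all assumed positive).
n, R, T, V, a, b = sp.symbols('n R T V a b', positive=True)

def internal_pressure(p):
    """Return pi_T = T*(dp/dT)_V - p for a thermal equation of state p(T, V)."""
    return sp.simplify(T * sp.diff(p, T) - p)

p_ideal = n*R*T/V                          # ideal gas
p_vdw = n*R*T/(V - n*b) - n**2*a/V**2      # van der Waals gas

print(internal_pressure(p_ideal))   # prints 0
print(internal_pressure(p_vdw))     # prints a*n**2/V**2
```

Both outputs match the expressions to be proven: 0 for the ideal gas and n^2a/V^2 for the van der Waals gas.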
|
2017-03-23 00:52:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7629138231277466, "perplexity": 2888.214118986221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186530.52/warc/CC-MAIN-20170322212946-00249-ip-10-233-31-227.ec2.internal.warc.gz"}
|