| url | text | date | metadata |
|---|---|---|---|
https://solvedlib.com/n/9-6-pts-each-solve-the-differential-equation-a-y-x27-cos,15795956 | # 9. (6 pts each) Solve the differential equation: a) y' cos x = y sin x + sin 2x, -π/2 < x < π/2; b) (ln x - 2)y' + (1/x)y = -6x, x > 0; c) ... y = 0; d) 3x²y'' + 6xy' + y = 0
###### Question:
9. (6 pts each) Solve the differential equation: a) y' cos x = y sin x + sin 2x, -π/2 < x < π/2; b) (ln x - 2)y' + (1/x)y = -6x, x > 0; c) ... y = 0; d) 3x²y'' + 6xy' + y = 0
#### Similar Solved Questions
##### The biochemical processes like DNA Replication are never fully accurate. Then how do organisms which reproduce via the asexual mode of reproduction have offspring exactly identical to them, without any variation?...
##### Formulas: z-score transformation X = μ + Zσ; standard deviation; confidence interval for μ; test statistic for μ; confidence interval for μ_d; test statistic for μ_d; paired t-test; mean...
##### The unemployment rate that is consistent with full employment is known as: a. The natural rate of unemployment. b. The unnatural rate of unemployment. c. The status quo rate of unemployment. d. Cyclical unemployment. e. Okun's rate of unemployment....
##### Question 5. Evaluate the integral ∫ e^(sin⁻¹ 4x)/√(1 − 16x²) dx: e^(cos⁻¹ 4x) + C; (1/4)e^(cos⁻¹ 4x) + C; e^(sin⁻¹ 4x) + C; (1/4)e^(sin⁻¹ 4x) + C....
##### 1. (13 pts) Circle the answers to each of the questions for the corresponding scatterplots: a) positive / negative; b) weak / moderate / strong; c) linear / nonlinear; d) R = .89 / R = −.89 / R = −.11 / R = .11....
##### Find f⁻¹ for f(x) = 2 cos x + ... and state the domain of both functions. Find the exact solution of the equation 6 sin 2x = 2....
##### X1, X2, X3, X4, X5 are independent random variables with a common distribution: the Normal distribution with mean 28.2 and standard deviation 3.5. (a) Calculate the probability that exactly four of these random variables have a value greater than 32. (b) Y = X1 − X2. Calculate P(5 < Y < 10). Please provide correct solutions only....
##### Q21. How come DNA polymerase I can remove primers but DNA polymerase III cannot?...
##### What was the peacekeeping organization formed after WWI?...
##### A point charge of 8 nC sits at the center of a hollow metal sphere (inner radius 3 cm). How much charge is on the inner surface and on the exterior surface of the sphere, and what is the electric field outside? (Hint: this is like one of the lecture problems.)...
##### (Points) Suppose f(x) is a polynomial such that deg(f(x)) = 1367 and such that when you divide f(x) by x² ... the remainder is ... What is f(x) and why? (3 points) Show that a polynomial f(x) ∈ F[x] is irreducible if and only if (f(x)) ⊆ F[x] is a maximal ideal....
##### Question 1 (Note: Tests BS-CS PLO 3): Which statement is wrong? A) Registers are faster to access than memory. B) The compiler uses registers for variables as much as possible. C) Operating on memory data requires loads and stores. D) Arithmetic instructions can process data in memory directly....
##### A common characteristic of oligopolies is: products can be homogeneous or differentiated; independent pricing decisions; low industry concentration; few or no economies of scale....
##### 3. Determine the water of hydration for the following hydrates and write the chemical formula: (a) NiCl2·XH2O is found to contain 21.7% water. (b) Sr(NO3)2·XH2O is found to contain 33.8% water. (c) CrI3·XH2O is found to contain 27.2% water. (d) Ca(NO3)2·XH2O is found to contain 30.5% water....
##### The Elberta Fruit Farm of Ontario has always hired transient workers to pick its annual cherry crop. Francie Wright, the farm manager, has just received information on a cherry picking machine that is being purchased by many fruit farms. The machine is a motorized device that shakes the cherry tree,...
##### When 25.0 mL of a 2.70×10⁻⁴ M silver nitrate solution is combined with 18.0 mL of a 7.80×10⁻⁵ M sodium bromide solution, does a precipitate form? (yes or no) For these conditions the Reaction Quotient, Q, is equal to...
##### Consider the following two charges. In what direction does the E field point at point C?...
##### Presented below is information related to Vaughn Company. Cost / Retail: Beginning inventory $61,600 / $107,300; Purchases (net) 120,170 / 180,700; Net markups 10,325; Net markdowns 26,679; Sales revenue 187,090. Compute the ending inve...
##### Question 4: [25 pts] Consider the differential equation y ... 4y ...
##### A box of mass M = 0.2 kg starts out at rest on a horizontal table. The...
A box of mass M = 0.2 kg starts out at rest on a horizontal table. The surface is not frictionless. At t = 0.75 s you start pushing on the box horizontally in the +x direction, causing it to speed up. The force you push with is not necessarily constant. You stop pushing on the box at t = 2 s, after which...
##### Question 5 (8 pts). Consider the following chemical reaction between carbon and hydrogen gas to produce ethane: 2 C(s) + 3 H2(g) → C2H6(g). What is the equilibrium constant (Kc) expression for the forward reaction? Kc = [C2H6]/[H2]³; Kc = [C2H6]/[H2]; Kc = [C2H6]/([C][H2]); Kc = [C2H6]/([C]²[H2]³); Kc = [H2]³/[C2H6]...
##### Question: XYZ Company uses the formula y = a + bx to predict and analyze overhead costs. In the previous year, XYZ used $1,750 per month for the a factor and $0.35 for the b factor in applying overhead. XYZ has used direct labour hours in the past, but is wondering whether overhead behaviour is more ...
##### Marston Corp. writes 47 checks a day for an average amount of $474 each. These checks...
Marston Corp. writes 47 checks a day for an average amount of $474 each. These checks generally clear the bank 25 days after they are written. In addition, the firm generally receives 59 checks with an average amount of$559 each. Deposited amounts are available after an average of 2 days. What is t...
##### Friday September 27. Begin Date: 9/25/2019 12:01:00 AM; Due Date: 9/27/2019 11:00:00 AM; End Date: 9/29/2019 11:00:00 AM. (1%) Problem 5: An airplane starts from rest and accelerates at 7.2 m/s² at an angle of 3π/8 south of west. 50% Part (a): After ..., how far in the westerly direction has the airplane traveled? 50% Part (b): After ..., how far in the southerly direction h...
##### (Complex analysis) Exercise 5. Find the images of the following curves under the linear mapping w = (i + √3)z + i√3 − 1, where z = x + iy: a) y = 0; b) x = 0; c) x² + y² = 1; d) x² + y² + 2y = 1. Answer: b) v = −√3 u; c) (u + 1)² + (v − √3)² = 4; d) u² + v² = 8...
##### 5/48 The hydraulic cylinder C gives end A of link AB a constant velocity v0 in the negative x-direction. Determine expressions for the angular velocity ω = θ̇ and angular acceleration α = θ̈ of the link in terms of x. | 2023-03-24 21:49:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5099910497665405, "perplexity": 11899.774927418344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00015.warc.gz"} |
https://bookstore.ams.org/view?ProductCode=SURV/241 |
A Tool Kit for Groupoid $C^{*}$-Algebras
Dana P. Williams Dartmouth College, Hanover, NH
Available Formats:
Hardcover ISBN: 978-1-4704-5133-2
Product Code: SURV/241
List Price: $129.00
MAA Member Price: $116.10
AMS Member Price: $103.20
Electronic ISBN: 978-1-4704-5409-8
Product Code: SURV/241.E
List Price: $129.00
MAA Member Price: $116.10
AMS Member Price: $103.20
Bundle Print and Electronic Formats and Save!
This product is available for purchase as a bundle. Purchasing as a bundle enables you to save on the electronic version.
List Price: $193.50
MAA Member Price: $174.15
AMS Member Price: $154.80
• Book Details
Mathematical Surveys and Monographs
Volume: 241; 2019; 398 pp
MSC: Primary 46; Secondary 22;
The construction of a $C^{*}$-algebra from a locally compact groupoid is an important generalization of the group $C^{*}$-algebra construction and of the transformation group $C^{*}$-algebra construction. Since their introduction in 1980, groupoid $C^{*}$-algebras have been intensively studied with diverse applications, including graph algebras, classification theory, variations on the Baum-Connes conjecture, and noncommutative geometry. This book provides a detailed introduction to this vast subject and is suitable for graduate students or any researcher who wants to use groupoid $C^{*}$-algebras in their work. The main focus is to equip the reader with modern versions of the basic technical tools used in the subject, which will allow the reader to understand fundamental results and make contributions to various areas in the subject. Thus, in addition to covering the basic properties and construction of groupoid $C^{*}$-algebras, the focus is to give a modern treatment of some of the major developments in the subject in recent years, including the Equivalence Theorem and the Disintegration Theorem. Also covered are the complicated subjects of amenability of groupoids and simplicity results.
The book is reasonably self-contained and accessible to graduate students with a good background in operator algebras.
Readership: Graduate students and researchers interested in $C^{*}$-algebras.
• Chapters
• From groupoid to algebra
• Groupoid actions and equivalence
• Measure theory
• Proof of the Equivalence Theorem
• Basic representation theory
• The existence and uniqueness of Haar systems
• Unitary representations
• Renault’s Disintegration Theorem
• Amenability for groupoids
• Measurewise amenability for groupoids
• Duals and topological vector spaces
• Remarks on Blanchard’s Theorem
• The inductive limit topology
• Ramsay almost everywhere
• Answers to some of the exercises
• Reviews
• The book is written as a textbook with exercises at the end of each chapter, which is ideal for experts, but for the rest of us, this is a superb reference for particular topics that are currently only to be found scattered throughout the literature.
Mark V. Lawson, Heriot-Watt University
• This graduate-level textbook is a comprehensive, readable introduction to the fundamental theory of groupoid C*-algebras. No textbook can make groupoid C*-theory easy, but A Tool Kit for Groupoid C*-Algebras finally makes it accessible.
Elizabeth Gillaspy, University of Montana
• Requests
Review Copy – for reviewers who would like to review an AMS book
Permission – for use of book, eBook, or Journal content
Accessibility – to request an alternate format of an AMS title
| 2023-03-23 18:30:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19478502869606018, "perplexity": 1730.6431641745883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00649.warc.gz"} |
https://physics.stackexchange.com/questions/172736/firewall-solution-for-black-hole-paradox-and-creation-of-black-event-horizon | # Firewall solution for black hole paradox and creation of black event horizon
The AMPS paper [1] proved that the black hole complementarity explanation of the black hole paradox requires a "firewall": a high-energy wall which destroys everything entering the event horizon.
However, I'm confused how "firewall" explains the following setup:
• let's assume there are two entangled particles, Bob and Alice
• a black hole event horizon is created in the space between these two particles
• Bob is now behind the black hole event horizon while Alice is outside
• now the black hole evaporates, and you again have Alice entangled with more than two particles
So how "firewall" solves this?
UPDATE: Here is my thought experiment how to create black hole by horizon appearing - not growing. Lets assume that is absolutely nothing around Bob. Then you can create light rays pointing to Bob from far far away. If you have enough of these light rays, you will create event horizon around Bob before light rays even reach Bob. So Bob will not be swallowed by growing event horizon and event horizon will just appear around Bob.
You can't just create event horizons of finite area out of thin air like that.
Let me explain with a very idealized thought experiment: imagine you have a thin spherically symmetric shell of dust particles of radius $R$ and total mass $M$. Say that initially $R > r_s$ (where $r_s = 2M$ in natural units). Say that the particles are initially at rest so that they will inevitably collapse under their mutual gravitational attraction, that is, the radius of the shell will decrease with time. Eventually the radius will become smaller than $r_s$ and the dust particles will be gobbled up by the event horizon. You are left with something that to the outside observer is indistinguishable from a Schwarzschild black hole.
Now place yourself inside the shell. You calculated that at an instant $t_0$ the dust shell will reach the Schwarzschild radius, so you might want to try to escape before an event horizon forms and you are irrevocably trapped.
Because the shell is spherically symmetric, the metric inside is flat, so you can just use special relativity to figure this out. If you are at the center, because you can't exceed the speed of light, you need to give yourself at least $r_s/c$ time before collapse in order to be able to escape. If you wait too long, welp, that's too bad: you're trapped, just as you would be if you had waited until the dust shell reached its Schwarzschild radius.
This implies that a retroactive event horizon has formed whose radius grows linearly with time until it reaches $r_s$. So Bob has actually been swallowed by a growing event horizon. If you believe in firewalls, you must believe this horizon has a firewall, too.
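To spell the timing argument out (an idealized sketch, with the observer at the center of the shell): an escape attempt launched at time $t$ at speed $c$ reaches radius $r_s$ at time $t + r_s/c$, so escape is only possible if $t \le t_0 - r_s/c$. Equivalently, the retroactive event horizon radius grows as
$$ r_h(t) = c\,\big(t - (t_0 - r_s/c)\big), \qquad t_0 - \frac{r_s}{c} \le t \le t_0, $$
reaching $r_h(t_0) = r_s$ exactly when the shell arrives.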
• I have updated my question: Here is my thought experiment on how to create a black hole by a horizon appearing, not growing. Let's assume there is absolutely nothing around Bob. Then you can create light rays pointing at Bob from far, far away. If you have enough of these light rays, you will create an event horizon around Bob before the light rays even reach Bob. So Bob will not be swallowed by a growing event horizon. – user2196351 Mar 28 '15 at 0:21
• @user2196351 That is exactly the same situation, except that you have replaced the dust shell by light rays. Conceptually, it's the same thing: the horizon will still have to grow because an event horizon is determined by the entire future history of the spacetime. – Leandro M. Mar 28 '15 at 0:44
• Sorry - I'm still confused :( So in the "non-firewall" world the event horizon will form around Bob, and Bob will not be aware of it and will still be entangled with Alice outside the event horizon. That is what causes the paradox, because the Hawking radiation is in a pure state. What will happen to Bob in the "firewall" version of events? – user2196351 Mar 28 '15 at 1:06
• An event horizon can never "form around" a region of spacetime. It has to grow to encompass that region. You can see that it must by imagining trying to escape from that region, as I did in my answer: because you can't travel faster than the speed of light, if you can't escape before an event horizon "forms", then you were yourself trapped by an event horizon. Bob will have to cross an event horizon and therefore, a firewall (if firewalls exist at all). – Leandro M. Mar 28 '15 at 1:12
• I got it now. This "firewall" version of events is definitely weird because it assumes that something around Bob "knows" that he cannot escape, even if that dust shell is approaching at the speed of light (that is the reason I used light rays). Am I right? – user2196351 Mar 28 '15 at 1:38 | 2019-12-14 08:09:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6151894927024841, "perplexity": 501.15233854539974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00383.warc.gz"} |
https://rdrr.io/cran/effectsize/f/vignettes/standardized_differences.Rmd | # Standardized Differences (in effectsize: Indices of Effect Size)
library(knitr)
options(knitr.kable.NA = "")
knitr::opts_chunk$set(comment = ">")
options(digits = 3)
set.seed(7)
This vignette provides a review of effect sizes for comparisons of groups, which are typically achieved with the t.test() and wilcox.test() functions.
library(effectsize)
options(es.use_symbols = TRUE) # get nice symbols when printing! (On Windows, requires R >= 4.2.0)
# Standardized Differences
For t-tests, it is common to report an effect size representing a standardized difference between the two compared samples' means. These measures range from $-\infty$ to $+\infty$, with negative values indicating the second group's mean is larger (and vice versa).
## Two Independent Samples
For two independent samples, the difference between the means is standardized based on the pooled standard deviation of both samples (assumed to be equal in the population):
t.test(mpg ~ am, data = mtcars, var.equal = TRUE)
cohens_d(mpg ~ am, data = mtcars)
Hedges' g provides a bias correction for small sample sizes ($N < 20$).
hedges_g(mpg ~ am, data = mtcars)
If variances cannot be assumed to be equal, it is possible to get estimates that are not based on the pooled standard deviation:
t.test(mpg ~ am, data = mtcars, var.equal = FALSE)
cohens_d(mpg ~ am, data = mtcars, pooled_sd = FALSE)
hedges_g(mpg ~ am, data = mtcars, pooled_sd = FALSE)
In cases where the differences between the variances are substantial, it is also common to standardize the difference based only on the standard deviation of one of the groups (usually the "control" group); this effect size is known as Glass' $\Delta$ (delta). (Note that the standard deviation is taken from the second sample.)
glass_delta(mpg ~ am, data = mtcars)
For a one-sided hypothesis, it is also possible to construct one-sided confidence intervals:
t.test(mpg ~ am, data = mtcars, var.equal = TRUE, alternative = "less")
cohens_d(mpg ~ am, data = mtcars, pooled_sd = TRUE, alternative = "less")
## One Sample
In the case of a one-sample test, the effect size represents the standardized distance of the mean of the sample from the null value.
t.test(mtcars$wt, mu = 2.7)
cohens_d(mtcars$wt, mu = 2.7)
hedges_g(mtcars$wt, mu = 2.7)
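To make the standardization explicit, here is a by-hand version of the one-sample computation above (a minimal sketch; the `J` correction factor is the common small-sample approximation, and the functions above additionally provide confidence intervals):

```r
x <- mtcars$wt
d <- (mean(x) - 2.7) / sd(x)            # Cohen's d: distance from the null, in SD units
J <- 1 - 3 / (4 * (length(x) - 1) - 1)  # approximate small-sample correction factor
g <- J * d                              # Hedges' g
c(d = d, g = g)
```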
## Paired Samples
For paired samples, the difference is standardized by the variation in the differences. This effect size, known as Cohen's $d_z$, represents the difference in terms of its homogeneity (a small but stable difference will have a large $d_z$).
t.test(extra ~ group, data = sleep, paired = TRUE)
cohens_d(extra ~ group, data = sleep, paired = TRUE)
hedges_g(extra ~ group, data = sleep, paired = TRUE)
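A by-hand sketch of $d_z$ for the sleep data, assuming (as the example above does) that observations are paired in order within each group:

```r
# Cohen's d_z: mean of the paired differences in units of the SD of the differences
d_diff <- with(sleep, extra[group == 1] - extra[group == 2])
mean(d_diff) / sd(d_diff)
```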
## For a Bayesian t-test
A Bayesian estimate of Cohen's d can also be provided based on BayesFactor's version of a t-test via the effectsize() function:
library(BayesFactor)
BFt <- ttestBF(formula = mpg ~ am, data = mtcars)
effectsize(BFt, type = "d")
## (Multivariate) Standardized Distances
When examining multivariate differences (e.g., with Hotelling's $T^2$ test), Mahalanobis' D can be used as the multivariate equivalent for Cohen's d. Unlike Cohen's d which is a measure of standardized differences, Mahalanobis' D is a measure of standardized distances. As such, it cannot be negative, and ranges from 0 (no distance between the multivariate distributions) to $+\infty$.
mahalanobis_d(mpg + hp + cyl ~ am, data = mtcars)
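Conceptually, Mahalanobis' D is the distance between the two group mean vectors, scaled by the pooled covariance matrix. A hand-rolled sketch under the equal-covariance assumption (mahalanobis_d() additionally provides confidence intervals and other options):

```r
X  <- mtcars[, c("mpg", "hp", "cyl")]
g0 <- mtcars$am == 0
g1 <- mtcars$am == 1

delta <- colMeans(X[g1, ]) - colMeans(X[g0, ])  # difference between mean vectors

# pooled covariance matrix
n0 <- sum(g0); n1 <- sum(g1)
S <- ((n0 - 1) * cov(X[g0, ]) + (n1 - 1) * cov(X[g1, ])) / (n0 + n1 - 2)

sqrt(drop(t(delta) %*% solve(S) %*% delta))     # Mahalanobis' D
```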
# Dominance Effect Sizes
The rank-biserial correlation ($r_{rb}$) is a measure of dominance: larger values indicate that more of X is larger than more of Y, with a value of $(-1)$ indicating that all observations in the second group are larger than the first, and a value of $(+1)$ indicating that all observations in the first group are larger than the second.
These effect sizes should be reported with the Wilcoxon (Mann-Whitney) test or the signed-rank test (both available in wilcox.test()).
## Two Independent Samples
A <- c(48, 48, 77, 86, 85, 85)
B <- c(14, 34, 34, 77)
wilcox.test(A, B) # aka Mann–Whitney U test
rank_biserial(A, B)
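One common formulation computes $r_{rb}$ directly from the proportion of favorable pairs, counting ties as one half; this is a sketch of what rank_biserial() reports here (sign conventions can differ between references):

```r
# proportion of (a, b) pairs with a > b, counting ties as 1/2
f <- mean(outer(A, B, ">") + 0.5 * outer(A, B, "=="))
2 * f - 1  # rank-biserial correlation: favorable minus unfavorable proportion
```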
## One Sample
For one sample, $r_{rb}$ measures the symmetry around $\mu$ (mu; the null value), with 0 indicating perfect symmetry, $(-1)$ indicating that all observations fall below $\mu$, and $(+1)$ indicating that all observations fall above $\mu$.
x <- c(1.15, 0.88, 0.90, 0.74, 1.21, 1.36, 0.89)
wilcox.test(x, mu = 1) # aka Signed-Rank test
rank_biserial(x, mu = 1)
## Paired Samples
For paired samples, $r_{rb}$ measures the symmetry of the (paired) differences around $\mu$ as for the one sample case.
x <- c(1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30)
y <- c(0.88, 0.65, 0.60, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29)
wilcox.test(x, y, paired = TRUE) # aka Signed-Rank test
rank_biserial(x, y, paired = TRUE)
# Common Language Effect Sizes
Related effect sizes are the common language effect sizes which present information about group differences in terms of probability.
## Two Independent Samples
### Measures of (Non)Overlap
These measures indicate the degree to which two independent distributions overlap: Cohen's $U_1$ is the proportion of the total of both distributions that does not overlap, while Overlap (OVL) is the proportional overlap between the distributions.
cohens_u1(mpg ~ am, data = mtcars)
p_overlap(mpg ~ am, data = mtcars)
Note that by default, these functions return the parametric versions of these effect sizes: these assume equal normal variance in both populations. When these assumptions are not met, the values produced will be biased in unknown ways. In such cases, we should use the non-parametric versions (a non-parametric $U_1$ is not defined):
p_overlap(mpg ~ am, data = mtcars, parametric = FALSE)
### Probabilistic Measures
Probability of superiority is the probability that, when sampling an observation from each of the groups at random, that the observation from the second group will be larger than the sample from the first group.
p_superiority(mpg ~ am, data = mtcars)
Here, this indicates that if we were to randomly draw a sample from am==0 and from am==1, 15% of the time the first will have a larger mpg value than the second.
Cohen's $U_2$ is the proportion of one of the groups that exceeds the same proportion in the other group, and Cohen's $U_3$ is the proportion of the second group that is smaller than the median of the first group.
cohens_u2(mpg ~ am, data = mtcars)
cohens_u3(mpg ~ am, data = mtcars)
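Under the parametric assumptions (normal populations with equal variances), all of these quantities are deterministic functions of Cohen's d through the standard normal CDF. A hedged sketch of the classic formulas (the functions above compute these internally, along with confidence intervals; the exact sign and direction conventions may differ):

```r
d <- cohens_d(mpg ~ am, data = mtcars)$Cohens_d

pnorm(d / sqrt(2))                               # probability of superiority (signed)
(2 * pnorm(abs(d) / 2) - 1) / pnorm(abs(d) / 2)  # Cohen's U1
pnorm(abs(d) / 2)                                # Cohen's U2
pnorm(abs(d))                                    # Cohen's U3 (for the higher-mean group)
2 * pnorm(-abs(d) / 2)                           # overlap (OVL)
```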
Here too we have non-parametric versions, for when the assumptions of equal variances and normal populations are not met:
p_superiority(mpg ~ am, data = mtcars, parametric = FALSE)
cohens_u2(mpg ~ am, data = mtcars, parametric = FALSE)
cohens_u3(mpg ~ am, data = mtcars, parametric = FALSE)
## One Sample and Paired Samples
For one sample, probability of superiority is the probability that, when sampling an observation at random, it will be larger than $\mu$.
p_superiority(mtcars$wt, mu = 2.75)
p_superiority(mtcars$wt, mu = 2.75, parametric = FALSE)
For paired samples, probability of superiority is the probability that, when sampling an observation at random, its difference will be larger than $\mu$.
p_superiority(extra ~ group, data = sleep,
paired = TRUE, mu = -1)
p_superiority(extra ~ group, data = sleep,
paired = TRUE, mu = -1,
parametric = FALSE)
## For a Bayesian t-test
A Bayesian estimate of (the parametric version of) these effect sizes can also be provided based on BayesFactor's version of a t-test via the effectsize() function:
effectsize(BFt, type = "p_superiority")
effectsize(BFt, type = "u1")
effectsize(BFt, type = "u2")
effectsize(BFt, type = "u3")
effectsize(BFt, type = "overlap")
effectsize documentation built on Oct. 31, 2022, 5:06 p.m. | 2023-02-06 13:12:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7132576107978821, "perplexity": 2641.1317657973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00677.warc.gz"} |
https://www.ncatlab.org/nlab/show/Dennis+Sullivan | # nLab Dennis Sullivan
Dennis Sullivan is an American topologist. His initial work was in geometric topology, but later he developed theories of localisation of homotopy types, rational homotopy theory, and aspects of string topology.
The editor (Andrew Ranicki) of the redistribution of his 1970 notes wrote:
The notes had a major influence on the development of both algebraic and geometric topology, pioneering
• the localization and completion of spaces in homotopy theory, including p-local, profinite and rational homotopy theory, leading to the solution of the Adams conjecture on the relationship between vector bundles and spherical fibrations,
• the formulation of the ‘Sullivan conjecture’ on the contractibility of the space of maps from the classifying space of a finite group to a finite dimensional CW complex,
• the action of the Galois group of $\overline{\mathbb{Q}}$ over $\mathbb{Q}$ on smooth manifold structures in profinite homotopy theory,
• the K-theory orientation of PL manifolds and bundles.
category: people | 2019-08-25 03:30:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6762474775314331, "perplexity": 693.5346512566211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00495.warc.gz"} |
https://stats.stackexchange.com/questions/192144/is-there-an-effect-size-for-a-single-proportion/192156 | # Is there an effect size for a single proportion?
I want to conduct a meta-analysis of the incidence rate of the complications of a specific disease. Studies report these complications as "4 patients died" or "2% died", and there is no comparison group. Is it possible to have an effect size for single proportions? Any suggestions or further readings are appreciated.
Let me take the title question ("Is there an effect size for a single proportion?") literally and set aside the meta analysis context.
There are effect sizes for single proportions, when a null proportion value is specified. Moreover, the effect sizes for single proportions are the same as for two observed proportions and work the same way. You could use:
1. The difference of proportions: prop - null
2. The ratio of proportions: prop/null
3. The odds ratio: [prop/(1-prop)] / [null/(1-null)]
Returning to the context of meta analysis, you don't necessarily have to use these. You might just be interested in estimating a proportion directly. In which case, you wouldn't want to use these. If there is a meaningful null and these studies compared the observed proportion to that, you could try using the log of the odds ratio.
• 'when a null proportion value is specified.' can you tell me more about it? I only have the measured proportion e.g. 2% died. what's the null proportion then? – Elmahy Jan 29 '16 at 13:07
• The idea is that there is some specific value that has a theoretical basis, & that you are trying to disprove. Eg, everyone believes that 4.5% die, & the point of these studies is to test that proposition. It isn't clear that that's the case in your situation. The other answers are probably more appropriate to your situation, @ahmedmar. – gung - Reinstate Monica Jan 29 '16 at 16:06
You can use the metaprop() function from the R package meta. These types of studies are called incidence meta-analyses or single-proportion meta-analyses.
In this particular case the effect size would be the proportion of the variable studied, and the meta-analysis would compute a different weight and confidence interval for each study according to its sample size.
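For instance, with hypothetical per-study counts (the study labels, event counts, and sample sizes below are made up for illustration), pooling proportions might look like this sketch:

```r
library(meta)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C"),  # hypothetical studies
  event = c(4, 2, 7),                          # e.g., number of deaths
  n     = c(200, 100, 350)                     # patients in each study
)

m <- metaprop(event = event, n = n, studlab = study,
              data = dat, sm = "PLOGIT")       # logit transformation of proportions
summary(m)
```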
• Thank you. It's nice to get an answer from a medical student when I am too a medical student. – Elmahy Jan 24 '16 at 12:38
• I'm glad to know that. If you found the answer useful you can mark it as correct? Thanks, good work – GGA Jan 24 '16 at 12:54
You need to be careful here. Do you really mean an incidence rate? By that I mean that people were followed for a period of time and then the number of events is reported as a rate per person-year (or some other measure of time). If you do, then I think you need something other than metaprop from meta. I assume you can use some other command in meta (with which I am not too familiar) but metafor (also available from CRAN) has several options for rates. | 2019-11-11 21:57:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7254496812820435, "perplexity": 697.9911214328702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00466.warc.gz"} |
https://www.physicsforums.com/threads/another-centripetal-force-problem.757289/ | # Another centripetal force problem
1. Jun 9, 2014
### BrainMan
A stuntman swings from the end of a 4 m long rope along the arc of a circle. If his mass is 70 kg, find the tension in the rope required to make him follow his circular path at (a) the beginning of his motion, assuming he starts when the rope is horizontal, (b) at a height of 1.5 m above the bottom of the circular arc, and (c) at the bottom of the arc.
(Equation) Fc = mv^2/r
KE= 1/2mv^2
PE = mgy
(Attempt at solution)
Find the total energy of the system
4 m (9.8)(70 kg) = 2744 J
Find the kinetic energy at 1.5 m to find the velocity
70(9.8)(1.5) = 1029
2744 - 1029 = 1715 J
1/2mv^2 = 1715
V= 7 m/s
Find the centripetal force using Fc= mv^2/r
70(49/4)
Fc = 857.5
The correct answer is 1.29 x 10^3 N
Last edited: Jun 9, 2014
2. Jun 9, 2014
### vela
Staff Emeritus
You were asked to find the tension in the rope; you found the centripetal force. They're not the same thing. I suggest you draw a free-body diagram to try to figure this problem out.
3. Jun 10, 2014
### BrainMan
What makes them different? The tension in the rope must match the centripetal force in order for the man to move in a circle.
4. Jun 10, 2014
### vela
Staff Emeritus
That's not true. You should drop the notion of centripetal force and instead think in terms of a centripetal acceleration: objects that follow a circular path of radius $r$ experience a centripetal acceleration of magnitude $v^2/r$. When you plug this into Newton's second law (for the radial direction), you get
$$\sum F_\text{r} = ma = m\frac{v^2}{r}.$$ You're getting the lefthand side of the equation wrong. Did you draw a free-body diagram yet?
5. Jun 10, 2014
### Nathanael
The tension in the rope must cause the net centripetal force, but that does not mean that the tension in the rope must match the centripetal force. (Because there are other forces involved.)
(If you're stuck, draw a FBD)
6. Jun 11, 2014
### BrainMan
Here is my free-body diagram. The T stands for tension, CF for centripetal force, TF for tangential velocity, and w for weight.
#### Attached Files:
• photo1 (1).jpg (43.7 KB)
7. Jun 11, 2014
### dauto
Do you see it now?
8. Jun 12, 2014
### BrainMan
No I don't. What should I be seeing?
9. Jun 12, 2014
### Nathanael
Ok, so we have a weight swinging around on a rope in circular motion. Let's look at the time when the rope is at the bottom of its swing (so it's completely vertical) just to simplify things.
Suppose, like you said, that the tension in the rope is equal to the centripetal force.
What would be the net force on the weight (or person)? The tension is directed upwards (towards the center of the circle) but gravity is directed downwards, so what would be the Net Force on the weight? Would that Net Force be equal to the centripetal force?
For an object to move in a circle there must be a NET force equal to $\frac{mv^2}{r}$
If the tension was equal to $\frac{mv^2}{r}$ then would the Net Force be equal to $\frac{mv^2}{r}$?
What must the tension be in order for the Net Force to equal $\frac{mv^2}{r}$?
(Hint: The tension is not constant throughout the swing)
10. Jun 12, 2014
### BrainMan
So the centripetal force doesn't have to be the same as the tension because the tension is reliant on the weight and the acceleration due to gravity while the centripetal force relies on the mass, the velocity, and the radius. So do I even have to use centripetal force to solve this problem? Can I just use trigonometry and the weight vector to find the tension?
11. Jun 12, 2014
### Nathanael
I don't think this is a good interpretation.
You do need the centripetal force still. You'll need both methods.
If you're at the bottom of the swing (completely vertical) then you have the equation:
$F_{centripetal}=F_{net}=F_{tension}-F_{gravity}$
Therefore:
$F_t=F_c+F_g=\frac{mv^2}{r}+mg$
The bottom of the swing is the easiest place to analyze it (because no angles are involved)
I'll let you figure it out for other parts of the swing (that do involve angles) but do you understand the logical principle I'm applying?
The logic is that the only two forces are gravity and tension and they must combine to give a Net Force of $\frac{mv^2}{r}$ (towards the center)
12. Jun 12, 2014
### BrainMan
OK I see it now. Thanks!
13. Jun 12, 2014
### Nathanael
No problem
14. Jun 12, 2014
### BrainMan
Alright now I am having trouble with the angle. I know that at 1.5 m above the ground the force of gravity is perpendicular to the ground and in order to find the amount of that force that effects the tension I have to use trigonometry. So T = 1/2mv2 + mg sin θ (or cos θ). The problem I have is that I don't know whether to use sin or cosine or how to find the angle. Do I use radians to find the angle?
15. Jun 12, 2014
### Nathanael
First step is to find the angle. How? Draw a picture.
Can you find the θ in my picture? (Maybe I shouldn't have labeled the 2.5m side haha)
If you're stuck on whether to use cosine or sine and whatnot or something else I want to see you draw a picture to try to figure it out
#### Attached Files:
• photo.JPG (102.5 KB)
16. Jun 13, 2014
### BrainMan
I found theta by doing the inverse cos of 2.5/4 = .89566. I found out whether to use sin or cos by drawing the picture. So
T = 70(7)/2 + mg/ cos .89566
T = 1715 + 686 / cos .89567
T = 2812.59
This is not the correct answer
Ignore the caption under the photo
Last edited: Jun 13, 2014
17. Jun 13, 2014
### Nathanael
Don't you want g (or mg) to be the hypotenuse? Because you're trying to break up the force of gravity into its components (specifically its tangent and perpendicular components)
Your drawing is wrong, gravity should be the hypotenuse.
EDIT:
Also in your drawing you put 0.89566 degrees, but it would actually be radians (I don't think this affected your math though)
Why didn't your calculation involve the centripetal equation?
Last edited: Jun 13, 2014
18. Jun 13, 2014
### BrainMan
I thought the force of gravity was always a straight line down?
19. Jun 13, 2014
### Nathanael
It is, but you can still make it the hypotenuse. Look at the drawing I've attached, that is how it should be.
But why didn't your calculations involve the centripetal force equation?
#### Attached Files:
• photo.JPG (115.1 KB)
20. Jun 13, 2014
### BrainMan
Ok I did that but I'm still not getting the right answer. I did
Mg cos .89566 = 428.75
Then I plugged that into the equation
Mv^2/r + 428.75 = T
70(49)/2 + 428.75 = 2143.75
Which is not the right answer
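For reference, a worked version of the calculation the thread is converging on: at a height of 1.5 m, the energy argument from post #1 gives $v^2 = 2g(4 - 1.5) = 49 \ \mathrm{m^2/s^2}$, and the geometry gives $\cos\theta = 2.5/4 = 0.625$. Using the radial equation from post #4,
$$T = \frac{mv^2}{r} + mg\cos\theta = \frac{70(49)}{4} + 70(9.8)(0.625) = 857.5 + 428.75 \approx 1.29 \times 10^3 \ \mathrm{N},$$
which matches the book answer quoted in post #1. The 2143.75 N above comes from dividing $mv^2$ by 2 instead of by $r = 4$.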
I got the velocity 7 from an earlier calculation attempt. | 2017-10-23 08:24:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6084465980529785, "perplexity": 1120.4486910533926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00520.warc.gz"} |
https://www.studyxapp.com/homework-help/problem-4-10-points-consider-the-following-ivp-leftbeginarrayl-yprime-prime2-t-q1636445466078257154 | # Question Problem 4. (10 POINTS) Consider the following IVP: $\left\{\begin{array}{l} y^{\prime \prime}-2 t y^{\prime}+8 y=0 \\ y(0)=4, y^{\prime}(0)=0 \end{array}\right.$ (a) ( 1 point) Is $$t_{0}=0$$ an ordinary point, a regular singular point, or an irregular singular point of the ODE? Justify your answer. (b) (8 points) Use the power series approach to find the solution of the IVP. (c) (1 point) What is the radius of convergence of the resulting power series?
【General guidance】The answer provided below has been developed in a clear step by step manner. Step 1/2: The ODE is $$y^{\prime\prime}-2ty^{\prime}+8y=0$$. Put $$y=\sum a_{n}t^{n}$$, so that $$y^{\prime}=\sum n a_{n}t^{n-1}$$ and $$y^{\prime\prime}=\sum n(n-1)a_{n}t^{n-2}$$. Putting these values in the IVP we get $$\sum n(n-1)a_{n}t^{n-2}-2t\sum n a_{n}t^{n-1}+8\sum a_{n}t^{n}=0$$, i.e. $$\sum n(n-1)a_{n}t^{n-2}-2\sum n a_{n}t^{n}+8\sum a_{n}t^{n}=0$$, so $$\sum n(n-1)a_{n}t^{n-2}=\sum(2n-8)a_{n}t^{n}$$ ... See the full answer | 2023-03-22 21:42:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.751919686794281, "perplexity": 696.0469915476103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00714.warc.gz"} |
http://jslefche.github.io/piecewiseSEM/reference/rsquared.html | Returns (pseudo)-R^2 values for all linear, generalized linear, and generalized linear mixed effects models.
rsquared(modelList, method = NULL)
## Arguments
modelList: a regression, or a list of structural equations.
method: the method used to compute the R2 value (see Details).
## Value
Returns a data.frame with the response, its family and link, the method used to estimate R2, and the R2 value itself. Mixed models also return marginal and conditional R2 values.
## Details
For mixed models, marginal R2 considers only the variance by the fixed effects, and the conditional R2 by both the fixed and random effects.
For GLMs (glm), supported methods include:
• mcfadden 1 - the ratio of the log-likelihoods of the full vs. null models
• coxsnell 1 - the likelihood ratio of the null vs. full models, raised to the power 2/N. Upper limit is < 1
• nagelkerke Adjusts Cox-Snell R2 so that upper limit = 1. The DEFAULT method
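As a sketch of how these three formulas relate (using a small simulated GLM; this is a hand-rolled illustration, while rsquared() reports nagelkerke by default):

```r
set.seed(1)
d <- data.frame(y = rpois(100, 5), x = rnorm(100))

fit  <- glm(y ~ x, family = poisson, data = d)
null <- update(fit, . ~ 1)                # intercept-only model

l1 <- as.numeric(logLik(fit))
l0 <- as.numeric(logLik(null))
N  <- nobs(fit)

mcfadden   <- 1 - l1 / l0                         # based on log-likelihoods
coxsnell   <- 1 - exp((2 / N) * (l0 - l1))        # 1 - (likelihood ratio)^(2/N)
nagelkerke <- coxsnell / (1 - exp((2 / N) * l0))  # rescaled so the maximum is 1
c(mcfadden = mcfadden, coxsnell = coxsnell, nagelkerke = nagelkerke)
```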
For GLMERs fit to Poisson, Gamma, and negative binomial distributions (glmer, glmmPQL, glmer.nb), supported methods include
• delta Approximates the observation variance based on second-order Taylor series expansion. Can be used with many families and link functions
• lognormal Observation variance is the variance of the log-normal distribution
• trigamma Provides most accurate estimate of the observation variance but is limited to only the log link. The DEFAULT method
For GLMERs fit to the binomial distribution (glmer, glmmPQL), supported methods include:
• theoretical Assumes observation variance is pi^2/3
• delta Approximates the observation variance as above. The DEFAULT method
## References
Nakagawa, Shinichi, Johnson, Paul C.D., and Holger Schielzeth. "The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded." bioRxiv 095851 (2017).
## Examples
# NOT RUN {
# Create data
dat <- data.frame(
ynorm = rnorm(100),
ypois = rpois(100, 100),
x1 = rnorm(100),
random = letters[1:5]
)
# Get R2 for linear model
rsquared(lm(ynorm ~ x1, dat))
# Get R2 for generalized linear model
rsquared(glm(ypois ~ x1, "poisson", dat))
# Get R2 for generalized least-squares model
rsquared(nlme::gls(ynorm ~ x1, dat))
# Get R2 for linear mixed effects model (nlme)
rsquared(nlme::lme(ynorm ~ x1, random = ~ 1 | random, dat))
# Get R2 for linear mixed effects model (lme4)
rsquared(lme4::lmer(ynorm ~ x1 + (1 | random), dat))
# Get R2 for generalized linear mixed effects model (lme4)
rsquared(lme4::glmer(ypois ~ x1 + (1 | random), family = poisson, dat))
rsquared(lme4::glmer(ypois ~ x1 + (1 | random), family = poisson, dat), method = "delta")
# Get R2 for generalized linear mixed effects model (glmmPQL)
rsquared(MASS::glmmPQL(ypois ~ x1, random = ~ 1 | random, family = poisson, dat))
# } | 2020-07-14 01:55:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7722398638725281, "perplexity": 11026.927439459983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147031.78/warc/CC-MAIN-20200713225620-20200714015620-00161.warc.gz"} |
https://www.nature.com/articles/s41598-017-03949-6?error=cookies_not_supported&code=8021f6db-6f72-4ff0-a091-a83a40dba4ca | # Identification of transcripts with short stuORFs as targets for DENR•MCTS1-dependent translation in human cells
## Abstract
The non-canonical initiation factors DENR and MCTS1 have been linked to cancer and autism. We recently showed in Drosophila that DENR and MCTS1 regulate translation re-initiation on transcripts containing upstream Open Reading Frames (uORFs) with strong Kozak sequences (stuORFs). Due to the medical relevance of DENR and MCTS1, it is worthwhile identifying the transcripts in human cells that depend on DENR and MCTS1 for their translation. We show here that in humans, as in Drosophila, transcripts with short stuORFs require DENR and MCTS1 for their optimal expression. In contrast to Drosophila, however, the dependence on stuORF length in human cells is very strong, so that only transcripts with very short stuORFs coding for 1 amino acid are dependent on DENR and MCTS1. This identifies circa 100 genes as putative DENR and MCTS1 translational targets. These genes are enriched for neuronal genes and G protein-coupled receptors. The identification of DENR and MCTS1 target transcripts will serve as a basis for future studies aimed at understanding the mechanistic involvement of DENR and MCTS1 in cancer and autism.
## Introduction
DENR and MCTS1 are two non-canonical initiation factors that form a complex1. MCTS1 was identified by the Gartenhaus lab as an oncogene that is genomically amplified in T-cell leukemias2. Subsequent studies found that MCTS1 protein levels are also elevated in T-cell lymphoid cell lines, in non-Hodgkin lymphoma cell lines, and in 85% of primary diffuse large B-cell lymphomas3. MCTS1 has oncogenic properties, for instance promoting anchorage-independent growth of NIH3T3 fibroblasts2. DENR, like MCTS1, is amplified in neuroendocrine prostate cancer according to publicly available databases (cBioPortal)4. Furthermore, de novo missense mutations in DENR were recently discovered in two unrelated patients with autism spectrum disorders, and DENR is important for proper migration and terminal branching of cortical neurons in the mouse5. Hence DENR and MCTS1 appear to play a role in cancer biology and in neurobiology.
DENR and MCTS1 form a complex that binds the 40S ribosomal subunit1. We recently showed that DENR and MCTS1 promote a process termed translation reinitiation6. During cap-dependent translation, ribosomes are recruited to the 5′ cap structure of mRNAs, and they then scan 5′ to 3′ until they meet an AUG initiation codon in a strong Kozak sequence context. On transcripts containing upstream Open Reading Frames (uORFs) with strong Kozak sequences (stuORFs), this presents a problem because ribosomes initiate translation on the stuORF, thereby consuming factors such as the initiator Met-tRNAi. Hence to translate the main downstream open reading frame, ribosomes need to terminate translation of the stuORF and reinitiate translation downstream7, 8. This process is not well understood, but likely requires recycling of ribosomal subunits, continued association with the mRNA, renewed scanning, and renewed recruitment of Met-tRNAi. We previously showed that DENR and MCTS1 promote translation reinitiation, and therefore are required specifically for translation of mRNAs containing stuORFs in their 5′UTRs, but they are dispensable for translation of mRNAs lacking stuORFs6. The Shatsky and Pestova groups have shown that DENR and MCTS1, or the structurally and functionally related protein Ligatin/eIF2D, have biochemical activities in vitro that are likely relevant for translation reinitiation: they can recycle post-termination ribosomal complexes and they can recruit Met-tRNAi to the 40S ribosomal subunit in a non-canonical, eIF2-independent manner on viral mRNAs9, 10. Hence these activities might explain the mechanism by which DENR and MCTS1 promote translation reinitiation, although additional work will be necessary to unravel this mechanism in detail.
To understand the biological functions of DENR and MCTS1 in cancer and neurobiology, it is necessary to understand which transcripts in humans are dependent on DENR and MCTS1 for their efficient translation. We previously showed in Drosophila that stuORF containing transcripts are DENR•MCTS1 targets and that this class of genes is enriched for transcription factors, cell membrane receptors, and genes involved in neuron morphogenesis6. In this study we report the identification of very short stuORF-containing transcripts as targets for DENR•MCTS1 in human cells, and find that this class of transcripts is enriched for neuronal genes.
## Results
### DENR and MCTS1 are required for optimal expression of a synthetic stuORF reporter
We previously showed that expression of a reporter bearing a Drosophila 5′UTR with a synthetic strong-Kozak upstream Open Reading Frame (stuORF, with sequence acaaaATGTAA) is down-regulated in human cells upon DENR knockdown5. To test whether this effect is dependent on the context of the Drosophila 5′UTR, or whether it also works in the context of a human 5′UTR, we constructed a control Renilla Luciferase (RLuc) reporter bearing the human Lamin B1 5′UTR, which has no upstream Open Reading Frames (Fig. 1A). From this, a ‘stuORF reporter’ was generated by introducing the sequence acaaaATGTAA encoding for only 1 amino acid and having a strong Kozak sequence (Fig. 1A). As expected, knockdown of either DENR or MCTS1 (knockdown efficiency controls shown in lower panel of Fig. 1A) did not reduce expression of the control reporter, but reduced expression of the stuORF reporter (Fig. 1A). (Of note, we consistently see that knockdown of either DENR or MCTS1 leads to loss of the other protein as well, Fig. 1A. This is something we observe in both Drosophila6 and in human cells using multiple independent siRNAs targeting either DENR or MCTS1, excluding off-target effects. This co-dependence is often observed when several proteins form one functional complex, e.g. ref. 11). Reduced expression of the stuORF reporter was rescued by re-introducing siRNA-resistant versions of DENR or MCTS1 (Supplementary Fig. S1A,B), indicating the effect is specific. Unlike knockdown of DENR or MCTS1, knockdown of the structurally and functionally related protein Ligatin/eIF2D7, 9, 10 did not cause a drop in stuORF reporter activity (Supplementary Fig. S1C-C’), perhaps due to the fact that eIF2D is present in HeLa cells at much lower stoichiometry compared to DENR or MCTS112. Hence we focus here on DENR and MCTS1.
To search for mammalian transcripts that depend on DENR•MCTS1 for their efficient expression, we searched the mouse transcriptome for stuORF-containing transcripts using the same parameters that previously successfully identified DENR-dependent transcripts in Drosophila6, taking into account the uORF Kozak sequence strength, but not the length of the uORF. This identified NeuroD6 and Efnb1, which have 14 and 3 stuORFs in their 5′UTRs respectively, as strong candidates. Surprisingly, however, reporter constructs bearing the NeuroD6 and Efnb1 5′UTRs were not sensitive to DENR knockdown (Fig. 1B). In sum, DENR and MCTS1 are required in both Drosophila and mammals for efficient expression of a synthetic stuORF-containing reporter, however the endogenous mammalian transcripts that are DENR-dependent likely have different features compared to the Drosophila ones.
### Effect of stuORF length and Kozak strength on DENR•MCTS1-dependence
We previously found that the degree of down-regulation of reporter expression upon DENR or MCTS1 knockdown in Drosophila depends on two parameters: the strength of the uORF Kozak sequence and the length of the uORF. We therefore tested the dependence on these two parameters in human cells. Noderer and colleagues previously used a sequencing-based method to determine the relative strength of all Kozak sequences in human cells13, yielding scores from 18 to 120 with half of all Kozak sequences scoring below 90. We selected a panel of sixteen different Kozak sequences and cloned them upstream of a 1 amino acid uORF (Fig. 2A). Expression of all reporters carrying uORFs with strong Kozak sequences (scoring above 97) was generally impaired upon DENR knockdown (blue bars, Fig. 2A, non-normalized values in Supplementary Fig. S2A). In contrast, uORFs with weaker Kozak sequences (scores below 94) did not cause expression of the reporter to be DENR dependent (Fig. 2A). This is in agreement with our previous findings in Drosophila that DENR and MCTS1 are specifically required to promote expression of transcripts bearing uORFs with strong Kozak sequences. Based on previous results6, this is likely because uORFs with weak Kozak sequences do not cause the ribosome to initiate translation, and therefore also do not require DENR•MCTS1 to promote translation re-initiation on the main downstream ORF. Similar results were obtained with MCTS1 knockdowns (green bars, Fig. 2A). We previously found6 that the presence of multiple copies of a stuORF in the 5′UTR of a reporter causes translation of the reporter to become more dependent on DENR and MCTS1. To test this in human cells, we generated reporters with 1, 2 or 3 copies of a moderately strong uORF and found that the effect size is small, but there is a tendency towards increased DENR or MCTS1 dependency with increasing stuORF copy number (Supplementary Fig. S2B-B’).
We next investigated the dependence on stuORF length. To this end, we cloned into the control reporter stuORFs (with a strong Kozak sequence) with upstream Open Reading Frames of different lengths (from 1 a.a. to 9 a.a.; Fig. 2B). As observed in Drosophila, the translation of reporters carrying very short stuORFs is more dependent on DENR•MCTS1 than the translation of reporters with longer stuORFs (Fig. 2B). In contrast to Drosophila, however, where 2 a.a.- and 4 a.a.-long stuORFs are also regulated by DENR•MCTS1, the dependence on stuORF length in human cells is very strong and essentially restricted to stuORFs of 1 a.a. in length. We also tested stuORFs containing the second strongest Kozak sequence identified by Noderer et al. genome-wide13 (Supplementary Fig. S2C-C'). Since this Kozak sequence contains a G at position +4, it cannot be tested with a 1 a.a. stuORF, which necessitates a T at +4 for the stop codon. Also in this context, the stuORF with a length of 2 amino acids imparted little to no DENR•MCTS1 dependence (Supplementary Fig. S2C'). Hence in both Drosophila and human cells, the effect of DENR•MCTS1 on expression of a transcript depends on the strength of the uORF Kozak sequence. The dependence on stuORF length in Drosophila and in humans, however, is qualitatively similar but quantitatively different, with essentially only 1 a.a.-long stuORFs imparting DENR•MCTS1 dependence in human cells.
### Identification of DENR•MCTS1-dependent human transcripts
Using the results described above with synthetic uORF reporters, we searched the human transcriptome for transcripts predicted to be dependent on DENR•MCTS1 for their translation. The data from all our luciferase assays are summarized in Fig. 3A and B, where Fig. 3A shows the dependence on Kozak strength and Fig. 3B on stuORF length. We fit the data to exponential curves and used these curves to predict the DENR•MCTS1 dependence for all human transcripts (Supplementary Table S1, and summarized per gene in Supplementary Table S2). We also tried a "Kozak-strength dependence curve" (Fig. 3A) that rises exponentially up to a Kozak strength of 90 and then flattens out, remaining equal for all Kozak sequences with scores above 90; however, this did not make a difference compared to the exponential dependence presented here. We then sorted all transcripts based on their predicted DENR•MCTS1 dependence, and selected for testing several transcripts with a range of predicted down-regulation upon DENR knockdown, from 37%, the maximum genome-wide, to 22% (Supplementary Table S3). We cloned the endogenous 5′UTRs of these genes into a reporter vector (Fig. 3C) and found that expression of most of these reporters is indeed DENR and MCTS1 dependent (Fig. 3C'). Furthermore, the degree of drop in expression upon DENR or MCTS1 knockdown roughly correlates with the prediction, thereby validating the predictions. The drop in luciferase activity upon DENR knockdown was not accompanied by a drop in reporter mRNA levels (Supplementary Fig. S3A-A'), consistent with a translational impairment. (The DRD1 5′UTR reporter contains an endogenous intron, which is spliced out in vivo when the reporter is transfected into HeLa cells, Supplementary Fig. S3B-B'.) For some 5′UTRs, such as those of TMEM60 and NPIPB9, knockdown of MCTS1 yields a stronger effect than knockdown of DENR (Fig. 3C'). One possible explanation is that the MCTS1 knockdown is more efficient (e.g. see also Fig. 2A). Reporters for genes predicted to be mildly DENR•MCTS1-dependent (e.g. FAM229B, predicted to be 22% down-regulated upon DENR knockdown) are also responsive to efficient DENR or MCTS1 knockdown (Fig. 3D). In sum, using a conservative threshold of 15% predicted downregulation upon DENR•MCTS1 knockdown, this identifies a list of 104 genes as putative DENR targets (Supplementary Table S4), eleven of which have Drosophila orthologs that are also DENR targets (Supplementary Table S5). Of note, this table was generated by requiring that all transcript isoforms of a gene (which may differ in their 5′UTRs) are at least 15% downregulated upon DENR•MCTS1 knockdown. Additional genes exist for which individual transcript isoforms are predicted to be DENR•MCTS1 dependent, as listed in Supplementary Table S1. Both visual inspection of this target list and a Gene Ontology enrichment analysis using DAVID14 revealed a very strong enrichment for G protein-coupled receptors in the target list (Fig. 3E). Indeed, many of the predicted DENR•MCTS1 targets are neuron-specific GPCRs, such as the taste receptor TAS2R13 or the olfactory receptor OR2AK2 (Fig. 3C'). Hence very short stuORF-containing targets of DENR and MCTS1 may be important in the context of neurobiology.
### GPR37 protein levels drop upon DENR•MCTS1 knockdown
We next sought to measure the effect of DENR or MCTS1 knockdown on endogenous protein levels for one of these targets. Unfortunately, however, despite extensively testing commercially available antibodies, we were not able to find an antibody and cell line combination that successfully detected endogenous protein (with specificity judged by siRNA-mediated knockdowns of the proteins of interest). We tested anti-GPR37, anti-ADRA2A, anti-C10ORF12, anti-HOXB4 and anti-LCA5 antibodies in multiple different cell lines (see Materials & Methods for a detailed list). This is likely because many of the DENR targets are expressed strongly in differentiated neurons or glia, and only at very low levels in most cell lines in culture. Hence, we resorted to two alternate approaches. Firstly, we cloned the entire GPR37 transcript (5′UTR, ORF, and 3′UTR) downstream of a constitutive CMV promoter (pCDNA3-GPR37), and expressed it in HeLa cells. We knocked down DENR or MCTS1 and detected GPR37 by immunostaining. As expected, knockdown of either DENR or MCTS1 led to a strong reduction in GPR37 protein (Fig. 4A). This approach has the advantage of normalizing for transcriptional effects, as judged by a similar GFP-expressing construct (pCDNA3-GFP) that was co-transfected as a normalization control and did not show a drop similar to that of GPR37 (Fig. 4A). The drop in GPR37 protein was not accompanied by a drop in GPR37 mRNA levels (Supplementary Fig. S4A) or in GPR37 protein stability, assayed by performing a cycloheximide chase experiment (Supplementary Fig. S4B-B'), consistent with DENR•MCTS1 having a translational effect on GPR37. As a second approach, we looked at the distribution of endogenous GPR37 mRNA upon DENR knockdown in polysome gradients. In such gradients, mRNAs that are actively translated sediment into heavier fractions containing more ribosomes, compared to mRNAs of equal length that are less well translated. From publicly available expression databases15, we searched for cell lines with comparatively high levels of GPR37 mRNA, and by quantitative RT-PCR identified prostate cancer PC-3 cells as having comparatively high levels of GPR37 transcript (Supplementary Fig. S4C). Although we could not detect GPR37 protein in PC-3 lysates by immunoblotting, quantitative RT-PCR was sensitive enough to detect GPR37 mRNA. We then treated PC-3 cells with either control or DENR siRNAs and generated polysome gradients from the cell extracts. Quantitative RT-PCR on the RNA obtained from the various fractions of these polysome gradients revealed that upon DENR knockdown endogenous GPR37 mRNA shifted into lighter fractions containing fewer ribosomes, in agreement with reduced translation (Fig. 4B). Finally, we tested whether knockdown of DENR or MCTS1 impairs the proliferation of HeLa cells; however, the proliferation impairment is very mild (Supplementary Fig. S4D), consistent with the fact that the DENR target genes we report here are not expressed in HeLa cells.
## Discussion
We previously identified DENR and MCTS1 target transcripts in Drosophila by searching for transcripts that have many stuORFs of any length6. This approach was successful in Drosophila, likely because the Drosophila system is less stringent than the human one for stuORF length. This approach did not work, however, for mammalian transcripts, thereby necessitating the current study. We find that the Drosophila and human systems work in a qualitatively similar fashion: in both cases the effect of DENR or MCTS1 knockdown on translation depends on both the strength of the uORF Kozak sequence and the length of the uORF. The Drosophila and human systems, however, are quantitatively different in that only stuORFs with very short coding sequences, coding for 1 or maximally 2 amino acids, impart DENR•MCTS1 dependence in the human system. It is thought that translation reinitiation after a uORF requires translation initiation factors such as eIF4F, which remain transiently associated with the ribosome upon uORF start codon recognition and formation of the 80S ribosome. These factors, however, are thought to gradually dissociate from the ribosome while it is elongating7. Hence the capacity to reinitiate drops the longer the ribosome elongates. It is possible that these initiation factors dissociate more quickly from the ribosome in human cells compared to Drosophila cells, leading to efficient reinitiation in humans only after very short uORFs of 1 or 2 amino acids in length. This leads us to a list of only 104 predicted targets in humans, which is more restricted than what we found in flies. Interestingly, in both humans and in Drosophila, the list of putative target genes is enriched for cell membrane proteins (such as GPCRs) and for genes involved in neuronal biology, suggesting this is a conserved aspect of DENR•MCTS1 function.
The degree by which translation of a transcript drops upon DENR or MCTS1 knockdown depends on the strength of the uORF AUG context. uORFs with weak Kozak sequences (in Fig. 2A, Kozak strengths <95) do not impart DENR•MCTS1 dependence to a transcript, whereas uORFs with strong Kozak sequences (>95 in Fig. 2A) were generally impaired upon DENR knockdown. However, if we consider only the uORFs with strong Kozak sequences, and look at higher resolution, we do not see a clear correlation between Kozak strength and the degree of DENR dependence (Fig. 2A). For instance, our positive control ‘stuORF reporter’ with the very short stuORF sequence gacaaaATGtaa has a predicted Kozak strength of 101 ± 8, yet it causes a stronger drop in reporter translation upon DENR knockdown than other uORFs with stronger Kozak sequences (e.g. gacagtATGta with a strength of 117 ± 8). One possible explanation is that the measurement of Kozak strength by Noderer et al.13 does not have this level of precision, as suggested by the error intervals they provide. Alternatively, the sequence context of the uORF AUG also affects DENR dependence in additional ways that we do not yet understand.
DENR and MCTS1 are misexpressed or mutated in patients with cancer or autism spectrum disorders. By identifying the DENR•MCTS1-dependent transcripts in humans, the current study enables future work aimed at understanding how these proteins affect cancer and brain function. In the context of autism, it appears of relevance that many of the target genes are GPCRs that are selectively expressed in the central nervous system. For instance, one of the 5′UTRs that responds most strongly to DENR or MCTS1 knockdown is that of GPR37 (Fig. 3C'). GPR37 affects oligodendrocyte differentiation and myelination16, as well as dopamine uptake by neurons17, and mutations in GPR37 have been identified in autism patients18. Hence GPR37 may be an interesting target gene to follow up on in the future. Autism patients also have defects in sensory perception19, 20. Interestingly, amongst the most DENR•MCTS1-dependent genes are both the taste receptor TAS2R13 and the olfactory receptor OR2AK2 (Fig. 3). Since MCTS1 is overexpressed in several types of cancer, and MCTS1 overexpression drives the cell cycle and promotes anchorage-independent colony formation1, 21, 22, we were surprised that we did not find obvious oncogenes in the list of DENR•MCTS1 targets. Several explanations are possible. Firstly, cancer-relevant genes that are not obvious at first sight may indeed be present in this list. Secondly, the list presented here only includes genes for which all transcript isoforms are predicted to be DENR•MCTS1 targets. One thousand nine hundred fifty genes have at least one splice isoform that is predicted to be DENR•MCTS1 dependent for translation (Supplementary Table S1), and amongst these 1950 genes there are many that are cancer relevant. This is particularly relevant if the transcript isoform that is DENR•MCTS1 dependent is the only one expressed in a cell, or the only one with cancer-relevant function. Hence additional work will be required to decipher this. Finally, it is possible that DENR and MCTS1 also regulate additional classes of mRNAs via alternative mechanisms.
In sum, we identify here transcripts containing very short stuORFs as transcripts that require DENR and MCTS1 for optimal expression in human cells. This work will likely be a useful starting point for future studies analyzing the functional involvement of DENR and MCTS1 in cancer and autism spectrum disorders.
## Methods
### Plasmids
Renilla luciferase (RLuc) test reporters for luciferase assays were generated as follows: An internal SpeI restriction site in the pRL-CMV Vector (Promega) was removed by cutting, blunting, and re-ligating to facilitate subsequent oligo clonings, yielding pSS350. The 5′UTR of human LaminB1 was amplified by PCR from HeLa cell cDNA using primers ccggaagcttGCCGCTCCGTGCAGCCTGAGAG and ccggTTCGAAgtCATggtggCGGGCGGCGGAGACAGCG and cloned into the SpeI-less pRL-CMV vector (pSS350). To enable oligo clonings, SpeI and AgeI restriction sites were inserted into the LaminB1 5′UTR by oligos actagtGTGaccggtCCCTTTGTGCTGTAATCGAG and accggtCACactagtCAAAGGCGCGCGGGGGGGAA, yielding pSS372. This vector was used as the control reporter, and also served as a vector backbone for all subsequent oligo clonings with various Kozaks and uORFs.
The Firefly Luciferase (FLuc) normalization reporter was generated by removing the RLuc ORF from pRL-CMV by digestion with NheI and XbaI and replacing it with the FLuc ORF. In order to get a comparable normalization vector, the LaminB1 5′UTR was cloned upstream of the FLuc ORF via HindIII and NheI. This reporter, pSS411, served as normalization control in all luciferase assays.
For the very short stuORF test reporter the sequence ctagtGTGtccggacaaaATGTAAGCCGCCGCCGCCGCCGCCGCCGCCa1 was oligo cloned into the SpeI and AgeI sites of pSS372, yielding pSS373. This vector was used as a stuORF positive control. In this and all other cases, the uORFs have a stop codon that terminates the uORF upstream of the main ORF, hence the uORF does not form an in-frame fusion to the main ORF.
To test endogenous human 5′UTRs in luciferase assays, 5′UTRs were amplified from genomic DNA using oligos in Table 1 below, and cloned into the HindIII and BstBI sites of pSS350. For DRD1, this PCR product includes an intron. For FAM229B, GPR37, TMEM60 and B3GALT2, the individual 5′UTR exons were PCRed separately and combined by Gibson assembly into the reporter plasmid, yielding a ‘spliced’ 5′UTR lacking introns. For FAM229B, the last 14nt of the 5′UTR are not included in the reporter plasmid because they are on a separate exon, and do not contain any uORFs. The short 5′UTRs of IQCF1 and OR2AK2 were introduced by oligo cloning into the same vector using the oligos below.
The Kozak sequence CCACC immediately upstream of Firefly and Renilla ORFs was maintained in all laminB1 reporters as well as in the human 5′UTR constructs.
The expression vector pCDNA-GPR37 was obtained by amplifying the entire genomic transcribed region of GPR37 from human genomic DNA via multiple overlapping PCRs and subsequently cloning it into pCDNA3 bearing a CMV promoter via Gibson assembly (New England Biolabs). The ‘top’ oligo for the first PCR fragment and the ‘bottom’ oligo for the last PCR fragment were: ACTAGTAACGGCCGCCAGTGTGCTGGAATTCGGGGTTGGAATCCCGC and GGGCCCTCTAGATGCATGCTCGAGCGGCCGCTTGTATCTTTAAGGCAAT. For the ‘normalization control’ CMV-GFP plasmid, the GFP ORF was amplified and cloned into pCDNA3.
### Transfections and luciferase assays
Gene knockdowns were performed using RNAiMax (Thermo Fisher Scientific) according to the manufacturer's reverse transfection protocol in Opti-MEM (Thermo Fisher Scientific) with siGENOME siRNA pools (Dharmacon) for DENR, MCTS1 and GPR37, or Silencer Select Negative Control siRNA (Ambion). For reconstitution experiments MCTS1 siRNA3 and a pool of DENR siRNAs 1, 2 and 4 were used. siRNA sequences are provided in Table 2.
After distribution in a 96-well format for luciferase assays, or 24-well format for immunoblots, HeLa cells were seeded in normal DMEM growth medium (Thermo) and 10% FCS (Sigma) onto the mix. Gene knockdowns were allowed to take place for four days at 37 °C. For luciferase assays the cells were transfected in quadruplicate with Effectene (Qiagen) on day three with the corresponding Renilla and Firefly reporters. For reconstitution experiments (Figure S1A,B), siRNA-insensitive constructs of MCTS1 or DENR, or a control vector, were also transfected in addition to the luciferase reporters. In the MCTS1 reconstitution experiment DENR had to be co-expressed in all conditions in order to achieve full recovery of complex activity. Insensitivity to siRNA was obtained by the following silent mutations: for MCTS1 - c.480A>G, c.483A>C, c.486T>C, c.489C>G, c.492T>C, c.495A>G; for DENR - c.63T>C, c.66A>G, c.67C>T, c.69G>A, c.72C>T, c.75T>C, c.78C>T, c.81T>C. Reconstitution experiments were performed in triplicate. Knockdown and reconstitution of protein levels were verified by Western blot in parallel. Five hours before performing the luciferase assay, the medium was replaced with fresh medium. For the dual-luciferase assay, medium was removed and cells were lysed with Passive Lysis Buffer (Promega). After a 20 min incubation the suspension was analyzed using the Dual-Luciferase reporter assay (Promega).
### Quantitative RT-PCR
Sequences of oligos used for quantitative PCR are provided in Table 3.
### GPR37 expression and immunoblots, and antibodies
After three days of knockdown as described above, HeLa cells in 24- or 12-well format were transfected with the expression vectors pCDNA-GFP and pCDNA-GPR37, or pCDNA-GPR37 alone, with Effectene (Qiagen) and grown for 20 hrs at 37 °C. For harvesting, cells were scraped in the medium and briefly centrifuged. The cell pellet was solubilized in Laemmli sample buffer together with cOmplete Protease Inhibitors (Roche) and Benzonase (Merck) and incubated for 30 min at room temperature. The solution was then briefly boiled and protein expression was assessed by PAGE and immunoblotting. For assessing GPR37 protein stability, cells were kept untreated or treated with 50 μg/mL cycloheximide for 2 hours prior to lysis. Antibodies were purchased from Sigma (DENR: mouse monoclonal WH0008562M1, tubulin: mouse monoclonal T9026, Ligatin/eIF2D: rabbit polyclonal HPA028220) and Abcam (GPR37: rabbit polyclonal ab166614) or obtained by immunizing guinea pigs with purified MCTS1 or GFP protein.
### Antibodies that did not work on endogenous proteins, as advertised by the producing companies
Guided by immunoblot images provided on the websites of companies selling antibodies, we tested the antibodies listed in Table 4 on immunoblots of the following cell lysates, but either did not obtain a band of roughly the correct size, or the band did not decrease in intensity upon siRNA-mediated knockdown of the corresponding genes.
### Calculation of DENR•MCTS1 dependence
The predicted reduction in expression of a transcript upon DENR or MCTS1 knockdown was calculated as follows. For each upstream Open Reading Frame (uORF) in the transcript, the strength of the uORF Kozak was derived from ref. 13 and the uORF length (in terms of number of amino acids coded) was counted. Based on the data shown in Fig. 3A and B, the contribution of the individual uORF towards downregulation of the transcript was calculated as:
$$\%\,\text{downregulation} = \left(0.6912 \cdot e^{\,0.0348\,\cdot\,\text{Kozak strength}}\right) \cdot \left(\frac{388.61}{67.3} \cdot e^{\,-1.753\,\cdot\,\text{uORF length}}\right)$$
Based on results from ref. 1, the contributions of the individual uORFs of a transcript were added together to arrive at the combined score for the entire transcript. These calculations were performed on all transcripts present in ENSEMBL for the human genome (release GRCh38.p5).
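As a minimal illustration of how this scoring could be implemented (my own sketch, not the authors' pipeline; the example transcript and its uORF values are invented), the fitted curves translate directly into a small program. Note that the length term evaluates to roughly 1 for a 1-amino-acid uORF, so the Kozak term sets the maximal predicted effect:

```c
/* Sketch of the per-uORF scoring described above; not the authors' code. */
#include <math.h>
#include <stdio.h>

/* Predicted %-downregulation contributed by a single uORF, using the
   exponential fits from Fig. 3A,B. */
static double uorf_contribution(double kozak_strength, int uorf_length_aa)
{
    double kozak_term  = 0.6912 * exp(0.0348 * kozak_strength);
    double length_term = (388.61 / 67.3) * exp(-1.753 * uorf_length_aa);
    return kozak_term * length_term;
}

int main(void)
{
    /* Hypothetical transcript with two uORFs: (Kozak strength, length in aa). */
    double kozak[] = { 101.0, 85.0 };
    int    len[]   = { 1, 3 };
    double total   = 0.0;
    for (int i = 0; i < 2; i++)
        total += uorf_contribution(kozak[i], len[i]);  /* per-uORF contributions add */
    printf("predicted downregulation upon knockdown: %.1f%%\n", total);
    return 0;
}
```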
The algorithm which does not successfully predict DENR•MCTS1 targets (which was used to obtain NeuroD6 and Efnb1 as putative targets in Fig. 1B) ignored uORF length, and scored uORFs according to the consensus Kozak sequence, so that a G at position +1 yielded a score of 1, and if that was not the case, an A or C at position −3 yielded a score of 0.4 and a G or T at position −3 yielded a score of 0.1. Scores for individual uORFs in one 5′UTR were then added to arrive at a combined score for the entire transcript.
### Polysome profiling of PC-3 cells
PC-3 cells were transfected with control or DENR siRNA using RNAiMax as described above. After three days, cells were treated with cycloheximide (100 µg/ml) for 5 min at 37 °C, and then scraped off the dish and counted. The cells were then lysed in polysome buffer (15 mM Tris pH 7.5, 15 mM MgCl2, 300 mM NaCl, 1% Triton X-100, 2 mM ß-mercaptoethanol, supplemented with EDTA-free protease inhibitors and RNase inhibitors) and lysate from an equal number of cells was loaded on top of a 17.5–50% sucrose gradient. After ultracentrifugation at 4 °C at 35000 rpm for 2.5 hrs, gradient fractions were collected in a Biocomp gradient fractionator, and prepared for RNA extraction by adding an equal volume of Gough Solution II (10 mM Tris pH 7.5, 350 mM NaCl, 10 mM EDTA, 1% SDS, 7 M urea). After heating at 65 °C for 10 min, an equal volume of PCI (phenol, chloroform, isoamyl alcohol, 25:24:1) was added to the sample. After spinning, the aqueous phase was moved to a fresh tube and precipitated with 1.2× volumes of isopropanol and 1 µg of glycogen overnight at −20 °C. After washing with 70% EtOH the sample was dried and reconstituted in water.
## References
1. Reinert, L. S. et al. MCT-1 protein interacts with the cap complex and modulates messenger RNA translational profiles. Cancer Res 66, 8994–9001 (2006).
2. Prosniak, M. et al. A novel candidate oncogene, MCT-1, is involved in cell cycle progression. Cancer Res 58, 4233–4237 (1998).
3. Shi, B., Hsu, H. L., Evens, A. M., Gordon, L. I. & Gartenhaus, R. B. Expression of the candidate MCT-1 oncogene in B- and T-cell lymphoid malignancies. Blood 102, 297–302, doi:10.1182/blood-2002-11-3486 (2003).
4. Cerami, E. et al. The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov 2, 401–404, doi:10.1158/2159-8290.CD-12-0095 (2012).
5. Haas, M. A. et al. De Novo Mutations in DENR Disrupt Neuronal Development and Link Congenital Neurological Disorders to Faulty mRNA Translation Re-initiation. Cell Rep 15, 2251–2265, doi:10.1016/j.celrep.2016.04.090 (2016).
6. Schleich, S. et al. DENR-MCT-1 promotes translation re-initiation downstream of uORFs to control tissue growth. Nature 512, 208–212, doi:10.1038/nature13401 (2014).
7. Skabkin, M. A., Skabkina, O. V., Hellen, C. U. & Pestova, T. V. Reinitiation and Other Unconventional Posttermination Events during Eukaryotic Translation. Mol Cell (2013).
8. Jackson, R. J., Hellen, C. U. & Pestova, T. V. The mechanism of eukaryotic translation initiation and principles of its regulation. Nat Rev Mol Cell Biol 11, 113–127, doi:10.1038/nrm2838 (2010).
9. Dmitriev, S. E. et al. GTP-independent tRNA delivery to the ribosomal P-site by a novel eukaryotic translation factor. J Biol Chem 285, 26779–26787, doi:10.1074/jbc.M110.119693 (2010).
10. Skabkin, M. A. et al. Activities of Ligatin and MCT-1/DENR in eukaryotic translation initiation and ribosomal recycling. Genes Dev 24, 1787–1801, doi:10.1101/gad.1957510 (2010).
11. Wolfson, R. L. et al. KICSTOR recruits GATOR1 to the lysosome and is necessary for nutrients to regulate mTORC1. Nature 543, 438–442, doi:10.1038/nature21423 (2017).
12. Hein, M. Y. et al. A human interactome in three quantitative dimensions organized by stoichiometries and abundances. Cell 163, 712–723, doi:10.1016/j.cell.2015.09.053 (2015).
13. Noderer, W. L. et al. Quantitative analysis of mammalian translation initiation sites by FACS-seq. Mol Syst Biol 10, 748, doi:10.15252/msb.20145136 (2014).
14. Huang da, W., Sherman, B. T. & Lempicki, R. A. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc 4, 44–57, doi:10.1038/nprot.2008.211 (2009).
15. Uhlen, M. et al. Proteomics. Tissue-based map of the human proteome. Science 347, 1260419, doi:10.1126/science.1260419 (2015).
16. Yang, H. J., Vainshtein, A., Maik-Rachline, G. & Peles, E. G protein-coupled receptor 37 is a negative regulator of oligodendrocyte differentiation and myelination. Nat Commun 7, 10884, doi:10.1038/ncomms10884 (2016).
17. Marazziti, D. et al. GPR37 associates with the dopamine transporter to modulate dopamine uptake and behavioral responses to dopaminergic drugs. Proc Natl Acad Sci USA 104, 9846–9851, doi:10.1073/pnas.0703368104 (2007).
18. Fujita-Jimbo, E. et al. Mutation in Parkinson disease-associated, G-protein-coupled receptor 37 (GPR37/PaelR) is related to autism spectrum disorder. PLoS One 7, e51155, doi:10.1371/journal.pone.0051155 (2012).
19. Leekam, S. Social cognitive impairment and autism: what are we trying to explain? Philos Trans R Soc Lond B Biol Sci 371, 20150082, doi:10.1098/rstb.2015.0082 (2016).
20. Tonacci, A. et al. Olfaction in autism spectrum disorders: A systematic review. Child Neuropsychol 23, 1–25, doi:10.1080/09297049.2015.1081678 (2017).
21. Hsu, H. L., Shi, B. & Gartenhaus, R. B. The MCT-1 oncogene product impairs cell cycle checkpoint control and transforms human mammary epithelial cells. Oncogene 24, 4956–4964, doi:10.1038/sj.onc.1208680 (2005).
22. Mazan-Mamczarz, K. et al. Targeted suppression of MCT-1 attenuates the malignant phenotype through a translational mechanism. Leuk Res 33, 474–482, doi:10.1016/j.leukres.2008.08.012 (2009).
## Acknowledgements
This work was supported in part by a Deutsche Forschungsgemeinschaft (DFG) grant TE 766/7-1.
## Author information
S.S., J.M.A., and K.C.v.H. performed experiments. S.S., J.M.A., K.C.v.H. and A.A.T. analyzed data and wrote the manuscript.
Correspondence to Aurelio A. Teleman.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Further reading

- Makeeva, D. S. et al. Translatome and transcriptome analysis of TMA20 (MCT-1) and TMA64 (eIF2D) knockout yeast strains. Data in Brief (2019).
- Sanz, M. A., González Almela, E., García-Moreno, M., Marina, A. I. & Carrasco, L. A viral RNA motif involved in signaling the initiation of translation on non-AUG codons. RNA (2019).
- Castelo-Szekely, V. et al. Charting DENR-dependent translation reinitiation uncovers predictive uORF features and links to circadian timekeeping via Clock. Nucleic Acids Research (2019).
- Hellen, C. U. T. Translation Termination and Ribosome Recycling in Eukaryotes. Cold Spring Harbor Perspectives in Biology (2018).
- Torrance, V., Lydall, D. & Snyder, M. Overlapping open reading frames strongly reduce human and yeast STN1 gene expression and affect telomere function. PLOS Genetics (2018).
https://www.physicsforums.com/threads/convex-lens-and-focal-length.439171/ | # Convex lens and focal length
1) Why, when a convex lens having radius of curvature R, focal length f and refractive index μ is bisected horizontally along the principal axis, does its focal length remain the same, whereas when it is bisected vertically the focal length becomes 2f? Does the same thing happen in the case of a concave lens too?
tiny-tim
Homework Helper
Hi Alche!
Tell us what you think, and then we'll comment.
> Tell us what you think, and then we'll comment.
Well, I think when we bisect the convex lens vertically its radius of curvature becomes R/2 so the focal length becomes 2f.
And I am sure the same thing happens with a concave lens too :tongue:
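For reference, here is the derivation the thread is working towards (a standard thin-lens argument, spelled out since the thread leaves it implicit). For a thin symmetric biconvex lens the lensmaker's equation gives

$$\frac{1}{f}=(\mu-1)\left(\frac{1}{R_1}-\frac{1}{R_2}\right)=(\mu-1)\left(\frac{1}{R}+\frac{1}{R}\right)=\frac{2(\mu-1)}{R}.$$

Bisecting vertically (perpendicular to the principal axis) leaves a plano-convex half with $R_1 = R$ and $R_2 = \infty$, so

$$\frac{1}{f'}=\frac{\mu-1}{R}=\frac{1}{2f}\quad\Rightarrow\quad f'=2f.$$

That is, the radius of curvature does not become R/2; one refracting surface is simply removed. Bisecting horizontally along the principal axis leaves both curved surfaces intact, so f is unchanged and only the aperture (hence image brightness) is halved. The same surface-counting argument applies to a concave lens, whose focal length likewise doubles in magnitude when cut vertically.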
tiny-tim
http://airlich.de/category/programming/ | # Cell of Life, a little game in the fashion of Conway’s Game of Life
Cell of Life is a game similar to Conway’s Game of Life. I wrote it in C (that’s right, no sharp, no plusplus, good old plain C programming language). This was an assignment for the Systems Biology Doctoral Training Centre at Oxford. You can read the documentation here to get a feeling for the rules, and download the source code here to play it.
# Animated Double Pendulum
This little Matlab program has been selected as a Pick of the Week winner on the Matlab File Exchange run by the Matlab developers, MathWorks! For the animated double pendulum, I am providing a documentation PDF with the equations of motion. They are solved numerically by my program, producing an animation. See the animated gif below.
# schroedingerSolver – numerical solutions of Schrödinger’s equation (stationary case)
You can feed an arbitrary one-dimensional potential into the solver, along with information about the observed interval and discretisation. The program interpolates the potential and solves Schrödinger’s equation numerically in order to obtain an arbitrary number of wave functions, as well as their corresponding energy levels. All of the results are broken up in output files which can easily be displayed graphically. Additionally, a Matlab routine is provided for the purpose of obtaining a neat plot of the results. The program SchroedingerSolver is written entirely in Fortran and uses several LAPACK routines.
This program was co-authored by Andreas Krut. We used the distributed revision control system Bazaar (bzr) in order to revise and merge our code.
In the readme (see below) you'll find detailed instructions on how to compile and run the program, as well as all the necessary prerequisites. If you have already set up your workspace, you can make a test compile via
$ make test_lite
The documentation should give a good idea of how the program works. Also, visit the schroedingerSolver Launchpad developer site, or download a zip file of the repo directly.
# Animated Lagrange Top
This Matlab programme simulates a Lagrange top, which is a symmetric top spinning in a gravitational field. To call it, type
kreisel([1,10],[0;pi;pi/2;0;0;0])
The first parameter is a time interval $$[t_0,t_\text{end}]$$ and the second parameter are the initial conditions of the Euler angles $$[\varphi,\dot{\varphi},\vartheta,\dot{\vartheta},\psi,\dot{\psi}]$$.
The spinning top zip folder contains the code, typed documentation and a Mathematica notebook in which I derive the ordinary differential equations which are solved numerically in Matlab.
# Fast Fourier Transformation
The two programs fftw4.c and fftwd2.c link against the FFTW library ("Fastest Fourier Transform in the West"). They perform FFT transforms and demonstrate several applications of the numerical Fourier transform. fftw4.c transforms a given function from the original domain into the Fourier domain, illustrated with a Gaussian curve, various slits, and a grating (see image).
fftwd2.c compares different methods of numerical differentiation and shows the advantage of differentiating in Fourier space. Both C programs are described in detail in the documentation.
fftw4.c
fftwd2.c
Dokumentation
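To give a feel for the FFTW calls these programs build on, here is a minimal, self-contained sketch of spectral differentiation (my own illustration assuming FFTW 3; it is not code from fftwd2.c): sample f(x) = sin x on [0, 2π), multiply the spectrum by ik, and transform back.

```c
/* build: gcc spectral_diff.c -lfftw3 -lm */
#include <complex.h>   /* include before fftw3.h so fftw_complex = double complex */
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int N = 256;
    const double L = 2.0 * M_PI;                 /* domain length */
    fftw_complex *f = fftw_alloc_complex(N);
    fftw_complex *F = fftw_alloc_complex(N);
    fftw_plan fwd = fftw_plan_dft_1d(N, f, F, FFTW_FORWARD,  FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_1d(N, F, f, FFTW_BACKWARD, FFTW_ESTIMATE);

    for (int j = 0; j < N; j++)
        f[j] = sin(L * j / N);                   /* sample f(x) = sin x */
    fftw_execute(fwd);

    for (int k = 0; k < N; k++) {
        int kk = (k <= N / 2) ? k : k - N;       /* signed frequency index */
        F[k] *= I * (double)kk / N;              /* d/dx in Fourier space, with 1/N
                                                    undoing FFTW's unnormalized inverse */
    }
    fftw_execute(bwd);                           /* f now holds f'(x) ~ cos x */

    printf("f'(0) = %.6f (expected 1)\n", creal(f[0]));

    fftw_destroy_plan(fwd); fftw_destroy_plan(bwd);
    fftw_free(f); fftw_free(F);
    return 0;
}
```

(For general real signals the Nyquist bin k = N/2 should be zeroed before differentiating; it is harmless here because sin x only populates k = ±1.)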
# The Catenary curve
My detailed writeup of a standard problem in variational calculus: If a cable or chain is suspended in gravity, it takes the shape of a hyperbolic cosine curve. My writeup here, Matlab code here.
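For completeness, the resulting curve is (a standard result; the symbol names here are my own choice and may differ from the writeup)

$$y(x) = a\cosh\!\left(\frac{x}{a}\right), \qquad a = \frac{T_0}{\lambda g},$$

where $$T_0$$ is the horizontal component of the tension and $$\lambda$$ the mass per unit length of the chain.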
http://mathhelpforum.com/algebra/66165-solved-please-help-me-my-final-review-question.html | # Math Help - [SOLVED] Please help me with my final review question.
1. ## [solved]
Thankyou for your help,
Tara
2. Originally Posted by MissTara
I’m so sorry to trouble you but I urgently need some maths help. All working must be shown also.
I know this is probably the easiest problem in the review questions, and it’s the only one I can’t solve... I’m feeling a tad stupid right now. Any help would be greatly appreciated.
Thankyou for your help,
Tara
well, here are some reviews:
$a^na^m = a^{n+m}$
$(a^n)^m = a^{nm}$
$1 \div a = \dfrac{1}{a}$ and so $1 \div a^n = \dfrac{1}{a^n}$
Combine these and you should get the answers.
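For instance, applying these rules to a made-up expression (not Tara's actual problem, which is in the attachment): $a^3 \cdot a^{-5} = a^{3+(-5)} = a^{-2} = \dfrac{1}{a^2}$, and $(a^2)^3 \div a^4 = a^6 \div a^4 = a^2$.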
3. Originally Posted by MissTara
I’m so sorry to trouble you but I urgently need some maths help. All working must be shown also.
I know this is probably the easiest problem in the review questions, and it’s the only one I can’t solve... I’m feeling a tad stupid right now. Any help would be greatly appreciated.
Thankyou for your help,
Tara
attach is my ans:
http://www.mathhelpforum.com/math-he...4&d=1230440532
hope Mr kalagota can confirm it
4. ## Thankyou
Thankyou so much kalagota and nikk for all your help
https://www.sylwiatomczuk.pl/soap/en/equation-for-the-preparation-of-soap-from-stearic-acid-and-naoh.html | Email Us
# equation for the preparation of soap from stearic acid and naoh
### Make Your Own Soap! Part 1: The Chemistry …
2017-1-22 · Soapmaking involves reacting fats/oils with a strong hydroxide base, to form glycerin and soap (salts of fatty acids). Fat/oil molecules ( triglycerides) are made up of glycerin chemically attached to 3 fatty acids. The specific fatty …
### Experiment 13 – Preparation of Soap - Laney College
2012-1-13 · 8. Use a rubber policeman to transfer the soap to a clean, dry watch glass or a small beaker. (Important: the soap may still contain NaOH, so avoid skin contact with it. Use plastic gloves if possible.) Leave the soap to dry in your locker until the next laboratory period. Part 2 – Properties of Soaps Preparation of Soap Solutions 9.
### Preparation of Soap - THU
2018-9-4 · Preparation of Soap. Purpose: Understand the principle and process of soap preparation from oils and fats. Principle: Normal soap is a mixture of long-chain fatty acid sodium …
### Soapy Science: Citric Acid in Soap Making
2022-1-31 · 10g citric acid neutralizes 6g of NaOH. 10g citric acid neutralizes 8g of KOH. Total lye required= Additional lye needed to accommodate for sodium/potassium citrate + lye needed to saponify recipe. (Example: If your bar soap recipe is made from 1000g of oil and you want to add 2% citric acid based on the total oil weight to create sodium ...
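As a sanity check on those numbers (my own arithmetic, using standard molar masses): citric acid (C6H8O7) is triprotic with M ≈ 192.1 g/mol, so 10 g supplies 10/192.1 × 3 ≈ 0.156 mol of neutralizable protons. That consumes 0.156 × 40.0 ≈ 6.2 g of NaOH (M = 40.0 g/mol) or 0.156 × 56.1 ≈ 8.8 g of KOH (M = 56.1 g/mol), roughly consistent with the quoted rules of thumb of 6 g and 8 g.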
### To study saponification reaction for preparation of soap
2016-10-30 · For example, the triglycerides in the oil react with NaOH to produce soap and glycerol (a free fatty acid such as stearic acid instead gives soap and water). Step 1: About 20 ml of castor oil (or cotton seed oil, linseed oil or soyabean oil in place of castor oil) is taken in a 250 ml hard glass beaker. Step 2: 30 ml of 20% sodium hydroxide solution is added to it. Step 3: The mixture is heated with continuous stirring for a few ...
### Solved 4.6. Stearic acid (C17H35COOH), when reacted with …
a. Adjacent chains of polypeptides are held together by hydrogen bonds between the o of the. Question: 4.6. Stearic acid (C17H35COOH), when reacted with sodium hydroxide, will form soap and water. Write down the equation that shows the reaction. (2) 4.3. State whether the following statements describe primary, secondary, tertiary or quaternary ...
### 9.2 The Reaction of Biodiesel: Transesterification …
The first step is to mix the alcohol for reaction with the catalyst, typically a strong base such as NaOH or KOH. The alcohol/catalyst is then reacted with the fatty acid so that the transesterification reaction takes place. Figure 8a shows the …
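The overall reaction can be summarized schematically (a generic textbook equation, not taken from this source):

$$\text{triglyceride} + 3\,\mathrm{CH_3OH} \xrightarrow{\ \mathrm{NaOH\ or\ KOH}\ } 3\ \text{fatty acid methyl esters (biodiesel)} + \text{glycerol}$$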
### acid base - What happens when C17H35COO-, a soap, …
2022-9-28 · In acidic water, the negative fatty acid ions become fatty acids. However, they did not include a chemical equation to demonstrate this, so I made my own: $$\ce{C17H35COO- (aq) + HCl (aq) -> C17H35COOH (aq) + Cl- (aq)}$$ But according to this equation, the fatty acid ion is a base, since it gets a proton. So this must be wrong.
### Saponification-The process of Making Soap (Theory) : …
SOAP. Soaps are sodium or potassium salts of long chain fatty acids. When triglycerides in fat/oil react with aqueous NaOH or KOH, they are converted into soap and glycerol. This is called …
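For a concrete example (standard textbook chemistry; the fat is assumed here to be glyceryl tristearate):

$$\ce{(C17H35COO)3C3H5 + 3NaOH -> 3C17H35COONa + C3H5(OH)3}$$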
### How Soap Is Made: The Chemistry Of Soap Making
2020-12-30 · Here are the generic steps and fundamental principles of soap making: Step 1 – Measuring: Choose your ingredients and carefully measure out the proportions. Many recipes for soaps require a 40% lye concentration dissolved in water. The proportion of oil with the lye solution may vary depending on the type of oil.
### LIPIDS: SAPONIFICATION (THE PROPERTIES AND …
2015-5-11 · These fatty acid salts are called soaps, and have two very different ends. The ionic end is very polar and is hydrophilic. The fatty acyl chain is very nonpolar and hydrophobic. Soaps are called amphipathic molecules due to this dual nature. (Flattened structure diagram: sodium stearate, C17H35COO− Na+, given as an example soap from the hydrolysis of stearic acid, with the carboxylate as the ionic polar end.)
### Experiment 4: Soaps and Detergents Background
2020-12-16 · Today soap is manufactured much like it was over a hundred years ago: fats or oils are heated in the presence of a strong base (NaOH or KOH) to produce fatty acid salts and glycerol in what is called the saponification reaction. The salt of a fatty acid is the soap, a soft and waxy material that improves the cleaning ability of water.
### Answered: Write the equation for the preparation… | bartleby
Science Chemistry Q&A Library: Write the equation for the preparation of soap from stearic acid and NaOH. Explain the reaction.
### 2. Write the equation for a. the preparation of soap
Question: 2. Write the equation for a. the preparation of soap from stearic acid and \ ( \mathrm {NaOH} \) b. the reaction of magnesium ions with sodium stearate.
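For reference, the balanced equations the exercise is after (standard chemistry, not copied from any answer key):

$$\ce{C17H35COOH + NaOH -> C17H35COONa + H2O}$$

$$\ce{Mg^2+ + 2C17H35COO- -> Mg(C17H35COO)2 v}$$

The magnesium stearate is insoluble, which is why soap forms scum in hard water.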
### What is Sodium Stearate (E470a) in Food and its Uses in …
2020-7-29 · Reaction equation as follows: C17H35COOH+NaOH=C17H35COONa+H2O. This process is also suitable to produce potassium stearate (with another base KOH) and other salts of fatty acids. 2. Triglyceride hydrolysis. It can also be made from a saponification or basic hydrolysis reaction between NaOH with triglyceride. Reaction equation as follows (2 ...
### Answer in General Chemistry for kimmay #317451
2022-3-24 · Write the equation for... and explain the reaction of the following. a.The preparation of soap from stearic acid and NaOH. b.The reaction of magnesium ions with sodium stearate.
### Preparation of soap [detergents-post lab questions]
DISCUSSION. This experiment produced soap in the laboratory from cooking oil, sodium hydroxide, ethanol, and water. First of all, 5 g of sodium hydroxide and 10 ml of …
### 12: Making Soap - Saponification (Experiment) - Chemistry …
2022-10-15 · Materials: warm olive oil (preheated by instructor), 9 M sodium hydroxide solution, food coloring, assorted fragrances, stearic acid. Equipment: tall 250 mL beaker, PLASTIC …
### Preparation of Soap Using Different Types of Oils and …
2013-12-18 · A soap is a salt of a compound known as a fatty acid. A soap molecule has a long hydrocarbon chain with a carboxylic acid group on one end, which forms an ionic bond with a metal ion, usually sodium or potassium. The hydrocarbon end is non-polar and highly soluble in non-polar substances, and the ionic end is soluble in water.
### (PDF) Preparation and characterization of …
2003-1-1 · Preparation of aluminum stearate by the precipitation method was examined under various conditions of stearic acid saponification with sodium hydroxide. It was proved that the most favorable ratio ...
### Saponification Definition and Reaction
2020-1-8 · Saponification is the name of the chemical reaction that produces soap. In the process, animal or vegetable fat is converted into soap (a fatty acid salt) and an alcohol (glycerol). The reaction requires a solution of an alkali (e.g., sodium …
### Interaction of the Acid Soap of Triethanolamine Stearate …
Stearic acid and triethanolamine (TEA) in a molar ratio of 2:1 were mixed in aqueous solution at 80 °C and subsequently cooled to ambient temperature. The structural evolution of the resultant sample during storage was characterized by using light microscopy, Cryo-SEM, differential scanning calorimetry, pH, infrared spectroscopy, elemental analysis, and simultaneous small …
https://bitbucket.org/slepc/slepc | SLEPc: Scalable Library for Eigenvalue Problem Computations
Authors: Jose E. Roman, Carmen Campos, Eloy Romero, Andres Tomas (Universitat Politecnica de Valencia, Spain). Web: http://slepc.upv.es. Contact: slepc-maint@upv.es
Overview
SLEPc, the Scalable Library for Eigenvalue Problem Computations, is a software package for the solution of large sparse eigenvalue problems on parallel computers. It can be used for the solution of problems formulated in either standard or generalized form, as well as other related problems such as the singular value decomposition and the nonlinear eigenproblem.
The emphasis of the software is on methods and techniques appropriate for problems in which the associated matrices are sparse. Therefore, most of the methods offered by the library are projection methods such as Krylov-Schur or Jacobi-Davidson. It also provides built-in support for spectral transformations such as shift-and-invert.
SLEPc is built on top of PETSc, the Portable, Extensible Toolkit for Scientific Computation. It can be considered an extension of PETSc that provides all the functionality necessary for the solution of eigenvalue problems.
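As an illustration of how SLEPc sits on top of PETSc, here is a minimal sketch using the slepc4py/petsc4py Python bindings (assuming both are installed; the tridiagonal test matrix and the choice of a Hermitian problem type are illustrative choices, not prescribed by SLEPc):

```python
# Minimal sketch: solve a standard Hermitian eigenproblem with SLEPc's EPS object.
import sys
import slepc4py
slepc4py.init(sys.argv)
from petsc4py import PETSc
from slepc4py import SLEPc

n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)  # sparse AIJ matrix, 3 nonzeros per row
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):             # 1-D Laplacian as a simple test matrix
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
    A[i, i] = 2.0
A.assemble()

eps = SLEPc.EPS().create()                # eigenvalue problem solver context
eps.setOperators(A)                       # standard problem A x = lambda x
eps.setProblemType(SLEPc.EPS.ProblemType.HEP)  # Hermitian eigenproblem
eps.setFromOptions()                      # honor command-line options, e.g. -eps_nev
eps.solve()

for i in range(eps.getConverged()):
    print(eps.getEigenvalue(i).real)
```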
Documentation
The Users Manual, as well as the HTML man pages giving the detailed reference for each individual SLEPc routine, are included in the SLEPc distribution and can also be found in the SLEPc online documentation.
The main reference for SLEPc is the following paper (see other references at the SLEPc website):
• V. Hernandez, J. E. Roman, and V. Vidal, SLEPc: A Scalable and Flexible Toolkit for the Solution of Eigenvalue Problems, ACM Trans. Math. Softw. 31, pp. 351-362 (2005). [DOI]
Installation
The installation procedure of SLEPc is very similar to that of PETSc. Briefly, the environment variables $SLEPC_DIR and $PETSC_DIR must be set, then the script configure is executed, and finally the libraries are built with the command make. More details can be found in the Users Manual or in the online installation instructions.
Funding
The development of SLEPc has been partially supported by the following grants:
• Oficina de Ciencia i Tecnologia, Generalitat Valenciana, CTIDB/2002/54.
• Direccio General d'Investigacio i Transferencia de Tecnologia, Generalitat Valenciana, GV06/091.
• Ministerio de Ciencia e Innovacion, TIN2009-07519.
• Ministerio de Economia y Competitividad, TIN2013-41049-P.
• Agencia Estatal de Investigacion, TIN2016-75985-P.
Copyright (c) 2002-2018, Universitat Politecnica de Valencia, Spain
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This license DOES NOT apply to any software that may be obtained via the --download-package option of the SLEPc configuration. Each of those packages is covered by its own license.
https://zbmath.org/?q=an:0679.45003 | # zbMATH — the first resource for mathematics
Integrable solutions of a functional-integral equation. (English) Zbl 0679.45003
Under certain assumptions on the functions $f$, $g$, $k$, the authors prove that the functional-integral equation $$x(t) = g(t) + f\Big(t, \int_0^1 k(t,s)\,x(\phi(s))\,ds\Big), \qquad t \in [0,1),$$ has at least one solution $x \in L^1[0,1]$ which is a.e. nonincreasing on $[0,1]$. The method of proof is based on the notion of the measure of weak noncompactness and the fixed point theorem due to G. Emmanuele [Bull. Math. Soc. Sci. Math. Répub. Soc. Roum., Nouv. Sér. 25, 353-358 (1981; Zbl 0482.47027)].
Reviewer: J. Kolomý
##### MSC:
45G10 Other nonlinear integral equations
47J25 Iterative procedures involving nonlinear operators
Zbl 0482.47027
http://fmc.we-brilliant.com/79ojso/optimal-stopping-theory-98944a | Optimal stopping theory

The theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximize an expected reward or minimize an expected cost. In the general formulation, a gain process $G = (G_t)_{t \geq 0}$ is defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \mathbb{P})$, where $G_t$ is interpreted as the gain if observation is stopped at time $t$. The optimal stopping problem is to find a stopping time $\tau^*$ which maximizes the expected gain,
$$V = \sup_{\tau} \mathbb{E}\,[G_{\tau}],$$
where the supremum is taken over stopping times adapted to the filtration; $V$ is called the value function. When the horizon $T$ is finite, the problem can be solved by dynamic programming (backward induction). For Markov processes on an infinite time interval, the problem can often be reduced to a free-boundary (Stefan) problem, characterized by a variational inequality whose free boundary is the exercise boundary.

Classical examples include the following.
• The secretary problem: a sequence of objects which can be ranked from best to worst is observed one at a time, and the goal is a stopping rule that maximizes the probability of selecting the best object. The optimal rule rejects roughly the first $n/e$ candidates and then accepts the first candidate who is better than all previous ones; the success probability tends to $1/e \approx 0.37$. Extensions are surveyed by Freeman (The Secretary Problem and Its Extensions: A Review), and Bruss's odds algorithm (The art of a right decision: Why decision makers want to know the odds-algorithm) solves a general class of such last-success problems.
• House selling: offers $X_1, X_2, \ldots$ arrive sequentially at a cost of $k$ per period, and the seller wishes to maximize the net gain $y_n = X_n - nk$.
• The parking problem (Tamaki, An optimal parking problem): a driver must choose a free parking space as close to the destination as possible without turning around, so that the distance from the chosen space to the destination is as short as possible.
• The pricing of American options: since the holder may exercise the right to buy (or sell) the underlying asset at any time before expiry, valuation is essentially an optimal stopping problem involving the risk-free interest rate, the dividend rate, and the volatility of the stock. This theory transformed the world's financial markets and won Scholes and his colleague Robert Merton the 1997 Nobel Prize in Economics.

In economics, search theory treats a worker's search for a high-wage job, or a consumer's search for a low-priced good, as an optimal stopping problem. More recent engineering applications include opportunistic spectrum access and relay selection in cognitive radio networks (where the cost of obtaining channel state information enters the formulated problem, and schemes such as Maximal Selection Probability and Maximal Spectrum Efficiency Expectation have been proposed), distributed opportunistic scheduling in ad hoc networks, probabilistic forwarding in delay-tolerant networks, and Web cache consistency: serving the most updated version of a resource with minimal networking overhead is a challenge for weak-consistency algorithms such as the widely adopted Adaptive Time-to-Live (ATTL), and simulation results show that an optimal-stopping-based algorithm outperforms the conventional ATTL. The standard reference is Shiryaev's Optimal Stopping Rules; recent work also connects optimal stopping to martingale duality, advancing the existing LP-based interpretation of the dual pair, and to deep learning methods that can efficiently learn optimal stopping rules.
https://download.csdn.net/download/sophia7718/5036958 | visual cryptography
Visual Cryptography Schemes proposed by Naor and Shamir.
that the underlying algebraic structure is a semi-group rather than a group. In particular, the visual effect of a black subpixel in one of the transparencies cannot be undone by the colour of that subpixel in other transparencies which are laid over it. This monotonicity rules out common encryption techniques which add random noise to the cleartext during the encryption process and subtract the same noise from the ciphertext during the decryption process. It also rules out the more natural model in which a white pixel is represented by a completely white collection of subpixels and a black pixel is represented by a completely black collection of subpixels, and thus we have to use a threshold $d$ and relative difference $\alpha$ to distinguish between the colours.

Definition 2.1. A solution to the $k$ out of $n$ visual secret sharing scheme consists of two collections of $n \times m$ Boolean matrices $C_0$ and $C_1$. To share a white pixel, the dealer randomly chooses one of the matrices in $C_0$, and to share a black pixel, the dealer randomly chooses one of the matrices in $C_1$. The chosen matrix defines the colour of the $m$ subpixels in each one of the $n$ transparencies. The solution is considered valid if the following three conditions are met:
1. For any $S$ in $C_0$, the "or" $V$ of any $k$ of the $n$ rows satisfies $H(V) \leq d - \alpha m$.
2. For any $S$ in $C_1$, the "or" $V$ of any $k$ of the $n$ rows satisfies $H(V) \geq d$.
3. For any subset $\{i_1, i_2, \ldots, i_q\}$ of $\{1, 2, \ldots, n\}$ with $q < k$, the two collections of $q \times m$ matrices $D_t$ for $t \in \{0, 1\}$, obtained by restricting each $n \times m$ matrix in $C_t$ to rows $i_1, \ldots, i_q$, contain the same matrices with the same frequencies.

Condition 3 implies that by inspecting fewer than $k$ shares, even an infinitely powerful cryptanalyst cannot gain any advantage in deciding whether the shared pixel was white or black. In most of our constructions there is a function $f$ such that the combined shares from $q < k$ transparencies consist of all the $V$'s with $H(V) = f(q)$ with uniform probability distribution, regardless of whether the matrices were taken from $C_0$ or $C_1$. Such a scheme is called uniform. The first two conditions are called contrast and the third condition is called security.

The important parameters of a scheme are:
• $m$, the number of subpixels in a share. This represents the loss in resolution from the original picture to the shared one; we would like $m$ to be as small as possible.
• $\alpha$, the relative difference in weight between combined shares that come from a white pixel and from a black pixel in the original picture. This represents the loss in contrast; we would like $\alpha$ to be as large as possible.
• $r$, the size of the collections $C_0$ and $C_1$ (they need not be the same size, but in all of our constructions they are). $\log r$ represents the number of random bits needed to generate the shares and does not affect the quality of the picture.

Results: we have a number of constructions for specific values of $k$ and $n$. For general $k$ we have a construction for the $k$ out of $k$ problem with $m = 2^{k-1}$ and $\alpha = 2^{-(k-1)}$, and we have a proof of optimality of this scheme. For general $k$ and $n$ we have a construction with $m = \log n \cdot 2^{O(k \log k)}$.

[Figure 1: the six 2 × 2 arrays of subpixels used as shares.]

3 Efficient solutions for small k and n

The 2 out of n visual secret sharing problem can be solved by the following collections of $n \times n$ matrices:
$C_0$ = {all the matrices obtained by permuting the columns of the $n \times n$ matrix whose rows all equal $100\cdots0$}
$C_1$ = {all the matrices obtained by permuting the columns of the $n \times n$ identity matrix, whose $i$th row is $0^{i-1}10^{n-i}$}
Any single share in either $C_0$ or $C_1$ is a random choice of one black and $n-1$ white subpixels. Any two shares of a white pixel have a combined Hamming weight of 1, whereas any two shares of a black pixel have a combined Hamming weight of 2, which looks darker. The visual difference between the two cases becomes clearer as we stack additional transparencies.

The original problem of visual cryptography is the special case of a 2 out of 2 visual secret sharing problem. It can be solved with two subpixels per pixel, but in practice this can distort the aspect ratio of the original image. It is thus recommended to use 4 subpixels arranged in a 2 × 2 array, where each share has one of the visual forms in Figure 1. A white pixel is shared into two identical arrays from this list, and a black pixel is shared into two complementary arrays from this list. Any single share is a random choice of two black and two white subpixels, which looks medium grey. When two shares are stacked together, the result is either medium grey (which represents white) or completely black (which represents black).

The next case is the 3 out of 3 visual secret sharing problem, which is solved by the following scheme:
$C_0$ = {all the matrices obtained by permuting the columns of}
0011
0101
0110
$C_1$ = {all the matrices obtained by permuting the columns of}
1100
1010
1001
Note that the six shares described by the rows of $C_0$ and $C_1$ are exactly the six 2 × 2 arrays of subpixels from Figure 1. Each matrix in either $C_0$ or $C_1$ contains one horizontal share, one vertical share and one diagonal share. Each share contains a random selection of two black subpixels, and any pair of shares from one of the matrices contains a random selection of one common black subpixel and two individual black subpixels. Consequently, the analysis of one or two shares makes it impossible to distinguish between $C_0$ and $C_1$. However, a stack of three transparencies from $C_0$ is only 3/4 black, whereas a stack of three transparencies from $C_1$ is completely black.

The following scheme generalizes this 3 out of 3 scheme into a 3 out of n scheme for an arbitrary $n \geq 3$. Let $B$ be the black $n \times (n-2)$ matrix which contains only 1's, and let $I$ be the identity $n \times n$ matrix which contains 1's on the diagonal and 0's elsewhere. Let $BI$ denote the $n \times (2n-2)$ matrix obtained by concatenating $B$ and $I$, and let $c(BI)$ be the Boolean complement of the matrix $BI$. Then
$C_0$ = {all the matrices obtained by permuting the columns of $c(BI)$}
$C_1$ = {all the matrices obtained by permuting the columns of $BI$}
has the following properties: any single share contains an arbitrary collection of $n-1$ black and $n-1$ white subpixels; any pair of shares has $n-2$ common black and two individual black subpixels; any stacked triplet of shares from $C_0$ has $n$ black subpixels, whereas any stacked triplet of shares from $C_1$ has $n+1$ black subpixels.

The 4 out of 4 visual secret sharing problem can be solved by the shares described in Figure 2 (along with all their permutations). Any single share contains 5 black subpixels, any stacked pair of shares contains 7 black subpixels, any stacked triplet of shares contains 8 black subpixels, and any stacked quadruple of shares contains either 8 or 9 black subpixels, depending on whether the shares were taken from $C_0$ or $C_1$. It is possible to reduce the number of subpixels from 9 to 8, but then they cannot be packed into a square array without distorting their aspect ratio.

[Figure 2: the shares of the 4 out of 4 scheme.]

Finally, we describe an efficient 2 out of 6 scheme. The scheme is defined by
$C_0$ = {all the matrices obtained by permuting the columns of the $6 \times 4$ matrix whose six rows all equal $1100$}
$C_1$ = {all the matrices obtained by permuting the columns of the $6 \times 4$ matrix whose rows are the six distinct length-4 vectors of weight two: $1100, 1010, 1001, 0110, 0101, 0011$}
The scheme has contrast $1/4$: any two shares from $C_0$ cover 2 out of 4 of the subpixels, while any pair of shares from $C_1$ covers at least 3 out of 4 subpixels (some cover all four). The security of the scheme follows from the fact that in both $C_0$ and $C_1$ each share is a random subset of 2 black subpixels out of 4.

One possible generalization of this scheme to a 2 out of n scheme is to fix $m$ so that $\binom{m}{m/2} \geq n$ and consider all subsets of size $m/2$ of some ground set of size $m$. The $i$th row of $S^1$ corresponds to the $i$th subset, i.e. $S^1[i,j] = 1$ iff the $j$th element is in the $i$th subset, and $S^0$ is the $n \times m$ matrix where each row is $1^{m/2}0^{m/2}$. $C_0$ and $C_1$ are obtained from all column permutations of $S^0$ and $S^1$. The contrast achieved this way is $1/m$. As we shall see in Section 5, we can do better than that.

4 A general k out of k scheme

We now describe two general constructions which can solve any k out of k visual secret sharing problem by using $2^k$ and $2^{k-1}$ subpixels respectively. We then prove that the second construction is optimal, in that any k out of k scheme must use at least $2^{k-1}$ subpixels.

4.1 Construction 1

To define the two collections of matrices we make use of two lists of vectors. Let $J^0_1, J^0_2, \ldots, J^0_k$ be vectors of length $k$ over $GF[2]$ with the property that every $k-1$ of them are linearly independent over $GF[2]$, but the set of all $k$ vectors is not independent. Such a collection can easily be constructed, e.g. let $J^0_i = 0^{i-1}10^{k-i}$ for $1 \leq i \leq k-1$ and $J^0_k = 1^{k-1}0$. Let $J^1_1, J^1_2, \ldots, J^1_k$ be vectors of length $k$ over $GF[2]$ which are linearly independent over $GF[2]$. (This can be thought of as a first order Reed-Muller code.) Each list defines a $k \times 2^k$ matrix $S^t$ for $t \in \{0,1\}$, and the collections $C_0$ and $C_1$ are obtained by permuting the columns of the corresponding matrix in all possible ways. We index the columns of $S^t$ by vectors $x$ of length $k$ over $GF[2]$, and define $S^t[i, x] = \langle J^t_i, x \rangle$ for any $1 \leq i \leq k$, where $\langle \cdot, \cdot \rangle$ denotes the inner product over $GF[2]$.

Lemma 4.1. The above scheme is a k out of k scheme with parameters $m = 2^k$, $\alpha = 1/2^k$ and $r = 2^k!$.

Proof: In order to show contrast, note that in matrix $S^0$ there are two columns that are all zero; in the example given these are the column indexed by $x = 0^k$ and the column indexed by $x = 0^{k-1}1$. On the other hand, in $S^1$ there is only one column that is all 0, the one corresponding to $x = 0^k$. Therefore in any permutation of $S^0$ the "or" of the $k$ rows yields $2^k - 2$ ones, whereas in any permutation of $S^1$ the "or" of the $k$ rows yields $2^k - 1$ ones.

In order to show security, note that the vectors corresponding to any $k-1$ rows in both $S^0$ and $S^1$ are linearly independent over $GF[2]$. Therefore, if one considers the columns restricted to the $k-1$ chosen rows, every possible assignment to the $k-1$ entries appears exactly twice. Hence a random permutation of the columns, as is used to generate $C_0$ and $C_1$, yields the same distribution regardless of which $k-1$ rows were chosen. □

4.2 Construction 2

We now show a slightly better scheme with parameters $m = 2^{k-1}$, $\alpha = 1/2^{k-1}$ and $r = 2^{k-1}!$. Consider a ground set $W = \{e_1, e_2, \ldots, e_k\}$ of $k$ elements, let $\pi_1, \pi_2, \ldots, \pi_{2^{k-1}}$ be a list of all the subsets of $W$ of even cardinality, and let $\sigma_1, \sigma_2, \ldots, \sigma_{2^{k-1}}$ be a list of all the subsets of $W$ of odd cardinality (the order is not important). Each list defines a $k \times 2^{k-1}$ matrix: for $1 \leq i \leq k$ and $1 \leq j \leq 2^{k-1}$, let $S^0[i,j] = 1$ iff $e_i \in \pi_j$ and $S^1[i,j] = 1$ iff $e_i \in \sigma_j$. As in the construction above, the collections $C_0$ and $C_1$ are obtained by permuting all the columns of the corresponding matrix.

Lemma 4.2. The above scheme is a k out of k scheme with parameters $m = 2^{k-1}$, $\alpha = 1/2^{k-1}$ and $r = 2^{k-1}!$.

Proof: In order to show contrast, note that in matrix $S^0$ there is one column that is all zero, the one indexed by the empty set. On the other hand, in $S^1$ there is no column that is all 0. Therefore in any permutation of $S^0$ the "or" of the $k$ rows yields only $2^{k-1} - 1$ ones, whereas in any permutation of $S^1$ the "or" of the $k$ rows yields $2^{k-1}$ ones.

In order to show security, note that if one examines any $k-1$ rows in either $S^0$ or $S^1$, the structure discovered is the same: every possible assignment to the chosen $k-1$ entries appears in exactly the same number of columns. Hence, as in the proof of Lemma 4.1, a random permutation of the columns yields the same distribution regardless of which $k-1$ rows were chosen. □

4.3 Upper bound on α

We show that $\alpha$ must be exponentially small as a function of $k$ and, in fact, obtain the tight bound $\alpha \leq 2^{-(k-1)}$. The key combinatorial fact used is the following (see [5,6]): given two sequences of sets $A_1, A_2, \ldots, A_k$ and $B_1, B_2, \ldots, B_k$ over some ground set $G$ such that for every subset $U \subseteq \{1, \ldots, k\}$ of size at most $k-1$ we have $|\bigcap_{i \in U} A_i| = |\bigcap_{i \in U} B_i|$, then $\big|\,|\bigcup_{i=1}^k A_i| - |\bigcup_{i=1}^k B_i|\,\big| \leq |G|/2^{k-1}$. In other words, if the intersections of the $A_i$'s and $B_i$'s agree in size for all subsets of fewer than $k$ elements, then the difference in the unions cannot be too large.

Consider now a k out of k scheme $\mathcal{C}$ with parameters $m$, $\alpha$ and $r$, and let the two collections be $C_0$ and $C_1$. We construct from the collections two sequences of sets $A_1, A_2, \ldots, A_k$ and $B_1, B_2, \ldots, B_k$. The ground set is of size $m \cdot r$ and its elements are indexed by $(x, y)$ where $1 \leq x \leq r$ and $1 \leq y \leq m$. Element $(x, y)$ is in $A_i$ iff the $x$th matrix of $C_0$ has a 1 in entry $(i, y)$, and in $B_i$ iff the $x$th matrix of $C_1$ has a 1 there. We claim that for any $U \subseteq \{1, \ldots, k\}$ of size $q < k$ the equality $|\bigcap_{i \in U} A_i| = |\bigcap_{i \in U} B_i|$ holds: the security condition of $\mathcal{C}$ implies that we can construct a 1-1 mapping between the $q \times m$ matrices obtained by considering only the rows corresponding to $U$ in $C_0$ and those of $C_1$, such that matched matrices are identical. (Strictly speaking, the security condition is not strong enough to imply this, but any scheme can be converted into one with this property without changing $\alpha$ and $m$.) The contribution of each member of a pair of matched matrices to the two intersection sizes is then identical. Applying the combinatorial fact mentioned above yields that for at least one matrix in $C_1$ and one matrix in $C_0$, the difference between the Hamming weights of the "or" of their rows is at most $m/2^{k-1}$. Hence we have:

Theorem 4.3. In any k out of k scheme, $\alpha \leq 2^{-(k-1)}$ and $m \geq 2^{k-1}$.

5 A general k out of n scheme

In this section we construct a k out of n scheme; what we show is how to go from a k out of k scheme to a k out of n scheme. Let $\mathcal{C}$ be a k out of k visual secret sharing scheme with parameters $m$, $\alpha$, $r$, consisting of two collections of $k \times m$ Boolean matrices $C_0 = \{T^0_1, \ldots, T^0_r\}$ and $C_1 = \{T^1_1, \ldots, T^1_r\}$. Furthermore, assume the scheme is uniform, i.e. there is a function $f(q)$ such that for any matrix $T^t_i$ with $t \in \{0,1\}$ and $1 \leq i \leq r$, and for every $q < k$ rows of $T^t_i$, the Hamming weight of the "or" of the $q$ rows is $f(q)$. Note that all our previous constructions have this property. Let $H$ be a collection of $\ell$ functions such that:
1. For all $h \in H$ we have $h : \{1, \ldots, n\} \to \{1, \ldots, k\}$.
2. For all subsets $B \subseteq \{1, \ldots, n\}$ of size $k$ and for all $1 \leq q \leq k$, the probability that a randomly chosen $h \in H$ yields $q$ different values on $B$ is the same; denote this probability by $\beta_q$.

We construct from $\mathcal{C}$ and $H$ a k out of n scheme $\mathcal{C}'$ as follows. The ground set is $\{1, \ldots, m\} \times H$ (i.e. it is of size $m \cdot \ell$, and we consider its elements as indexed by a member of $\{1, \ldots, m\}$ and a member of $H$). The matrices are indexed by vectors $(t_1, t_2, \ldots, t_\ell)$ where each $1 \leq t_h \leq r$; the matrix $S^b_t$ for $t = (t_1, \ldots, t_\ell)$ and $b \in \{0,1\}$ is defined by $S^b_t[i, (j, h)] = T^b_{t_h}[h(i), j]$.

Lemma 5.1. If $\mathcal{C}$ is a scheme with parameters $m$, $\alpha$, $r$, then $\mathcal{C}'$ is a scheme with parameters $m \cdot \ell$, $\alpha \cdot \beta_k$, $r^\ell$.

Proof: In order to show contrast, note that for any $k$ rows in a matrix $S^b_t$ and any $h \in H$, if the subset corresponding to the $k$ rows is mapped to $q < k$ different values by $h$, then by the assumption of uniformity the weight of the "or" of the corresponding rows is $f(q)$. The difference between white pixels and black pixels occurs only when $h$ is 1-1 on the $k$ rows, which happens for a fraction $\beta_k$ of the $h \in H$, and the difference is $\alpha \cdot m$ in this case. Therefore the Hamming weight of an "or" of $k$ rows of a white pixel is at most $\ell\big(\beta_k(d - \alpha m) + \sum_{q<k} \beta_q f(q)\big)$ and the weight of a black pixel is at least $\ell\big(\beta_k d + \sum_{q<k} \beta_q f(q)\big)$, which means that the relative difference between them is at least $\beta_k \alpha$.

In order to see the security of the scheme, note that we are essentially repeating the scheme $\mathcal{C}$ $\ell$ times, where each instance is independent of all other instances. Therefore from the security of $\mathcal{C}$ we get the security of $\mathcal{C}'$. □

5.1 Construction of H

One can construct $H$ from a collection of $k$-wise independent hash functions (see e.g. [3,4,9]). Suppose that $H$ is such that for any $k$ values $x_1, x_2, \ldots, x_k \in \{1, \ldots, n\}$ the $k$ random variables $X_1 = h(x_1), X_2 = h(x_2), \ldots, X_k = h(x_k)$, for a randomly chosen $h \in H$, are completely independent. Since they are independent, the probability that they yield $q$ different values is the same no matter what $x_1, \ldots, x_k$ are. For a concrete example, assume that $k$ is a prime (otherwise we have to deal with its factors) and let $c$ be such that $k^c \geq n$. The family $H$ is based on the set of polynomials of degree $k-1$ over $GF[k^c]$: for every $h \in H$ there is a corresponding polynomial $p(x)$, and $h(x) = p(x) \bmod k$. The size of $H$ is about $n^k$. The probability $\beta_k$ that a random $h$ is 1-1 on a set of $k$ elements is $\frac{k!}{k^k} \approx \frac{(k/e)^k\sqrt{2\pi k}}{k^k} = \frac{\sqrt{2\pi k}}{e^k}$. We can therefore conclude by applying Lemma 5.1:

Theorem 5.2. For any $n$ and $k$ there exists a visual secret sharing scheme with parameters $m = 2^{k-1}\ell$, $\alpha \approx \frac{2\sqrt{2\pi k}}{(2e)^k}$ and $r = (2^{k-1}!)^\ell$, where $\ell = |H|$.

5.2 Relaxing the conditions on H

Suppose now that we relax Condition 2 in the definition of $H$ as follows: there exists an $\epsilon$ such that for all subsets $B \subseteq \{1, \ldots, n\}$ of size $k$ and for all $1 \leq q \leq k$, the probability that a randomly chosen $h \in H$ yields $q$ different values on $B$ is the same to within $\epsilon$. As we shall see, this leeway allows for much smaller $H$'s. Taking $\epsilon$ to be small, say smaller than $\alpha\beta_k/4$, cannot make a big difference in the quality of our construction: the Hamming weight of an "or" of $k$ rows of a white pixel is at most $\ell\big((\beta_k+\epsilon)(d - \alpha m) + \sum_{q<k}(\beta_q+\epsilon) f(q)\big)$ and the weight of a black pixel is at least $\ell\big((\beta_k-\epsilon) d + \sum_{q<k}(\beta_q-\epsilon) f(q)\big)$, so the relative difference between black and white decreases by at most a term of order $\epsilon$. Note that the security of the scheme is not affected at all, since fewer than $k$ shares never map to $k$ different values.

Construction of relaxed H: we use small-bias probability spaces to construct such a relaxed family (see [2,3] for definitions and constructions). A probability space with random variables that are $\epsilon$-biased is an approximation to a probability space with completely independent random variables, in that the bias (i.e. the difference between the probability that the parity of any fixed subset of the variables is 0 and the probability that it is 1) is bounded by $\epsilon$ (as opposed to 0 under complete independence). Similarly, a probability space which is $k$-wise $\epsilon$-biased is an approximation to a $k$-wise independent probability space. Assume that $k$ is a power of 2, and let $R$ be a $k\log k$-wise $\delta$-biased probability space on $n \log k$ random variables taking values in $\{0,1\}$, indexed as $Y_{i,j}$ for $1 \leq i \leq n$ and $1 \leq j \leq \log k$. There are explicit constructions of such probability spaces of size $2^{O(k \log k)} \log n$ (see [8],[1]). Each function $h$ corresponds to a point in the probability space: $h(i)$ is the value of $Y_{i,1}, Y_{i,2}, \ldots, Y_{i,\log k}$ treated as a number between 0 and $k-1$. It can be shown that for all $x_1, \ldots, x_k \in \{1, \ldots, n\}$ and all $v_1, \ldots, v_k \in \{0, \ldots, k-1\}$, the joint probability $\mathrm{Prob}[h(x_1) = v_1, h(x_2) = v_2, \ldots, h(x_k) = v_k]$ is within a small bias of the fully independent value $k^{-k}$ …
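To make the basic 2 out of 2 scheme above concrete, here is a small sketch that shares single pixels using the 2 × 2 subpixel patterns described in Section 3 (the tuple representation of patterns is an illustrative choice, not from the paper):

```python
import random

# The six 2x2 patterns (flattened row-major), each with two black (1) and
# two white (0) subpixels: horizontal, vertical and diagonal pairs.
PATTERNS = [
    (1, 1, 0, 0), (0, 0, 1, 1),   # horizontal
    (1, 0, 1, 0), (0, 1, 0, 1),   # vertical
    (1, 0, 0, 1), (0, 1, 1, 0),   # diagonal
]

def complement(p):
    return tuple(1 - v for v in p)

def share_pixel(bit):
    """2-out-of-2 VCS for one pixel: white (0) -> two identical patterns,
    black (1) -> two complementary patterns."""
    p = random.choice(PATTERNS)
    return (p, p) if bit == 0 else (p, complement(p))

def stack(p1, p2):
    """Stacking transparencies is a subpixel-wise OR."""
    return tuple(a | b for a, b in zip(p1, p2))

if __name__ == "__main__":
    for bit in (0, 1):
        s1, s2 = share_pixel(bit)
        v = stack(s1, s2)
        print(bit, s1, s2, "stacked weight:", sum(v))  # 2 for white, 4 for black
```

Each individual share is a uniformly random pattern with two black subpixels, so it reveals nothing about the pixel, while the stacked weight cleanly separates white (medium grey) from black (fully black).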
http://mathhelpforum.com/advanced-algebra/135431-hartshorne-question.html | # Math Help - Hartshorne Question
1. ## Hartshorne Question
The functor $X \rightarrow A(X)$ induces an arrow-reversing equivalence of categories between the category of affine varieties over $k$ and the category of finitely generated integral domains over $k$.
My textbook (Hartshorne) says that this is a corollary to the following proposition:
Let $X$ be any variety and let $Y$ be an affine variety. Then there is a natural bijective mapping of sets
$\alpha : \text{Hom}(X, Y) \overset{\sim}{\rightarrow} \text{Hom}(A(Y), \mathcal{O}(X))$
where the left $\text{Hom}$ means morphisms of varieties, and the right $\text{Hom}$ means homomorphisms of $k$-algebras.
I do not see how to apply the proposition to prove this corollary. I would appreciate advice on this. Thanks.
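A sketch of the standard argument, added here for completeness and not part of the original thread (it assumes the usual identification $\mathcal{O}(X) \cong A(X)$ for affine $X$ from Hartshorne's Chapter I):

```latex
% Sketch (standard argument, hedged): how the proposition yields the corollary.
% For X and Y both affine varieties, O(X) is isomorphic to A(X), so the
% proposition's natural bijection specializes to
\[
  \mathrm{Hom}(X, Y) \;\xrightarrow{\;\sim\;}\; \mathrm{Hom}(A(Y), \mathcal{O}(X))
  \;\cong\; \mathrm{Hom}(A(Y), A(X)),
\]
% i.e. the contravariant functor X -> A(X) is fully faithful.
% Essential surjectivity: any finitely generated integral domain B over k
% can be written as
\[
  B \;\cong\; k[x_1, \dots, x_n]/\mathfrak{p}, \qquad \mathfrak{p}\ \text{prime},
\]
% hence B is isomorphic to A(Y) for the affine variety Y = Z(p) in affine
% n-space. Together these give the arrow-reversing equivalence of categories.
```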
https://talkstats.com/threads/prediction-from-predicted-residual-values-compare-to-standard-error-of-the-estimate.67054/#post-194372 | # Prediction from predicted/residual values compare to standard error of the estimate
#### yumi
##### New Member
Hello.
I have to indicate how good prediction is, by looking at the actual, predicted, and residual values, compare to the standard error of the estimate.
I understand that a smaller standard error of the estimate means a more accurate, better prediction. But when the residual score of 27.82 is fairly close to the standard error of the estimate of 34.45, is that still a good prediction? The actual value is 10. I'm confused.
THANK YOU for your assistance!
#### Junes
##### Member
Re: Prediction from predicted/residual values compare to standard error of the estimate
Hi, welcome. I'm not sure I understand your question entirely, but I'll try to help you on your way.
The residual of 27.8 is for that particular point. It shows the error of your model for that case. So, whatever you are trying to predict for the United States, your model undershoots by 27.8.
The standard error of the estimate is a summary statistic for all residuals. It's the standard deviation of the residuals:
$$s = \sqrt{\frac{\sum(Y - Y')^2}{N-2}}$$
Where $$s$$ is the SEE, $$Y'$$ is the predicted value and $$Y$$ is the actual value (each $$Y - Y'$$ is one residual). Note that we have to divide by $$N-2$$ instead of $$N$$ because it's a sample estimate (unless of course you are actually dealing with a population). For more info, see here.
It has the same units as your residuals, and usually you can think of the standard error of the estimate as a "typical" or "average" residual (though it's not a mean in the mathematical sense). Some individual residuals may be higher, some may be lower.
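A quick numerical sketch of that formula (the data values below are made up purely for illustration):

```python
import math

def standard_error_of_estimate(actual, predicted):
    """SEE = sqrt(sum of squared residuals / (N - 2)) for a simple regression."""
    residuals = [y - yp for y, yp in zip(actual, predicted)]
    n = len(residuals)
    return math.sqrt(sum(r * r for r in residuals) / (n - 2))

if __name__ == "__main__":
    actual    = [10.0, 52.0, 35.0, 80.0, 41.0]   # illustrative values only
    predicted = [37.8, 45.0, 40.0, 60.0, 50.0]
    print("residuals:", [round(a - p, 2) for a, p in zip(actual, predicted)])
    print(f"SEE = {standard_error_of_estimate(actual, predicted):.2f}")
```

Note that an individual residual near the SEE is simply a fairly typical error for that model, neither unusually good nor unusually bad.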
https://www.cell.com/biophysj/fulltext/S0006-3495%2809%2906099-8 | Article| Volume 98, ISSUE 7, P1099-1108, April 07, 2010
# Switching and Growth for Microbial Populations in Catastrophic Responsive Environments
Open Archive
## Abstract
Phase variation, or stochastic switching between alternative states of gene expression, is common among microbes, and may be important in coping with changing environments. We use a theoretical model to assess whether such switching is a good strategy for growth in environments with occasional catastrophic events. We find that switching can be advantageous, but only when the environment is responsive to the microbial population. In our model, microbes switch randomly between two phenotypic states, with different growth rates. The environment undergoes sudden catastrophes, the probability of which depends on the composition of the population. We derive a simple analytical result for the population growth rate. For a responsive environment, two alternative strategies emerge. In the no-switching strategy, the population maximizes its instantaneous growth rate, regardless of catastrophes. In the switching strategy, the microbial switching rate is tuned to minimize the environmental response. Which of these strategies is most favorable depends on the parameters of the model. Previous studies have shown that microbial switching can be favorable when the environment changes in an unresponsive fashion between several states. Here, we demonstrate an alternative role for phase variation in allowing microbes to maximize their growth in catastrophic responsive environments.
## Introduction
Microbial cells often exhibit reversible stochastic switching between alternative phenotypic states, resulting in a heterogeneous population. This is known as phase variation (van der Woude and Bäumler, Phase and antigenic variation in bacteria; van der Woude, Re-examining the role and random nature of phase variation; Henderson, Owen, and Nataro, Molecular switches—the ON and OFF of bacterial phase variation). A variety of molecular mechanisms can lead to phase variation, including DNA inversion, DNA methylation, and slipped strand mispairing (van der Woude and Bäumler; van der Woude). These are generally two-state systems without any underlying multistability (Visco, Allen, and Evans, Exact solution of a model DNA-inversion genetic switch with orientational control; Visco, Allen, and Evans, Statistical physics of a model binary genetic switch with linear feedback); however, bistable genetic regulatory networks can also lead to stochastic phenotypic switching (Ptashne, A Genetic Switch, Phage λ and Higher Organisms; Novick and Weiner, Enzyme induction as an all-or-none phenomenon; Carrier and Keasling, Investigating autocatalytic gene expression systems through mechanistic modeling; Warren and ten Wolde, Chemical models of genetic toggle switches). The biological function of phase variation remains unclear, but it has been suggested that it can allow microbes to evade host immune responses, or to access a wider range of host cell receptors (Henderson, Owen, and Nataro; Hallet, Playing Dr Jekyll and Mr Hyde: combined mechanisms of phase variation in bacteria). Theoretical work has focused on phase variation as a mechanism for coping with environmental changes. According to this hypothesis, a fraction of the population is maintained in a state which is currently less favorable, but which acts as an insurance policy against future environmental changes (Seger and Brockman, What is bet-hedging?).
In this article, we present a theoretical model for switching cells growing in an environment which occasionally makes sudden attacks on the microbial population. Viewing the situation from the perspective of the microbes, we term these catastrophes. These catastrophes affect only one phenotypic state. Importantly, the environment is responsive: the catastrophe rate depends on the microbial population. By solving the model analytically, we find that there are two favored tactics for microbial populations in environments with a given feedback function: keep all the population in the fast growing state, regardless of the environmental response, or alternatively, use switching to maintain a population balance that reduces the likelihood of an environmental response. Which of these strategies is optimal depends on the parameters of the model. In the absence of any feedback between the population and environment, phase variation is always unfavorable. However, as the environment becomes more responsive, switching can be advantageous.
Previous theoretical studies have considered models in which the environment flips randomly or periodically between several different states, each favoring a particular phenotype. The case of two environmental states and two phenotypes has been well studied (Lachmann and Jablonka, 1996; Ishii et al., 1989; Thattai and van Oudenaarden, 2004; Gander et al., 2007; Ribeiro, 2008; Wolf et al., 2005). This work has shown that the total growth rate of the population can be enhanced by phenotypic switching (compared to no switching) for some parameter regimes, and that the optimum switching rate is tuned to the environmental flipping rate. Several studies have also compared random switching to a strategy where cells detect and respond to environmental changes. Wolf et al. (2005) used simulations to show that in this case the advantage of random switching depends on the accuracy of environmental sensing, whereas in a theoretical study Kussell and Leibler (2005) showed that the advantages of random switching depend on the cost of environmental sensing, for a model with n phenotypic states and n different environments. The predictions of the two-environment, two-phenotypic-state model have recently been verified experimentally with a tunable genetic switch in the yeast Saccharomyces cerevisiae (Acar et al., 2008).
Here, we consider a different scenario to the above-mentioned body of work. Rather than considering multiple environmental states, our model has a single environment, which undergoes occasional, sudden, and instantaneous catastrophes. We assume that the more slowly growing microbial phenotypic state is resistant to these catastrophes. Catastrophic events are likely to be a common feature of microbial population dynamics in nature. For example, microbes infecting an animal host may be subject to sudden flushing due to diarrhea or urination, to which they may be resistant if they are able to attach to the wall of the host's intestinal or urinary tract. Another example of a catastrophe might be sudden exposure of a population to antibiotics: here, cells that are in the nongrowing persister state survive, although others are killed (Balaban et al., 2004; Kussell et al., 2005). We do not, however, aim to model a specific biological case, but rather to construct a generic model leading to general conclusions.
Importantly, and in contrast to previous models, we include in our model feedback between the microbial population and the environment: the probability of a catastrophe depends on the state of the population. Although our model is very general, many examples exist in nature in which environmental responses are triggered by characteristics of a growing microbial population, the most obvious perhaps being a host immune response (Mulvey, 2002). Our work leads us to propose an alternative possible role for phase variation, to our knowledge not considered in previous theoretical work: we find that in responsive catastrophic environments, switching can allow the population to maximize its growth rate while minimizing the environmental response.
The article is organized as follows. We first present our model, and then derive an analytical result for the steady-state statistics of the model, which we use in the subsequent section to predict the optimal strategies for microbial growth as a function of the model parameters. Finally, we present our conclusions.
## Model
We consider two microbial subpopulations A and B, representing two different phenotypic states. Between catastrophes, microbes in these subpopulations grow exponentially at rates γA and γB, and switch between states with rates kA and kB (A to B and B to A, respectively). However, this growing regime can be ended suddenly by a catastrophe, which consists of a sharp decrease in the size of the A subpopulation. After the catastrophe, the population dynamics restarts.
Between catastrophes, the dynamics of the numbers of microbes nA and nB in the two subpopulations are defined by the following system of differential equations:
$\frac{dn_A}{dt}=\gamma_A n_A+k_B n_B-k_A n_A,$
(1a)
$\frac{dn_B}{dt}=\gamma_B n_B+k_A n_A-k_B n_B.$
(1b)
This description assumes that the population sizes nA and nB are large enough to be considered as continuous variables. We assume that γA > γB, which means that the A subpopulation proliferates faster than the B subpopulation.
Whenever a catastrophe takes place, the population size nA drops instantaneously to some new value n′A < nA, with a probability ψ(n′A|nA). The rate at which catastrophes happen depends on the population size through an environmental response function β(nA, nB). This function characterizes the rate at which the environment responds to the growing population. The two functions β and ψ are discussed in detail at the end of this section. A typical trajectory for the sizes of the A and B subpopulations, for a particular choice of β and ψ, is shown in the top panel of Fig. 1.
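To make these dynamics concrete, the following minimal simulation sketch (an illustration, not from the original article; all parameter values are arbitrary) integrates Eqs. 1a and 1b with an Euler scheme, triggers a catastrophe in each time step with probability β(f)·dt, and draws the surviving fraction u from the power-law distribution F(x) = (α+1)xᵅ introduced below (Eq. 12):

```julia
using Random

# Illustrative parameters (not taken from the article)
γA, γB = 1.0, 0.2              # growth rates, with γA > γB
kA, kB = 0.05, 0.05            # switching rates A→B and B→A
ξ, fstar, λ = 2.0, 0.6, 0.05   # environmental response parameters (Eq. 6)
α = -0.5                       # catastrophe-strength exponent (Eq. 12), α > -1

fitness(nA, nB) = nA / (nA + nB)
β(f) = (ξ / 2) * (1 + (f - fstar) / sqrt(λ^2 + (f - fstar)^2))

function simulate(; T = 200.0, dt = 1e-3, rng = Random.default_rng())
    nA, nB = 1.0, 1.0
    ts, fs = Float64[], Float64[]
    t = 0.0
    while t < T
        # Euler step for the deterministic growth/switching dynamics (Eqs. 1a, 1b)
        dnA = (γA * nA + kB * nB - kA * nA) * dt
        dnB = (γB * nB + kA * nA - kB * nB) * dt
        nA += dnA
        nB += dnB
        # a catastrophe occurs in [t, t + dt) with probability β(f)·dt
        if rand(rng) < β(fitness(nA, nB)) * dt
            u = rand(rng)^(1 / (α + 1))   # u ~ F(x) = (α+1)xᵅ by inverse transform
            nA *= u                       # the jump n′A = u·nA
        end
        # only the population composition matters here, so rescale to avoid overflow
        s = nA + nB
        if s > 1e12
            nA /= s; nB /= s
        end
        t += dt
        push!(ts, t); push!(fs, fitness(nA, nB))
    end
    return ts, fs
end

ts, fs = simulate()   # fs traces a sawtooth fitness trajectory like Fig. 1
```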
### Fitness
As shown by Thattai and van Oudenaarden (2004), the two-variable system defined by Eqs. 1a and 1b can be replaced by a nonlinear dynamical equation for a single variable. This variable, f, is the fraction of the total population in the A state:
$f(t)=\frac{n_A}{n_A+n_B}.$
(2)
If we consider the dynamics of the total population n(t) = nA(t) + nB(t), then, from Eqs. 1a and 1b, it follows that (Thattai and van Oudenaarden, 2004)
$\frac{dn(t)}{dt}=\gamma_A n_A+\gamma_B n_B=(\gamma_B+\Delta\gamma\,f)\,n(t),$
(3)
where Δγ = γA − γB > 0. The above equation shows that f is linearly related to the instantaneous growth rate of the population (which is given by γB + Δγf). For this reason, and following Thattai and van Oudenaarden (2004), we refer to f as the population fitness.
The dynamical equation for the population fitness can be determined from Eqs. 1a and 1b, and corresponds to
$\frac{df}{dt}=v(f)=-\Delta\gamma\,(f-f_+)(f-f_-),$
(4)
where we define v(f) as the time-evolution function for the fitness, and f± are the two roots of the quadratic equation
$f^{2}-\left(1-\frac{k_A+k_B}{\Delta\gamma}\right)f-\frac{k_B}{\Delta\gamma}=0.$
(5)
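For the reader's convenience, a brief derivation of Eq. 4 (this intermediate step is spelled out here as an addition; it follows from Eqs. 1a, 1b, and 3):
$\frac{df}{dt}=\frac{1}{n}\frac{dn_A}{dt}-\frac{f}{n}\frac{dn}{dt}=\gamma_A f+k_B(1-f)-k_A f-f(\gamma_B+\Delta\gamma f)=-\Delta\gamma f^{2}+(\Delta\gamma-k_A-k_B)f+k_B,$
and factorizing the right-hand side as −Δγ(f − f+)(f − f−) yields precisely the quadratic Eq. 5 for f±.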
One can check that the smaller root takes values f− < 0, whereas the larger root takes values 0 < f+ ≤ 1. Hence, the population fitness increases toward a plateau value f+, until a catastrophe happens, upon which it is reset to a lower value. A typical time trajectory for the population fitness is plotted in the bottom panel of Fig. 1. The time evolution of f is deterministic except at some specific time points (catastrophes) where it undergoes random jumps. This model can therefore be considered to be a piecewise deterministic Markov process (Davis, 1984; Pulkkinen and Berg, 2008).
### Catastrophes
The catastrophes in our model have two characteristics: the rate at which they happen and their strength (i.e., how many microbes are killed). The rate at which catastrophes occur, or their probability per unit time, is defined by a feedback function β(f), which we take to depend only on the fitness of the population and not on the absolute population size (we shall return to this assumption later). The function β(f) characterizes the response of the environment to the growth of the population. If β = 0, there are no catastrophes and the fitness will reach the plateau value f+ and stay there forever. Nonzero constant values of β correspond to a nonresponsive environment in which the catastrophes follow Poisson statistics. We shall consider the case of a responsive environment characterized by a response function β(f), which depends on the population fitness. In particular, we consider a nonlinear response function that has a sigmoid shape. Thus, the probability per unit time of a catastrophe is very low when the population fitness is low, but increases significantly if the fitness exceeds some threshold value. This scenario might correspond to a detection threshold in the environment's sensitivity to population growth.
The precise environmental response function that we consider is
$\beta_\lambda(f)=\frac{\xi}{2}\left(1+\frac{f-f^{*}}{\sqrt{\lambda^{2}+(f-f^{*})^{2}}}\right).$
(6)
Although this function is defined over the whole range −∞ < f < ∞, the relevant interval for the fitness is 0 < f < 1. Typical shapes for this function are shown in Fig. 2. The parameter ξ is the asymptotic value of βλ when f is large, and we refer to ξ as the saturated catastrophe rate. As the population fitness f increases, βλ increases from 0 to ξ around the threshold value f*, at which βλ = ξ/2. Finally, the parameter λ determines the sharpness of the threshold. For small values of λ, the function βλ(f) approaches a step function
$\beta_0(f)=\xi\,\Theta(f-f^{*}).$
(7)
As λ increases, the function broadens and becomes linear over a range of f near f*,
$\beta_\lambda\simeq\frac{\xi}{2}\left(1+\frac{f-f^{*}}{\lambda}+O\!\left(\frac{1}{\lambda^{2}}\right)\right),$
(8)
while, when the parameter λ becomes very large (λ → ∞), βλ(f) becomes constant (independent of f), so that the catastrophes become a standard Poisson process with parameter ξ/2:
$\beta_\infty(f)=\xi/2.$
(9)
We emphasize that we have chosen this particular sigmoid function (Eq. 6) because the parameter λ allows a convenient tuning of its shape, and thus of the degree of environmental responsiveness. However, our conclusions are not affected by the particular choice of sigmoid function.
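A quick numerical check of the three regimes in Eqs. 7–9 (a sketch added for illustration; the function name and parameter values are arbitrary):

```julia
βλ(f; ξ = 1.0, fstar = 0.5, λ = 0.1) =
    (ξ / 2) * (1 + (f - fstar) / sqrt(λ^2 + (f - fstar)^2))

# λ → 0: approaches the step function ξ·Θ(f − f*) of Eq. 7
βλ(0.4; λ = 1e-8), βλ(0.6; λ = 1e-8)   # ≈ (0.0, 1.0)

# λ → ∞: approaches the constant ξ/2 of Eq. 9
βλ(0.4; λ = 1e8), βλ(0.6; λ = 1e8)     # ≈ (0.5, 0.5)
```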
We now turn to the function describing the catastrophe strength, ψ(n′A|nA). This is the probability that, given that nA cells of type A are present before the catastrophe, n′A will remain after the catastrophe. To retain our description of the model in terms of the population fitness, we shall consider that ψ depends on n′A only through the ratio n′A/nA. Then the normalization of ψ implies that
$\psi(n'_A|n_A)=\frac{1}{n_A}\,F(n'_A/n_A),$
(10)
where $\int_0^1 dx\,F(x)=1$.
When a catastrophe occurs, the population size is reduced by a random factor sampled from the distribution F (i.e., the new size n′A = nA × u, where nA is the size before the catastrophe and u is a random number, 0 ≤ u < 1, sampled from F). This allows us to associate to each jump nA → n′A a fitness jump f → f′, where f′ = n′A/(n′A + nB). The size of these jumps is distributed according to
$\mu(f'|f)=\Theta(f-f')\,F\!\left(\frac{f'(1-f)}{f(1-f')}\right)\frac{1-f}{f\,(1-f')^{2}}.$
(11)
Equation 11 can be obtained by rewriting Eq. 10 for ψ(n′A|nA) as a function of f and f′, and including the Jacobian of the transformation.
In this article, we shall consider the simple case where F(x) = (α+1)xᵅ, with α > −1. The explicit expression for ψ(n′|n) thus reads
$\psi(n'|n)=\frac{\alpha+1}{n}\left(\frac{n'}{n}\right)^{\alpha},\qquad\alpha>-1.$
(12)
This choice is made primarily to allow us to solve the model analytically: it implies that μ(f′|f) factorizes (see Eq. 13), which then allows the integral equation for the probability flux balance (Eq. 15) to be solved. Moreover, the choice of a power-law distribution for ψ(n′A|nA) is general in that it allows for increasing, decreasing, or flat functional forms. The function ψ(n′A|nA) is plotted in Fig. 3 for various values of α. For negative α-values, the distribution is biased toward far-reaching catastrophes that reduce the fitness significantly. The case α = 0 corresponds to jumps sampled from a uniform distribution, whereas positive values give a distribution biased toward weaker catastrophes. The parameter α can therefore be used to tune the strength of the catastrophes (although in this work we shall always consider negative α-values, corresponding to strong catastrophes). We note here that with our choice for F(x) the jump distribution can be expressed as
$\mu(f'|f)=\Theta(f-f')\,\frac{d}{df'}\,\frac{m(f')}{m(f)},$
(13)
where m is an increasing function with m(0) = 0 (which ensures that $\int_0^f df'\,\mu(f'|f)=1$), and with
$m(f)=\left(\frac{f}{1-f}\right)^{1+\alpha}.$
(14)
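As a consistency check (spelled out here as an addition), the form of Eq. 13 makes the normalization of the jump distribution immediate, because m is increasing with m(0) = 0:
$\int_0^f df'\,\mu(f'|f)=\int_0^f df'\,\frac{d}{df'}\frac{m(f')}{m(f)}=\frac{m(f)-m(0)}{m(f)}=1.$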
We now derive the steady-state probability distribution for the population fitness, p(f). The distribution p(f) must satisfy a condition of balance for the probability flux. This condition reads
$v(f)\,p(f)=\int_f^{f_+}df'\int_0^f df''\,\beta(f')\,p(f')\,\mu(f''|f').$
(15)
The left-hand side of the above equation corresponds to the deterministic probability flux due to population growth as defined in Eq. 4. (Note that f(t) increases in time as the population grows, as shown in Fig. 1.) The right-hand side describes the probability flux arising from catastrophes. In this model, catastrophes always reduce the population fitness. The probability flux due to catastrophes therefore contains contributions from all possible jumps that start at some f′ > f and end at some f″ < f. These contributions must be weighted by β(f′)p(f′): the probability of having fitness f′ and undergoing a catastrophe. This balance between the fluxes due to growth and catastrophes is illustrated schematically in Fig. 4.
Inserting Eq. 13 for μ(f′|f) into Eq. 15, the zero flux condition becomes
$v(f)\,p(f)=\int_f^{f_+}df'\,\beta(f')\,p(f')\,\frac{m(f)}{m(f')}.$
(16)
We now divide the above equation by m(f) and take the first derivative with respect to f. This yields, in terms of the function G = vp/m,
$\frac{dG}{df}=-\frac{\beta G}{v}.$
(17)
The above differential equation is then easily solved for G. The result for p(f), using Eq. 14 for m(f), is finally
$p(f)=\frac{C}{v(f)}\left(\frac{f}{1-f}\right)^{1+\alpha}\exp\left(-\int df\,\frac{\beta(f)}{v(f)}\right),$
(18)
where C is a normalization constant. Equation 18 is the central result of this section and gives the steady-state fitness distribution for arbitrary functions β(f) and v(f). The integral in Eq. 18 can be performed analytically for the model defined in the previous section. The result, which is rather cumbersome, is given in the Appendix.
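Equation 18 is also straightforward to evaluate numerically for any response function, which is useful for checking the analytical expression in the Appendix. A sketch (illustrative parameter values; the endpoints are clipped to avoid the integrable singularities):

```julia
using QuadGK

γA, γB, kA, kB = 1.0, 0.2, 0.05, 0.05
Δγ = γA - γB
ξ, fstar, λ, α = 2.0, 0.6, 0.05, -0.5

# roots f± of Eq. 5, written as f² + b·f + c = 0
b = -(1 - (kA + kB) / Δγ)
c = -kB / Δγ
fplus  = (-b + sqrt(b^2 - 4c)) / 2
fminus = (-b - sqrt(b^2 - 4c)) / 2

v(f) = -Δγ * (f - fplus) * (f - fminus)                          # Eq. 4
β(f) = (ξ / 2) * (1 + (f - fstar) / sqrt(λ^2 + (f - fstar)^2))   # Eq. 6

# unnormalized p(f) from Eq. 18; the lower limit of the exponent integral
# is arbitrary, since it only shifts the normalization constant C
f0 = fplus / 2
p_unnorm(f) = (f / (1 - f))^(1 + α) / v(f) *
              exp(-quadgk(x -> β(x) / v(x), f0, f)[1])

Z = quadgk(p_unnorm, 1e-6, fplus - 1e-6)[1]   # normalization constant
p(f) = p_unnorm(f) / Z
```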
We present in Fig. 5 (top panels) some resulting shapes for the probability distribution p(f) in the case λ = 0, corresponding to a step function for the environmental response. We consider two different values of Δγ, in each case for kA = 0 (no switching) and a nonzero switching rate kA = kA*, defined such that f+ = f* (see the next section). In these plots we see that singularities in p(f) can arise at f = 0, f*, or f+, in different cases.
We consider first the solid lines corresponding to kA = 0. Cusps in p(f) at f = f+ = 1 and f = 0 (as seen in the right panel) reflect a population that maximizes its fitness in between severe catastrophes that reduce f from 1 to 0; however, a cusp at f = f* (as seen in the left panel) reflects a population that suffers catastrophes soon after the fitness has crossed the threshold f*. In particular, the kA = 0 case produces a cusp at f = f* (due to the singular nature of the step function β(f) at f*) for small Δγ, and/or a divergence at f = f+ for large Δγ. On the other hand, the dotted lines (where kA = kA* and f+ = f*) produce a divergence in both right and left panels. This reflects a population that spends much of its time at a fitness just below the threshold.
Fig. 5 (bottom panels) plots trajectories of the fitness corresponding to the parameter values of the top panels. These trajectories reveal the interplay between two timescales: the time to relax to the plateau value f+ in the absence of catastrophes and the typical time between catastrophes. The former decreases with Δγ, and the latter is given by 1/ξ, where ξ is the plateau value of the response function β. A divergence of p(f) at f = f+ arises when the plateau value is typically reached before a catastrophe occurs.
## Optimal Strategies: To Switch or not to Switch?
The key question to be addressed in this work is whether random switching is advantageous to the microbial population in our model. To answer this question, we take advantage of the analytical solution Eq. 18 to investigate how the time-averaged population fitness depends on the rate kA of switching from the fast-growing state A to the slow-growing state B. We are particularly interested in the effect of the parameter λ, which controls the sharpness of the environment's response to the population.
In Fig. 6 we plot the average population fitness against kA for several values of λ. For a nonresponsive environment (i.e., in the limit of large λ, where the catastrophe rate takes the constant nonzero value ξ/2; see Eq. 9), the population fitness has only one (boundary) maximum, for switching rate kA → 0. This means that the optimal rate of population growth is achieved when the bacteria do not switch away from the fittest state A. It should be noted that we consider the limit kA → 0, so that the population always contains some small residual fraction in the unfit B state, which becomes a finite fraction of the population after a catastrophe. Subsequently, in between the catastrophes, the A subpopulation grows quickly to dominate the population and the fitness evolves toward the value f+ = 1, which follows from Eq. 5 when kA = 0.
In contrast, as the environment is made responsive by decreasing the parameter λ, a local maximum appears in the population fitness, for nonzero switching rate kA. This implies that for responsive environments, switching into the slow-growing state represents an optimal strategy for the microbes. The height of the peak at kA ≠ 0 can surpass that of the peak at kA = 0, showing that random switching can be advantageous compared to keeping the whole population in the fast-growing state, if the environment is responsive. Thus the two maxima correspond to two alternative strategies, which we term “switching” for the peak at kA = kA* and “nonswitching” for the peak at kA = 0.
To gain further insight into the meaning of these two strategies, and to determine which circumstances favor one strategy over the other, we focus on the limiting case λ = 0, where the response function is a step function with its threshold at f = f*. We assume that the environmental threshold f* is less than the maximum population fitness f+. (If this is not the case, the unrealistic situation arises where the population never has a high enough fraction of A cells to trigger any catastrophes.) Because f+ depends on the switching rate kA via Eq. 5, this condition f* < f+ implies a maximum value kA* for kA:
$k_A^{*}=\frac{(1-f^{*})(\Delta\gamma\,f^{*}+k_B)}{f^{*}}.$
(19)
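As a check (added here), setting f+ = f* in Eq. 5 and solving for kA reproduces Eq. 19:
$(f^{*})^{2}-\left(1-\frac{k_A^{*}+k_B}{\Delta\gamma}\right)f^{*}-\frac{k_B}{\Delta\gamma}=0\;\Longrightarrow\;k_A^{*}=\frac{(1-f^{*})(\Delta\gamma\,f^{*}+k_B)}{f^{*}}.$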
Fig. 7 shows two examples of how the average fitness 〈f〉 depends on kA in the range 0 to kA*. One can see that there are always two boundary maxima, located at kA = 0 and at kA = kA*; these correspond to the nonswitching and switching strategies.
We plot in Fig. 8 typical trajectories of the population fitness for the two cases corresponding to the solid circles in Fig. 7 (left panel). These trajectories have the same time-averaged population fitness, but they show very different dynamical behavior. The nonswitching strategy (kA = 0) is characterized by a fast evolution of the fitness toward its maximum f+ = 1. However, this triggers frequent catastrophes that cause sudden decreases in fitness. In contrast, for the switching strategy (kA = kA*), the fitness grows more slowly toward a plateau value at the detection threshold f*. In this way, the population reduces the frequency of catastrophes by maintaining itself in a heterogeneous state with a nonzero fraction of slower-growing cells that do not trigger catastrophes.
We next consider how the parameters of our model affect the balance between the switching and nonswitching strategies. To this end, we plot phase diagrams showing which of these two strategies achieves a higher population growth rate for a given set of parameters. Fig. 9 considers the parameters describing the microbial population: the difference Δγ in growth rate between the A and B states, and the switching rate kB from the slow-growing B state to the fast-growing A state. This diagram shows that the switching strategy is only favorable when the B state does not carry too high a cost in terms of growth rate (Δγ not too large) and when switching to the B state is unlikely to be immediately followed by a reverse switch back into the A state (kB not too large). In Fig. 10 we consider instead the parameters describing the environmental response: the detection threshold f* and the saturated catastrophe rate ξ. Here, we see that the switching strategy (i.e., attempting to avoid catastrophes) is favored when the saturated catastrophe rate ξ is high or when the threshold value f* is high (because for high thresholds the population does not have to pay a very high price in terms of B cells to avoid triggering catastrophes). For very low detection thresholds f*, lower than typical values of the fitness, the environmental response will almost always detect the population, and the environmental behavior will thus be similar to the situation of a nonresponsive environment, which corresponds to the limiting case where f* = 0. In this case, as discussed earlier, nonswitching is the optimal strategy. Fig. 10 also demonstrates the effect of changing the catastrophe strength parameter α (dashed and dotted lines). The switching strategy is favored by strong catastrophes (negative α), whereas the nonswitching strategy is more likely to be optimal for weak catastrophes (i.e., larger positive α). All this points to the conclusion that, in general, switching tends to be an advantageous strategy when the characteristics of the catastrophic environment are particularly adverse (large ξ and negative α) and when the detection threshold is not too low.
## Discussion and Further Directions
In this work we have considered the possible advantages of phase variation (random switching between phenotypic states) for a microbial population in a catastrophic responsive environment. To this end, we solved analytically for the steady-state statistics of a model which includes two microbial subpopulations that grow and switch, and a single environment which occasionally mounts catastrophic attacks on the microbial population. Importantly, the model includes feedback between the state of the population and the frequency of catastrophic events, via an environmental response function which depends on the population through its fitness, i.e., its instantaneous rate of growth. Our results show that, when the environment is responsive to the population, switching can increase the average fitness (i.e., growth rate) of the population. A general picture emerges from our work of two competing strategies for dealing with a catastrophic responsive environment: not switching, and thus maximizing the instantaneous growth rate regardless of catastrophes, versus using switching to tune the population composition so as to reduce the likelihood of catastrophes.
An important feature of this work is the fact that we are able to solve the model analytically, leading to an explicit formula for the population fitness as a function of the model parameters. To achieve this analytical result, we make a number of assumptions, the most important being that the environmental feedback depends on the instantaneous growth rate rather than on the population size. Although this is a somewhat idealized assumption, microbe-host interactions are in reality likely to be sensitive to microbial growth rate (Johri et al., 2005), since several intracellular small molecules and proteins, including ppGpp, cAMP, and H-NS, whose concentrations are growth-rate-dependent (Ferenci, 2008; Schaechter et al., 2006), have been shown to regulate microbial virulence factors (Pizarro-Cerdá and Tedin, 2004; Pesavento and Hengge, 2009; Schröder and Wagner, 2002).
The main conclusion of our work is that phase variation can provide a mechanism by which a microbial population can tune its composition so as to minimize the likely environmental response, thus increasing its average growth rate (or average fitness). The model then provides an alternative scenario for the role of phase variation to those proposed in other theoretical studies, which we now take the opportunity to review briefly.
Various works have considered models in which the environment flips randomly or periodically between several different states, each favoring a particular cell phenotype. These models do not include feedback between the population and the environmental flipping rate. For the case of two environmental states and two cellular phenotypes, Lachmann and Jablonka (1996) considered a discrete-time model with a periodic environment, whereas Ishii et al. (1989) addressed a similar problem but explicitly looked for the evolutionarily stable state. Thattai and van Oudenaarden (2004) also considered the two-environment, two-phenotype case, using a continuous-time model with Poissonian switching of the environment. A detailed analytical treatment of this case was presented by Gander et al. (2007), a simulation study was carried out by Ribeiro (2008) with a more detailed model of the phenotypic switching mechanism, and Wolf et al. (2005) simulated a model that also included environmental sensing. These studies showed that the total growth rate of the population can be enhanced by phenotypic switching (compared to no switching), for some parameter regimes, and that the optimum switching rate is tuned to the environmental flipping rate. A similar model, but aimed specifically at the case of the persister phenotype, in which cells grow very slowly but are resistant to antibiotics (Balaban et al., 2004), was considered by Kussell et al. (2005) for a periodic environment. In this model, the growth rate of the nonpersister phenotype is negative (signifying population decrease) in the antibiotic environment.
Several other studies have considered random switching in a different context: as a means to avoid the need for sensing and responding to environmental changes, in the case that environmental sensing is inaccurate, faulty, or expensive. In this context, Kussell and Leibler (2005) considered theoretically a model with many environments and many cellular states, where a cost is attached to sensing environmental changes, whereas Wolf et al. (2005) simulated a two-state, two-environment model where sensing was subject to a variety of possible defects. Both these studies concluded that random switching can be a good strategy to overcome disadvantages associated with environmental sensing.
In a somewhat different approach, Wolf et al. (2005) used simulations to study a two-state, two-environment model in which the growth rate of the A and B states is frequency-dependent—i.e., a given microbial subpopulation grows faster when its abundance is low. Such frequency-dependent selection is well known to promote population heterogeneity; however, Wolf et al. did not find any advantages for reversible switching as a means to generate this heterogeneity, as opposed to terminal cellular differentiation. In a sense, the model presented in this article also incorporates frequency-dependent selection, because catastrophes are less likely when the A subpopulation is small. However, in contrast to Wolf et al., we find that reversible switching does play an important role. If switching in our model were not reversible, there would be no way for the fast-growing A subpopulation to regenerate from the surviving B cells after a catastrophe.
Although the majority of theoretical work in this area, including that presented in this article, has focused on the interplay between cellular switching and environmental changes, this is not the only perspective from which the role of phase variation can be viewed. For example, an alternative scenario, which does not require a changing environment, was recently presented by Ackermann et al. (2008). These authors showed that random switching into a self-sacrificing phenotypic state can be evolutionarily favored if the individuals in that state have, on average, greater access to some beneficial resource. This idea raises a number of interesting questions which we hope to pursue in future research.
Finally, we note that the theoretical framework developed in this work, although applied here to the case of detrimental and instantaneous catastrophes, could also be used to model environmental changes more generally. For example, in the symmetric two-state, two-environment model considered by Thattai and van Oudenaarden (2004) and others, the environment flips randomly between two states and these flips are accompanied by a change in fitness from f to 1 − f. This could be incorporated in our theoretical framework by setting β(f) to a constant value and the jump distribution μ(f′|f) to
$\mu(f'|f)=\delta\bigl(f'-(1-f)\bigr).$
(20)
However, such a choice of μ(f′|f) would result in fundamentally different conclusions to those of this study, because the fitness in the model of Thattai and van Oudenaarden (and in other similar models) is not necessarily decreased when the environment changes. In fact, if a large fraction of the cells is in the slow-growing state before the environment flips, so that f < 1/2, then the environmental change will actually increase the fitness of the population. In contrast, in this work, all catastrophes are detrimental, and the advantage of switching lies in avoiding the triggering of an environmental response.
This study suggests a number of avenues for further work. First, it would be useful to check the robustness of the results to changes in the choice of catastrophe distributions. Here we have adopted the power law (Eq. 12), which allows the exact solution of the model and generates a broad range of catastrophe sizes. Such a distribution could be justified in the context of an antibiotic environment, as representing the dose-response variability of antimicrobials (Nightingale et al., 2007) and variability in the dosage. One could also explore other distributions, such as exponentially distributed catastrophes or those centered about some particular catastrophe fraction f′ = af with a < 1. It remains to be determined which choice is most biologically relevant in different contexts.
Another point that deserves investigation in future work is the relation between the choice of switching strategy and the variability in the population fitness. For example, in Fig. 5 one can see that the different strategies give very different widths for the fitness distribution p(f). In this work we defined the optimal strategy as that which gives the maximal average growth of the population. However, it might also be relevant to include fitness fluctuations in the criteria for optimality.
It is also important to consider the case where the environmental response depends on the absolute size of a particular subpopulation. Here, we expect that the population size may reach a steady state governed by the balance between growth and catastrophes. The total population size could then be maximized either by maximizing the growth rate, regardless of catastrophes, or by tuning the population composition to avoid triggering catastrophes. We thus expect that the two strategies identified in this work will prove to be relevant to a variety of models. Moreover, we note that the distinction between models based on growth rate and those based on population size may vanish for scenarios with constant population size, such as chemostat cultures (Ingraham et al., 1983). Equally interesting are the prospects for including spatial effects, such as adhesion to host surfaces, or transfer between different environmental compartments, in the model, and for generalizing the model to include many different microbial states, in which case the same theoretical framework could perhaps be used to describe genetic evolution of microbial populations in catastrophic responsive environments.
## Appendix: Explicit form of p(f)
Below we give the explicit form for the integral appearing in Eq. 18 when β(f) is given by Eq. 6,
$\int df\,\frac{\beta(f)}{v(f)}=\frac{\xi}{2\Delta\gamma\,\Delta f}\log\Biggl\{\frac{f-f_-}{f_+-f}\left[\frac{2\Delta f\bigl((f^{*}-f)(f^{*}-f_-)+\lambda^{2}+g(f,f^{*})\,g(f_-,f^{*})\bigr)}{(f-f_-)(f^{*}-f_-)\,g(f_-,f^{*})}\right]^{\tfrac{f^{*}-f_-}{g(f_-,f^{*})}}\times\left[\frac{2\Delta f\bigl((f-f^{*})(f_+-f^{*})+\lambda^{2}+g(f,f^{*})\,g(f_+,f^{*})\bigr)}{(f_+-f)(f_+-f^{*})\,g(f_+,f^{*})}\right]^{\tfrac{f_+-f^{*}}{g(f_+,f^{*})}}\Biggr\},$
(21)
where Δf = f+ − f− and
$g(a,b)=\sqrt{(a-b)^{2}+\lambda^{2}}.$
(22)
From this result, the explicit expression for the fitness distribution function p(f) can be easily derived.
The authors are grateful to David Gally and Otto Pulkkinen for useful discussions.
R.J.A. was funded by the Royal Society. This work was supported by the Engineering and Physical Sciences Research Council under grant No. EP/E030173.
## References
• van der Woude M.W.
• Bäumler A.J.
Phase and antigenic variation in bacteria.
Clin. Microbiol. Rev. 2004; 17: 581-611
• van der Woude M.W.
Re-examining the role and random nature of phase variation.
FEMS Microbiol. Lett. 2006; 254: 190-197
• Henderson I.R.
• Owen P.
• Nataro J.P.
Molecular switches—the ON and OFF of bacterial phase variation.
Mol. Microbiol. 1999; 33: 919-932
• Visco P.
• Allen R.J.
• Evans M.R.
Exact solution of a model DNA-inversion genetic switch with orientational control.
Phys. Rev. Lett. 2008; 101: 118104
• Visco P.
• Allen R.J.
• Evans M.R.
Statistical physics of a model binary genetic switch with linear feedback.
Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 2009; 79: 031923
• Ptashne M.
A Genetic Switch, Phage λ and Higher Organisms.
2nd Ed. Blackwell, Cambridge, New York, 1992
• Novick A.
• Weiner M.
Enzyme induction as an all-or-none phenomenon.
Proc. Natl. Acad. Sci. USA. 1957; 43: 553-566
• Carrier T.A.
• Keasling J.D.
Investigating autocatalytic gene expression systems through mechanistic modeling.
J. Theor. Biol. 1999; 201: 25-36
• Warren P.B.
• ten Wolde P.R.
Chemical models of genetic toggle switches.
J. Phys. Chem. B. 2005; 109: 6812-6823
• Hallet B.
Playing Dr Jekyll and Mr Hyde: combined mechanisms of phase variation in bacteria.
Curr. Opin. Microbiol. 2001; 4: 570-581
• Seger J.
• Brockman H.
What is bet-hedging?.
in: Oxford Surveys in Evolutionary Biology. Oxford University Press, Cambridge, UK, 1987
• Lachmann M.
• Jablonka E.
The inheritance of phenotypes: an adaptation to fluctuating environments.
J. Theor. Biol. 1996; 181: 1-9
• Ishii K.
• Matsuda H.
• Sasaki A.
• et al.
Evolutionarily stable mutation rate in a periodically changing environment.
Genetics. 1989; 121: 163-174
• Thattai M.
• van Oudenaarden A.
Stochastic gene expression in fluctuating environments.
Genetics. 2004; 167: 523-530
• Gander M.J.
• Mazza C.
• Rummler H.
Stochastic gene expression in switching environments.
J. Math. Biol. 2007; 55: 259-294
• Ribeiro A.S.
Dynamics and evolution of stochastic bistable gene networks with sensing in fluctuating environments.
Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 2008; 78: 061902
• Wolf D.M.
• Vazirani V.V.
• Arkin A.P.
Diversity in times of adversity: probabilistic strategies in microbial survival games.
J. Theor. Biol. 2005; 234: 227-253
• Kussell E.
• Leibler S.
Phenotypic diversity, population growth, and information in fluctuating environments.
Science. 2005; 309: 2075-2078
• Acar M.
• Mettetal J.T.
• van Oudenaarden A.
Stochastic switching as a survival strategy in fluctuating environments.
Nat. Genet. 2008; 40: 471-475
• Balaban N.Q.
• Merrin J.
• Leibler S.
• et al.
Bacterial persistence as a phenotypic switch.
Science. 2004; 305: 1622-1625
• Kussell E.
• Kishony R.
• Leibler S.
• et al.
Bacterial persistence: a model of survival in changing environments.
Genetics. 2005; 169: 1807-1814
• Mulvey M.A.
Adhesion and entry of uropathogenic Escherichia coli.
Cell. Microbiol. 2002; 4: 257-271
• Davis M.H.A.
Piecewise-deterministic Markov processes: a general class of non-diffusion stochastic models.
J. R. Stat. Soc. B. 1984; 46: 353-388
• Pulkkinen, O., and J. Berg. Dynamics of gene expression under feedback. arXiv:0807.3521, 2008.
• Johri A.K.
• Patwardhan V.
• Paoletti L.C.
Growth rate and oxygen regulate the interactions of group B Streptococcus with polarized respiratory epithelial cells.
Can. J. Microbiol. 2005; 51: 283-286
• Ferenci T.
Bacterial physiology, regulation and mutational adaptation in a chemostat environment.
Adv. Microb. Physiol. 2008; 53: 169-229
• Schaechter M.
• Ingraham J.L.
• Neidhardt F.C.
Microbes.
ASM Press, Washington, DC, 2006
• Pizarro-Cerdá J.
• Tedin K.
The bacterial signal molecule, ppGpp, regulates Salmonella virulence gene expression.
Mol. Microbiol. 2004; 52: 1827-1844
• Pesavento C.
• Hengge R.
Bacterial nucleotide-based second messengers.
Curr. Opin. Microbiol. 2009; 12: 170-176
• Schröder O.
• Wagner R.
The bacterial regulatory protein H-NS—a versatile modulator of nucleic acid structures.
Biol. Chem. 2002; 383: 945-960
• Wolf D.M.
• Vazirani V.V.
• Arkin A.P.
A microbial modified prisoner's dilemma game: how frequency-dependent selection can lead to random phase variation.
J. Theor. Biol. 2005; 234: 255-262
• Ackermann M.
• Stecher B.
• Doebeli M.
• et al.
Self-destructive cooperation mediated by phenotypic noise.
Nature. 2008; 454: 987-990
• Nightingale C.H.
• Ambrose P.G.
• Drusano G.L.
Antimicrobial Pharmacodynamics in Theory and Clinical Practice.
2nd Ed. Informa Healthcare, New York, 2007
• Ingraham J.L.
• Maaloe O.
• Neidhardt F.C.
Growth of the Bacterial Cell.
Sinauer Associates, Sunderland, MA, 1983
https://arbital.greaterwrong.com/p/even_signed_permutations_form_a_group?l=4hg | # The collection of even-signed permutations is a group
The collection of elements of the symmetric group $$S_n$$ which are made by multiplying together an even number of transpositions forms a subgroup of $$S_n$$.
This proves that the alternating group $$A_n$$ is well-defined, if it is given as “the subgroup of $$S_n$$ consisting precisely of those permutations that can be made by multiplying together an even number of transpositions”.
# Proof
Firstly we must check that “can be made by multiplying together an even number of transpositions” is a well-defined notion; this is in fact true, because the parity of the number of transpositions in any decomposition of a given permutation is an invariant of that permutation.
We must check the group axioms.
• Identity: the identity is simply the product of no transpositions, and $$0$$ is even.
• Associativity is inherited from $$S_n$$.
• Closure: if we multiply together an even number of transpositions, and then a further even number of transpositions, we obtain in total an even number of transpositions, since the sum of two even numbers is even (a concrete instance is given after this list).
• Inverses: if $$\sigma$$ is made of an even number of transpositions, say $$\tau_1 \tau_2 \dots \tau_m$$, then its inverse is $$\tau_m \tau_{m-1} \dots \tau_1$$, since a transposition is its own inverse.
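A concrete instance of the closure property (an illustrative example, added here): in $$S_4$$, multiplying the even permutations $$(1\ 2)(3\ 4)$$ and $$(1\ 3)(2\ 4)$$ gives $$(1\ 3)(2\ 4)\,(1\ 2)(3\ 4) = (1\ 4)(2\ 3)$$, which is again a product of an even number (namely two) of transpositions.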
Parents:
• Alternating group
The alternating group $$A_n$$ is the unique nontrivial proper normal subgroup of the symmetric group $$S_n$$ on five or more letters.
https://www.scienceforums.net/topic/122645-help-me-solve-the-differential-equation/?tab=comments | # Help me solve the differential equation
## Recommended Posts
Is it possible to find an analytical solution to the following differential equation?
Thank you in advance to those who will respond
##### Share on other sites
This could be a start.
If I have read you correctly, then I have separated the variables for you.
Edit: whoops, I needed to edit the last line, as can be seen.
Sorry.
Edited by studiot
##### Share on other sites
By inspection, y = 0 is a solution.
Now, after a couple of tries, one gets the intuition that the solution is some kind of potential function, so let's try that out:
$y=kx^{p}$
then,
$y'=pkx^{p-1}$
$y''=p\left(p-1\right)kx^{p-2}$
Substituting and finding condition for p and k:
$k^{2}p\left(p-1\right)x^{2p-2}=ak\left(2-p\right)x^{p-3}$
$\Rightarrow p=-1$
$2k^{2}=ak\Rightarrow$
$k=0$
(the trivial solution we found by inspection) or:
$k=\frac{a}{2}$
with which,
$y=\frac{a}{2}x^{-1}$
Mind you, non-linear differential equations may have more families* of solutions. Check me for mistakes.
* Typically, not families. Rather, isolated solutions.
Edited by joigus
##### Share on other sites
1 hour ago, joigus said:
Now, after a couple of tries, one gets the intuition that the solution is some kind of potential function, so let's try that out:
Thank you.
Yes, this is the equation for a point mass field, taking into account the mass of the field itself.
I used to think the density of the gravitational field was equal to ρ = g²/(4πc²), which led to an incorrect solution.
Replacing c² with φ led to the equation that I asked for help solving. In it, y(x) = φ(r) and a = 4Gm.
The solution you found gives Newton's law of gravity:
φ(r) = −2Gm/r, g(r) = (dφ/dr)/2 = −Gm/r²
It turns out that taking into account the mass of the gravitational field itself does not change anything in Newton's law of gravity.
##### Share on other sites
5 minutes ago, SergUpstart said:
Thank you.
Yes, this is the equation for a point mass field, taking into account the mass of the field itself.
I used to think the density of the gravitational field was equal to ρ = g²/(4πc²), which led to an incorrect solution.
Replacing c² with φ led to the equation that I asked for help solving. In it, y(x) = φ(r) and a = 4Gm.
The solution you found gives Newton's law of gravity:
φ(r) = −2Gm/r, g(r) = (dφ/dr)/2 = −Gm/r²
It turns out that taking into account the mass of the gravitational field itself does not change anything in Newton's law of gravity.
So you were thinking about gravitational potentials from the get go, and you couldn't figure out that a plausible solution was k/x? Something doesn't add up here. This sounds a bit disingenuous... Is it gravity you want to talk about, instead of calculus?
##### Share on other sites
5 hours ago, joigus said:
So you were thinking about gravitational potentials from the get go, and you couldn't figure out that a plausible solution was k/x? Something doesn't add up here. This sounds a bit disingenuous... Is it gravity you want to talk about, instead of calculus?
No, I really couldn't solve this equation for a long time, and decided to turn to mathematicians. How could I have known that you would solve the equation? I was hoping that the solution would converge to Gm/r² asymptotically for large r, but not match it exactly.
Edited by SergUpstart
##### Share on other sites
10 hours ago, joigus said:
Now, after a couple of tries, one gets the intuition that the solution is some kind of potential function, so let's try that out:
y = kx^p
A power-law function is not surprising, since my last line leads to three integrals, two of which are standard logarithmic ones.
The third one needs to be handled by parts.
So it is one of those situations where you have multiple competing functions, and it depends (as you note) on additional information (boundary conditions) which dominates.
+1 to joigus for 'the physicist's method'
Edited by studiot
##### Share on other sites
9 hours ago, SergUpstart said:
Yes, this is the equation for a point mass field, taking into account the mass of the field itself.
In Newtonian physics, gravity is a linear interaction - meaning the field does not self-interact - so it is not possible to attribute mass to it. One could thus have guessed at the end result without needing to solve this (really awkward) equation.
For a model of gravity that is non-linear, i.e. where the field self-interacts, have a look at General Relativity as well as its numerous offshoots and alternatives.
Edited by Markus Hanke
##### Share on other sites
7 hours ago, Markus Hanke said:
In Newtonian physics, gravity is a linear interaction - meaning the field does not self-interact - so it is not possible to attribute mass to it. One could thus have guessed at the end result without needing to solve this (really awkward) equation.
For a model of gravity that is non-linear, i.e. where the field self-interacts, have a look at General Relativity as well as its numerous offshoots and alternatives.
I've missed that point. +1
I was so distracted with the equation itself, and then the OP completely changed the subject to gravity. It threw me off...
https://discourse.julialang.org/t/lapack-error-from-quadgk-integration/52533 | # LAPACK error from quadgk integration
I’m trying to integrate the following formula. It computes an electric potential at a point from a circular ring of charge.
\phi=\int_0^{2\pi} \frac{1}{\sqrt{(x-\cos(t))^2+(y-\sin(t))^2+z^2}}\,dt
I’ve implemented the function itself like so.
function unknotintegralfunction(a,b,c,t)
    # inverse distance from the field point (a,b,c) to the ring point (cos t, sin t, 0)
    1/sqrt((a - cos(t))^2 + (b - sin(t))^2 + c^2)
end
And the integration like so, note that quadorder is set to 1000 currently.
using QuadGK

function unknotpotential(a,b,c)
    # we'll use the default errors to start (body reconstructed; quadorder = 1000 as noted above)
    quadgk(t -> unknotintegralfunction(a,b,c,t), 0, 2pi; order=quadorder)
end
Passing a test point at 2,2,2 generates the following LAPACK error
LinearAlgebra.LAPACKException(462)
chklapackerror@lapack.jl:38[inlined]
stev!@lapack.jl:3749[inlined]
eigvals!(::LinearAlgebra.SymTridiagonal{Float64,Array{Float64,1}})@tridiag.jl:292
eigvals(::LinearAlgebra.SymTridiagonal{Float64,Array{Float64,1}})@tridiag.jl:293
eignewt(::Array{Float64,1}, ::Int64, ::Int64)@gausskronrod.jl:44
kronrod(::Type{Float64}, ::Int64)@gausskronrod.jl:193
macro expansion@gausskronrod.jl:257[inlined]
cachedrule@gausskronrod.jl:257[inlined]
unknotpotential(::Int64, ::Int64, ::Int64)@Other: 2
top-level scope@Local: 1[inlined]
I’m way out of my depth with this and I’m not too sure where I’m going wrong. Is the quadrature method that I’m using incorrect for the type of function I’m trying to integrate? Or is it something else I’ve done wrong?
quadorder is undefined in unknotpotential. Which value did you use? I tried a couple of values but can’t reproduce your error. Please share your versioninfo() and BLAS.vendor()
Hi Andreas, sorry I didn't make it clear: I noted that quadorder was set to 1000. I'm currently running the code in a Pluto notebook; the current QuadGK version is 2.4.1. I assume I'm using OpenBLAS, since I haven't touched that at all upon installing the basic Julia package. Running LinearAlgebra.BLAS.vendor() tells me I'm using
:openblas64
After reading your comment, I’ve changed the quadorder to 5 and the error disappears. I assume I had some sort of convergence problem?
Sorry about that. I just copied your code without reading your comment carefully. I'm now able to reproduce your issue. The error code indicates a convergence issue in the eigensolver, but it's because the input matrix contains NaNs. The b vector in gausskronrod.jl (JuliaMath/QuadGK.jl on GitHub, commit ecde924d54a0d576340794883478aaac40f9801a) ends up with NaNs in the trailing part when n is large. I suppose it's a bug in the QuadGK code, but I'm pretty sure you should never use that many quadrature points. Using just five seems to give a pretty small error for your function.
Realize that the quadrature order N is not the total number of quadrature points — the number of quadrature points used for each subinterval is 2N+1, but the whole point of an adaptive algorithm like quadgk is that it subdivides the interval into more and more subintervals until convergence is achieved.
You want a relatively low-order rule for each subinterval so that it doesn’t waste integrand evaluations unnecessarily. In most cases you shouldn’t change the quadrature order from the default unless you are trying to use BigFloat precision with a smooth function and need to increase the convergence rate to get a huge number of digits.
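For instance, a minimal call for this thread's integral (a sketch using the test point (2,2,2) from above; quadgk's defaults are left untouched):

```julia
using QuadGK

# default order = 7; adaptive subdivision supplies the accuracy
ϕ, err = quadgk(t -> 1/sqrt((2 - cos(t))^2 + (2 - sin(t))^2 + 2^2), 0, 2π)
```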
It looks like the algorithm QuadGK.kronrod is using to get the Gauss–Kronrod points and weights is hitting the limits of Float64 precision if you ask it for the Gauss–Kronrod rule of order 1000, but normally one should never want this so I guess the algorithm was not designed with that kind of scaling in mind. (The same algorithm works fine for BigFloat precision, i.e. if you do QuadGK.kronrod(BigFloat, 1000) it succeeds.)
(If you need Gaussian-quadrature rules of extremely high order, see also the FastGaussQuadrature.jl package, though it’s not designed for adaptive integration, i.e. it doesn’t do Gauss–Kronrod or Gauss–Patterson weights.)
http://mathhelpforum.com/differential-geometry/158148-differential-geometry.html | # Math Help - differential geometry
1. ## differential geometry
How does one show that the normal spherical image of a curve is never constant?
2. I assume your curve is parametrized by arc-length?
If so: Suppose the unit normal vector N were constant. This normal vector is always perpendicular to the tangent vector T, so T must lie in the plane perpendicular to N. But if T lies entirely in that plane, then its derivative T' lies in the plane as well, and since N is parallel to T', the vector N would lie in a plane to which it is perpendicular, a contradiction. | 2015-08-05 04:45:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688518404960632, "perplexity": 503.6702604062746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043060830.93/warc/CC-MAIN-20150728002420-00317-ip-10-236-191-2.ec2.internal.warc.gz"}
https://terrytao.wordpress.com/tag/cohomology/ | You are currently browsing the tag archive for the ‘cohomology’ tag.
The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if ${U: H \rightarrow H}$ is a unitary operator on a Hilbert space ${H}$, and ${v \in H}$ is a vector in that Hilbert space, then one has
$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi_{H^U} v$
in the strong topology, where ${H^U := \{ w \in H: Uw = w \}}$ is the ${U}$-invariant subspace of ${H}$, and ${\pi_{H^U}}$ is the orthogonal projection to ${H^U}$. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if ${G}$ is a countable amenable group acting on a Hilbert space ${H}$ by unitary transformations ${g: H \rightarrow H}$, and ${v \in H}$ is a vector in that Hilbert space, then one has
$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv = \pi_{H^G} v \ \ \ \ \ (1)$
for any Folner sequence ${\Phi_N}$ of ${G}$, where ${H^G := \{ w \in H: gw = w \hbox{ for all }g \in G \}}$ is the ${G}$-invariant subspace. Thus one can interpret ${\pi_{H^G} v}$ as a certain average of elements of the orbit ${Gv := \{ gv: g \in G \}}$ of ${v}$.
I recently discovered that there is a simple variant of this ergodic theorem that holds even when the group ${G}$ is not amenable (or not discrete), using a more abstract notion of averaging:
Theorem 1 (Abstract ergodic theorem) Let ${G}$ be an arbitrary group acting unitarily on a Hilbert space ${H}$, and let ${v}$ be a vector in ${H}$. Then ${\pi_{H^G} v}$ is the element in the closed convex hull of ${Gv := \{ gv: g \in G \}}$ of minimal norm, and is also the unique element of ${H^G}$ in this closed convex hull.
Proof: As the closed convex hull of ${Gv}$ is closed, convex, and non-empty in a Hilbert space, it is a classical fact (see e.g. Proposition 1 of this previous post) that it has a unique element ${F}$ of minimal norm. If ${T_g F \neq F}$ for some ${g}$, then the midpoint of ${T_g F}$ and ${F}$ would be in the closed convex hull and be of smaller norm, a contradiction; thus ${F}$ is ${G}$-invariant. To finish the first claim, it suffices to show that ${v-F}$ is orthogonal to every element ${h}$ of ${H^G}$. But if this were not the case for some such ${h}$, we would have ${\langle T_g v - F, h \rangle = \langle v-F,h\rangle \neq 0}$ for all ${g \in G}$, and thus on taking convex hulls ${\langle F-F,h\rangle = \langle v-F,h\rangle \neq 0}$, a contradiction.
Finally, since ${T_g v - F}$ is orthogonal to ${H^G}$, the same is true for ${F'-F}$ for any ${F'}$ in the closed convex hull of ${Gv}$, and this gives the second claim. $\Box$
This result is due to Alaoglu and Birkhoff. It implies the amenable ergodic theorem (1); indeed, given any ${\epsilon>0}$, Theorem 1 implies that there is a finite convex combination ${v_\epsilon}$ of shifts ${gv}$ of ${v}$ which lies within ${\epsilon}$ (in the ${H}$ norm) to ${\pi_{H^G} v}$. By the triangle inequality, all the averages ${\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv_\epsilon}$ also lie within ${\epsilon}$ of ${\pi_{H^G} v}$, but by the Folner property this implies that the averages ${\frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} gv}$ are eventually within ${2\epsilon}$ (say) of ${\pi_{H^G} v}$, giving the claim.
It turns out to be possible to use Theorem 1 as a substitute for the mean ergodic theorem in a number of contexts, thus removing the need for an amenability hypothesis. Here is a basic application:
Corollary 2 (Relative orthogonality) Let ${G}$ be a group acting unitarily on a Hilbert space ${H}$, and let ${V}$ be a ${G}$-invariant subspace of ${H}$. Then ${V}$ and ${H^G}$ are relatively orthogonal over their common subspace ${V^G}$, that is to say the restrictions of ${V}$ and ${H^G}$ to the orthogonal complement of ${V^G}$ are orthogonal to each other.
Proof: By Theorem 1, we have ${\pi_{H^G} v = \pi_{V^G} v}$ for all ${v \in V}$, and the claim follows. (Thanks to Gergely Harcos for this short argument.) $\Box$
Now we give a more advanced application of Theorem 1, to establish some “Mackey theory” over arbitrary groups ${G}$. Define a ${G}$-system ${(X, {\mathcal X}, \mu, (T_g)_{g \in G})}$ to be a probability space ${X = (X, {\mathcal X}, \mu)}$ together with a measure-preserving action ${(T_g)_{g \in G}}$ of ${G}$ on ${X}$; this gives an action of ${G}$ on ${L^2(X) = L^2(X,{\mathcal X},\mu)}$, which by abuse of notation we also call ${T_g}$:
$\displaystyle T_g f := f \circ T_{g^{-1}}.$
(In this post we follow the usual convention of defining the ${L^p}$ spaces by quotienting out by almost everywhere equivalence.) We say that a ${G}$-system is ergodic if ${L^2(X)^G}$ consists only of the constants.
(A technical point: the theory becomes slightly cleaner if we interpret our measure spaces abstractly (or “pointlessly“), removing the underlying space ${X}$ and quotienting ${{\mathcal X}}$ by the ${\sigma}$-ideal of null sets, and considering maps such as ${T_g}$ only on this quotient ${\sigma}$-algebra (or on the associated von Neumann algebra ${L^\infty(X)}$ or Hilbert space ${L^2(X)}$). However, we will stick with the more traditional setting of classical probability spaces here to keep the notation familiar, but with the understanding that many of the statements below should be understood modulo null sets.)
A factor ${Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})}$ of a ${G}$-system ${X = (X,{\mathcal X},\mu, (T_g)_{g \in G})}$ is another ${G}$-system together with a factor map ${\pi: X \rightarrow Y}$ which commutes with the ${G}$-action (thus ${T_g \pi = \pi S_g}$ for all ${g \in G}$) and respects the measure in the sense that ${\mu(\pi^{-1}(E)) = \nu(E)}$ for all ${E \in {\mathcal Y}}$. For instance, the ${G}$-invariant factor ${Z^0_G(X) := (X, {\mathcal X}^G, \mu\downharpoonright_{{\mathcal X}^G}, (T_g)_{g \in G})}$, formed by restricting ${X}$ to the invariant algebra ${{\mathcal X}^G := \{ E \in {\mathcal X}: T_g E = E \hbox{ a.e. for all } g \in G \}}$, is a factor of ${X}$. (This factor is the first factor in an important hierachy, the next element of which is the Kronecker factor ${Z^1_G(X)}$, but we will not discuss higher elements of this hierarchy further here.) If ${Y}$ is a factor of ${X}$, we refer to ${X}$ as an extension of ${Y}$.
From Corollary 2 we have
Corollary 3 (Relative independence) Let ${X}$ be a ${G}$-system for a group ${G}$, and let ${Y}$ be a factor of ${X}$. Then ${Y}$ and ${Z^0_G(X)}$ are relatively independent over their common factor ${Z^0_G(Y)}$, in the sense that the spaces ${L^2(Y)}$ and ${L^2(Z^0_G(X))}$ are relatively orthogonal over ${L^2(Z^0_G(Y))}$ when all these spaces are embedded into ${L^2(X)}$.
This has a simple consequence regarding the product ${X \times Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times \nu, (T_g \oplus S_g)_{g \in G})}$ of two ${G}$-systems ${X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})}$ and ${Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})}$, in the case when the ${Y}$ action is trivial:
Lemma 4 If ${X,Y}$ are two ${G}$-systems, with the action of ${G}$ on ${Y}$ trivial, then ${Z^0_G(X \times Y)}$ is isomorphic to ${Z^0_G(X) \times Y}$ in the obvious fashion.
This lemma is immediate for countable ${G}$, since for a ${G}$-invariant function ${f}$, one can ensure that ${T_g f = f}$ holds simultaneously for all ${g \in G}$ outside of a null set, but is a little trickier for uncountable ${G}$.
Proof: It is clear that ${Z^0_G(X) \times Y}$ is a factor of ${Z^0_G(X \times Y)}$. To obtain the reverse inclusion, suppose that it fails, thus there is a non-zero ${f \in L^2(Z^0_G(X \times Y))}$ which is orthogonal to ${L^2(Z^0_G(X) \times Y)}$. In particular, we have ${fg}$ orthogonal to ${L^2(Z^0_G(X))}$ for any ${g \in L^\infty(Y)}$. Since ${fg}$ lies in ${L^2(Z^0_G(X \times Y))}$, we conclude from Corollary 3 (viewing ${X}$ as a factor of ${X \times Y}$) that ${fg}$ is also orthogonal to ${L^2(X)}$. Since ${g}$ is an arbitrary element of ${L^\infty(Y)}$, we conclude that ${f}$ is orthogonal to ${L^2(X \times Y)}$ and in particular is orthogonal to itself, a contradiction. (Thanks to Gergely Harcos for this argument.) $\Box$
Now we discuss the notion of a group extension.
Definition 5 (Group extension) Let ${G}$ be an arbitrary group, let ${Y = (Y, {\mathcal Y}, \nu, (S_g)_{g \in G})}$ be a ${G}$-system, and let ${K}$ be a compact metrisable group. A ${K}$-extension of ${Y}$ is an extension ${X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})}$ whose underlying space is ${X = Y \times K}$ (with ${{\mathcal X}}$ the product of ${{\mathcal Y}}$ and the Borel ${\sigma}$-algebra on ${K}$), the factor map is ${\pi: (y,k) \mapsto y}$, and the shift maps ${T_g}$ are given by
$\displaystyle T_g ( y, k ) = (S_g y, \rho_g(y) k )$
where for each ${g \in G}$, ${\rho_g: Y \rightarrow K}$ is a measurable map (known as the cocycle associated to the ${K}$-extension ${X}$).
An important special case of a ${K}$-extension arises when the measure ${\mu}$ is the product of ${\nu}$ with the Haar measure ${dk}$ on ${K}$. In this case, ${X}$ also has a ${K}$-action ${k': (y,k) \mapsto (y,k(k')^{-1})}$ that commutes with the ${G}$-action, making ${X}$ a ${G \times K}$-system. More generally, ${\mu}$ could be the product of ${\nu}$ with the Haar measure ${dh}$ of some closed subgroup ${H}$ of ${K}$, with ${\rho_g}$ taking values in ${H}$; then ${X}$ is now a ${G \times H}$ system. In this latter case we will call ${X}$ ${H}$-uniform.
If ${X}$ is a ${K}$-extension of ${Y}$ and ${U: Y \rightarrow K}$ is a measurable map, we can define the gauge transform ${X_U}$ of ${X}$ to be the ${K}$-extension of ${Y}$ whose measure ${\mu_U}$ is the pushforward of ${\mu}$ under the map ${(y,k) \mapsto (y, U(y) k)}$, and whose cocycles ${\rho_{g,U}: Y \rightarrow K}$ are given by the formula
$\displaystyle \rho_{g,U}(y) := U(gy) \rho_g(y) U(y)^{-1}.$
It is easy to see that ${X_U}$ is a ${K}$-extension that is isomorphic to ${X}$ as a ${K}$-extension of ${Y}$; we will refer to ${X_U}$ and ${X}$ as equivalent systems, and ${\rho_{g,U}}$ as cohomologous to ${\rho_g}$. We then have the following fundamental result of Mackey and of Zimmer:
Theorem 6 (Mackey-Zimmer theorem) Let ${G}$ be an arbitrary group, let ${Y}$ be an ergodic ${G}$-system, and let ${K}$ be a compact metrisable group. Then every ergodic ${K}$-extension ${X}$ of ${Y}$ is equivalent to an ${H}$-uniform extension of ${Y}$ for some closed subgroup ${H}$ of ${K}$.
This theorem is usually stated for amenable groups ${G}$, but by using Theorem 1 (or more precisely, Corollary 3) the result is in fact also valid for arbitrary groups; we give the proof below the fold. (In the usual formulations of the theorem, ${X}$ and ${Y}$ are also required to be Lebesgue spaces, or at least standard Borel, but again with our abstract approach here, such hypotheses will be unnecessary.) Among other things, this theorem plays an important role in the Furstenberg-Zimmer structural theory of measure-preserving systems (as well as subsequent refinements of this theory by Host and Kra); see this previous blog post for some relevant discussion. One can obtain similar descriptions of non-ergodic extensions via the ergodic decomposition, but the result becomes more complicated to state, and we will not do so here.
A dynamical system is a space X, together with an action $(g,x) \mapsto gx$ of some group $G = (G,\cdot)$. [In practice, one often places topological or measure-theoretic structure on X or G, but this will not be relevant for the current discussion. In most applications, G is an abelian (additive) group such as the integers ${\Bbb Z}$ or the reals ${\Bbb R}$, but I prefer to use multiplicative notation here.] A useful notion in the subject is that of an (abelian) cocycle; this is a function $\rho: G \times X \to U$ taking values in an abelian group $U = (U,+)$ that obeys the cocycle equation
$\rho(gh, x) = \rho(h,x) + \rho(g,hx)$ (1)
for all $g,h \in G$ and $x \in X$. [Again, if one is placing topological or measure-theoretic structure on the system, one would want $\rho$ to be continuous or measurable, but we will ignore these issues.] The significance of cocycles in the subject is that they allow one to construct (abelian) extensions or skew products $X \times_\rho U$ of the original dynamical system X, defined as the Cartesian product $\{ (x,u): x \in X, u \in U \}$ with the group action $g(x,u) := (gx,u + \rho(g,x))$. (The cocycle equation (1) is needed to ensure that one indeed has a group action, and in particular that $(gh)(x,u) = g(h(x,u))$.) This turns out to be a useful means to build complex dynamical systems out of simpler ones. (For instance, one can build nilsystems by starting with a point and taking a finite number of abelian extensions of that point by a certain type of cocycle.)
A special type of cocycle is a coboundary; this is a cocycle $\rho: G \times X \to U$ that takes the form $\rho(g,x) := F(gx) - F(x)$ for some function $F: X \to U$. (Note that the cocycle equation (1) is automatically satisfied if $\rho$ is of this form.) An extension $X \times_\rho U$ of a dynamical system by a coboundary $\rho(g,x) := F(gx) - F(x)$ can be conjugated to the trivial extension $X \times_0 U$ by the change of variables $(x,u) \mapsto (x,u-F(x))$.
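Explicitly, writing $\Phi(x,u) := (x, u - F(x))$ for this change of variables, one has

$\displaystyle \Phi(g(x,u)) = \Phi(gx, u + F(gx) - F(x)) = (gx, u - F(x)) = g \Phi(x,u),$

where the final action of $g$ is the one associated to the zero cocycle; thus $\Phi$ conjugates the coboundary extension to the trivial one.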
While every coboundary is a cocycle, the converse is not always true. (For instance, if X is a point, the only coboundary is the zero function, whereas a cocycle is essentially the same thing as a homomorphism from G to U, so in many cases there will be more cocycles than coboundaries. For a contrasting example, if X and G are finite (for simplicity) and G acts freely on X, it is not difficult to see that every cocycle is a coboundary.) One can measure the extent to which this converse fails by introducing the first cohomology group $H^1(G,X,U) := Z^1(G,X,U) / B^1(G,X,U)$, where $Z^1(G,X,U)$ is the space of cocycles $\rho: G \times X \to U$ and $B^1(G,X,U)$ is the space of coboundaries (note that both spaces are abelian groups). In my forthcoming paper with Vitaly Bergelson and Tamar Ziegler on the ergodic inverse Gowers conjecture (which should be available shortly), we make substantial use of some basic facts about this cohomology group (in the category of measure-preserving systems) that were established in a paper of Host and Kra.
The above terminology of cocycles, coboundaries, and cohomology groups of course comes from the theory of cohomology in algebraic topology. Comparing the formal definitions of cohomology groups in that theory with the ones given above, there is certainly quite a bit of similarity, but in the dynamical systems literature the precise connection does not seem to be heavily emphasised. The purpose of this post is to record the precise fashion in which dynamical systems cohomology is a special case of cochain complex cohomology from algebraic topology, and more specifically is analogous to singular cohomology (and can also be viewed as the group cohomology of the space of scalar-valued functions on X, when viewed as a G-module); this is not particularly difficult, but I found it an instructive exercise (especially given that my algebraic topology is extremely rusty), though perhaps this post is more for my own benefit than for anyone else. | 2015-05-26 15:42:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 268, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485476613044739, "perplexity": 116.65594636643455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928864.73/warc/CC-MAIN-20150521113208-00204-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1693589/are-derivatives-of-geometric-progressions-all-irreducible | # Are derivatives of geometric progressions all irreducible?
Consider the polynomials $P_n(x)=1+2x+3x^2+\dots+nx^{n-1}$. Problem A5 in 2014 Putnam competition was to prove that these polynomials are pairwise relatively prime. In the solution sheet there is the following remark:
It seems likely that the individual polynomials $P_k(x)$ are all irreducible, but this appears difficult to prove.
My question is exactly about this: is it known if all these polynomials are irreducible? Or is it an open problem?
In the article Classes of polynomials having only one non-cyclotomic irreducible factor, the authors (A. Borisov, M. Filaseta, T. Y. Lam, and O. Trifonov) proved that for any $$\epsilon > 0$$, for all but $$O(t^{(1/3)+\epsilon})$$ positive integers $$n\leq t$$, the derivative of the polynomial $$f(x)= 1+ x + x^2 + \cdots + x^n$$ is irreducible; moreover, they conjectured that $$f'(x)$$ is irreducible for every $$n\in \mathbb N$$. | 2022-10-07 03:39:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086466431617737, "perplexity": 302.7660654777489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00430.warc.gz"}
http://heattransfer.asmedigitalcollection.asme.org/article.aspx?articleid=1449087 | 0
Research Papers: Micro/Nanoscale Heat Transfer
# Latent Heat Fluxes Through Soft Materials With Microtruss Architectures
[+] Author and Article Information
Matthew J. Traum
Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139
Peter Griffith
Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139
Edwin L. Thomas
Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Department of Materials Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139
William A. Peters1
Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139peters@mit.edu
1
Corresponding author.
J. Heat Transfer 130(4), 042403 (Mar 17, 2008) (11 pages) doi:10.1115/1.2818760 History: Received July 31, 2006; Revised June 20, 2007; Published March 17, 2008
## Abstract
Microscale truss architectures provide high mechanical strength, light weight, and open porosity in polymer sheets. Liquid evaporation and transport of the resulting vapor through truss voids cool nearby surfaces. Thus, microtruss materials can simultaneously prevent mechanical and thermal damage. Assessment of promise requires quantitative understanding of vapor transport through microtruss pores for realistic heat loads and latent heat carriers. Pore size may complicate exegesis owing to vapor rarefaction or surface interactions. This paper quantifies the nonboiling evaporative cooling of a flat surface by water vapor transport through two different hydrophobic polymer membranes, 112–119 μm (or 113–123 μm) thick, with microtruss-like architectures, i.e., straight-through pores of average diameter 1.0–1.4 μm (or 12.6–14.2 μm) and average overall porosity of 7.6% (or 9.9%). The surface, heated at 1350 ± 20 Wt/m² to mimic human thermal load in a desert (daytime solar plus metabolic), was the bottom of a 3.1 cm inside diameter, 24.9 cm³ cylindrical aluminum chamber capped by the membrane. Steady-state rates of water vapor transport through the membrane pores to ambient were measured by continuously weighing the evaporation chamber. The water vapor concentration at the membrane exit was maintained near zero by a cross flow of dry nitrogen (velocity = 2.8 m/s). Each truss material enabled 13–14 °C evaporative cooling of the surface, roughly 40% of the maximum evaporative cooling attainable, i.e., with an uncapped chamber. Intrinsic pore diffusion coefficients for dilute water vapor (<10.4 mole %) in air (total pressure ≈ 112,000 Pa) were deduced from the measured vapor fluxes by mathematically disaggregating the substantial mass transfer resistances of the boundary layers (≈50%) and correcting for radial variations in upstream water vapor concentration. The diffusion coefficients for the 1.0–1.4 μm pores (Knudsen number ≈ 0.1) agree with literature values for the water vapor–air mutual diffusion coefficient to within ±20%, but for the nominally 12.6–14.2 μm pores (Kn ≈ 0.01) the diffusion coefficient values were smaller, possibly because considerable pore area resides in noncircular, i.e., narrow, wedge-shaped cross sections that impede diffusion owing to enhanced rarefaction. The present data, parameters, and mathematical models support the design and analysis of microtruss materials for thermal or simultaneous thermal-and-mechanical protection of microelectromechanical systems, nanoscale components, humans, and other macrosystems.
## Figures
Figure 1
SEMs of microtruss simulant surfaces (top panels) and edges exposed by microtoming (bottom panels). Left hand side panels: Nucrel® ; right hand side panels: Hytrel® . Magnifications: top left panel, 4000×; top right panel, 500×.
Figure 3
Typical temperature-time histories (corrected for ambient temperature) for evaporative cooling of an aluminum surface using a closed chamber (negative control), an open chamber (positive control), or microtruss simulant materials. The absolute latent and fractional accomplished cooling (defined in the text) are also shown.
Figure 4
Cumulative mass of water vapor transported from the evaporation chamber as affected by time. The instantaneous flux of coolant vapor through the microtruss simulant pores is obtained from the first derivative of the curves shown.
Figure 5
Schematic cross section of the evaporation chamber to illustrate chirality of the thermal buoyancy driven flows and the radial concentration gradient across the upstream face of the microtruss
Figure 6
Comparison of experimental (light gray) rates of water vapor mass transport from the evaporation chamber for the four experiments with microtruss stimulant barriers, with rates predicted by increasingly refined mass transfer models (various fills)
Figure 2
Schematic (not to scale) of apparatus for quantitative study of evaporative cooling of surfaces by modulation of latent heat carrier flow using barrier materials with microtruss and nanotruss architectures. Dotted BLs represent an average location because turbulence agitates the fluid boundaries.
| 2018-09-18 14:30:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27726104855537415, "perplexity": 5552.53579523215}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00329.warc.gz"}
https://msp.org/agt/2008/8-3/b07.xhtml | #### Volume 8, issue 3 (2008)
1 J S Carter, D Jelsovsky, S Kamada, L Langford, M Saito, Quandle cohomology and state-sum invariants of knotted curves and surfaces, Trans. Amer. Math. Soc. 355 (2003) 3947 MR1990571
2 J S Carter, S Kamada, M Saito, Geometric interpretations of quandle homology, J. Knot Theory Ramifications 10 (2001) 345 MR1825963
3 R Fenn, C Rourke, Racks and links in codimension two, J. Knot Theory Ramifications 1 (1992) 343 MR1194995
4 Y Ishii, A Yasuhara, Color invariant for spatial graphs, J. Knot Theory Ramifications 6 (1997) 319 MR1457191
5 D Joyce, A classifying invariant of knots, the knot quandle, J. Pure Appl. Algebra 23 (1982) 37 MR638121
6 S Kamada, Knot invariants derived from quandles and racks, from: "Invariants of knots and $3$–manifolds (Kyoto, 2001)", Geom. Topol. Monogr. 4, Geom. Topol. Publ. (2002) 103 MR2002606
7 L H Kauffman, Invariants of graphs in three-space, Trans. Amer. Math. Soc. 311 (1989) 697 MR946218
8 A Kawauchi, A survey of knot theory, Birkhäuser Verlag (1996) MR1417494
9 S Kinoshita, Alexander polynomials as isotopy invariants. I, Osaka Math. J. 10 (1958) 263 MR0102819
10 S Kinoshita, Alexander polynomials as isotopy invariants. II, Osaka Math. J. 11 (1959) 91 MR0110101
11 F Luo, On Heegaard diagrams, Math. Res. Lett. 4 (1997) 365 MR1453066
12 S V Matveev, Distributive groupoids in knot theory, Mat. Sb. $($N.S.$)$ 119(161) (1982) 78, 160 MR672410
13 J McAtee, D S Silver, S G Williams, Coloring spatial graphs, J. Knot Theory Ramifications 10 (2001) 109 MR1822144
14 T Mituhisa, Abstraction of symmetric transformations, Tôhoku Math. J. 49 (1943) 145 MR0021002
15 E E Moise, Affine structures in $3$–manifolds. V. The triangulation theorem and Hauptvermutung, Ann. of Math. $(2)$ 56 (1952) 96 MR0048805
16 S Satoh, Quandle cocycle invariants of knotted graphs, preprint
17 S Suzuki, On linear graphs in $3$–sphere, Osaka J. Math. 7 (1970) 375 MR0279799
18 S Suzuki, Alexander ideals of graphs in the $3$–sphere, Tokyo J. Math. 7 (1984) 233 MR752125
19 F Waldhausen, Heegaard-Zerlegungen der $3$–Sphäre, Topology 7 (1968) 195 MR0227992
20 D N Yetter, Category theoretic representations of knotted graphs in $\mathbf{S}^3$, Adv. Math. 77 (1989) 137 MR1020582 | 2021-06-23 05:39:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6930351853370667, "perplexity": 3589.143360484469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00580.warc.gz"}
https://cran.rapporter.net/web/packages/documenter/vignettes/documenter.html | # Overview
It is sometimes necessary to document all the files in a directory. The function document_it automatically aggregates all such files into a single double-spaced document. By creating an annotation file, additional comments can be added for each file without further intervention.
# Usage
The package can be loaded via the library function.
# Load the package.
library(documenter)
The function document_it accepts 3 arguments:
• input_directory: The directory of files to be documented.
• output_file: The path to the output file that will be generated.
• annotation_file: The path to the annotation file if present.
An example use case is provided below; it documents all files in the "example" folder within the documenter package directory. The operation is recursive, so all files contained within subdirectories of the folder are also documented. The output generated by this example is written to a folder in the temporary directory.
input <- system.file("extdata", "example", package = "documenter")
document_it(
input_directory = input,
output_file = file.path(tempdir(), "documentation"),
annotation_file = NULL
)
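If an annotation file is available, it is passed through the annotation_file argument. A hypothetical sketch (the file name below is an assumed example; the package does not ship such a file):

# Hypothetical annotation file supplying per-file comments.
document_it(
input_directory = input,
output_file = file.path(tempdir(), "documentation_annotated"),
annotation_file = file.path(input, "annotations.yml")
)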
## Disclaimer
The views expressed are those of the author(s) and do not reflect the official policy of the Department of the Army, the Department of Defense or the U.S. Government. | 2021-07-24 17:16:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5384936332702637, "perplexity": 2943.513058197206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150307.84/warc/CC-MAIN-20210724160723-20210724190723-00040.warc.gz"} |
https://questions.examside.com/past-years/gate/gate-ce/geotechnical-engineering/definitions-and-properties-of-soils | GATE CE
Geotechnical Engineering
Definitions and Properties of Soils
Previous Years Questions
## Marks 1
If the water content of a fully saturated soil mass is 100%, the void ratio of the sample is
A certain soil has the following properties: Gs = 2.71, n = 40% and w = 20%. The degree of saturation of the soil (rounded off to the nearest percent)...
In its natural condition, a soil sample has a mass of 1.980 kg and a volume of 0.001 m3. After being completely dried in an oven, the mass of the samp...
The ratio of saturated unit weight to dry unit weight of a soil is 1.25. If the specific gravity of solids (Gs) is 2.65, the void ratio of the soil is...
A soil sample has a void ratio of 0.5 and its porosity will be closed to
A borrow pit soil has a dry density of 17 kN/m3. How many cubic meters of this soils will be required to construct an embankment of 100 m3 volume with...
Principle involved in the relationship between submerged unit weight and saturated weight of a soil is based on
If the porosity of a soil sample is 20%, the void ratio is
Which one of the following relations is not correct?
The void ratio of a soil sample can exceed unity.
The void ratio of a soil sample is 1. The corresponding porosity of the sample is _______.
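As a worked note tying together several of the items above (using the standard phase relation between void ratio $$e$$ and porosity $$n$$): $$n = \frac{e}{1+e}$$ and $$e = \frac{n}{1-n}$$, so $$e = 1$$ gives $$n = 0.5 = 50\%$$, while $$n = 20\%$$ gives $$e = 0.2/0.8 = 0.25$$.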
## Marks 2
The porosity (n) and the degree of saturation (S) of a soil sample are 0.7 and 40%, respectively. In a 100 $$m^3$$ volume of the soil, the volume (exp...
A 588 $$cm^3$$ volume of moist sand weighs 1010 gm. Its dry weight is 918 gm and specific gravity of solids, G is 2.67. Assuming density of water as 1...
The water content of a saturated soil and the specific gravity of soil solids were found to be 30% and 2.70, respectively. Assuming the unit weight of...
For sand of uniform spherical particles, the void ratio in the loosest and the densest states are ________ and ________ respectively.
A saturated sand sample has dry unit weight of 18 $$kN/m^3$$ and a specific gravity of 2.65. If $$\gamma_w$$ = 10 $$kN/m^3$$, the water content of the...
| 2023-03-24 05:13:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5251894593238831, "perplexity": 2927.4914609221123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00264.warc.gz"}
https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.mis.maximal_independent_set.html | # networkx.algorithms.mis.maximal_independent_set¶
maximal_independent_set(G, nodes=None, seed=None)[source]
Returns a random maximal independent set guaranteed to contain a given set of nodes.
An independent set is a set of nodes such that the subgraph of G induced by these nodes contains no edges. A maximal independent set is an independent set such that it is not possible to add a new node and still get an independent set.
Parameters
• G (NetworkX graph)
• nodes (list or iterable) – Nodes that must be part of the independent set. This set of nodes must be independent.
• seed (integer, random_state, or None (default)) – Indicator of random number generation state. See Randomness.
Returns
indep_nodes – List of nodes that are part of a maximal independent set.
Return type
list
Raises
• NetworkXUnfeasible – If the nodes in the provided list are not part of the graph or do not form an independent set, an exception is raised.
• NetworkXNotImplemented – If G is directed.
Examples
>>> G = nx.path_graph(5)
>>> nx.maximal_independent_set(G)
[4, 0, 2]
>>> nx.maximal_independent_set(G, [1])
[1, 3]
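A brief sketch of the seed parameter (the particular set returned depends on the seeded RNG, but independence of the result is guaranteed by construction):

>>> result = nx.maximal_independent_set(G, seed=42)
>>> all(v not in G[u] for u in result for v in result if u != v)
True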
Notes
This algorithm does not solve the maximum independent set problem. | 2020-07-02 06:46:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5301831960678101, "perplexity": 919.5737694200621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00471.warc.gz"} |
https://tantalum.academickids.com/encyclopedia/index.php/Endomorphism | # Endomorphism
In mathematics, an endomorphism is a morphism (or homomorphism) from a mathematical object to itself. So, for example, an endomorphism of a vector space V is a linear map f : V → V and an endomorphism of a group G is a group homomorphism f : G → G, etc. In general, we can talk about endomorphisms in any category.
Given an object X in a category C and two endomorphisms f and g of X, the composition f ∘ g is also an endomorphism of X. Since the identity map on X is also an endomorphism of X, the set of all endomorphisms of X forms a monoid, denoted EndC(X) or just End(X) if the category is understood.
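As a small illustration (a Python sketch, not part of the original article, with endomorphisms of a three-element set encoded as dicts):

# Endomorphisms of X = {0, 1, 2}: arbitrary maps X -> X, composed as functions.
X = range(3)
def compose(f, g):
    # (f o g)(x) = f(g(x))
    return {x: f[g[x]] for x in X}
f = {0: 1, 1: 1, 2: 0}
g = {0: 2, 1: 0, 2: 2}
h = {0: 0, 1: 2, 2: 1}
identity = {x: x for x in X}
assert compose(f, identity) == f == compose(identity, f)       # identity law
assert compose(compose(f, g), h) == compose(f, compose(g, h))  # associativity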
In many but not all situations it is possible to add endomorphisms, and the endomorphisms of a given object then form a ring, called the endomorphism ring of the object. This is true, for example, in the categories of abelian groups, modules, and vector spaces. In general it is true in all preadditive categories.
An endomorphism that is also an isomorphism is termed an automorphism. In the following diagram, the arrows denote implication.
automorphism  →  isomorphism
     ↓                ↓
endomorphism  →  (homo)morphism
| 2021-05-07 01:39:36 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604811668395996, "perplexity": 419.2633071689845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00273.warc.gz"}
https://proofindex.com/resources-for-undergrads/elementary-number-theory/modular-arithmetic/basic-modular-arithmetic-proofs | # Basic modular arithmetic proofs
Prove that congruence modulo $$m$$ is reflexive:
$$a \equiv a \pmod{m}$$
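For instance, the first item has a one-line argument (recall that $$a \equiv b \pmod{m}$$ means $$m \mid a - b$$): since $$a - a = 0 = 0 \cdot m$$, we have $$m \mid a - a$$, hence $$a \equiv a \pmod{m}$$.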
Prove that congruence modulo $$m$$ is symmetric:
$$a \equiv b \pmod{m} \longrightarrow b \equiv a \pmod{m}$$
Prove that congruence modulo $$m$$ is transitive:
$$a \equiv b \pmod{m} \wedge b \equiv c \pmod{m} \longrightarrow a \equiv c \pmod{m}$$
$$a \equiv b \pmod{m} \wedge c \equiv d \pmod{m} \longrightarrow a + c \equiv b + d \pmod{m}$$
$$a \equiv b \pmod{m} \wedge c \equiv d \pmod{m} \longrightarrow ac \equiv bd \pmod{m}$$
Let $$n \in \mathbb{Z}$$. Then $$n^2 \equiv 0$$ or $$1 \pmod{4}$$.
Let $$n$$ be a positive odd integer. Then $$n^2 \equiv 1 \pmod{8}$$.
Let $$m$$ be a natural number, and let $$x$$ and $$y$$ be integers. If $$x \equiv y \pmod{m}$$, then $$x$$ and $$y$$ have the same remainder upon division by $$m$$.
Let $$n \in \mathbb{N}$$. Then $$8 \mid 5^{2n} - 1$$. | 2022-05-21 15:58:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939004182815552, "perplexity": 80.42653920608507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00477.warc.gz"} |
https://www.jobilize.com/online/course/15-1-some-random-selection-problems-by-openstax?qcr=www.quizover.com&page=1 | # 15.1 Some random selection problems (Page 2/6)
Page 2 / 6
## Message routing
A junction point in a network has two incoming lines and two outgoing lines. The number of incoming messages ${N}_{1}$ on line one in one hour is Poisson (50); on line 2 the number is ${N}_{2}\sim$ Poisson (45). On incoming line 1 the messages have probability ${p}_{1a}=0.33$ of leaving on outgoing line a and $1-{p}_{1a}$ of leaving on line b. The messages coming in on line 2 have probability ${p}_{2a}=0.47$ of leaving on line a. Under the usual independence assumptions, what is the distribution of outgoing messages on line a? What are the probabilities of at least 30, 35, 40 outgoing messages on line a?
SOLUTION
By the Poisson decomposition, ${N}_{a}\sim$ Poisson $\left(50·0.33+45·0.47=37.65\right)$ .
ma = 50*0.33 + 45*0.47        % mean count on outgoing line a
ma = 37.6500
Pa = cpoisson(ma,30:5:40)     % cpoisson: P(N_a >= k) for k = 30, 35, 40, per the "at least" question above
Pa = 0.9119 0.6890 0.3722
VERIFICATION of the Poisson decomposition
1. ${N}_{k}=\sum _{i=1}^{N}{I}_{{E}_{ki}}$ .
This is composite demand with ${Y}_{k}={I}_{{E}_{ki}}$ , so that ${g}_{{Y}_{k}}\left(s\right)={q}_{k}+s{p}_{k}=1+{p}_{k}\left(s-1\right)$ . Therefore,
${g}_{{N}_{k}}\left(s\right)={g}_{N}\left[{g}_{{Y}_{k}}\left(s\right)\right]={e}^{\mu \left(1+{p}_{k}\left(s-1\right)-1\right)}={e}^{\mu {p}_{k}\left(s-1\right)}$
which is the generating function for ${N}_{k}\sim$ Poisson $\left(\mu {p}_{k}\right)$ .
For any ${n}_{1}, {n}_{2}, \cdots, {n}_{m}$, let $n={n}_{1}+{n}_{2}+\cdots +{n}_{m}$, and consider
$A=\left\{{N}_{1}={n}_{1}, {N}_{2}={n}_{2}, \cdots, {N}_{m}={n}_{m}\right\}=\left\{N=n\right\}\cap \left\{{N}_{1n}={n}_{1}, {N}_{2n}={n}_{2},\cdots, {N}_{mn}={n}_{m}\right\}$
Since N is independent of the class of ${I}_{{E}_{ki}}$ , the class
$\left\{\left\{N=n\right\}, \left\{{N}_{1n}={n}_{1}, {N}_{2n}={n}_{2},\cdots, {N}_{mn}={n}_{m}\right\}\right\}$
is independent. By the product rule and the multinomial distribution
$P\left(A\right)={e}^{-\mu }\frac{{\mu }^{n}}{n!}·n!\prod _{k=1}^{m}\frac{{p}_{k}^{{n}_{k}}}{\left({n}_{k}\right)!}=\prod _{k=1}^{m}{e}^{-\mu {p}_{k}}\frac{{p}_{k}^{{n}_{k}}}{{n}_{k}!}=\prod _{k=1}^{m}P\left({N}_{k}={n}_{k}\right)$
The second product uses the fact that
${e}^{\mu }={e}^{\mu \left({p}_{1}+{p}_{2}+\cdots +{p}_{m}\right)}=\prod _{k=1}^{m}{e}^{\mu {p}_{k}}$
Thus, the product rule holds for the class $\left\{{N}_{k}:1\le k\le m\right\}$ , so that it is independent.
## Extreme values
Consider an iid class $\left\{{Y}_{i}:1\le i\right\}$ of nonnegative random variables. For any positive integer n we let
${V}_{n}=\min\left\{{Y}_{1}, {Y}_{2}, \cdots, {Y}_{n}\right\}\quad\text{and}\quad{W}_{n}=\max\left\{{Y}_{1}, {Y}_{2}, \cdots, {Y}_{n}\right\}$
Then
$P\left({V}_{n}>t\right)={P}^{n}\left(Y>t\right)\quad\text{and}\quad P\left({W}_{n}\le t\right)={P}^{n}\left(Y\le t\right)$
Now consider a random number N of the Y i . The minimum and maximum random variables are
${V}_{N}=\sum _{n=0}^{\infty }{I}_{\left\{N=n\right\}}{V}_{n}\quad\text{and}\quad{W}_{N}=\sum _{n=0}^{\infty }{I}_{\left\{N=n\right\}}{W}_{n}$
$\square$
Computational formulas
If we set ${V}_{0}={W}_{0}=0$ , then
1. ${F}_{V}\left(t\right)=P\left(V\le t\right)=1+P\left(N=0\right)-{g}_{N}\left[P\left(Y>t\right)\right]$
2. ${F}_{W}\left(t\right)={g}_{N}\left[P\left(Y\le t\right)\right]$
These results are easily established as follows. $\left\{{V}_{N}>t\right\}=\underset{n=0}{\overset{\infty }{\bigvee }}\left\{N=n\right\}\left\{{V}_{n}>t\right\}$. By additivity and independence of $\left\{N, {V}_{n}\right\}$ for each n
$P\left({V}_{N}>t\right)=\sum _{n=0}^{\infty }P\left(N=n\right)P\left({V}_{n}>t\right)=\sum _{n=1}^{\infty }P\left(N=n\right){P}^{n}\left(Y>t\right),\quad\text{since } P\left({V}_{0}>t\right)=0$
If we add into the last sum the term $P\left(N=0\right){P}^{0}\left(Y>t\right)=P\left(N=0\right)$ then subtract it, we have
$P\left({V}_{N}>t\right)=\sum _{n=0}^{\infty }P\left(N=n\right){P}^{n}\left(Y>t\right)-P\left(N=0\right)={g}_{N}\left[P\left(Y>t\right)\right]-P\left(N=0\right)$
A similar argument holds for proposition (b). In this case, we do not have the extra term for $\left\{N=0\right\}$ , since $P\left({W}_{0}\le t\right)=1$ .
Special case. In some cases, $N=0$ does not correspond to an admissible outcome (see [link], below, on lowest bidder, and [link]). In that case
${F}_{V}\left(t\right)=\sum _{n=1}^{\infty }P\left({V}_{n}\le t\right)P\left(N=n\right)=\sum _{n=1}^{\infty }\left[1-{P}^{n}\left(Y>t\right)\right]P\left(N=n\right)=\sum _{n=1}^{\infty }P\left(N=n\right)-\sum _{n=1}^{\infty }{P}^{n}\left(Y>t\right)P\left(N=n\right)$
Add $P\left(N=0\right)={P}^{0}\left(Y>t\right)P\left(N=0\right)$ to each of the sums to get
${F}_{V}\left(t\right)=1-\sum _{n=0}^{\infty }{P}^{n}\left(Y>t\right)P\left(N=n\right)=1-{g}_{N}\left[P\left(Y>t\right)\right]$
$\square$
## Maximum service time
The number N of jobs coming into a service center in a week is a random quantity having a Poisson (20) distribution. Suppose the service times (in hours) for individual units are iid, with common distribution exponential (1/3). What is the probability the maximum service time for the units is no greater than 6, 9, 12, 15, 18 hours?
## Solution
$P\left({W}_{N}\le t\right)={g}_{N}\left[P\left(Y\le t\right)\right]={e}^{20\left[{F}_{Y}\left(t\right)-1\right]}=exp\left(-20{e}^{-t/3}\right)$
t = 6:3:18;
PW = exp(-20*exp(-t/3));
disp([t;PW]')
    6.0000    0.0668
    9.0000    0.3694
   12.0000    0.6933
   15.0000    0.8739
   18.0000    0.9516
| 2020-09-20 14:11:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 41, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7305136919021606, "perplexity": 2072.7223981590555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00727.warc.gz"}
https://www.mysciencework.com/publication/show/3492c98260407c8d53e2b2c643e827a9 | # Characterizing Lenses and Lensed Stars of High-Magnification Gravitational Microlensing Events With Lenses Passing Over Source Stars
Keywords
• Astrophysics - Solar And Stellar Astrophysics
## Abstract
We present the analysis of the light curves of 9 high-magnification gravitational microlensing events with lenses passing over source stars, including OGLE-2004-BLG-254, MOA-2007-BLG-176, MOA-2007-BLG-233/OGLE-2007-BLG-302, MOA-2009-BLG-174, MOA-2010-BLG-436, MOA-2011-BLG-093, MOA-2011-BLG-274, OGLE-2011-BLG-0990/MOA-2011-BLG-300, and OGLE-2011-BLG-1101/MOA-2011-BLG-325. For all events, we measure the linear limb-darkening coefficients of the surface brightness profile of source stars by measuring the deviation of the light curves near the peak affected by the finite-source effect. For 8 events, we measure the Einstein radii and the lens-source relative proper motions. Among them, 6 events (OGLE-2004-BLG-254, MOA-2007-BLG-176, MOA-2007-BLG-233/OGLE-2007-BLG-302, MOA-2011-BLG-093, MOA-2011-BLG-274, and OGLE-2011-BLG-0990/MOA-2011-BLG-300) are found to have Einstein radii less than 0.2 mas, making the lenses candidates of very low-mass stars or brown dwarfs. For MOA-2011-BLG-274, especially, the small Einstein radius of $\theta_{\rm E}\sim 0.09$ mas combined with the short time scale of $t_{\rm E}\sim 3.1$ days suggests the possibility that the lens is a free-floating planet. For MOA-2009-BLG-174, we measure the lens parallax and thus uniquely determine the physical parameters of the lens. We also find that the measured lens mass of $\sim 0.8\ M_\odot$ is consistent with that of a star blended with the source, suggesting that the blend is probably the lens. For the systematic integration of information that can be extracted from a sample of events with lenses passing over source stars, we also present the results of 8 other events that were previously analyzed.
| 2017-02-25 07:16:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3944273591041565, "perplexity": 3448.2669586014504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00590-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.usgs.gov/media/images/map-shows-lava-flows-erupted-1983-november-2002 | # Map shows lava flows erupted 1983-November, 2002
## Detailed Description
Map shows lava flows erupted during the 1983-present activity of Puu Oo and Kupaianaha (see large map). Lava from the Mother's Day flow (red flow on west side of flow field) reached the sea at West Highcastle early on July 19, at Wilipea early on July 21, and at Highcastle on August 8. From near the southwest base of Puu Oo, the Mother's Day flow passes along the west side of the flow field and into the forest, where it started a large wildfire in May that continued into late July. By June 10, the Mother's Day flow had reached the base of Paliuli, the steep slope and cliff below Pulama pali and just above the coastal flat. At the base of Paliuli, the Mother's Day flow abruptly spread laterally in a series of small budding flows to cover an area nearly 2 km wide, gradually moving seaward until the West Highcastle and Wilipea lobes finally reached the ocean and started building benches. Activity at West Highcastle ended in early August, but entry began soon thereafter at Highcastle, eventually burying tiny kipuka of the Chain of Craters Road. The Wilipea entry died away slowly and had ended by mid-August. Highcastle and neighboring Highcastle Stairs entries ended on about August 23. For a time there were no active entries. Then Wilipea was reactivated on September 3 and remains active as of November 25. West Highcastle likewise renewed its activity on September 16-17, died away during the night of September 18-19, and returned soon thereafter to continue to time of mapping. East arm of Mother's Day flow branched from Highcastle lobe in late October and sent three fingers into ocean at Highcastle on November 15, West Laeapuki on November 19, and Laeapuki on November 20. Of these, only Laeapuki (the eastern of the two entries labeled "Laeapuki" on map) was still active on November 25, but it had stopped by November 29.
## Details
Image Dimensions: 800 x 520
Date Taken:
Location Taken: US | 2020-05-30 19:36:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1771792769432068, "perplexity": 10465.417413034045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410284.51/warc/CC-MAIN-20200530165307-20200530195307-00410.warc.gz"} |
https://economics.stackexchange.com/tags/input-output/hot | # Tag Info
5
National Statistical Institutes do still compile IO tables (see http://ec.europa.eu/eurostat/web/esa-supply-use-input-tables for EU versions, although these are 5-yearly as well). They're generally more interested in producing the Supply and Use tables (which are then transformed into input-output tables) due to their usefulness in balancing the 3 measures ...
5
Quick answer, as I'm on my phone, but product by product input output tables can be obtained from Eurostat for EU countries, and are probably your best bet. Individual countries may have more detail from their own National Statistical Institutes' websites. The production functions in these are derived under some fairly strong assumptions, though, and you'...
4
Something you didn't mention is that $x = Ax + y$ means that to produce one unit of $x$, you use $A$ units of $x$; e.g., you need electricity to produce electricity. Under the condition that $1>|A|\geq 0$, $(1-A)^{-1} = \sum_{k=0}^\infty A^k$, which allows us to write $x = Ly = \left(\sum_{k=0}^\infty A^k \right)y$. Thus, for one unit of $y$, your (...
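To make the series identity above concrete, here is a small editorial sketch (not from the original answers) with a made-up two-sector coefficient matrix; it assumes NumPy, and it only works when the spectral radius of $A$ is below 1, which is what "productive" buys you.

```python
# Sketch: the Leontief inverse (I - A)^{-1} equals the Neumann series
# I + A + A^2 + ... for a hypothetical 2-sector coefficient matrix A.
import numpy as np

A = np.array([[0.2, 0.3],      # hypothetical input coefficients
              [0.1, 0.4]])
y = np.array([100.0, 50.0])    # hypothetical final demand

L = np.linalg.inv(np.eye(2) - A)
series = sum(np.linalg.matrix_power(A, k) for k in range(200))

print(L @ y)                   # gross output x = L y needed to deliver y
print(np.allclose(L, series))  # True: the truncated series matches L
```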
4
Personal consumption expenditure (PCE) is the primary measure of consumer spending on goods and services in the U.S. economy. It accounts for about two-thirds of domestic final spending, and thus it is the primary engine that drives future economic growth. PCE shows how much of the income earned by households is being spent on current consumption as ...
4
You should try to avoid mixing data sets unless you have a good way to bridge them. There are very precise methodologies used to construct each data set and mixing them can lead to nonobvious problems. You might consider looking at the BEA's IO tables at a more disaggregated level. For example, the Use table shows Personal Consumption Expenditures at a ...
4
Doing this more abstractly, let $Y_j\subseteq\mathbb{R}^n$ be a production set for $j=1,\ldots,m$ and let $$Y=Y_1+Y_2+\cdots+Y_m=\{y_1+y_2+\cdots+y_m \mid y_j\in Y_j,\ j=1,\ldots,m\}$$ be the aggregate production set. The standard result on when the aggregate production set is closed is the following: Theorem: Let $Y_j$ be closed and convex sets containing $0$ for ...
3
For a full overview of the conditions under which the sum of closed sets is closed, see this note by Kim Border. Recession cones: I'll be working with subsets of $\mathbb{R}^n$. Let's start with some definitions. Def: A set $C$ is convex if for $x, y \in C$ and $\alpha \in [0,1]$, $\alpha x + (1-\alpha) y \in C$. Def: A set $K$ is a cone if for $x \in K$ and ...
3
This is a specific terminology employed in the literature on the Leontief model. Productive here means that all sectors must be profitable (do not confuse it with the notions of productivity used elsewhere in economics, where productivity generally refers to outputs over inputs). Profitability requires that: $$X-CX>0$$ Actually, in order to arrive at sensible ...
3
Short answer: the Use and Supply tables do not have industries on both rows and columns. For example, as shown in the screenshot below, the Supply table has commodities on the rows and the industries are the columns. Perhaps the confusion is coming from the fact that the commodities and the industries use the same encoding. For example, as you point out, why ...
2
I am not sure exactly what you divided by what, but suppose your input-output table looks like this: $$A = \left[ \begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]$$ If you proceed to divide the first column by $\sum\limits_i a_{i,1}$, the second by $\sum\limits_i ...
1
The series $I + A + A^2 + A^3 + \cdots + A^N$ converges to the Leontief inverse $(I-A)^{-1}$ as $N$ approaches infinity. In this format, $I$ can be thought of as initial demand for a given product, $A$ represents the first-tier inputs in the supply chain required to produce $I$, and $A^2$ represents the second-tier inputs in the supply chain needed to produce the first ...
1
What intuition can the mathematical concept of an inverse (function, matrix) have? In a single-input monotonic production function $F(x) = q \implies x= F^{-1}(q)$ the operator $F^{-1}$ is the transformation mechanism that translates output to required input. A "change of units" calculator if you wish. Isn't this what $(I-A)^{-1}$ does in the case of the ...
1
The exports and imports in an input-output analysis should correspond to imports and exports as components of GDP so you can use those or some measures derived from them. For example, trade as a $\%$ of GDP is the sum of exports plus imports over GDP (i.e. $\frac{E+M}{Y}$, where $E$ is export, $M$ import and $Y$ GDP/output). Hence to calculate any comparable ...
1
Open question: how would you best make such an estimate? Normally, if you did not care about closing the economy, such an estimate would simply be done by calculating real wages per hour; then, based on prevailing prices of the necessities, you could calculate how many hours the average worker would need to work to afford them. However, the fact that you ...
1
I'd propose you follow these steps: set up the cost-minimization problem (i.e., for a given output quantity $y$, minimize costs): \begin{align} \min_{H,L,K}& \quad sH + wL + rK \tag{1} \label{1}\\ \text{such that} &\quad \min\{H,L\} + \min\{H, K\}\geq y \tag{2} \label{2} \end{align} In principle you have 3 cases, depending on the prices of the factors $(s,...
1
Start with definitions: Production (possibilities) set: $Y$, which you know is convex. Input requirement set: $V(y)=\{\mathbf{x}:(y,-\mathbf{x})\in Y\}$. On page 7 you can see $\mathbf{y}\in Y$ and $\mathbf{y'} \in Y$, which then implies $t\mathbf{y}+(1-t)\mathbf{y'} \in Y$. Hint 1: What does it mean that $Y$ is a convex set? Okay, but what does that ...
1
IOTs are usually derived from Supply and Use tables, which have an industry by commodity format. If you used these to derive your IOT, then you can use those to build your SAM, though note you need both the Supply AND Use tables for this. You end up with something that looks roughly like what you see on slide 7 here - with both commodity and activity (...
1
Quasi-fixed labour costs are typically those associated with the number of workers rather than the number of hours they work, so things like recruitment costs and training costs. Commonly they are seen as fixed costs in the short run, but marginal costs in the long run. Other non-labour costs which have the same short run / long run distinction can also be ...
1
The key to the answer is good data on capital. There is a project (KLEMS) which is computing harmonised (i.e. comparable) information on capital, labour, energy, etc. for many countries. At the moment it has information mainly on developed countries, but data for more developing countries are coming up. For example, this is a calculation of the capital-...
1
Your conjecture seems unlikely. For example, the unit matrix would show an economy where no new goods can be produced (it takes 1 unit of something to make 1 unit of that same thing), yet the determinant of this matrix is 1. If we were to multiply the $3 \times 3$ unit matrix by, say, 10, the new determinant would be 1000. But the economy did not get better, ...
1
All right kids. You gave me the answer when Dismalscience wrote "this is still not a problem as long as the value of the good produced differs from the sum of the value of its inputs" ... actually, one of the branches had a complete set of zeros in the input table except for one input and a zero in terms of production ... and another one had a complete set ...
1
I didn't divide each column by the sum of the column. This would make no sense, since the goal is to produce a technical coefficient matrix that links each input (row) needed by the industry (column) in order to produce its output. I divided each element of the input-output table by the total output of the branch, according to the method presented in ...
| 2022-01-27 09:39:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.95132976770401, "perplexity": 807.1052018823061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00631.warc.gz"}
https://ask.sagemath.org/answers/15330/revisions/ | Did you try removing your ./sage/ directory? Especially your ./sage/init.sage file might be causing trouble if nonempty. (Of course, removing this would remove your notebook! But you could at least try renaming .sage/ to .sage-old/ temporarily; Sage would then create a new one and perhaps the problem would go away, and then you could do further debugging.)
Did you try removing your ./sage/.sage/ directory? Especially your ./sage/init.sage file might be causing trouble if nonempty. (Of course, removing this would remove your notebook! But you could at least try renaming .sage/ to .sage-old/ temporarily; Sage would then create a new one and perhaps the problem would go away, and then you could do further debugging.) | 2021-07-25 10:40:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3752559423446655, "perplexity": 2575.89948258754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00148.warc.gz"} |
http://mathhelpforum.com/statistics/184423-probability-question-print.html | # Probability question :)
• Jul 11th 2011, 02:24 PM
somster100
Probability question :)
Just a question which I would like someone to confirm for me
for #a. I got 1/8 and #b. is larger (but I have no explanation)
hopefully someone can kindly help me out :) <3
4) Your new neighbour moved in recently. The family consisted of the mom, the dad, and 3 kids. You knew there were 3 kids because you could always hear 3 kids’ voices through the building’s thin wall, but you didn’t know their gender because all 3 kids were still very young, in elementary school. You’ve seen the mom and dad. They look very much alike! So you expect the kids to look relatively alike as well.
a) Out of curiosity one day you decided to see how many boys and girls (not counting the parents) were in the family, so you went to knock on their door. And 1 boy came out. What is the probability that all 3 kids are boys given that 1 came out?
b) Of course you still didn’t have enough information to decide on all the 3’s gender, so you found some other excuse to knock on their door. A boy came out, but you couldn’t be sure if it was the same as the previous one or a different boy. Is the “probability that all 3 kids are boys” larger or smaller than part a)’s answer? Please explain your reasoning in a few sentences.
• Jul 12th 2011, 03:03 PM
Nemesis
Re: Probability question :)
Quote:
Originally Posted by somster100
Just a question which I would like someone to confirm for me
for #a. I got 1/8 and #b. is larger (but I have no explanation)
hopefully someone can kindly help me out :) <3
4) Your new neighbour moved in recently. The family consisted of the mom, the dad, and 3 kids. You knew there were 3 kids because you could always hear 3 kids’ voices through the building’s thin wall, but you didn’t know their gender because all 3 kids were still very young, in elementary school. You’ve seen the mom and dad. They look very much alike! So you expect the kids to look relatively alike as well.
a) Out of curiosity one day you decided to see how many boys and girls (not counting the parents) were in the family, so you went to knock on their door. And 1 boy came out. What is the probability that all 3 kids are boys given that 1 came out?
As you already know the first child's gender, the options left to you for the other two kids are B-B, B-G, G-B, G-G, with B = boy, G = girl. Hence it is 1 out of 4, or probability = $0.25$
Nemesis
b) Of course you still didn’t have enough information to decide on all the 3’s gender, so you found some other excuse to knock on their door. A boy came out, but you couldn’t be sure if it was the same as the previous one or a different boy. Is the “probability that all 3 kids are boys” larger or smaller than part a)’s answer? Please explain your reasoning in a few sentences.
• Jul 12th 2011, 03:55 PM
Re: Probability question :)
Quote:
Originally Posted by somster100
Just a question which I would like someone to confirm for me
for #a. I got 1/8 and #b. is larger (but I have no explanation)
hopefully someone can kindly help me out :) <3
4) Your new neighbour moved in recently. The family consisted of the mom, the dad, and 3 kids. You knew there were 3 kids because you could always hear 3 kids’ voices through the building’s thin wall, but you didn’t know their gender because all 3 kids were still very young, in elementary school. You’ve seen the mom and dad. They look very much alike! So you expect the kids to look relatively alike as well.
a) Out of curiosity one day you decided to see how many boys and girls (not counting the parents) were in the family, so you went to knock on their door. And 1 boy came out. What is the probability that all 3 kids are boys given that 1 came out?
In birth order, the children may be born in the following arrangements
BBB
BBG
BGB
GBB
BGG
GBG
GGB
GGG
7 of these birth orders contain boys
and in only 1 of these do we have 3 boys
So the probability of the 3 children being boys if 1 is a boy is 1/7. | 2017-11-18 02:56:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3624657690525055, "perplexity": 1074.7663321494977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804518.38/warc/CC-MAIN-20171118021803-20171118041803-00493.warc.gz"} |
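Both figures in this thread can be checked by brute force. The sketch below (my own, not from the thread) enumerates the 8 equally likely gender sequences: conditioning on "at least one boy" reproduces the 1/7 just derived, while conditioning on a particular child (say, the one who answered the door) being a boy reproduces Nemesis's 1/4.

```python
# Sketch: enumerate all 2^3 gender sequences and condition two ways.
from itertools import product
from fractions import Fraction

families = list(product("BG", repeat=3))
all_boys = ("B", "B", "B")

at_least_one_boy = [f for f in families if "B" in f]
specific_child_boy = [f for f in families if f[0] == "B"]

print(Fraction(at_least_one_boy.count(all_boys), len(at_least_one_boy)))     # 1/7
print(Fraction(specific_child_boy.count(all_boys), len(specific_child_boy))) # 1/4
```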
http://math.stackexchange.com/questions/84426/ramanujans-tau-function-an-arithmetic-property | # Ramanujan's Tau function, an arithmetic property
The problem:
Let $\tau(n)$ denote the Ramanujan $\tau$-function and $\sigma(n)$ be the sum of the positive divisors of $n$. Show that $$(1-n)\tau(n) = 24\sum_{j=1}^{n-1} \sigma(j)\tau(n-j).$$
I'm afraid I don't even know how to start on this one.. My main problem is the $\tau$ function is defined as the coefficients of the $q$-series of the modular form $\Delta$, but that isn't (at least to my knowledge) very helpful in handling it. The properties that we have are all of a multiplicative nature (i.e., $\tau$ is multiplicative and $\tau(p^{a+2}) = \tau(p)\tau(p^{a+1}) - p^{11}\tau(p^a)$) which don't seem particularly well suited in a problem involving a sum.
Any hints, even just how to get started, would be greatly appreciated. Thanks!
Edit
As per Matt E's message, one can define, for any modular form $f$, an operator $\delta$ so that $\delta f = 12\theta f - 12E_2f$, where $\theta = q\frac{d}{dq}$. We wish to look at $\delta\Delta$. We have
\begin{align*} \delta\Delta &= 12\theta\Delta - 12E_2\Delta\newline &= 12\theta\left(\sum_{n=1}^\infty \tau(n)q^n\right) - 12E_2\Delta\newline &= 12q\left(\sum_{n=1}^\infty n\tau(n)q^{n-1}\right) - 12E_2\Delta\newline &= 12\left(\sum_{n=1}^\infty n\tau(n)q^n\right) - 12E_2\Delta. \end{align*} Now $\delta\Delta$ is a cusp form of weight 14, of which there are none that are non-zero. Thus we must have $$12\left(\sum_{n=1}^\infty n\tau(n)q^n\right) = 12E_2\Delta.$$ Thus we may simplify a little and plug in the series representation of $E_2$ to find $$\sum_{n=1}^\infty n\tau(n)q^n = \left(1 - 24\sum_{n=1}^\infty \sigma(n)q^n\right)\sum_{n=1}^\infty \tau(n)q^n.$$ Expanding the right hand side and simplifying some more gives $$\sum_{n=1}^\infty n\tau(n)q^n = \sum_{n=1}^\infty \tau(n)q^n - 24\sum_{n=1}^\infty \sigma(n)q^n\sum_{n=1}^\infty \tau(n)q^n,$$ so we have $$\sum_{n=1}^\infty n\tau(n)q^n - \sum_{n=1}^\infty \tau(n)q^n = - 24\sum_{n=1}^\infty \sigma(n)q^n\sum_{n=1}^\infty \tau(n)q^n.$$ The end of the tunnel is starting to appear, as the left hand side looks quite nice at this point. We settle both sides into a single sum, so that we may match the coefficients on the left and right hand sides easily. We have $$\sum_{n=1}^\infty (n - 1)\tau(n)q^n = - 24\sum_{n=1}^\infty \left(\sum_{j=1}^{n-1} \sigma(j)\tau(n-j)\right)q^n,$$ so multiplying both sides by $-1$ and matching up the coefficients gives $$(1 - n)\tau(n) = 24 \sum_{j=1}^{n-1} \sigma(j)\tau(n-j)$$ for all $n\geq 1$, as desired.
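The identity can also be sanity-checked numerically from the definition $\Delta = q\prod_{n\geq 1}(1-q^n)^{24}$. The following is an editorial sketch in plain Python, not part of the original post; the truncation order N = 24 is an arbitrary choice, and it builds the truncated $q$-expansion of $\Delta$ directly rather than using any modular forms library.

```python
# Sketch: compute tau(n) from Delta = q * prod (1 - q^n)^24 (truncated)
# and verify (1 - n) tau(n) = 24 * sum_{j=1}^{n-1} sigma(j) tau(n - j).
N = 24

def polymul_trunc(a, b, N):
    """Multiply integer coefficient lists, discarding degrees above N."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                out[i + j] += ai * bj
    return out

delta = [0, 1] + [0] * (N - 1)      # start from the leading factor q
for n in range(1, N + 1):
    factor = [1] + [0] * N
    factor[n] = -1                  # the factor (1 - q^n)
    for _ in range(24):
        delta = polymul_trunc(delta, factor, N)

tau = delta                         # tau[n] = coefficient of q^n in Delta

def sigma(n):                       # sum of the positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in range(1, N + 1):
    assert (1 - n) * tau[n] == 24 * sum(sigma(j) * tau[n - j] for j in range(1, n))
print("identity verified for n <=", N)
```

Here tau[1] = 1 and tau[2] = -24, matching the usual table of $\tau$ values.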
Apply the operator $\Delta$ discussed in this question to the modular form $\Delta$ (sorry for the conflict of notation vis-a-vis $\Delta$!). The result is a modular form of weight $14$. What can you say about it?
I believe I've come across some progress, which I've editted in. I'll continue to think about this, and thanks for the answer! I have not seen the $\Delta$ operator before, so definitely would not have thought of this. – Alex Nov 22 '11 at 7:47
@Alex: Dear Alex, Try differentiating $\Delta$ in its series form (so that $\tau(n)$ appears directly in your answer) rather than in its product form. Regards, – Matt E Nov 22 '11 at 7:53
Wow! That worked out so smoothly! Though it begs the question, what is the $\delta$ operator? Thanks again! – Alex Nov 22 '11 at 8:29
@Alex: Dear Alex, It's part of the general theory of Rankin--Cohen brackets. I think it's easiest to understand from an automorphic point of view, rather than the more classical modular forms point of view. But that's the topic of another question ... ! Regards, – Matt E Nov 22 '11 at 8:32
This result can be generalized: let $f$ be the following product: $$f(x)=\prod_{n=1}^{+\infty}{\left(1-x^n\right)^{a_n}}=\sum_{n=0}^{+\infty}{p(n)x^n}$$ By taking the logarithmic derivative of $f$ we obtain:
$$\frac{f'(x)}{f(x)}=-\sum_{n=1}^{+\infty}{na_n\frac{x^{n-1}}{1-x^n}}=-\frac{1}{x}\sum_{n=1}^{+\infty}{na_n\frac{x^n}{1-x^n}}$$ with $$\sum_{n=1}^{+\infty}{na_n\frac{x^n}{1-x^n}}=\sum_{n=1}^{+\infty}{na_n\left(\sum_{m=1}^{+\infty}{x^{nm}}\right)}=\sum_{n=1}^{+\infty}{\left(\sum_{d|n}{da_d}\right)x^n}$$ Thus: $$f'(x)=-\frac{f(x)}{x}\left(\sum_{n=1}^{+\infty}{\left(\sum_{d|n}{da_d}\right)x^n}\right)$$ and so $$\sum_{n=0}^{+\infty}{np(n)x^{n}}=-\left(\sum_{n=0}^{+\infty}{p(n)x^n}\right)\left(\sum_{n=1}^{+\infty}{\left(\sum_{d|n}{da_d}\right)x^n}\right)$$
A Cauchy product then gives the recursion formula $$p(0)=1$$ $$p(n)=\frac{-1}{n}\sum_{k=1}^n{p(n-k)\left(\sum_{d|k}{da_d}\right)}$$
(sorry for my approximate English... again)
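As a quick check of the recursion above (an editorial sketch, not part of the answer): taking $a_n = -1$ for every $n$ makes $f$ the partition generating function, the inner sum $\sum_{d|k} d\,a_d$ becomes $-\sigma(k)$, and the formula reduces to the classical $n\,p(n) = \sum_{k=1}^{n} \sigma(k)\,p(n-k)$.

```python
# Sketch: the recursion with a_n = -1 computes the partition numbers.
def sigma(k):
    return sum(d for d in range(1, k + 1) if k % d == 0)

def partitions(N):
    p = [1] + [0] * N
    for n in range(1, N + 1):
        p[n] = sum(sigma(k) * p[n - k] for k in range(1, n + 1)) // n
    return p

print(partitions(10))   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```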
- | 2014-08-31 07:04:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98729008436203, "perplexity": 309.66867887033624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500836108.12/warc/CC-MAIN-20140820021356-00270-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/6877/most-economical-method-to-convert-potassium-oxide-to-potassium-nitride | # Most economical method to convert potassium oxide to potassium nitride.
I have a compound, potassium oxide ($\ce{K2O}$), and I am trying to convert it into potassium nitride ($\ce{K3N}$). Here are a few possible reaction methods:
• Split potassium oxide into the constituent elements potassium and oxygen (Not very good as a lot of energy is required)
• $\ce{K2O + N2 -> K3N + O2}$ (however this won't happen(?) because nitrogen has a lower electronegativity value than oxygen does)
and so on.
Does anyone know how can I convert the oxide to the nitride, releasing oxygen in the process, by not having to use a lot of energy-requiring processes?
• The second reaction can happen: thermodynamics are altered by the concentration differential (lots of nitrogen and little K2O). It will need catalysing to break the nitrogen into atoms, which is a kinetics problem. Potassium nitride is not all that stable; just look at sodium nitride on Wikipedia. – user2617804 Nov 10 '13 at 10:12
• @user2617804 How would I best begin researching the process that you mentioned above? – user2117 Nov 10 '13 at 10:15
• @user2617804 - Potassium nitride $\ce{K3N} \ne$ potassium azide $\ce{KN3}$ – Ben Norris Nov 10 '13 at 20:03 | 2019-07-22 15:58:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3991183936595917, "perplexity": 2385.400263283724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00526.warc.gz"} |
https://allexamguide.com/grade10-sci-exemplar-ch4-mcqs/ | # NCERT Exemplar Problems and their Solutions
Reviewed By:
Krishna Kant Majee
M.Sc., B.Ed.
### Chapter 4-Carbon and Its Compounds-MCQs
Multiple Choice Questions (MCQs)
Q.1. Carbon exists in the atmosphere in the form of
(a) Only carbon monoxide
(b) Carbon monoxide in traces and carbon dioxide
(c) Only carbon dioxide
(d) Coal
Answer. (c)
Explanation: Carbon exists in air only in the form of carbon dioxide gas $(CO_{2})$.
Q.2. Which of the following statements are usually correct for carbon compounds? These
(i) Are good conductors of electricity.
(ii) Are poor conductors of electricity.
(iii) Have strong forces of attraction between their molecules.
(iv) Do not have strong forces of attraction between their molecules.
(a) (i) and (iii)
(b) (ii) and (iii)
(c) (i) and (iv)
(d) (ii) and (iv)
Answer. (d)
Explanation: Carbon, having four valence electrons, forms only covalent compounds, which exhibit weak intermolecular attractions and have no free electrons to carry electric current. Therefore, these are poor conductors of electricity.
Q.3. A molecule of ammonia $(NH_{3})$ has
(a) Only single bonds
(b) Only double bonds
(c) Only triple bonds
(d) Two double bonds and one single bond
Answer. (a)
Explanation: Ammonia is a covalent molecule in which the central nitrogen atom is bonded to three hydrogen atoms through single covalent bonds.
Q.4. Buckminsterfullerene is an allotropic form of
(a) Phosphorus
(b) Sulphur
(c) Carbon
(d) Tin
Answer. (c)
Explanation: Buckminsterfullerene is an allotrope of carbon with 60 carbon atoms joined together in a spherical shape.
Q.5. Which of the following are correct structural isomers of $C_{4}H_{10}$?
(a) (i) and (iii)
(b) (ii) and (iv)
(c) (i) and (ii)
(d) (iii) and (iv)
Answer. (a)
Explanation: Structural isomers have the same molecular formula but different arrangements of the carbon-atom chain in the molecule.
Q.6. In the following reaction, alkaline $KMnO_{4}$ acts as:
(a) Reducing agent
(b) Oxidising agent
(c) Catalyst
(d) Dehydrating agent
Answer. (b)
Explanation: $KMnO_{4}$ acts as an oxidising agent as it oxidises $CH_{3}CH_{2}OH$ to $CH_{3}COOH$ by the addition of an oxygen atom.
Q.7. Oils on treating with hydrogen in the presence of palladium or nickel catalyst form fats. This is an example of
(a) Addition reaction
(b) Substitution reaction
(c) Displacement reaction
(d) Oxidation reaction
Answer. (a)
Explanation: It is a hydrogenation reaction, i.e., the addition of hydrogen to the double bonds of the unsaturated compounds found in oil.
Q.8. In which of the following compounds is -OH the functional group?
(a) Butanone
(b) Butanol
(c) Butanoic acid
(d) Butanal
Answer. (b)
Explanation: Compounds with the -OH functional group end with the suffix -ol: $C_{4}H_{9}-OH$ or $CH_{3}-CH_{2}-CH_{2}-CH_{2}-OH$.
Q.9. The soap molecule has a
(a) Hydrophilic head and a hydrophobic tail
(b) Hydrophobic head and a hydrophilic tail
(c) Hydrophobic head and a hydrophobic tail
(d) Hydrophilic head and a hydrophilic tail
Answer. (a)
Explanation: A soap molecule contains a long hydrocarbon part and a small ionic part with a -COONa group. The hydrocarbon chain is hydrophobic (water repelling) whereas the ionic head is hydrophilic (water attracting).
Q.10. Which of the following is the correct representation of the electron dot structure of nitrogen?
Answer. (d)
Explanation: The nitrogen molecule is a covalent molecule in which two nitrogen atoms are bonded through a triple covalent bond, with one lone pair of electrons on each nitrogen atom.
Q.11. Structural formula of ethyne is
Answer. (a)
Explanation: The general formula for alkynes is $C_{n}H_{2n-2}$. There must be at least one triple bond between carbon atoms. With two carbon atoms, the only possible structure is $H-C \equiv C-H$.
Q.12. Identify the unsaturated compounds from the following.
(i) Propane
(ii) Propene
(iii) Propyne
(iv) Chloropropane
(a) (i) and (ii)
(b) (ii) and (iv)
(c) (iii) and (iv)
(d) (ii) and (iii)
Answer. (d)
Explanation: Alkenes and alkynes are unsaturated hydrocarbons, as they have double and triple covalent bonds, respectively, between carbon atoms.
Q.13. Chlorine reacts with saturated hydrocarbons at room temperature in the
(a) absence of sunlight
(b) presence of sunlight
(c) presence of water
(d) presence of hydrochloric acid
Answer. (b)
Explanation: Chlorine shows photochemical substitution reactions with saturated hydrocarbons, which occur in the presence of light.
Q.14. In the soap micelles
(a) The ionic end of soap is on the surface of the cluster while the carbon chain is in the interior of the cluster
(b) Ionic end of soap is in the interior of the cluster and the carbon chain is out of the cluster
(c) Both ionic end and carbon chain are in the interior of the cluster
(d) Both ionic end and carbon chain are on the exterior of the cluster
Answer. (a)
Explanation: A micelle is a spherical aggregation of soap molecules in water in which the hydrocarbon ends are directed towards the centre and the ionic ends are directed outwards.
Q.15. Pentane has the molecular formula $C_{5}H_{12}$. It has
(a) 5 covalent bonds
(b) 12 covalent bonds
(c) 16 covalent bonds
(d) 17 covalent bonds
Answer. (c)
Explanation: Pentane contains four C-C bonds and twelve C-H covalent bonds, i.e., 16 covalent bonds in total.
Q.16. Structural formula of benzene is:
Explanation: Benzene is the simplest aromatic compound, with six carbon atoms and six hydrogen atoms. There are three alternating pi bonds in the ring of carbon atoms.
Q.17. Ethanol reacts with sodium and forms two products. These are
(a) Sodium ethanoate and hydrogen
(b) Sodium ethanoate and oxygen
(c) Sodium ethoxide and hydrogen
(d) Sodium ethoxide and oxygen
Answer. (c)
Explanation: Ethanol $(C_{2}H_{5}OH)$ reacts with sodium to form sodium ethoxide $(C_{2}H_{5}ONa)$ along with the liberation of hydrogen gas.
$2C_{2}H_{5}OH + 2Na\rightarrow 2C_{2}H_{5}ONa + H_{2}\uparrow$
Q.18. The correct structural formula of butanoic acid is:
Explanation: Butanoic acid is a carboxylic acid with four carbon atoms and one -COOH group at the terminal position.
Q.19. Vinegar is a solution of:
(a) 50% – 60% acetic acid in alcohol
(b) 5% – 8% acetic acid in alcohol
(c) 5% – 8% acetic acid in water
(d) 50%- 60% acetic acid in water
Answer. (c)
Explanation: Vinegar is a 5%-8% aqueous solution of acetic acid.
Q.20. Mineral acids are stronger acids than carboxylic acids because
(i) Mineral acids are completely ionised.
(ii) Carboxylic acids are completely ionised.
(iii) Mineral acids are partially ionised.
(iv) Carboxylic acids are partially ionised.
(a) (i) and (iv)
(b) (ii) and (iii)
(c) (i) and (ii)
(d) (iii) and (iv)
Answer. (a)
Explanation: Mineral acids like nitric acid, sulphuric acid are stronger than carboxylic acid as they can ionize 100% in their solution.
Q.21. Carbon forms four covalent bonds by sharing its four valence electrons with four univalent atoms, e.g. hydrogen. After the formation of four bonds, carbon attains the electronic configuration of
(a) Helium
(b) Neon
(c) Argon
(d) Krypton
Answer. (b)
Explanation: The electronic configuration of carbon is 2, 4; hence it has 4 valence electrons. After forming four covalent bonds it gains a share in 4 more electrons, so the total becomes 10 electrons, which is the atomic number of neon.
Q.22. The correct electron dot structure of a water molecule is
Explanation: In a water molecule, the central oxygen atom carries two lone pairs of electrons and forms two single covalent bonds with two hydrogen atoms.
Q.23. Which of the following is not a straight chain hydrocarbon?
Explanation: A branched chain hydrocarbon must contain side chains bonded to the parent carbon chain.
Q.24. Which among the following are unsaturated hydrocarbons?
(a) (i) and (iii)
(b) (ii) and (iii)
(c) (ii) and (iv)
(d) (iii) and (iv)
Explanation: Unsaturated hydrocarbons have multiple covalent bonds (double or triple bonds), like alkenes and alkynes.
Q.25. Which of the following does not belong to the same homologous series?
(a) $CH_{4}$
(b) $C_{2}H_{6}$
(c) $C_{3}H_{8}$
(d) $C_{4}H_{8}$
Answer. (d)
Explanation: Successive members of the same homologous series differ by a $-CH_{2}$ unit. $CH_{4}$, $C_{2}H_{6}$ and $C_{3}H_{8}$ belong to the same series (the alkanes) and differ by a $-CH_{2}$ unit, but $C_{4}H_{8}$ does not belong to it.
Q.26. The name of the compound $CH_{3}-CH_{2}-CHO$ is:
(a) Propanal
(b) Propanone
(c) Ethanol
(d) Ethanal
Answer. (a)
Explanation: The compound contains three carbon atoms, so prop- is the root word, and the -CHO functional group gives the suffix -al. Hence the name is propan + al = propanal.
Q.27. The heteroatoms present in $CH_{3}-CH_{2}-O-CH_{2}-CH_{2}Cl$ are:
(i) Oxygen
(ii) Carbon
(iii) Hydrogen
(iv) Chlorine
(a) (i) and (ii)
(b) (ii) and (iii)
(c) (iii) and (iv)
(d) (i) and (iv) | 2021-08-01 18:12:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.486296147108078, "perplexity": 8412.281007313142}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00367.warc.gz"} |
http://isabelle.in.tum.de/website-Isabelle2013/dist/library/HOL/Groups.html | Theory Groups
theory Groups
imports Orderings
(* Title: HOL/Groups.thy Author: Gertrud Bauer, Steven Obua, Lawrence C Paulson, Markus Wenzel, Jeremy Avigad*)header {* Groups, also combined with orderings *}theory Groupsimports Orderingsbeginsubsection {* Fact collections *}ML {*structure Ac_Simps = Named_Thms( val name = @{binding ac_simps} val description = "associativity and commutativity simplification rules")*}setup Ac_Simps.setuptext{* The rewrites accumulated in @{text algebra_simps} deal with theclassical algebraic structures of groups, rings and family. They simplifyterms by multiplying everything out (in case of a ring) and bringing sums andproducts into a canonical form (by ordered rewriting). As a result it decidesgroup and ring equalities but also helps with inequalities.Of course it also works for fields, but it knows nothing about multiplicativeinverses or division. This is catered for by @{text field_simps}. *}ML {*structure Algebra_Simps = Named_Thms( val name = @{binding algebra_simps} val description = "algebra simplification rules")*}setup Algebra_Simps.setuptext{* Lemmas @{text field_simps} multiply with denominators in (in)equationsif they can be proved to be non-zero (for equations) or positive/negative(for inequations). Can be too aggressive and is therefore separate from themore benign @{text algebra_simps}. *}ML {*structure Field_Simps = Named_Thms( val name = @{binding field_simps} val description = "algebra simplification rules for fields")*}setup Field_Simps.setupsubsection {* Abstract structures *}text {* These locales provide basic structures for interpretation into bigger structures; extensions require careful thinking, otherwise undesired effects may occur due to interpretation.*}locale semigroup = fixes f :: "'a => 'a => 'a" (infixl "*" 70) assumes assoc [ac_simps]: "a * b * c = a * (b * c)"locale abel_semigroup = semigroup + assumes commute [ac_simps]: "a * b = b * a"beginlemma left_commute [ac_simps]: "b * (a * c) = a * (b * c)"proof - have "(b * a) * c = (a * b) * c" by (simp only: commute) then show ?thesis by (simp only: assoc)qedendlocale monoid = semigroup + fixes z :: 'a ("1") assumes left_neutral [simp]: "1 * a = a" assumes right_neutral [simp]: "a * 1 = a"locale comm_monoid = abel_semigroup + fixes z :: 'a ("1") assumes comm_neutral: "a * 1 = a"sublocale comm_monoid < monoid proofqed (simp_all add: commute comm_neutral)subsection {* Generic operations *}class zero = fixes zero :: 'a ("0")class one = fixes one :: 'a ("1")hide_const (open) zero onelemma Let_0 [simp]: "Let 0 f = f 0" unfolding Let_def ..lemma Let_1 [simp]: "Let 1 f = f 1" unfolding Let_def ..setup {* Reorient_Proc.add (fn Const(@{const_name Groups.zero}, _) => true | Const(@{const_name Groups.one}, _) => true | _ => false)*}simproc_setup reorient_zero ("0 = x") = Reorient_Proc.procsimproc_setup reorient_one ("1 = x") = Reorient_Proc.proctyped_print_translation (advanced) {* let fun tr' c = (c, fn ctxt => fn T => fn ts => if not (null ts) orelse T = dummyT orelse not (Printer.show_type_constraint ctxt) andalso can Term.dest_Type T then raise Match else Syntax.const @{syntax_const "_constrain"} $Syntax.const c$ Syntax_Phases.term_of_typ ctxt T); in map tr' [@{const_syntax Groups.one}, @{const_syntax Groups.zero}] end;*} -- {* show types that are presumably too general *}class plus = fixes plus :: "'a => 'a => 'a" (infixl "+" 65)class minus = fixes minus :: "'a => 'a => 'a" (infixl "-" 65)class uminus = fixes uminus :: "'a => 'a" ("- _" [81] 80)class times = fixes times :: "'a => 'a => 'a" (infixl "*" 70)subsection {* Semigroups 
and Monoids *}class semigroup_add = plus + assumes add_assoc [algebra_simps, field_simps]: "(a + b) + c = a + (b + c)"sublocale semigroup_add < add!: semigroup plus proofqed (fact add_assoc)class ab_semigroup_add = semigroup_add + assumes add_commute [algebra_simps, field_simps]: "a + b = b + a"sublocale ab_semigroup_add < add!: abel_semigroup plus proofqed (fact add_commute)context ab_semigroup_addbeginlemmas add_left_commute [algebra_simps, field_simps] = add.left_commutetheorems add_ac = add_assoc add_commute add_left_commuteendtheorems add_ac = add_assoc add_commute add_left_commuteclass semigroup_mult = times + assumes mult_assoc [algebra_simps, field_simps]: "(a * b) * c = a * (b * c)"sublocale semigroup_mult < mult!: semigroup times proofqed (fact mult_assoc)class ab_semigroup_mult = semigroup_mult + assumes mult_commute [algebra_simps, field_simps]: "a * b = b * a"sublocale ab_semigroup_mult < mult!: abel_semigroup times proofqed (fact mult_commute)context ab_semigroup_multbeginlemmas mult_left_commute [algebra_simps, field_simps] = mult.left_commutetheorems mult_ac = mult_assoc mult_commute mult_left_commuteendtheorems mult_ac = mult_assoc mult_commute mult_left_commuteclass monoid_add = zero + semigroup_add + assumes add_0_left: "0 + a = a" and add_0_right: "a + 0 = a"sublocale monoid_add < add!: monoid plus 0 proofqed (fact add_0_left add_0_right)+lemma zero_reorient: "0 = x <-> x = 0"by (rule eq_commute)class comm_monoid_add = zero + ab_semigroup_add + assumes add_0: "0 + a = a"sublocale comm_monoid_add < add!: comm_monoid plus 0 proofqed (insert add_0, simp add: ac_simps)subclass (in comm_monoid_add) monoid_add proofqed (fact add.left_neutral add.right_neutral)+class comm_monoid_diff = comm_monoid_add + minus + assumes diff_zero [simp]: "a - 0 = a" and zero_diff [simp]: "0 - a = 0" and add_diff_cancel_left [simp]: "(c + a) - (c + b) = a - b" and diff_diff_add: "a - b - c = a - (b + c)"beginlemma add_diff_cancel_right [simp]: "(a + c) - (b + c) = a - b" using add_diff_cancel_left [symmetric] by (simp add: add.commute)lemma add_diff_cancel_left' [simp]: "(b + a) - b = a"proof - have "(b + a) - (b + 0) = a" by (simp only: add_diff_cancel_left diff_zero) then show ?thesis by simpqedlemma add_diff_cancel_right' [simp]: "(a + b) - b = a" using add_diff_cancel_left' [symmetric] by (simp add: add.commute)lemma diff_add_zero [simp]: "a - (a + b) = 0"proof - have "a - (a + b) = (a + 0) - (a + b)" by simp also have "… = 0" by (simp only: add_diff_cancel_left zero_diff) finally show ?thesis .qedlemma diff_cancel [simp]: "a - a = 0"proof - have "(a + 0) - (a + 0) = 0" by (simp only: add_diff_cancel_left diff_zero) then show ?thesis by simpqedlemma diff_right_commute: "a - c - b = a - b - c" by (simp add: diff_diff_add add.commute)lemma add_implies_diff: assumes "c + b = a" shows "c = a - b"proof - from assms have "(b + c) - (b + 0) = a - b" by (simp add: add.commute) then show "c = a - b" by simpqedendclass monoid_mult = one + semigroup_mult + assumes mult_1_left: "1 * a = a" and mult_1_right: "a * 1 = a"sublocale monoid_mult < mult!: monoid times 1 proofqed (fact mult_1_left mult_1_right)+lemma one_reorient: "1 = x <-> x = 1"by (rule eq_commute)class comm_monoid_mult = one + ab_semigroup_mult + assumes mult_1: "1 * a = a"sublocale comm_monoid_mult < mult!: comm_monoid times 1 proofqed (insert mult_1, simp add: ac_simps)subclass (in comm_monoid_mult) monoid_mult proofqed (fact mult.left_neutral mult.right_neutral)+class cancel_semigroup_add = semigroup_add + assumes add_left_imp_eq: "a + b 
= a + c ==> b = c" assumes add_right_imp_eq: "b + a = c + a ==> b = c"beginlemma add_left_cancel [simp]: "a + b = a + c <-> b = c"by (blast dest: add_left_imp_eq)lemma add_right_cancel [simp]: "b + a = c + a <-> b = c"by (blast dest: add_right_imp_eq)endclass cancel_ab_semigroup_add = ab_semigroup_add + assumes add_imp_eq: "a + b = a + c ==> b = c"beginsubclass cancel_semigroup_addproof fix a b c :: 'a assume "a + b = a + c" then show "b = c" by (rule add_imp_eq)next fix a b c :: 'a assume "b + a = c + a" then have "a + b = a + c" by (simp only: add_commute) then show "b = c" by (rule add_imp_eq)qedendclass cancel_comm_monoid_add = cancel_ab_semigroup_add + comm_monoid_addsubsection {* Groups *}class group_add = minus + uminus + monoid_add + assumes left_minus [simp]: "- a + a = 0" assumes diff_minus: "a - b = a + (- b)"beginlemma minus_unique: assumes "a + b = 0" shows "- a = b"proof - have "- a = - a + (a + b)" using assms by simp also have "… = b" by (simp add: add_assoc [symmetric]) finally show ?thesis .qedlemmas equals_zero_I = minus_unique (* legacy name *)lemma minus_zero [simp]: "- 0 = 0"proof - have "0 + 0 = 0" by (rule add_0_right) thus "- 0 = 0" by (rule minus_unique)qedlemma minus_minus [simp]: "- (- a) = a"proof - have "- a + a = 0" by (rule left_minus) thus "- (- a) = a" by (rule minus_unique)qedlemma right_minus [simp]: "a + - a = 0"proof - have "a + - a = - (- a) + - a" by simp also have "… = 0" by (rule left_minus) finally show ?thesis .qedsubclass cancel_semigroup_addproof fix a b c :: 'a assume "a + b = a + c" then have "- a + a + b = - a + a + c" unfolding add_assoc by simp then show "b = c" by simpnext fix a b c :: 'a assume "b + a = c + a" then have "b + a + - a = c + a + - a" by simp then show "b = c" unfolding add_assoc by simpqedlemma minus_add_cancel: "- a + (a + b) = b"by (simp add: add_assoc [symmetric])lemma add_minus_cancel: "a + (- a + b) = b"by (simp add: add_assoc [symmetric])lemma minus_add: "- (a + b) = - b + - a"proof - have "(a + b) + (- b + - a) = 0" by (simp add: add_assoc add_minus_cancel) thus "- (a + b) = - b + - a" by (rule minus_unique)qedlemma right_minus_eq: "a - b = 0 <-> a = b"proof assume "a - b = 0" have "a = (a - b) + b" by (simp add:diff_minus add_assoc) also have "… = b" using a - b = 0 by simp finally show "a = b" .next assume "a = b" thus "a - b = 0" by (simp add: diff_minus)qedlemma diff_self [simp]: "a - a = 0"by (simp add: diff_minus)lemma diff_0 [simp]: "0 - a = - a"by (simp add: diff_minus)lemma diff_0_right [simp]: "a - 0 = a" by (simp add: diff_minus)lemma diff_minus_eq_add [simp]: "a - - b = a + b"by (simp add: diff_minus)lemma neg_equal_iff_equal [simp]: "- a = - b <-> a = b" proof assume "- a = - b" hence "- (- a) = - (- b)" by simp thus "a = b" by simpnext assume "a = b" thus "- a = - b" by simpqedlemma neg_equal_0_iff_equal [simp]: "- a = 0 <-> a = 0"by (subst neg_equal_iff_equal [symmetric], simp)lemma neg_0_equal_iff_equal [simp]: "0 = - a <-> 0 = a"by (subst neg_equal_iff_equal [symmetric], simp)text{*The next two equations can make the simplifier loop!*}lemma equation_minus_iff: "a = - b <-> b = - a"proof - have "- (- a) = - b <-> - a = b" by (rule neg_equal_iff_equal) thus ?thesis by (simp add: eq_commute)qedlemma minus_equation_iff: "- a = b <-> - b = a"proof - have "- a = - (- b) <-> a = -b" by (rule neg_equal_iff_equal) thus ?thesis by (simp add: eq_commute)qedlemma diff_add_cancel: "a - b + b = a"by (simp add: diff_minus add_assoc)lemma add_diff_cancel: "a + b - b = a"by (simp add: diff_minus add_assoc)declare 
diff_minus[symmetric, algebra_simps, field_simps]lemma eq_neg_iff_add_eq_0: "a = - b <-> a + b = 0"proof assume "a = - b" then show "a + b = 0" by simpnext assume "a + b = 0" moreover have "a + (b + - b) = (a + b) + - b" by (simp only: add_assoc) ultimately show "a = - b" by simpqedlemma add_eq_0_iff: "x + y = 0 <-> y = - x" unfolding eq_neg_iff_add_eq_0 [symmetric] by (rule equation_minus_iff)lemma minus_diff_eq [simp]: "- (a - b) = b - a" by (simp add: diff_minus minus_add)lemma add_diff_eq[algebra_simps, field_simps]: "a + (b - c) = (a + b) - c" by (simp add: diff_minus add_assoc)lemma diff_eq_eq[algebra_simps, field_simps]: "a - b = c <-> a = c + b" by (auto simp add: diff_minus add_assoc)lemma eq_diff_eq[algebra_simps, field_simps]: "a = c - b <-> a + b = c" by (auto simp add: diff_minus add_assoc)lemma diff_diff_eq2[algebra_simps, field_simps]: "a - (b - c) = (a + c) - b" by (simp add: diff_minus minus_add add_assoc)lemma eq_iff_diff_eq_0: "a = b <-> a - b = 0" by (fact right_minus_eq [symmetric])lemma diff_eq_diff_eq: "a - b = c - d ==> a = b <-> c = d" by (simp add: eq_iff_diff_eq_0 [of a b] eq_iff_diff_eq_0 [of c d])endclass ab_group_add = minus + uminus + comm_monoid_add + assumes ab_left_minus: "- a + a = 0" assumes ab_diff_minus: "a - b = a + (- b)"beginsubclass group_add proof qed (simp_all add: ab_left_minus ab_diff_minus)subclass cancel_comm_monoid_addproof fix a b c :: 'a assume "a + b = a + c" then have "- a + a + b = - a + a + c" unfolding add_assoc by simp then show "b = c" by simpqedlemma uminus_add_conv_diff[algebra_simps, field_simps]: "- a + b = b - a"by (simp add:diff_minus add_commute)lemma minus_add_distrib [simp]: "- (a + b) = - a + - b"by (rule minus_unique) (simp add: add_ac)lemma diff_add_eq[algebra_simps, field_simps]: "(a - b) + c = (a + c) - b"by (simp add: diff_minus add_ac)lemma diff_diff_eq[algebra_simps, field_simps]: "(a - b) - c = a - (b + c)"by (simp add: diff_minus add_ac)(* FIXME: duplicates right_minus_eq from class group_add *)(* but only this one is declared as a simp rule. *)lemma diff_eq_0_iff_eq [simp, no_atp]: "a - b = 0 <-> a = b" by (rule right_minus_eq)lemma add_diff_cancel_left: "(c + a) - (c + b) = a - b" by (simp add: diff_minus add_ac)endsubsection {* (Partially) Ordered Groups *} text {* The theory of partially ordered groups is taken from the books: \begin{itemize} \item \emph{Lattice Theory} by Garret Birkhoff, American Mathematical Society 1979 \item \emph{Partially Ordered Algebraic Systems}, Pergamon Press 1963 \end{itemize} Most of the used notions can also be looked up in \begin{itemize} \item \url{http://www.mathworld.com} by Eric Weisstein et. al. \item \emph{Algebra I} by van der Waerden, Springer. 
\end{itemize}*}class ordered_ab_semigroup_add = order + ab_semigroup_add + assumes add_left_mono: "a ≤ b ==> c + a ≤ c + b"beginlemma add_right_mono: "a ≤ b ==> a + c ≤ b + c"by (simp add: add_commute [of _ c] add_left_mono)text {* non-strict, in both arguments *}lemma add_mono: "a ≤ b ==> c ≤ d ==> a + c ≤ b + d" apply (erule add_right_mono [THEN order_trans]) apply (simp add: add_commute add_left_mono) doneendclass ordered_cancel_ab_semigroup_add = ordered_ab_semigroup_add + cancel_ab_semigroup_addbeginlemma add_strict_left_mono: "a < b ==> c + a < c + b"by (auto simp add: less_le add_left_mono)lemma add_strict_right_mono: "a < b ==> a + c < b + c"by (simp add: add_commute [of _ c] add_strict_left_mono)text{*Strict monotonicity in both arguments*}lemma add_strict_mono: "a < b ==> c < d ==> a + c < b + d"apply (erule add_strict_right_mono [THEN less_trans])apply (erule add_strict_left_mono)donelemma add_less_le_mono: "a < b ==> c ≤ d ==> a + c < b + d"apply (erule add_strict_right_mono [THEN less_le_trans])apply (erule add_left_mono)donelemma add_le_less_mono: "a ≤ b ==> c < d ==> a + c < b + d"apply (erule add_right_mono [THEN le_less_trans])apply (erule add_strict_left_mono) doneendclass ordered_ab_semigroup_add_imp_le = ordered_cancel_ab_semigroup_add + assumes add_le_imp_le_left: "c + a ≤ c + b ==> a ≤ b"beginlemma add_less_imp_less_left: assumes less: "c + a < c + b" shows "a < b"proof - from less have le: "c + a <= c + b" by (simp add: order_le_less) have "a <= b" apply (insert le) apply (drule add_le_imp_le_left) by (insert le, drule add_le_imp_le_left, assumption) moreover have "a ≠ b" proof (rule ccontr) assume "~(a ≠ b)" then have "a = b" by simp then have "c + a = c + b" by simp with less show "False"by simp qed ultimately show "a < b" by (simp add: order_le_less)qedlemma add_less_imp_less_right: "a + c < b + c ==> a < b"apply (rule add_less_imp_less_left [of c])apply (simp add: add_commute) donelemma add_less_cancel_left [simp]: "c + a < c + b <-> a < b"by (blast intro: add_less_imp_less_left add_strict_left_mono) lemma add_less_cancel_right [simp]: "a + c < b + c <-> a < b"by (blast intro: add_less_imp_less_right add_strict_right_mono)lemma add_le_cancel_left [simp]: "c + a ≤ c + b <-> a ≤ b"by (auto, drule add_le_imp_le_left, simp_all add: add_left_mono) lemma add_le_cancel_right [simp]: "a + c ≤ b + c <-> a ≤ b"by (simp add: add_commute [of a c] add_commute [of b c])lemma add_le_imp_le_right: "a + c ≤ b + c ==> a ≤ b"by simplemma max_add_distrib_left: "max x y + z = max (x + z) (y + z)" unfolding max_def by autolemma min_add_distrib_left: "min x y + z = min (x + z) (y + z)" unfolding min_def by autolemma max_add_distrib_right: "x + max y z = max (x + y) (x + z)" unfolding max_def by autolemma min_add_distrib_right: "x + min y z = min (x + y) (x + z)" unfolding min_def by autoendsubsection {* Support for reasoning about signs *}class ordered_comm_monoid_add = ordered_cancel_ab_semigroup_add + comm_monoid_addbeginlemma add_pos_nonneg: assumes "0 < a" and "0 ≤ b" shows "0 < a + b"proof - have "0 + 0 < a + b" using assms by (rule add_less_le_mono) then show ?thesis by simpqedlemma add_pos_pos: assumes "0 < a" and "0 < b" shows "0 < a + b"by (rule add_pos_nonneg) (insert assms, auto)lemma add_nonneg_pos: assumes "0 ≤ a" and "0 < b" shows "0 < a + b"proof - have "0 + 0 < a + b" using assms by (rule add_le_less_mono) then show ?thesis by simpqedlemma add_nonneg_nonneg [simp]: assumes "0 ≤ a" and "0 ≤ b" shows "0 ≤ a + b"proof - have "0 + 0 ≤ a + b" using assms by (rule add_mono) 
then show ?thesis by simpqedlemma add_neg_nonpos: assumes "a < 0" and "b ≤ 0" shows "a + b < 0"proof - have "a + b < 0 + 0" using assms by (rule add_less_le_mono) then show ?thesis by simpqedlemma add_neg_neg: assumes "a < 0" and "b < 0" shows "a + b < 0"by (rule add_neg_nonpos) (insert assms, auto)lemma add_nonpos_neg: assumes "a ≤ 0" and "b < 0" shows "a + b < 0"proof - have "a + b < 0 + 0" using assms by (rule add_le_less_mono) then show ?thesis by simpqedlemma add_nonpos_nonpos: assumes "a ≤ 0" and "b ≤ 0" shows "a + b ≤ 0"proof - have "a + b ≤ 0 + 0" using assms by (rule add_mono) then show ?thesis by simpqedlemmas add_sign_intros = add_pos_nonneg add_pos_pos add_nonneg_pos add_nonneg_nonneg add_neg_nonpos add_neg_neg add_nonpos_neg add_nonpos_nonposlemma add_nonneg_eq_0_iff: assumes x: "0 ≤ x" and y: "0 ≤ y" shows "x + y = 0 <-> x = 0 ∧ y = 0"proof (intro iffI conjI) have "x = x + 0" by simp also have "x + 0 ≤ x + y" using y by (rule add_left_mono) also assume "x + y = 0" also have "0 ≤ x" using x . finally show "x = 0" .next have "y = 0 + y" by simp also have "0 + y ≤ x + y" using x by (rule add_right_mono) also assume "x + y = 0" also have "0 ≤ y" using y . finally show "y = 0" .next assume "x = 0 ∧ y = 0" then show "x + y = 0" by simpqedendclass ordered_ab_group_add = ab_group_add + ordered_ab_semigroup_addbeginsubclass ordered_cancel_ab_semigroup_add ..subclass ordered_ab_semigroup_add_imp_leproof fix a b c :: 'a assume "c + a ≤ c + b" hence "(-c) + (c + a) ≤ (-c) + (c + b)" by (rule add_left_mono) hence "((-c) + c) + a ≤ ((-c) + c) + b" by (simp only: add_assoc) thus "a ≤ b" by simpqedsubclass ordered_comm_monoid_add ..lemma max_diff_distrib_left: shows "max x y - z = max (x - z) (y - z)"by (simp add: diff_minus, rule max_add_distrib_left) lemma min_diff_distrib_left: shows "min x y - z = min (x - z) (y - z)"by (simp add: diff_minus, rule min_add_distrib_left) lemma le_imp_neg_le: assumes "a ≤ b" shows "-b ≤ -a"proof - have "-a+a ≤ -a+b" using a ≤ b by (rule add_left_mono) hence "0 ≤ -a+b" by simp hence "0 + (-b) ≤ (-a + b) + (-b)" by (rule add_right_mono) thus ?thesis by (simp add: add_assoc)qedlemma neg_le_iff_le [simp]: "- b ≤ - a <-> a ≤ b"proof assume "- b ≤ - a" hence "- (- a) ≤ - (- b)" by (rule le_imp_neg_le) thus "a≤b" by simpnext assume "a≤b" thus "-b ≤ -a" by (rule le_imp_neg_le)qedlemma neg_le_0_iff_le [simp]: "- a ≤ 0 <-> 0 ≤ a"by (subst neg_le_iff_le [symmetric], simp)lemma neg_0_le_iff_le [simp]: "0 ≤ - a <-> a ≤ 0"by (subst neg_le_iff_le [symmetric], simp)lemma neg_less_iff_less [simp]: "- b < - a <-> a < b"by (force simp add: less_le) lemma neg_less_0_iff_less [simp]: "- a < 0 <-> 0 < a"by (subst neg_less_iff_less [symmetric], simp)lemma neg_0_less_iff_less [simp]: "0 < - a <-> a < 0"by (subst neg_less_iff_less [symmetric], simp)text{*The next several equations can make the simplifier loop!*}lemma less_minus_iff: "a < - b <-> b < - a"proof - have "(- (-a) < - b) = (b < - a)" by (rule neg_less_iff_less) thus ?thesis by simpqedlemma minus_less_iff: "- a < b <-> - b < a"proof - have "(- a < - (-b)) = (- b < a)" by (rule neg_less_iff_less) thus ?thesis by simpqedlemma le_minus_iff: "a ≤ - b <-> b ≤ - a"proof - have mm: "!! a (b::'a). 
(-(-a)) < -b ==> -(-b) < -a" by (simp only: minus_less_iff) have "(- (- a) <= -b) = (b <= - a)" apply (auto simp only: le_less) apply (drule mm) apply (simp_all) apply (drule mm[simplified], assumption) done then show ?thesis by simpqedlemma minus_le_iff: "- a ≤ b <-> - b ≤ a"by (auto simp add: le_less minus_less_iff)lemma diff_less_0_iff_less [simp, no_atp]: "a - b < 0 <-> a < b"proof - have "a - b < 0 <-> a + (- b) < b + (- b)" by (simp add: diff_minus) also have "... <-> a < b" by (simp only: add_less_cancel_right) finally show ?thesis .qedlemmas less_iff_diff_less_0 = diff_less_0_iff_less [symmetric]lemma diff_less_eq[algebra_simps, field_simps]: "a - b < c <-> a < c + b"apply (subst less_iff_diff_less_0 [of a])apply (rule less_iff_diff_less_0 [of _ c, THEN ssubst])apply (simp add: diff_minus add_ac)donelemma less_diff_eq[algebra_simps, field_simps]: "a < c - b <-> a + b < c"apply (subst less_iff_diff_less_0 [of "a + b"])apply (subst less_iff_diff_less_0 [of a])apply (simp add: diff_minus add_ac)donelemma diff_le_eq[algebra_simps, field_simps]: "a - b ≤ c <-> a ≤ c + b"by (auto simp add: le_less diff_less_eq diff_add_cancel add_diff_cancel)lemma le_diff_eq[algebra_simps, field_simps]: "a ≤ c - b <-> a + b ≤ c"by (auto simp add: le_less less_diff_eq diff_add_cancel add_diff_cancel)lemma diff_le_0_iff_le [simp, no_atp]: "a - b ≤ 0 <-> a ≤ b" by (simp add: algebra_simps)lemmas le_iff_diff_le_0 = diff_le_0_iff_le [symmetric]lemma diff_eq_diff_less: "a - b = c - d ==> a < b <-> c < d" by (auto simp only: less_iff_diff_less_0 [of a b] less_iff_diff_less_0 [of c d])lemma diff_eq_diff_less_eq: "a - b = c - d ==> a ≤ b <-> c ≤ d" by (auto simp only: le_iff_diff_le_0 [of a b] le_iff_diff_le_0 [of c d])endML_file "Tools/group_cancel.ML"simproc_setup group_cancel_add ("a + b::'a::ab_group_add") = {* fn phi => fn ss => try Group_Cancel.cancel_add_conv *}simproc_setup group_cancel_diff ("a - b::'a::ab_group_add") = {* fn phi => fn ss => try Group_Cancel.cancel_diff_conv *}simproc_setup group_cancel_eq ("a = (b::'a::ab_group_add)") = {* fn phi => fn ss => try Group_Cancel.cancel_eq_conv *}simproc_setup group_cancel_le ("a ≤ (b::'a::ordered_ab_group_add)") = {* fn phi => fn ss => try Group_Cancel.cancel_le_conv *}simproc_setup group_cancel_less ("a < (b::'a::ordered_ab_group_add)") = {* fn phi => fn ss => try Group_Cancel.cancel_less_conv *}class linordered_ab_semigroup_add = linorder + ordered_ab_semigroup_addclass linordered_cancel_ab_semigroup_add = linorder + ordered_cancel_ab_semigroup_addbeginsubclass linordered_ab_semigroup_add ..subclass ordered_ab_semigroup_add_imp_leproof fix a b c :: 'a assume le: "c + a <= c + b" show "a <= b" proof (rule ccontr) assume w: "~ a ≤ b" hence "b <= a" by (simp add: linorder_not_le) hence le2: "c + b <= c + a" by (rule add_left_mono) have "a = b" apply (insert le) apply (insert le2) apply (drule antisym, simp_all) done with w show False by (simp add: linorder_not_le [symmetric]) qedqedendclass linordered_ab_group_add = linorder + ordered_ab_group_addbeginsubclass linordered_cancel_ab_semigroup_add ..lemma neg_less_eq_nonneg [simp]: "- a ≤ a <-> 0 ≤ a"proof assume A: "- a ≤ a" show "0 ≤ a" proof (rule classical) assume "¬ 0 ≤ a" then have "a < 0" by auto with A have "- a < 0" by (rule le_less_trans) then show ?thesis by auto qednext assume A: "0 ≤ a" show "- a ≤ a" proof (rule order_trans) show "- a ≤ 0" using A by (simp add: minus_le_iff) next show "0 ≤ a" using A . 
qedqedlemma neg_less_nonneg [simp]: "- a < a <-> 0 < a"proof assume A: "- a < a" show "0 < a" proof (rule classical) assume "¬ 0 < a" then have "a ≤ 0" by auto with A have "- a < 0" by (rule less_le_trans) then show ?thesis by auto qednext assume A: "0 < a" show "- a < a" proof (rule less_trans) show "- a < 0" using A by (simp add: minus_le_iff) next show "0 < a" using A . qedqedlemma less_eq_neg_nonpos [simp]: "a ≤ - a <-> a ≤ 0"proof assume A: "a ≤ - a" show "a ≤ 0" proof (rule classical) assume "¬ a ≤ 0" then have "0 < a" by auto then have "0 < - a" using A by (rule less_le_trans) then show ?thesis by auto qednext assume A: "a ≤ 0" show "a ≤ - a" proof (rule order_trans) show "0 ≤ - a" using A by (simp add: minus_le_iff) next show "a ≤ 0" using A . qedqedlemma equal_neg_zero [simp]: "a = - a <-> a = 0"proof assume "a = 0" then show "a = - a" by simpnext assume A: "a = - a" show "a = 0" proof (cases "0 ≤ a") case True with A have "0 ≤ - a" by auto with le_minus_iff have "a ≤ 0" by simp with True show ?thesis by (auto intro: order_trans) next case False then have B: "a ≤ 0" by auto with A have "- a ≤ 0" by auto with B show ?thesis by (auto intro: order_trans) qedqedlemma neg_equal_zero [simp]: "- a = a <-> a = 0" by (auto dest: sym)lemma double_zero [simp]: "a + a = 0 <-> a = 0"proof assume assm: "a + a = 0" then have a: "- a = a" by (rule minus_unique) then show "a = 0" by (simp only: neg_equal_zero)qed simplemma double_zero_sym [simp]: "0 = a + a <-> a = 0" by (rule, drule sym) simp_alllemma zero_less_double_add_iff_zero_less_single_add [simp]: "0 < a + a <-> 0 < a"proof assume "0 < a + a" then have "0 - a < a" by (simp only: diff_less_eq) then have "- a < a" by simp then show "0 < a" by (simp only: neg_less_nonneg)next assume "0 < a" with this have "0 + 0 < a + a" by (rule add_strict_mono) then show "0 < a + a" by simpqedlemma zero_le_double_add_iff_zero_le_single_add [simp]: "0 ≤ a + a <-> 0 ≤ a" by (auto simp add: le_less)lemma double_add_less_zero_iff_single_add_less_zero [simp]: "a + a < 0 <-> a < 0"proof - have "¬ a + a < 0 <-> ¬ a < 0" by (simp add: not_less) then show ?thesis by simpqedlemma double_add_le_zero_iff_single_add_le_zero [simp]: "a + a ≤ 0 <-> a ≤ 0" proof - have "¬ a + a ≤ 0 <-> ¬ a ≤ 0" by (simp add: not_le) then show ?thesis by simpqedlemma le_minus_self_iff: "a ≤ - a <-> a ≤ 0"proof - from add_le_cancel_left [of "- a" "a + a" 0] have "a ≤ - a <-> a + a ≤ 0" by (simp add: add_assoc [symmetric]) thus ?thesis by simpqedlemma minus_le_self_iff: "- a ≤ a <-> 0 ≤ a"proof - from add_le_cancel_left [of "- a" 0 "a + a"] have "- a ≤ a <-> 0 ≤ a + a" by (simp add: add_assoc [symmetric]) thus ?thesis by simpqedlemma minus_max_eq_min: "- max x y = min (-x) (-y)" by (auto simp add: max_def min_def)lemma minus_min_eq_max: "- min x y = max (-x) (-y)" by (auto simp add: max_def min_def)endcontext ordered_comm_monoid_addbeginlemma add_increasing: "0 ≤ a ==> b ≤ c ==> b ≤ a + c" by (insert add_mono [of 0 a b c], simp)lemma add_increasing2: "0 ≤ c ==> b ≤ a ==> b ≤ a + c" by (simp add: add_increasing add_commute [of a])lemma add_strict_increasing: "0 < a ==> b ≤ c ==> b < a + c" by (insert add_less_le_mono [of 0 a b c], simp)lemma add_strict_increasing2: "0 ≤ a ==> b < c ==> b < a + c" by (insert add_le_less_mono [of 0 a b c], simp)endclass abs = fixes abs :: "'a => 'a"beginnotation (xsymbols) abs ("¦_¦")notation (HTML output) abs ("¦_¦")endclass sgn = fixes sgn :: "'a => 'a"class abs_if = minus + uminus + ord + zero + abs + assumes abs_if: "¦a¦ = (if a < 0 then - a else a)"class 
sgn_if = minus + uminus + zero + one + ord + sgn + assumes sgn_if: "sgn x = (if x = 0 then 0 else if 0 < x then 1 else - 1)"beginlemma sgn0 [simp]: "sgn 0 = 0" by (simp add:sgn_if)endclass ordered_ab_group_add_abs = ordered_ab_group_add + abs + assumes abs_ge_zero [simp]: "¦a¦ ≥ 0" and abs_ge_self: "a ≤ ¦a¦" and abs_leI: "a ≤ b ==> - a ≤ b ==> ¦a¦ ≤ b" and abs_minus_cancel [simp]: "¦-a¦ = ¦a¦" and abs_triangle_ineq: "¦a + b¦ ≤ ¦a¦ + ¦b¦"beginlemma abs_minus_le_zero: "- ¦a¦ ≤ 0" unfolding neg_le_0_iff_le by simplemma abs_of_nonneg [simp]: assumes nonneg: "0 ≤ a" shows "¦a¦ = a"proof (rule antisym) from nonneg le_imp_neg_le have "- a ≤ 0" by simp from this nonneg have "- a ≤ a" by (rule order_trans) then show "¦a¦ ≤ a" by (auto intro: abs_leI)qed (rule abs_ge_self)lemma abs_idempotent [simp]: "¦¦a¦¦ = ¦a¦"by (rule antisym) (auto intro!: abs_ge_self abs_leI order_trans [of "- ¦a¦" 0 "¦a¦"])lemma abs_eq_0 [simp]: "¦a¦ = 0 <-> a = 0"proof - have "¦a¦ = 0 ==> a = 0" proof (rule antisym) assume zero: "¦a¦ = 0" with abs_ge_self show "a ≤ 0" by auto from zero have "¦-a¦ = 0" by simp with abs_ge_self [of "- a"] have "- a ≤ 0" by auto with neg_le_0_iff_le show "0 ≤ a" by auto qed then show ?thesis by autoqedlemma abs_zero [simp]: "¦0¦ = 0"by simplemma abs_0_eq [simp, no_atp]: "0 = ¦a¦ <-> a = 0"proof - have "0 = ¦a¦ <-> ¦a¦ = 0" by (simp only: eq_ac) thus ?thesis by simpqedlemma abs_le_zero_iff [simp]: "¦a¦ ≤ 0 <-> a = 0" proof assume "¦a¦ ≤ 0" then have "¦a¦ = 0" by (rule antisym) simp thus "a = 0" by simpnext assume "a = 0" thus "¦a¦ ≤ 0" by simpqedlemma zero_less_abs_iff [simp]: "0 < ¦a¦ <-> a ≠ 0"by (simp add: less_le)lemma abs_not_less_zero [simp]: "¬ ¦a¦ < 0"proof - have a: "!!x y. x ≤ y ==> ¬ y < x" by auto show ?thesis by (simp add: a)qedlemma abs_ge_minus_self: "- a ≤ ¦a¦"proof - have "- a ≤ ¦-a¦" by (rule abs_ge_self) then show ?thesis by simpqedlemma abs_minus_commute: "¦a - b¦ = ¦b - a¦"proof - have "¦a - b¦ = ¦- (a - b)¦" by (simp only: abs_minus_cancel) also have "... = ¦b - a¦" by simp finally show ?thesis .qedlemma abs_of_pos: "0 < a ==> ¦a¦ = a"by (rule abs_of_nonneg, rule less_imp_le)lemma abs_of_nonpos [simp]: assumes "a ≤ 0" shows "¦a¦ = - a"proof - let ?b = "- a" have "- ?b ≤ 0 ==> ¦- ?b¦ = - (- ?b)" unfolding abs_minus_cancel [of "?b"] unfolding neg_le_0_iff_le [of "?b"] unfolding minus_minus by (erule abs_of_nonneg) then show ?thesis using assms by autoqed lemma abs_of_neg: "a < 0 ==> ¦a¦ = - a"by (rule abs_of_nonpos, rule less_imp_le)lemma abs_le_D1: "¦a¦ ≤ b ==> a ≤ b"by (insert abs_ge_self, blast intro: order_trans)lemma abs_le_D2: "¦a¦ ≤ b ==> - a ≤ b"by (insert abs_le_D1 [of "- a"], simp)lemma abs_le_iff: "¦a¦ ≤ b <-> a ≤ b ∧ - a ≤ b"by (blast intro: abs_leI dest: abs_le_D1 abs_le_D2)lemma abs_triangle_ineq2: "¦a¦ - ¦b¦ ≤ ¦a - b¦"proof - have "¦a¦ = ¦b + (a - b)¦" by (simp add: algebra_simps add_diff_cancel) then have "¦a¦ ≤ ¦b¦ + ¦a - b¦" by (simp add: abs_triangle_ineq) then show ?thesis by (simp add: algebra_simps)qedlemma abs_triangle_ineq2_sym: "¦a¦ - ¦b¦ ≤ ¦b - a¦" by (simp only: abs_minus_commute [of b] abs_triangle_ineq2)lemma abs_triangle_ineq3: "¦¦a¦ - ¦b¦¦ ≤ ¦a - b¦" by (simp add: abs_le_iff abs_triangle_ineq2 abs_triangle_ineq2_sym)lemma abs_triangle_ineq4: "¦a - b¦ ≤ ¦a¦ + ¦b¦"proof - have "¦a - b¦ = ¦a + - b¦" by (subst diff_minus, rule refl) also have "... 
≤ ¦a¦ + ¦- b¦" by (rule abs_triangle_ineq) finally show ?thesis by simpqedlemma abs_diff_triangle_ineq: "¦a + b - (c + d)¦ ≤ ¦a - c¦ + ¦b - d¦"proof - have "¦a + b - (c+d)¦ = ¦(a-c) + (b-d)¦" by (simp add: diff_minus add_ac) also have "... ≤ ¦a-c¦ + ¦b-d¦" by (rule abs_triangle_ineq) finally show ?thesis .qedlemma abs_add_abs [simp]: "¦¦a¦ + ¦b¦¦ = ¦a¦ + ¦b¦" (is "?L = ?R")proof (rule antisym) show "?L ≥ ?R" by(rule abs_ge_self)next have "?L ≤ ¦¦a¦¦ + ¦¦b¦¦" by(rule abs_triangle_ineq) also have "… = ?R" by simp finally show "?L ≤ ?R" .qedendsubsection {* Tools setup *}lemma add_mono_thms_linordered_semiring [no_atp]: fixes i j k :: "'a::ordered_ab_semigroup_add" shows "i ≤ j ∧ k ≤ l ==> i + k ≤ j + l" and "i = j ∧ k ≤ l ==> i + k ≤ j + l" and "i ≤ j ∧ k = l ==> i + k ≤ j + l" and "i = j ∧ k = l ==> i + k = j + l"by (rule add_mono, clarify+)+lemma add_mono_thms_linordered_field [no_atp]: fixes i j k :: "'a::ordered_cancel_ab_semigroup_add" shows "i < j ∧ k = l ==> i + k < j + l" and "i = j ∧ k < l ==> i + k < j + l" and "i < j ∧ k ≤ l ==> i + k < j + l" and "i ≤ j ∧ k < l ==> i + k < j + l" and "i < j ∧ k < l ==> i + k < j + l"by (auto intro: add_strict_right_mono add_strict_left_mono add_less_le_mono add_le_less_mono add_strict_mono)code_modulename SML Groups Arithcode_modulename OCaml Groups Arithcode_modulename Haskell Groups Arithtext {* Legacy *}lemmas diff_def = diff_minusend | 2015-08-03 21:19:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6445392966270447, "perplexity": 3852.266170291398}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990114.79/warc/CC-MAIN-20150728002310-00013-ip-10-236-191-2.ec2.internal.warc.gz"} |
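Side note on the absolute-value lemmas above: the Isabelle proof of abs_triangle_ineq2 compresses a two-step chain that, written in ordinary notation (nothing beyond the rewriting the proof itself performs), is

$$|a| = |b + (a - b)| \le |b| + |a - b| \quad\Longrightarrow\quad |a| - |b| \le |a - b|,$$

i.e. the reverse triangle inequality follows from abs_triangle_ineq plus the cancellation laws proved earlier in the theory.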
https://informationhoard.com/quiz/virginia-law-practice-test-2/ | # Virginia Law Practice Test 2
Welcome to Virginia Law Practice Test 2
Just like the real state exam, you must get 70% correct to pass!
*Note: If the submit button at the end does not work, you have missed a question. Click Previous and go back through all of the questions to find the one you missed.
| 2021-10-23 06:53:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26739129424095154, "perplexity": 4113.657404201468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00139.warc.gz"}
https://support.bioconductor.org/p/92332/ | RSQLite::dbGetPreparedQuery() is deprecated in AnnotationForge
MOD @mod-12330 (Teagasc Dublin)
Hi,
I'm trying to create an annotation database for Agaricus bisporus through NCBI in AnnotationForge, but I get a couple of errors:
Error in makeOrgDbFromDataFrames(data, tax_id, genus, species, dbFileName, :
'goTable' GO Ids must be formatted like 'GO:XXXXXXX'
In addition: Warning messages:
1: RSQLite::dbGetPreparedQuery() is deprecated, please switch to DBI::dbGetQuery(params = bind.data).
2: Named parameters not used in query: genes
3: Named parameters not used in query: name, value
How do I work around the deprecated RSQLite::dbGetPreparedQuery() function? The full script is given below along with sessionInfo. Furthermore, when I open the gene2go file the GO IDs seem fine, so I'm not sure why the goTable is not recognizing the IDs. Does anybody have an idea why the GO IDs are not recognized (I have pasted the top rows from the gene2go file that AnnotationForge obtained from NCBI at the bottom of this page)?
My script is:
> library(AnnotationDbi)
> library(GenomeInfoDb)
> library(biomaRt)
> library(survival)
> library(UniProt.ws)
> library(knitr)
> library(DBI)
> library(mclust)
> makeOrgPackageFromNCBI(version = "0.1",
+ author = "my name",
+ maintainer = "email.com",
+ outputDir = ".",
+ tax_id = "936046",
+ genus = "Agaricus",
+ species = "bisporus")
If files are not cached locally this may take awhile to assemble a 12 GB cache databse in the NCBIFilesDir directory. Subsequent calls to this function should be faster (seconds). The cache will try to rebuild once per day.
preparing data from NCBI ...
getting data for gene2pubmed.gz
rebuilding the cache
extracting data for our organism from : gene2pubmed
getting data for gene2accession.gz
rebuilding the cache
extracting data for our organism from : gene2accession
getting data for gene2refseq.gz
rebuilding the cache
extracting data for our organism from : gene2refseq
getting data for gene_info.gz
rebuilding the cache
extracting data for our organism from : gene_info
getting data for gene2go.gz
rebuilding the cache
extracting data for our organism from : gene2go
processing gene2pubmed
processing gene_info: chromosomes
processing gene_info: description
processing alias data
processing refseq data
processing accession data
processing GO data
Please be patient while we work out which organisms can be annotated with
ensembl IDs.
making the OrgDb package ...
Populating genes table:
genes table filled
Populating pubmed table:
pubmed table filled
Populating chromosomes table:
chromosomes table filled
Populating gene_info table:
gene_info table filled
Populating entrez_genes table:
entrez_genes table filled
Populating alias table:
alias table filled
Populating refseq table:
refseq table filled
Populating accessions table:
accessions table filled
Populating go table:
go table filled
Error in makeOrgDbFromDataFrames(data, tax_id, genus, species, dbFileName, :
'goTable' GO Ids must be formatted like 'GO:XXXXXXX'
In addition: Warning messages:
1: RSQLite::dbGetPreparedQuery() is deprecated, please switch to DBI::dbGetQuery(params = bind.data).
2: Named parameters not used in query: genes
3: Named parameters not used in query: name, value
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Ireland.1252 LC_CTYPE=English_Ireland.1252
[3] LC_MONETARY=English_Ireland.1252 LC_NUMERIC=C
[5] LC_TIME=English_Ireland.1252
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets methods
[9] base
other attached packages:
[1] mclust_5.2.2 DBI_0.5-1 knitr_1.15.1
[4] UniProt.ws_2.14.0 RCurl_1.95-4.8 bitops_1.0-6
[7] survival_2.40-1 biomaRt_2.30.0 GenomeInfoDb_1.10.3
[10] AnnotationHub_2.6.4 AnnotationForge_1.16.0 AnnotationDbi_1.36.2
[13] IRanges_2.8.1 S4Vectors_0.12.1 Biobase_2.34.0
[16] BiocGenerics_0.20.0 RSQLite_1.1-2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.9 splines_3.3.2
[3] lattice_0.20-34 xtable_1.8-2
[5] R6_2.2.0 httr_1.2.1
[7] tools_3.3.2 grid_3.3.2
[9] htmltools_0.3.5 yaml_2.1.14
[11] digest_0.6.12 interactiveDisplayBase_1.12.0
[13] Matrix_1.2-8 shiny_1.0.0
[15] memoise_1.0.0 mime_0.5
[17] BiocInstaller_1.24.0 XML_3.98-1.5
[19] httpuv_1.3.3
An example of the gene2go file obtained from NCBI is:
#tax_id GeneID GO_ID Evidence Qualifier GO_term PubMed Category
3702 814629 GO:0005634 ISM - nucleus - Component
3702 814629 GO:0008150 ND - biological_process - Process
3702 814630 GO:0003677 IEA - DNA binding - Function
3702 814630 GO:0003700 ISS - transcription factor activity, sequence-specific DNA binding 11118137 Function
3702 814630 GO:0005634 IEA - nucleus - Component
3702 814630 GO:0005634 ISM - nucleus - Component
3702 814630 GO:0006351 IEA - transcription, DNA-templated - Process
annotation microarray annotate annotationforge
@james-w-macdonald-5106 (United States)
Your post title is misleading, as the real problem here is the error, not the warning. The error arises for species that have no GO data at NCBI. As a fail-over we then parse data from Blast2GO, and if that results in no data, then it fails because of a small bug. That's fixed now, and the updated version (1.16.1) should make its way through the build servers in the next day or so.
The warning is a long-standing issue that has to do with changes that were made in the RSQLite package, which AnnotationForge depends on. This doesn't stop anything from working - it's just letting us know that a function we are depending on is probably going to disappear in the future.
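For anyone who hits the same deprecation in their own scripts, the replacement that the warning points to looks roughly like this (a minimal sketch; the connection, table, and column names are invented for illustration and are not AnnotationForge internals):

library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")
dbWriteTable(con, "genes", data.frame(gid = 1:3, symbol = c("a", "b", "c")))

## Deprecated style:
## RSQLite::dbGetPreparedQuery(con, "SELECT * FROM genes WHERE gid = ?",
##                             bind.data = data.frame(gid = 2L))

## Current style: what used to be bind.data is passed as 'params'
dbGetQuery(con, "SELECT * FROM genes WHERE gid = ?", params = list(2L))

dbDisconnect(con)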
The devel version of AnnotationForge is now updated to remove the warnings, so once we have the new release in April, those warnings will go away as well.
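Once the fixed version reaches the release repository, picking it up should be the usual update (for the R 3.3.x / BiocInstaller setup shown in the sessionInfo above):

source("https://bioconductor.org/biocLite.R")
biocLite("AnnotationForge")
packageVersion("AnnotationForge")  # expect 1.16.1 or later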
MOD @mod-12330 (Teagasc Dublin)
Ok, thanks for the info and your reply James. I'll keep an eye out for the update. I had thought that the warning was part of the issue of not seeing the GO IDs. The gene2go file did appear to have GO IDs, though (see the end of my original question for the first few lines of the gene2go dataframe), and I was wondering why the program was not parsing that data into the goTable?
While you did show some rows from gene2go, you should note that the taxonomic ID for those rows (the first column) is 3702, which is Arabidopsis thaliana, not Agaricus bisporus. There are no rows in the gene2go file that have 936046 in the first column, hence no data parsed out for your GO table.
ok, thanks. I did not see that. Any idea why it obtained Arabidopsis thaliana GO IDs and not Agaricus bisporus? I'll try to see if I can source the GO IDs somewhere else and use makeOrgPackage(). Thanks again for your help.
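A minimal sketch of that route with AnnotationForge::makeOrgPackage() (the input file name is hypothetical; the GID/GO/EVIDENCE column names and the goTable argument follow the makeOrgPackage help page, and the GO IDs must already be formatted like 'GO:XXXXXXX', per the original error):

library(AnnotationForge)

fGO <- read.delim("abisporus_GO_annotations.txt", stringsAsFactors = FALSE)  # hypothetical export
colnames(fGO) <- c("GID", "GO", "EVIDENCE")  # the column names makeOrgPackage expects for a GO table

makeOrgPackage(go = fGO,
               version = "0.1",
               maintainer = "My Name <my.name@example.com>",
               author = "My Name",
               outputDir = ".",
               tax_id = "936046",
               genus = "Agaricus",
               species = "bisporus",
               goTable = "go")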
The gene2go file that is downloaded is a generic file that contains Entrez Gene ID -> GO ID mappings for all the species that NCBI has currently annotated. It just so happens that A. thaliana is at the top of the file. The function makeOrgPackageFromNCBI downloads all these generic files, then extracts data that are specific to whatever species you are interested in, and uses those data to build the orgDb package.
In the case of GO mappings, there are no mappings for your species in gene2go. So the function then queries blast2go, and gets all the mappings they have. It so happens that there are 42 (or 44? I forget) mappings for your species in blast2go, but unfortunately there aren't any Entrez Gene IDs associated with those GO terms, so they get dropped as well. In the end, there aren't any Entrez Gene -> GO mappings that makeOrgPackageFromNCBI can find, so you end up with an orgDb package that has everything but the GO table.
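One way to confirm that locally from the cached NCBI file (the path is assumed, and gene2go.gz is large, so this is slow; for illustration only):

g2g <- read.delim("gene2go.gz", stringsAsFactors = FALSE)  # header starts with '#tax_id',
                                                           # which R renames to 'X.tax_id'
sum(g2g$X.tax_id == 936046)  # 0: no GO rows for Agaricus bisporus
sum(g2g$X.tax_id == 3702)    # many rows for Arabidopsis thaliana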
ok, thanks for the information. I really appreciate it. I have found GO annotation for Agaricus bisporus on the JGI website for that species. I've downloaded it and will attempt to construct a database using that. | 2022-05-28 21:30:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2771058976650238, "perplexity": 12976.054829606088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00768.warc.gz"} |
https://gitlab.freedesktop.org/mesa/piglit/blame/41b01e6b8c4a8e4c8949a13743125458100ff152/README.md | Piglit
======

1. About
2. Setup
3. How to run tests
4. Available test sets
5. How to write tests
6. Integration

1. About
--------

Piglit is a collection of automated tests for OpenGL and OpenCL implementations.

The goal of Piglit is to help improve the quality of open source OpenGL and OpenCL drivers by providing developers with a simple means to perform regression tests.

The original tests have been taken from
- Glean ( http://glean.sf.net/ ) and
- Mesa ( http://www.mesa3d.org/ )

2. Setup
--------

First of all, you need to make sure that the following are installed:

  - Python 2.7.x or >=3.4
  - Python Mako module
  - numpy (http://www.numpy.org)
  - six (https://pypi.python.org/pypi/six)
  - cmake (http://www.cmake.org)
  - GL, glu and glut libraries and development packages (i.e. headers)
  - X11 libraries and development packages (i.e. headers)
  - waffle (http://www.waffle-gl.org)
  - mako

Optionally, you can install the following:

  - lxml. An accelerated python xml library using libxml2 (http://lxml.de/)
  - simplejson. A fast C based implementation of the python json library. (https://simplejson.readthedocs.org/en/latest/)
  - jsonstreams. A JSON stream writer for python. (https://jsonstreams.readthedocs.io/en/stable/)
  - VkRunner. A shader script testing tool for Vulkan. (https://github.com/igalia/vkrunner)

For Python 2.x you can install the following to add features; these are unnecessary for python3:

  - backports.lzma. A backport of python3's lzma module to python2; this enables fast native xz (de)compression in piglit for results files (https://github.com/peterjc/backports.lzma)
  - subprocess32. A backport of the subprocess module from python3.2, which includes timeout support. This only works for Linux.

For testing the python framework using py.test unittests/framework:

  - py.test. A python test framework, used for running the python framework test suite.
  - tox. A tool for testing python packages against multiple configurations of python (https://tox.readthedocs.org/en/latest/index.html)
  - mock. A python module for mocking other python modules. Required only for unittests (https://github.com/testing-cabal/mock)
  - psutil. A portable process library for python
  - jsonschema. A JSON validator library for python
  - pytest-mock. A mock plugin for pytest
  - pytest-pythonpath. A plugin for pytest to do automagic with sys.path
  - pytest-raises. A plugin for pytest that allows decorating tests that expect failure
  - pytest-warnings. A plugin for pytest that handles python warnings
  - pytest-timeout. A plugin for pytest to timeout tests.

Now configure the build system:

  $ ccmake .

This will start cmake's configuration tool; just follow the onscreen instructions. The default settings should be fine, but I recommend you:
 - Press 'c' once (this will also check for dependencies) and then
 - Set CMAKE_BUILD_TYPE to Debug

Now you can press 'c' again and then 'g' to generate the build system. Now build everything:

  $ make

### 2.1 Cross Compiling

On Linux, if cross-compiling a 32-bit build on a 64-bit host, first make sure you don't have a CMakeCache.txt file left over from the 64-bit build (it would retain old flags), then you must invoke cmake with the options -DCMAKE_SYSTEM_PROCESSOR=x86 -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32.

### 2.2 Ubuntu

Install development packages.

  $ sudo apt-get install cmake g++ mesa-common-dev libgl1-mesa-dev python-numpy python-mako freeglut3-dev x11proto-gl-dev libxrender-dev libwaffle-dev

Configure and build.

  $ cmake .
  $ make

### 2.3 Mac OS X

Install CMake.
http://cmake.org/cmake/resources/software.html
Download and install the 'Mac OSX Universal' platform.

Install Xcode.
http://developer.apple.com/xcode

Configure and build.

  $ cmake .
  $ make

### 2.4 Cygwin

Install development packages.

  - cmake
  - gcc4
  - make
  - opengl
  - libGL-devel
  - python
  - python-numpy
  - libglut-devel

Configure and build.

  $ cmake .
  $ make

### 2.5 Windows

Install Python 3.
http://www.python.org/download

Install CMake.
http://cmake.org/cmake/resources/software.html
Download and install the 'Windows' platform.

Download and install Ninja.
https://github.com/ninja-build/ninja/releases

Install MinGW-w64.
https://mingw-w64.org/

Download the OpenGL Core API and Extension Header Files.
http://www.opengl.org/registry/#headers

Pass -DGLEXT_INCLUDE_DIR=/path/to/headers

Install python mako.

  pip install mako

Install NumPy.

  pip install numpy

#### 2.5.1 GLUT

Download freeglut for MinGW.
http://www.transmissionzero.co.uk/software/freeglut-devel/

  cmake -H. -Bbuild -G "Ninja" -DGLEXT_INCLUDE_DIR=\path\to\glext -DGLUT_INCLUDE_DIR=\path\to\freeglut\include -DGLUT_glut_LIBRARY=\path\to\freeglut\lib\x64\libfreeglut.a -DGLEXT_INCLUDE_DIR=\path\to\glext
  ninja -C build

#### 2.5.2 Waffle

Download and build waffle for MinGW.
http://www.waffle-gl.org/

Open the Command Prompt. CD to the piglit directory.

  cmake -H. -Bbuild -G "Ninja" -DGLEXT_INCLUDE_DIR=\path\to\glext -DPIGLIT_USE_WAFFLE=TRUE -DWAFFLE_INCLUDE_DIRS=\path\to\waffle\include\waffle WAFFLE_LDFLAGS=\path\to\waffle\lib\libwaffle-1.a

3. How to run tests
-------------------

Make sure that everything is set up correctly:

  $ ./piglit run sanity results/sanity

You may include '.py' on the profile, or you may exclude it (sanity vs sanity.py); both are equally valid.

You may also preface test profiles with tests/ (or any other path you like), which may be useful for shell tab completion.

You may provide multiple profiles to be run at the same time:

  $ ./piglit run quick_cl gpu deqp_gles3 results/gl-cl-combined

Use

  $ ./piglit run

or

  $ ./piglit run -h

to learn more about the command's syntax.

Have a look into the tests/ directory to see what test profiles are available:

  $ ls tests/*.py

See also section 4.

To create some nicely formatted test summaries, run

  $ ./piglit summary html summary/sanity results/sanity

Hint: You can combine multiple test results into a single summary. During development, you can use this to watch for regressions:

  $ ./piglit summary html summary/compare results/baseline results/current

You can combine as many testruns as you want this way (in theory; the HTML layout becomes awkward when the number of testruns increases).

Have a look at the results with a browser:

  $ xdg-open summary/sanity/index.html

The summary shows the 'status' of a test:

 - **pass:** This test has completed successfully.
 - **warn:** The test completed successfully, but something unexpected happened. Look at the details for more information.
 - **fail:** The test failed.
 - **crash:** The test binary exited with a non-zero exit code.
 - **skip:** The test was skipped.
 - **timeout:** The test ran longer than its allotted time and was forcibly killed.

There are also dmesg-* statuses. These have the same meaning as above, but are triggered by dmesg related messages.

### 3.1 Environment Variables

There are a number of environment variables that control the way piglit behaves.

 - PIGLIT_COMPRESSION: Overrides the compression method used. Accepts the same values that piglit.conf allows for core:compression.
 - PIGLIT_PLATFORM: Overrides the platform run on. Accepts the same values as piglit run -p. This value is honored by the tests themselves, and can be used when running a single test.
 - PIGLIT_FORCE_GLSLPARSER_DESKTOP: Force glslparser tests to be run with the desktop (non-gles) version of glslparsertest. This can be used to test ESX_compatability extensions for OpenGL.
 - PIGLIT_NO_FAST_SKIP: Piglit has a mechanism in the python layer for skipping tests with unmet OpenGL or window system dependencies without starting a new process (which is expensive). Sometimes this system doesn't work or is undesirable; setting this environment variable to True will disable it.
 - PIGLIT_NO_TIMEOUT: When this variable is true in python, any timeouts given by tests will be ignored, and they will run until completion or they are killed.
 - PIGLIT_VKRUNNER_BINARY: Can be used to override the path to the vkrunner executable for running Vulkan shader tests. Alternatively the config option vkrunner:bin can be used instead. If neither is set then vkrunner will be searched for in the search path.

### 3.2 Note

The way piglit run and piglit summary count tests is different: piglit run counts the number of Test derived instances in the profile(s) selected, while piglit summary counts the number of subtests a result contains, or its result if there are no subtests. This means that the number shown by piglit run will be less than or equal to the number calculated by piglit summary.

### 3.3 Shell Completions

Piglit has completions for bash, located in completions/bash/piglit. Once this file is sourced into bash, piglit and ./piglit will have tab completion available. For global availability, place the file somewhere that bash will source on startup. If piglit is installed and bash-completions are available, then this completion file will be installed system-wide.

4. Available test sets
----------------------

Test sets are specified as Python scripts in the tests directory. The following test sets are currently available:

### 4.1 OpenGL Tests

 - **sanity.py** This suite contains minimal OpenGL sanity tests. These tests must pass, otherwise the other tests will not generate reliable results.
 - **all.py** This suite contains all OpenGL tests.
 - **quick.py** Run all tests, but cut down significantly on their runtime (and thus on the number of problems they can find).
 - **gpu.py** A further reduced set of tests from quick.py; this runs tests only for hardware functionality and not tests for the software stack.
 - **llvmpipe.py** A reduced set of tests from gpu.py, removing tests that are problematic using llvmpipe.
 - **cpu.py** This profile runs tests that don't touch the gpu; in other words, all of the tests in quick.py that are not run by gpu.py.
 - **glslparser.py** A subset of all.py which runs only glslparser tests.
 - **shader.py** A subset of all.py which runs only shader tests.
 - **no_error.py** A modified version of the test list run as khr_no_error variants.

### 4.2 OpenCL Tests

 - **cl.py** This suite contains all OpenCL tests.
 - **quick_cl.py** This runs all of the tests from cl.py as well as tests from opencv and oclconform.

### 4.3 Vulkan tests

 - **vulkan.py** This suite contains all Vulkan tests. Note that currently all of the Vulkan tests require VkRunner. If it is not installed then all of the tests will be skipped.

### 4.4 External Integration

 - **xts.py** Support for running the X Test Suite using piglit.
 - **igt.py** Support for running the Intel-gpu-tools test suite using piglit.
 - **deqp_egl.py** Support for running dEQP's EGL profile with piglit.
 - **deqp_gles2.py** Support for running dEQP's gles2 profile with piglit.
 - **deqp_gles3.py** Support for running dEQP's gles3 profile with piglit.
 - **deqp_gles31.py** Support for running dEQP's gles3.1 profile with piglit.
 - **deqp_vk.py** Support for running the official Khronos Vulkan CTS profile with piglit.
 - **khr_gl.py** Support for running the open source Khronos OpenGL CTS tests with piglit.
 - **khr_gl45.py** Support for running the open source Khronos OpenGL 4.5 CTS tests with piglit.
 - **cts_gl.py** Support for running the closed source Khronos OpenGL CTS tests with piglit.
 - **cts_gl45.py** Support for running the closed source Khronos OpenGL 4.5 CTS tests with piglit.
 - **cts_gles.py** Support for running the closed source Khronos GLES CTS tests with piglit.
 - **oglconform.py** Support for running sub-tests of the Intel oglconform test suite with piglit.

5. How to write tests
---------------------

Every test is run as a separate process. This minimizes the impact that severe bugs like memory corruption have on the testing process.

Therefore, tests can be implemented in an arbitrary standalone language. C is the preferred language for compiled tests; piglit also supports its own simple formats for test shaders and glsl parser input.

All new tests must be added to the appropriate profile: the all.py profile for OpenGL and cl.py for OpenCL. There are a few basic test classes supported by the python framework:

 - PiglitBaseTest: A shared base class for all native piglit tests. It starts each test as a subprocess, captures stdout and stderr, and waits for the test to return. It provides test timeouts: setting the instance's 'timeout' attribute to an integer > 0 gives the number of seconds the test should run. It interprets output by reading stdout and looking for 'PIGLIT: ' in the output, and then reading any trailing characters as well-formed json returning the test result. This is a base class and should not be used directly, but provides an explanation of the behavior of the following classes.
 - PiglitGLTest: A test class for native piglit OpenGL tests. In addition to the properties of PiglitBaseTest it provides a mechanism for detecting test window resizes and rerunning tests, as well as keyword arguments for platform requirements.
 - PiglitCLTest: A test class for native piglit OpenCL tests. It currently provides no special features.
 - GLSLParserTest: A class for testing a glsl parser. It is generally unnecessary to call this class directly as it uses a helper function to search directories for tests.
 - ShaderTest: A class for testing using OpenGL shaders. It is generally unnecessary to call this class directly as it uses a helper function to search directories for tests.

6. Integration
--------------

Piglit provides integration for other test suites as well. The rationale for this is that it provides piglit's one-process-per-test protections (one test crashing does not crash the whole suite), and access to piglit's reporting tools.

Most integration is done through the use of piglit.conf, or through environment variables, with piglit.conf being the preferred method.

### 6.1 dEQP

Piglit provides a generic layer for dEQP based test suites, and specific integration for several suites. I suggest using Chad Versace's repo of dEQP, which contains a gbm target. https://github.com/chadversary/deqp

It should be built as follows:

  cmake . -DDEQP_TARGET=gbm -GNinja

Additional targets are available in the targets directory. gbm isn't compatible with most (any?) blob drivers, so another target might be necessary if that is a requirement. One of the x11_* targets or drm is probably a good choice. The use of ninja is optional.

Once dEQP is built, add the following information to piglit.conf, which can either be located in the root of the piglit repo, or in $XDG_CONFIG_HOME (usually $HOME/.config):

  [deqp-gles2]
  bin=/modules/gles2/deqp-gles2

  [deqp-gles3]
  bin=/modules/gles3/deqp-gles3

  [deqp-gles31]
  bin=/modules/gles31/deqp-gles31

These platforms can be run using deqp_gles*.py as a suite in piglit. For example:

  ./piglit run deqp_gles31 my_results -c

It is also possible to mix integrated suites and piglit profiles together:

  ./piglit run deqp_gles31 quick cl my_results

dEQP profiles generally contain all of the tests from the previous profile, so gles31 covers gles3 and gles2.

### 6.2 Khronos CTS

Add the following to your piglit.conf file:

  [cts]
  bin=/cts/glcts | 2019-07-15 20:06:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4981059730052948, "perplexity": 14425.628241687082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524111.50/warc/CC-MAIN-20190715195204-20190715221204-00409.warc.gz"}
http://www.gallifrance.net/chcn32u/viewtopic.php?id=8e3e13-derivative-symbol-in-word | For more info, see Insert a symbol in Word.. Derivative definition: A derivative is something which has been developed or obtained from something else. Obtain this by typing the fraction and pressing space: 1/2, 1 which browser is better Microsoft edge or foxfire browser default? As I said last entry, I'm working on a symbolic logic course and am learning new quirks for dealing with with Unicode logic symbols...and one of them apparently is the Microsoft Word Insert Symbol tool (this is found by going to Insert » Symbol in most versions of Word.. Like the Windows Character Map and Mac Character Palette, the Insert Symbol ⦠Fill in this slope formula: ÎyÎx = f(x+Îx) â f(x)Îx 2. d If you plan to type in other languages often you should consider switching your keyboard layout to that ⦠⦠→ ) are denoted by a hat (circumflex), which can be obtained by following a letter variable with "\hat". All Free. This affects a few expressions to make them appear smaller. 1 where it will display multiple symbols, although the del symbol for partial. {\displaystyle {\sqrt[{a}]{b}}} The partial derivative of a function f with respect to the variable x is variously denoted by The partial-derivative symbol is â. x Go to insert symbol and type "2202" in character code from Unicode category. Is there a music streaming service that specializes in music videos and live concerts? Relevance. \sdiv) and pressing space (twice) or by typing 1 \ldiv 2 (resp. There are multiple ways to display a fraction. The \partialcommand is used to write the partial derivative in any equation. {\displaystyle \left({\frac {1}{2}}(x+1)\right)}. 2 One of the most common modern notations for differentiation is due to Joseph Louis Lagrange. Typically this is the LaTeX code for the symbol. Derivatives Derivative Applications Limits Integrals Integral Applications Riemann Sum Series ODE Multivariable Calculus Laplace Transform Taylor/Maclaurin Series Fourier Series Functions Line Equations Functions Arithmetic & Comp. \cup - Union (if you want to see big symbol, enter \bigcup) \neq - Not equal to \approx - Almost equal to (asymptotic to) \equiv - Identical to (equivalent) \int - Integral \oint - Contour integral. Editing Math with Microsoft Word Dennis Silverman, UC Irvine, Physics and Astronomy. This is implemented via math autocorrect which you can modify. Inline specifies that the equation is to be in line with text. Thanks. The easiest thing to do would be to find a LaTeX reference sheet. d There are differences between Math Builder and LaTeX code: advanced functionality that requires more than just a symbol tend to follow the same flavor but have slightly different syntax. {\displaystyle \int \limits _{a}^{b}{\frac {1}{x}}\,dx}. y Creative Commons Attribution-ShareAlike License. What new habits have you formed this year due to covid ? A A stepping stone between word processing (MS Word) and typesetting (LaTeX). These symbols are constructed with all the commands starting with "\" as illustrated in the above sections. ∇ If f is a function, then its derivative evaluated at x is written Linear fraction (resp. Let's use the view of derivatives as tangents to motivate a geometric definition of the derivative. Fraktur letters can be obtained by typing "\" followed by "fraktur" followed by the letter. Use "@" to separate rows, and "&" to separate columns. These symbols include "(), {}, [], ||". 
denotes the antiderivative of something (the expression where the " ⋯ " is), so the symbol d/dx(⋯) denotes the derivative of something (again, the expression where the " ⋯ " is). The symbol for integration in calculus is ∫, drawn as a tall letter. (The word "integral" can also be used as an adjective meaning "related to integers".)

Typing the Delta symbol in Word/Excel: in a Word 2016 equation, many symbols can be inserted by typing "\" followed by the name of the symbol; Greek letters in general are obtained this way. The demonstrations here use the uppercase Delta symbol (Δ), but the same method can be used to insert any other symbol, including the lowercase delta (δ). Another route is the Insert Symbol tool: in the Symbols dialogue box that opens, select "Greek and Coptic" as the Font Subset, and this will insert the Delta symbol in the selected cell. You can also add your own commands to the Math AutoCorrect directory, and to use these symbols outside the equation environment, select the option "Use Math AutoCorrect rules outside of math regions" in the Word options. Double strike or Blackboard bold is a typeface style that is often used for certain symbols in mathematical texts; such letters can be obtained in Word equations by typing "\" followed by "double" followed by the letter (Fraktur, by contrast, does not have capitals).

Microsoft Word has two different typing environments: text and math. This book is about the Math Builder tool (officially called the Equation Editor) in Microsoft Word and Outlook 2007 and higher. Note that this is a different tool from the legacy Equation Editor 3.0 (reached via Insert-Object-Microsoft Equation 3.0, where you can also check Display Icon to put an icon on the toolbar) and from MathType. Math Builder code tends to be shorter than LaTeX code and disappears upon completion into the WYSIWYG output; it can be used in Outlook to easily write equations in emails (it renders as images to the recipient), and the format used is non-proprietary and given in Unicode Technical Note #28. Typesetting mathematics on a computer has always been a challenge. In introductory courses, and in any document whose focus is not itself mathematics, much math can be edited without the equation editor, but everything in Math Builder requires special symbols that the computer knows how to interpret. To obtain the math environment, click on "Equation" on the "Insert" ribbon on Windows or Word for Mac '16, or in "Document Elements" on Word for Mac '11. Equations have two forms: inline, and display, where "display" specifies to use as much space as needed (the default is vertically aligned as illustrated below). One easy way to step out of a structure is by pressing the right arrow key; ctrl+= goes into subscript mode, and ctrl+space goes back to normal mode (this has not been verified with Equation Editor or Word for Mac).

There are multiple ways to display a fraction (fractions, for instance, will use a smaller font). A stacked fraction such as 1/2 is obtained by typing the fraction and pressing space; a linear fraction (resp. skewed fraction) is obtained using \ldiv (resp. \sdiv) and pressing space (twice), or by typing 1 \ldiv 2 (resp. 1 \sdiv 2) and pressing space; there is also Insert, Equations, fraction, common fraction. The monomial "x subscript 2 to the fifth power" can be obtained by typing x_2^5 or x^5_2 and pressing space. Radicals are obtained using the "\sqrt" symbol, followed by the index, then "&", then the radicand: \sqrt(x) will simply output the square root of x, while for example \sqrt(a&b) will output the a-th root of b. A vector is often denoted by an overhead right arrow, which can be obtained by following a letter variable with "\vec"; unit vectors (e.g. x with a hat) use "\hat" in the same way. Matrices are obtained with the "\matrix" symbol: use parentheses to start and end the matrix, so the matrix with rows 1 2 3 and 4 5 6 can be created by typing [\matrix(1&2&3@4&5&6)]. Aligned equations can be obtained with the "eqarray" symbol: use "@" to separate equations; the first "&" and then every other occurrence is alignment, and the second and then every other occurrence is white space. Grouping symbols will automatically size to the appropriate size. Integrals are obtained by inserting the desired integral symbol (see the table above) and then pressing space twice. Some uncommon symbols are not listed in the menu and require knowing a keyboard shortcut.

On the calculus side: the derivative is the instantaneous rate of change of a function with respect to one of its variables; in calculus it is the slope of the tangent line to a curve at a particular point on the curve, so taking a derivative is equivalent to finding the slope of the tangent line to the function at that point. Its calculation, in fact, derives from the slope formula for a straight line, except that a limiting process must be used for curves. To find the derivative of a function y = f(x) we use the slope formula, Slope = Change in Y / Change in X = Δy/Δx, and then simplify it as best we can. There are different orders of derivatives, and the partial derivative (symbol ∂, Unicode character U+2202, not to be confused with δ or d) is defined by holding the other variables constant; indeed, D/Dt stands for the "material derivative", while ∂/∂t is an ordinary partial derivative. | 2021-06-22 01:43:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728530406951904, "perplexity": 1747.427836318825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504969.64/warc/CC-MAIN-20210622002655-20210622032655-00039.warc.gz"}
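To pull the scattered how-tos above together, here is a brief cheat sheet of Math Builder linear inputs (an editorial summary assembled only from the commands mentioned in the page above; exact rendering may vary by Word version):

```text
1/2                      stacked fraction (press space)
1 \ldiv 2                linear fraction
1 \sdiv 2                skewed fraction
x_2^5                    x with subscript 2 and exponent 5
\sqrt(a&b)               a-th root of b (index a, radicand b)
A\vec                    A with an overhead right arrow (letter first, then \vec)
x\hat                    unit vector x-hat
[\matrix(1&2&3@4&5&6)]   2-by-3 matrix: & separates entries, @ separates rows
\Delta                   uppercase Greek Delta
\doubleD                 double-struck (blackboard bold) D
```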
https://math.stackexchange.com/questions/3125565/how-to-solve-this-differential-equation-for-psi-n | # How to solve this differential equation for $\psi_n$?
How to solve this differential equation for $$\psi_n$$?:
$$\frac{1}{2}\frac{\partial^2}{\partial x^2}\psi_n=\lambda_n\psi_n$$
Apparently this is a heat equation, but I cannot find information on this. Any help is much appreciated. Thanks.
EDIT
The boundary conditions, at the initial and terminal points respectively, are $$\psi_n(x_0)=0$$ and $$\psi_n(x_T)=0$$.
• What are your boundary conditions? Where did this equation come from? What was the starting problem? Please add more context – Dylan Feb 25 at 5:36
• @Dylan, the boundary conditions are 0 at both the initial and terminal points $x_0$ and $x_T$...and I'm still trying to see how to solve for $c_1$ and $c_2$....and the lambdas. I really want to take the $ln$ of both sides, but obviously that will not work when there is a zero. Is a taylor expansion useful here? – nundo Feb 25 at 5:40
No, this is not "a heat equation", but solving it (with appropriate boundary conditions) is typically one of the steps in solving a heat equation boundary value problem using separation of variables. The general solution of your equation is $$\psi_n = c_1 \exp(\sqrt{2\lambda_n} x) + c_2 \exp(-\sqrt{2\lambda_n} x)$$. | 2019-04-20 04:44:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9079915285110474, "perplexity": 181.83216956356486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528523.35/warc/CC-MAIN-20190420040932-20190420062932-00147.warc.gz"} |
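If one also wants the eigenvalues that the stated boundary conditions allow, a short added sketch (not part of the original answer): nontrivial solutions require $\lambda_n < 0$; writing $\lambda_n = -\mu_n^2/2$ turns the exponentials into sines and cosines, $\psi_n(x_0)=0$ removes the cosine term, and $\psi_n(x_T)=0$ quantizes $\mu_n$:

```latex
\psi_n(x) = A\sin\!\big(\mu_n(x-x_0)\big), \qquad
\mu_n = \frac{n\pi}{x_T-x_0}, \qquad
\lambda_n = -\frac{\mu_n^2}{2} = -\frac{n^2\pi^2}{2\,(x_T-x_0)^2}, \qquad n = 1,2,\dots
```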
https://docs.oramavr.com/en/4.0.2/unity/tutorials/action_prototypes/insert_action.html | # Insert Action¶
To generate an Insert Action you need the following three prefabs:
1. The interactable Prefab
2. Its final position
3. A hologram indicating the final position
## Interactable Prefab¶
From the MAGES menu select the option CreatePrefab/Interactable
The template gameObject for the interactable prefab will appear. It is recommended to use this object as the root of your interactable prefab. Now we will populate the prefab with our object. In this case we will use two cubes. Below you can see the final result.
We renamed the gameobject to “Interactable” for our convenience. Remember to add physical colliders to the object as you need to grab it, otherwise it will pass through the table.
The next step is optional but recommended for a more natural interaction.
We need to configure hand postures for interacting with an interactable object. You can read a detailed tutorial here on how to properly set up hand postures. The image below shows the posture of the right hand attached to our object.
## Final Prefab¶
The next step is to generate the final prefab. This indicates the correct position and the orientation of the object. In a similar way, we navigate to the MAGES menu and click the CreatePrefab/Final Placement of Interactable.
Warning
The Final prefab must have the same pivot as the interactable prefab, because the PrefabLerpPlacement script checks whether the orientations (position and rotation) of the objects match to perform the Action.
For this reason, the safest way to generate the final prefab is to duplicate the interactable, copy the transform of its root, paste it on the final prefab template and transfer its children to the final prefab.
Remember to set its rigidbody component to kinematic and all its colliders to trigger.
The image below shows both the interactable (left) and the final prefab (right).
## Hologram Prefab¶
The hologram prefab does not have any component or script attached. It is just a copy of the final prefab with the holographic material. Remember to remove its colliders as well.
## Save prefabs and final configuration¶
Save the prefabs in the Resources folder. It is recommended to keep the prefabs in folders according to the scenegraph structure. In this case we will save the interactable, final and hologram prefab at Resources/Lesson0/Stage0/Action1 folder.
The final step is to configure the PrefabLerpPlacement script which is attached to our final prefab. This component indicates the interactable prefab that matches with this final prefab as well as the hologram. Additionally you can set up properties like the tolerance in angle difference with the interactable or set up the lerping behavior. The image below shows the interactable along with the hologram prefab linked with the PrefabLerpPlacement component.
## Action Script¶
In this step we will write the Action script. The script below initializes our interactable and final prefab and spawns the hologram prefab as well.
using MAGES.ActionPrototypes;
public class InsertCubeAction : InsertAction
{
    public override void Initialize()
    {
        // Register the interactable prefab together with its final placement
        // (both paths are relative to the Resources folder, as described above).
        SetInsertPrefab("Lesson0/Stage0/Action1/Interactable", "Lesson0/Stage0/Action1/Final");
        // Spawn the hologram that indicates the final position.
        SetHoloObject("Lesson0/Stage0/Action1/Hologram");
        base.Initialize();
    }
}
We save the Action script in a path following the scenegraph structure.
## Add the Action to Scenegraph¶
The final step is to link the Action script to the scenegraph. From the MAGES menu, click Scenegraph Editor. In the Scenegraph Editor tab, click File/Load and select the proper .xml to import the scenegraph. In this case it is Empty_Scene.xml.
To add a new Action Node, right click inside the Scenegraph Editor and select “Action Node”. Fill in the Action description along with the proper NodeID (in this case it is the second Action). Finally, add the reference to the Action script. | 2022-12-01 14:47:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3340769112110138, "perplexity": 3461.1447441551304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00464.warc.gz"}
https://api-project-1022638073839.appspot.com/questions/how-do-you-solve-e-x-lnx-5 | # How do you solve e^x + lnx = 5?
May 9, 2016
1.522. Let $f \left(x\right) = {e}^{x} + \ln x - 5$. Locate the root in (n, n+1) by sign test f(n)f(n+1)<0. Bisect the interval and choose the half in which f passes the sign test. Halving continues, for desired accuracy...
#### Explanation:
The root cannot be obtained in mathematical exactitude; only an approximation, to any desired significant-digit accuracy, can be obtained. Graphical methods lack accuracy. Iterative methods are fine.
The bisection method is explained here. This is quite easy for anyone. There are other, faster numerical methods.
Let $f \left(x\right) = {e}^{x} + \ln x - 5$.
As $f \left(1\right) f \left(2\right) = \left(- 2.28 . .\right) \left(3.08 . .\right) < 0$, a root lies between 1 and 2.
Now $f \left(1.5\right) = - 0.11 . . < 0$ and quite small in magnitude.
Use your discretion for a short enclosure.
Choose (1.5, 1.6) for sign test, and so on, until you get the desired accuracy.
I find f(1.5215)f(1.5220) < 0. So the rounded 4-sd approximation of the root is 1.522.
These algorithms for the bisection method and its variations are programmable in any befitting computer language... | 2021-10-18 07:22:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7126379013061523, "perplexity": 2153.4307674799556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00418.warc.gz"}
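Following up on that closing remark, here is a minimal Python sketch of the bisection loop the answer describes (the bracket (1, 2) and the 4-significant-digit target are taken from the text above):

```python
import math

def f(x):
    # f(x) = e^x + ln(x) - 5; its root solves e^x + ln(x) = 5
    return math.exp(x) + math.log(x) - 5.0

a, b = 1.0, 2.0            # sign test: f(1) < 0 < f(2), so a root lies in (1, 2)
while b - a > 1e-4:        # keep halving until the enclosure is very short
    m = (a + b) / 2.0
    if f(a) * f(m) < 0:    # f changes sign on the left half
        b = m
    else:                  # otherwise the root is in the right half
        a = m

print((a + b) / 2.0)       # about 1.5215..., i.e. 1.522 rounded to 4 significant digits
```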
http://math.stackexchange.com/questions/242431/invariant-factors-of-mathbb-z-m-times-mathbb-z-n | Invariant factors of $\mathbb Z_m\times\mathbb Z_n$
Prove that the invariant factors of $\mathbb Z_m\times\mathbb Z_n$ are $mn$ if $m$ and $n$ are relatively prime and are the greatest common divisor and the least common multiple of $m$ and $n$ if they are not relatively prime.
Hint: use the Chinese Remainder Theorem. – DonAntonio Nov 22 '12 at 4:43
If $m$ and $n$ are coprime, we know that
$$\mathbb{Z}_{mn} \cong \mathbb{Z}_{m} \times \mathbb{Z}_{n}$$
since one can easily define an isomorphism from the former to the latter via $\varphi(x) = (x \bmod m,\ x \bmod n)$ (check that this works).
From this we can deduce that if $n = \prod_{i=1}^k p_{i}^{r_i}$ is the prime factorization of $n$, then
$$\mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/p_{1}^{r_1}\mathbb{Z} \times \cdots \times \mathbb{Z}/p_{k}^{r_k}\mathbb{Z} \,\,\,\,\,\, (*)$$
If $m$ and $n$ are not coprime, we do the following. Let's start with $\mathbb{Z}_m \times \mathbb{Z}_n$. We can break $\mathbb{Z}_m$ and $\mathbb{Z}_n$ as in $(*)$ into the product of the cyclic groups corresponding to their prime factors. We will now "recollect" these terms so we get the product of $\mathbb{Z}_{gcd(m,n)}$ and $\mathbb{Z}_{lcm(m,n)}$. Let $\mathbb{Z}/p^{r}\mathbb{Z}$ be one of the groups in this product. If $p$ divides only one of $m,n$, let this group be part of the collection that will form the lcm. Otherwise, if $p$ divides both, so we have two groups of this form, then put the group with the larger power of $p$ into our collection for the lcm, and the other into the collection for the gcd.
Okay, now we have these two collections, and in each collection all the groups are of coprime order. Check that in our "lcm collection" the product of the orders of these groups is indeed the lcm, and since the orders are coprime their product is in fact $\mathbb{Z}_{lcm(m,n)}$ by the first part of this problem. By an identical argument, the product of the groups in the "gcd collection" is $\mathbb{Z}_{gcd(m,n)}$ , so we have
$$\mathbb{Z}_{m} \times \mathbb{Z}_{n} \cong \mathbb{Z}_{lcm(m,n)} \times \mathbb{Z}_{gcd(m,n)}$$
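A concrete instance of the recollection procedure (an added illustration, not part of the original answer): take $m=4$ and $n=6$, so the shared prime is $2$. The factor $\mathbb{Z}_2$ (the smaller power of $2$) goes to the gcd collection, while $\mathbb{Z}_4$ and $\mathbb{Z}_3$ go to the lcm collection:

```latex
\mathbb{Z}_4 \times \mathbb{Z}_6
\;\cong\; \mathbb{Z}_4 \times \mathbb{Z}_2 \times \mathbb{Z}_3
\;\cong\; \mathbb{Z}_2 \times (\mathbb{Z}_4 \times \mathbb{Z}_3)
\;\cong\; \mathbb{Z}_{\gcd(4,6)} \times \mathbb{Z}_{\operatorname{lcm}(4,6)}
\;=\; \mathbb{Z}_2 \times \mathbb{Z}_{12}.
```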
Thank you guys so much, is there by any chance a method to do this without the Chinese Remainder Theorem? – user50291 Nov 25 '12 at 18:27
Yes, I've just rewritten it so that it avoids the Chinese Remainder Theorem. Is this any better? – Isaac Solomon Nov 25 '12 at 19:24 | 2015-07-30 04:44:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7634003162384033, "perplexity": 121.67568257949777}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987127.36/warc/CC-MAIN-20150728002307-00171-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/117121/lower-bound-for-exponential-sums/117132 | ## Lower bound for exponential sums.
Let $D$ be a subset of $\mathbb Z/n \mathbb Z$ containing $0$. For $m$ an integer, set $$\alpha(m,D)=\sum_{d \in D} e\left (\frac{m d }{n}\right ),$$ where as usual $e(x) = e^{2 i \pi x}$. This is an exponential sum (or, if you like, a character sum). Obviously $|\alpha(m,D)| \leq |D|$.
Now consider $$\sigma(D) = \frac{1}{n} \sum_{m=0}^{n-1} |\alpha(m,D)|.$$
A simple upper bound for $\sigma(D)$, using Cauchy-Schwarz, is $\sigma(D) \leq \sqrt{|D|}$, which is better than the trivial bound $\sigma(D) \leq |D|$. But here I am interested in a lower bound:
is it true that $\sigma(D) \geq 1$, with equality only if $D$ is a subgroup of $\mathbb Z/n \mathbb Z$?
This seems true on some numerical tests I have done, and this seems a very elementary question, but I don't see how to prove it at this time (I might be missing something trivial). More generally, I am interested in any information or reference on this number $\sigma(D)$. It appears in the error term in the formula for the number of primes $p \leq x$ whose residue modulo $n$ is in $D$.
This seems easier than you might have expected. Up to normalization, your quantities $\alpha(m,D)$ are Fourier coefficients of the indicator function of $D$ (for which reason many people would rather use the notation $\hat 1_D(m)$). As such, they satisfy the Parseval identity $$\sum_m |\alpha(m,D)|^2 = n|D|.$$ In view of $|\alpha(m,D)|\le|D|$, this yields $$n|D| \le \sum_m |D| |\alpha(m,D)|,$$ whence $$\sigma = \frac1n \sum_m |\alpha(m,D)| \ge 1.$$ Moreover, for equality to hold, one needs all $|\alpha(m,D)|$ to be equal to either $0$ or $|D|$, which is only possible if $D$ is a coset of a subgroup of ${\mathbb Z}/n{\mathbb Z}$. | 2013-06-19 21:26:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751133322715759, "perplexity": 96.87259823225817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709224828/warc/CC-MAIN-20130516130024-00082-ip-10-60-113-184.ec2.internal.warc.gz"} |
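Both the bound and its equality case are easy to sanity-check numerically; here is a small Python sketch (an added check with names of my own choosing, not part of the original answer):

```python
import cmath

def sigma(D, n):
    # sigma(D) = (1/n) * sum over m of | sum_{d in D} exp(2*pi*i*m*d/n) |
    total = 0.0
    for m in range(n):
        alpha = sum(cmath.exp(2j * cmath.pi * m * d / n) for d in D)
        total += abs(alpha)
    return total / n

print(sigma({0, 3, 6, 9}, 12))  # a subgroup of Z/12Z: prints 1.0 (up to rounding)
print(sigma({0, 1, 3}, 12))     # not a coset of a subgroup: prints a value > 1
```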
http://mathhelpforum.com/algebra/122406-roots-cubic-equations-new-equation.html | Thread: Roots of cubic equations and new equation
1. Roots of cubic equations and new equation
The equation $x^3 + 4x^2 + 3x + 2 = 0$ has roots $\alpha , \beta, \gamma$.
Find the equation with roots $\beta \gamma , \gamma \alpha , \alpha \beta$
For this I so far have:
$\alpha + \beta + \gamma$ = -4
$\beta \gamma + \gamma \alpha + \alpha \beta$ = 3
$\alpha \beta \gamma$ = -2
How would I progress from here?
2. work out $(x-\alpha\gamma)(x-\alpha\beta)(x-\beta\gamma) = x^3-(\beta\gamma+\alpha\gamma+\alpha\beta)x^2 +(\alpha^2\beta\gamma+\alpha\beta^2\gamma+\alpha\beta\gamma^2)x - \alpha^2\beta^2\gamma^2$ with the relations you have.
(Then $(x-\alpha\gamma)(x-\alpha\beta)(x-\beta\gamma) = 0$ is your desired equation)
3. Originally Posted by db5vry
The equation $x^3 + 4x^2 + 3x + 2 = 0$ has roots $\alpha , \beta, \gamma$.
Find the equation with roots $\beta \gamma , \gamma \alpha , \alpha \beta$
I would've made $x^3 + 4x^2 + 3x + 2 = (x-\alpha)(x-\beta)(x-\gamma)$ and then factorised the LHS to find $\alpha, \beta, \gamma$
Originally Posted by db5vry
$\alpha + \beta + \gamma$ = -4
$\beta \gamma + \gamma \alpha + \alpha \beta$ = 3
$\alpha \beta \gamma$ = -2
How would I progress from here?
You should be able to solve it by substitution and elimination
for example
$\beta \gamma + \gamma \alpha + \alpha \beta = 3$
Is the same as
$\alpha (\beta + \gamma ) = 3 - \beta\gamma$
$\alpha = \frac{3 - \beta\gamma}{\beta + \gamma }$
4. Originally Posted by Dinkydoe
work out $(x-\alpha\gamma)(x-\alpha\beta)(x-\beta\gamma) = x^3-(\beta\gamma+\alpha\gamma+\alpha\beta)x^2 +(\alpha^2\beta\gamma+\alpha\beta^2\gamma+\alpha\beta\gamma^2)x - \alpha^2\beta^2\gamma^2$ with the relations you have.
How does this work? I can see why you substitute $\beta \gamma + \gamma \alpha + \alpha \beta$ into the coefficient space but why does it become $\alpha^2\beta^2\gamma^2$ at the end? I don't understand that (yet).
5. Originally Posted by db5vry
but why does it become $\alpha^2\beta^2\gamma^2$ at the end?
It is the product of the last term in each factor
$-\alpha\gamma\times -\alpha\beta\times -\beta\gamma = -\alpha^2\beta^2\gamma^2$
6. Right, I found an answer for this question in detail now but I'm not sure how this works either:
It says consider $\beta\gamma + \gamma\alpha + \alpha\beta$ = 3
then
$\alpha\beta\gamma^2 + \alpha^2\beta\gamma + \alpha\beta^2\gamma = \alpha\beta\gamma(\alpha + \beta + \gamma)$
= -2 multiplied by -4 = 8
How does the equation with the squares originate? What do you calculate in order to get to that?
7. How does this work? I can see why you substitute $\beta\gamma + \gamma\alpha + \alpha\beta$ into the coefficient space but why does it become $-\alpha^2\beta^2\gamma^2$ at the end? I don't understand that (yet).
An equation with roots $\alpha\gamma,\beta\gamma, \alpha\beta$ was asked for.
$(x-\alpha\gamma)(x-\beta\gamma)(x-\alpha\beta) = 0$ is of course such an equation.
If you multiply all the factors you see the result is what I've written out.
the last coefficient is $-\alpha\gamma\cdot - \alpha\beta\cdot - \beta\gamma = -\alpha^2\beta^2\gamma^2$
8. Originally Posted by Dinkydoe
An equation with roots $\alpha\gamma,\beta\gamma, \alpha\beta$ was asked for.
$(x-\alpha\gamma)(x-\beta\gamma)(x-\alpha\beta) = 0$ is of course such an equation.
If you multiply all the factors you see the result is what I've written out.
the last coefficient is $-\alpha\gamma\cdot - \alpha\beta\cdot - \beta\gamma = -\alpha^2\beta^2\gamma^2$
I now understand the last coefficient but I'm not sure about the one that I described in another post. Is there some way you can help me with the other one please
9. Have you actually tried to write out the expression $(x-\alpha\beta)(x-\beta\gamma)(x-\alpha\gamma)$ ?
You can multiply, can't you? Then collect the coefficients for the powers of x and decide what their values are on the basis of the relations you have.
10. Originally Posted by Dinkydoe
Have you actually tried to write out the expression $(x-\alpha\beta)(x-\beta\gamma)(x-\alpha\gamma)$ ?
You can multiply, can't you? Then collect the coefficients for the powers of x and decide what their values are on the basis of the relations you have.
I have multiplied that out but it doesn't make anything easier or make absolutely any sense to me!
My mark scheme says:
$\alpha + \beta + \gamma$ = -4
$\beta\gamma + \gamma\alpha + \alpha\beta$ = 3
$\alpha\beta\gamma$ = -2
then consider $\beta\gamma + \gamma\alpha + \alpha\beta$ = 3 [I understand this all up to here]
$\alpha\beta\gamma^2 + \alpha^2\beta\gamma + \alpha\beta^2\gamma = \alpha\beta\gamma(\alpha + \beta + \gamma)$ = -2 multiplied by -4 = 8 [I do not know how this works - where is the above line coming from?]
$\beta\gamma \cdot \gamma\alpha \cdot \alpha\beta$ = $\alpha^2\beta^2\gamma^2$
= $(-2)^2$ = 4 [I understand this]
then the required equation is
$x^3 - 3x^2 + 8x - 4 = 0$ [I can see how it got here considering the previous working]
All I really want to know is how the mark scheme has come to this! I understand most of it, barring the line which results in "=8".
11. Then I wonder how you came to these relations
$\alpha + \beta + \gamma$ = -4
$\beta\gamma + \gamma\alpha + \alpha\beta$ = 3
$\alpha\beta\gamma$ = -2
The only way you could have found these relations is by considering $(x-\alpha)(x-\beta)(x-\gamma) = x^3-(\alpha+\beta+\gamma)x^2+(\alpha\beta+\alpha\gamma +\beta\gamma)x - \alpha\beta\gamma = x^3 + 4x^2 + 3x + 2$
Then the relations you start out with follow.
After that you want an equation with roots $\alpha\gamma,\beta\gamma,\alpha\beta$
An equation that satisfies this property is: $(x-\alpha\gamma)(x-\beta\gamma)(x-\alpha\beta) = 0$
But we would like to know explicitly what this polynomial looks like, using the relations we derived.
Thus we write out this factored form: this can be tedious work, but it's not that hard actually. Mathematics is 10% inspiration, 90% perspiration.
12. Originally Posted by Dinkydoe
Then I wonder how you came to these relations
The only way you could have found these relations is by considering $(x-\alpha)(x-\beta)(x-\gamma) = x^3-(\alpha+\beta+\gamma)x^2+(\alpha\beta+\alpha\gamma +\beta\gamma)x - \alpha\beta\gamma = x^3 + 4x^2 + 3x + 2$
Then the relations you start out with follow.
After that you want an equation with roots $\alpha\gamma,\beta\gamma,\alpha\beta$
An equation that satisfies this property is: $(x-\alpha\gamma)(x-\beta\gamma)(x-\alpha\beta) = 0$
But we would like to know explicitly what this polynomial looks like, using the relations we derived.
Thus we write out this factored form: this can be tedious work, but it's not that hard actually. Mathematics is 10% inspiration, 90% perspiration.
I came to the relations that you quoted by using: -b/a to get -4, c/a to get 3 and -d/a to get -2 which was simple enough.
I agree with what you said about mathematics, except with me much of the 90% perspiration is lost in confusion.
Please could you write out a solution? I have tortured myself too long over this question and have tried what you said - expanding those brackets but I can't do that without obtaining $x^2$ and $x$ in a number of terms and it doesn't add up to what my mark scheme details. | 2017-03-23 10:51:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 45, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316269516944885, "perplexity": 524.2022796270301}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186891.75/warc/CC-MAIN-20170322212946-00151-ip-10-233-31-227.ec2.internal.warc.gz"} |
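For reference (an editorial addition, not one of the original replies), here is the expansion the mark scheme relies on, written out with the values substituted:

```latex
(x-\beta\gamma)(x-\gamma\alpha)(x-\alpha\beta)
= x^3 - (\beta\gamma+\gamma\alpha+\alpha\beta)\,x^2
      + (\alpha\beta\gamma^2+\alpha^2\beta\gamma+\alpha\beta^2\gamma)\,x
      - \alpha^2\beta^2\gamma^2
= x^3 - 3x^2 + \alpha\beta\gamma(\alpha+\beta+\gamma)\,x - (\alpha\beta\gamma)^2
= x^3 - 3x^2 + (-2)(-4)\,x - (-2)^2
= x^3 - 3x^2 + 8x - 4.
```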
http://mathhelpforum.com/calculus/86288-normal-line.html | 1. ## Normal Line
What is the equation of the line normal to the curve y=(ln(x))* e raised to (2x) where x=1? I just don't know how to find normal lines.
What is the equation of the line normal to the curve y=(ln(x))* e raised to (2x) where x=1? I just don't know how to find normal lines.
know how to find a tangent line?
normal lines are perpendicular to tangent lines.
3. I just drew a blank on how to find tangent lines. Derivative? I know perpendicular means the slope is the negative reciprocal.
4. The normal at the point $\left( {a,f(a)} \right)$ is $y - f(a) = \frac{{ - 1}}{{f'(a)}}\left( {x - a)} \right)$.
I just drew a blank on how to find tangent lines. Derivative? I know perpendicular means the slope is the negative reciprocal.
use the derivative to find the slope at the given point ... the normal slope will be the opposite reciprocal of that slope. Finally, use the point-slope form to get the equation.
6. I don't understand.
$y = \ln{x} \cdot e^{2x}$ | 2017-11-23 15:57:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928366422653198, "perplexity": 357.0191328562862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806842.71/warc/CC-MAIN-20171123142513-20171123162513-00424.warc.gz"} |
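To close the loop on this thread (an added worked finish, not one of the original replies), carrying out the suggested steps for this function at $x=1$:

```latex
f(x) = \ln{x}\cdot e^{2x}, \qquad
f'(x) = \frac{e^{2x}}{x} + 2\ln{x}\cdot e^{2x}
\;\Longrightarrow\; f(1) = 0, \quad f'(1) = e^{2},
```

so by the formula above the normal line is $y - 0 = -\dfrac{1}{e^{2}}\,(x - 1)$.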
https://docs.nvidia.com/drive/archive/5.1.0L/nvvib_docs/DRIVE%20Linux%20AGX%20PDK%20Development%20Guide/baggage/structNvMediaEncodeConfigH265.html | ## NVIDIA DRIVE OS Linux API Reference
#### 5.1.3.0 Release
NvMediaEncodeConfigH265 Struct Reference
## Detailed Description
Holds the H265 encoder configuration parameters.
Definition at line 2304 of file nvmedia_common.h.
Collaboration diagram for NvMediaEncodeConfigH265:
## Data Fields
uint32_t features
Holds bit-wise ORed configuration feature flags. More...
uint32_t gopLength
Holds the number of pictures in one GOP. More...
NvMediaEncodeRCParams rcParams
Holds the rate control parameters for the current encoding session. More...
NvMediaEncodeH264SPSPPSRepeatMode repeatSPSPPS
Holds the frequency of the writing of Sequence and Picture parameters. More...
uint32_t idrPeriod
Holds the IDR interval. More...
uint16_t numSliceCountMinus1
Set to 1 less than the number of slices desired per frame. More...
uint8_t disableDeblockingFilter
Holds whether to disable the deblocking filter. More...
uint8_t enableWeightedPrediction
Holds whether to enable weighted prediction. More...
uint32_t intraRefreshPeriod
Holds the interval between frames that trigger a new intra refresh cycle and this cycle lasts for intraRefreshCnt frames. More...
uint32_t intraRefreshCnt
Holds the number of frames over which intra refresh will happen. More...
uint32_t maxSliceSizeInBytes
Holds the maximum slice size in bytes for dynamic slice mode. More...
uint32_t numCTUsPerSlice
Number of CTU per slice. More...
NvMediaEncodeConfigH265VUIParams * h265VUIParameters
Holds the H265 video usability info parameters. More...
NvMediaEncodeQuality quality
Holds encode quality pre-set. More...
NvMediaEncodeQP initQP
Holds Initial QP parameters. More...
NvMediaEncodeQP maxQP
Holds maximum QP parameters. More...
## Field Documentation
uint8_t NvMediaEncodeConfigH265::disableDeblockingFilter
Holds whether to disable the deblocking filter.
Definition at line 2323 of file nvmedia_common.h.
uint8_t NvMediaEncodeConfigH265::enableWeightedPrediction
Holds whether to enable weighted prediction.
Definition at line 2325 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::features
Holds bit-wise ORed configuration feature flags.
See the NvMediaEncodeH265Features enum.
Definition at line 2307 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::gopLength
Holds the number of pictures in one GOP.
Low latency application client can set the goplength field to NVMEDIA_ENCODE_INFINITE_GOPLENGTH so that keyframes are not inserted automatically.
Definition at line 2311 of file nvmedia_common.h.
NvMediaEncodeConfigH265VUIParams* NvMediaEncodeConfigH265::h265VUIParameters
Holds the H265 video usability info parameters.
Set to NULL if VUI is not needed
Definition at line 2352 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::idrPeriod
Holds the IDR interval.
If not set, this is made equal to NvMediaEncodeConfigH265::gopLength. Low latency application client can set IDR interval to NVMEDIA_ENCODE_INFINITE_GOPLENGTH so that IDR frames are not inserted automatically.
Definition at line 2319 of file nvmedia_common.h.
NvMediaEncodeQP NvMediaEncodeConfigH265::initQP
Holds Initial QP parameters.
Client must set NVMEDIA_ENCODE_CONFIG_H265_INIT_QP in features to use this. QP values should be within the range of 1 to 51
Definition at line 2358 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::intraRefreshCnt
Holds the number of frames over which intra refresh will happen.
This value must be less than or equal to intraRefreshPeriod. Setting it to zero turns off the intra refresh functionality. Setting it to one essentially means that after every intraRefreshPeriod frames the encoded P frame contains only intra predicted macroblocks. This value is used only if the NVMEDIA_ENCODE_CONFIG_H265_ENABLE_INTRA_REFRESH is set in features.
Definition at line 2343 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::intraRefreshPeriod
Holds the interval between frames that trigger a new intra refresh cycle and this cycle lasts for intraRefreshCnt frames.
This value is used only if the NVMEDIA_ENCODE_CONFIG_H265_ENABLE_INTRA_REFRESH is set in features. The NVMEDIA_ENCODE_PIC_TYPE_P_INTRA_REFRESH picture type also triggers a new intra- refresh cycle and resets the current intra-refresh period. Setting it to zero results in that no automatic refresh cycles are triggered. In this case only NVMEDIA_ENCODE_PIC_TYPE_P_INTRA_REFRESH picture type can trigger a new refresh cycle.
Definition at line 2335 of file nvmedia_common.h.
NvMediaEncodeQP NvMediaEncodeConfigH265::maxQP
Holds maximum QP parameters.
Client must set NVMEDIA_ENCODE_CONFIG_H265_QP_MAX in features to use this. The maximum QP values must be within the range of 1 to 51 and must be set to a value greater than NvMediaEncodeRCParams::minQP.
Definition at line 2363 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::maxSliceSizeInBytes
Holds the maximum slice size in bytes for dynamic slice mode.
The Client must set NVMEDIA_ENCODE_CONFIG_H265_ENABLE_DYNAMIC_SLICE_MODE in features to use max slice size in bytes.
Definition at line 2347 of file nvmedia_common.h.
uint32_t NvMediaEncodeConfigH265::numCTUsPerSlice
Number of CTU per slice.
Set to 0 if a fixed number of macroblocks is not required, or if maxSliceSizeInBytes or numSliceCountMinus1 is set to a non-zero value.
Definition at line 2350 of file nvmedia_common.h.
uint16_t NvMediaEncodeConfigH265::numSliceCountMinus1
Set to 1 less than the number of slices desired per frame.
Definition at line 2321 of file nvmedia_common.h.
NvMediaEncodeQuality NvMediaEncodeConfigH265::quality
Holds encode quality pre-set.
See NvMediaEncodeQuality enum. Recommended pre-setting is NVMEDIA_ENCODE_QUALITY_L0.
Definition at line 2355 of file nvmedia_common.h.
NvMediaEncodeRCParams NvMediaEncodeConfigH265::rcParams
Holds the rate control parameters for the current encoding session.
Definition at line 2313 of file nvmedia_common.h.
NvMediaEncodeH264SPSPPSRepeatMode NvMediaEncodeConfigH265::repeatSPSPPS
Holds the frequency of the writing of Sequence and Picture parameters.
Definition at line 2315 of file nvmedia_common.h.
The documentation for this struct was generated from the following file: | 2020-07-02 20:04:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23407964408397675, "perplexity": 13813.563841312975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00231.warc.gz"} |
http://math.stackexchange.com/questions/195245/average-distance-between-random-points-on-a-line?answertab=active | # Average Distance Between Random Points on a Line
Suppose I have a line of length L. I now select two points at random along the line. What is the expectation value of the distance between the two points, and why?
$L/3$, by symmetry. – Byron Schmuland Sep 13 '12 at 15:18
Care to elaborate, @Byron? – David Sep 13 '12 at 16:13
@David I've added more explanation in my answer below. – Byron Schmuland Sep 13 '12 at 18:21
Let $X$ be a random variable uniformly distributed over $[0,L]$, i.e., the probability density function of $X$ is the following
$$f_X (x) = \displaystyle\left\{\begin{array}{rl} \frac{1}{L} & \textrm{if} \quad{} x \in [0,L]\\ 0 & \textrm{otherwise}\end{array}\right.$$
Let us randomly pick two points in $[0,L]$ independently. Let us denote those by $X_1$ and $X_2$, which are random variables distributed according to $f_X$. The distance between the two points is a new random variable
$$Y = |X_1 - X_2|$$
Hence, we would like to find the expected value $\mathbb{E}(Y) = \mathbb{E}( |X_1 - X_2| )$. Let us introduce function $g$
$$g (x_1,x_2) = |x_1 - x_2| = \displaystyle\left\{\begin{array}{rl} x_1 - x_2 & \textrm{if} \quad{} x_1 \geq x_2\\ x_2 - x_1 & \textrm{if} \quad{} x_2 \geq x_1\end{array}\right.$$
Since the two points are picked independently, the joint probability density function is the product of the pdf's of $X_1$ and $X_2$, i.e., $f_{X_1 X_2} (x_1, x_2) = f_{X_1} (x_1) f_{X_2} (x_2) = 1 / L^2$ in $[0,L] \times [0,L]$. Therefore, the expected value $\mathbb{E}(Y) = \mathbb{E}(g(X_1,X_2))$ is given by
$$\begin{array}{rl} \mathbb{E}(Y) &= \displaystyle\int_{0}^L\int_{0}^L g(x_1,x_2) \, f_{X_1 X_2} (x_1, x_2) \,d x_1 \, d x_2\\ &= \frac{1}{L^2}\displaystyle\int_{0}^L\int_{0}^L |x_1 - x_2| \,d x_1 \, d x_2\\ &= \frac{1}{L^2}\displaystyle\int_{0}^L\int_{0}^{x_1} (x_1 - x_2) \,d x_2 \, d x_1 + \frac{1}{L^2}\displaystyle\int_{0}^L\int_{x_1}^{L} (x_2 - x_1) \,d x_2 \, d x_1\\ &= \frac{L^3}{6 L^2} + \frac{L^3}{6 L^2} = \frac{L}{3}\end{array}$$
On the third line of the $\mathbb{E}(Y)$ derivation, shouldn't $d x_1$ and $d x_2$ be swapped? (Or the limits of the integrals.) Pedantic, I know. – David Sep 13 '12 at 16:17
@David: You're totally right. Thanks for pointing that out. I fixed those typos. – Rod Carvalho Sep 13 '12 at 16:22
Sorry. I posted a cryptic comment just before running off to class. What I meant was that if $X,Y$ are independent uniform $(0,1)$ random variables, then the triple $$(A,B,C):=(\min(X,Y),\ \max(X,Y)-\min(X,Y),\ 1-\max(X,Y))$$ is an exchangeable sequence. In particular, $\mathbb{E}(A)=\mathbb{E}(B)=\mathbb{E}(C),$ and since $A+B+C=1$ identically we must have $\mathbb{E}(B)=\mathbb{E}(\mbox{distance})={1\over 3}.$
Intuitively, the "average" configuration of two random points on an interval looks like this:
What I like the most about this answer is that it can be used to prove the case where we have $n$ points and not only two points $x$ and $y$. Or am I missing something ? – AJed Sep 11 '13 at 3:40
Let $X_1, X_2$ be independent identically distributed random variables, with $f_X(x) = [0<x<1]$. Assume $X_1=x_1$. Then $P(X_2<x_1) = x_1$. Moreover $E(X_2|X_2<x_1)=\frac{x_1}2$ and $E(X_2|X_2>x_1) = x_1+\frac{1-x_1}2=\frac{1+x_1}2$, hence $E(|X_2-x_1|) = x_1\cdot\frac{x_1}2+(1-x_1)\cdot \frac{1-x_1}2=\frac12-x_1+x_1^2$. Finally $$E(|X_2-X_1|) = \int_0^1E(|X_2-x|)dx = \left[\frac12x-\frac12x^2+\frac13x^3\right]_0^1=\frac13.$$ Hence with an interval of length $L$ instead of $1$, the answer is $\frac L 3$.
In the fourth sentence ("Moreover..") there are two places where on the RHS you wrote $x_2$ when you meant $x_1$, if I am not mistaken. – David Sep 13 '12 at 15:59
Yes, I did. Thanks – Hagen von Eitzen Sep 13 '12 at 16:42
Let $X_1$ and $X_2$ be independent identically distributed random variables, with $f_X(x) = [0<x<1]$. It is well known that $X \stackrel{d}{=} 1-X$.
For simplicity assume $L=1$.
Therefore $|X_1-X_2| \stackrel{d}{=} |X_1+X_2-1|$. Random variable $D = X_1+X_2-1$ follows symmetric triangular distribution on $(-1,1)$, being a special case of Irwin-Hall distribution. We immediately have: $$f_{|D|}(\ell) = 2 (1-\ell)[0<\ell<1]$$ Immediately yielding the expectation: $$\mathbb{E}(|D|) = \int_0^1 2 \ell(1-\ell) \mathrm{d} \ell = \frac{1}{3}$$
What does $\buildrel d\over=$ mean? – MJD Sep 13 '12 at 16:31
@MJD Symbol $\stackrel{d}{=}$ stands for "equality in distribution". – Sasha Sep 13 '12 at 16:34 | 2015-01-28 04:54:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958894670009613, "perplexity": 233.90125391329906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122123549.67/warc/CC-MAIN-20150124175523-00154-ip-10-180-212-252.ec2.internal.warc.gz"} |
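All three derivations are easy to cross-check with a quick simulation; a minimal Python sketch (an added illustration, with L = 1):

```python
import random

trials = 1_000_000
# Draw two independent uniform points on [0, 1] and average their distance.
total = sum(abs(random.random() - random.random()) for _ in range(trials))
print(total / trials)  # about 0.3333, matching E|X1 - X2| = L/3 with L = 1
```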
https://www.gamedev.net/blogs/entry/1384211-untitled/ | # Untitled
I still haven't done any game development stuff lately. Well, I just did for about the last hour, but before that it was nothing. I've been spending my time with the girl I'm going to ask out tonight and working.
Not much will be done today with it being Valentine's but I do plan on getting some mockups done. I'll see what I can get done and post those up later tonight.
On a totally unrelated matter, I'm probably going to be ordering a 37" LCD widescreen HDTV from Dell tonight. With my new job I'll be able to afford it along with the rest of my bills, I just need to get the ok from my parents (they said if I can afford it, I can buy it, but since the bill is in their name I'm going to ask first anyway.)
Speaking of bills, the doctor's office finally caught up with me (when I had to have my gums lanceted a while back) as well as Sallie Mae for my student loan. So that's another $77 a month ($25 for the doctor's bill since my mom said she'd pay half of it as it was her fault lol and $52 for Sallie Mae.) All in all at the moment I have about $192 I owe a month (soon to be $248 once I put my order into Dell.)
Haven't gotten my 360 back yet.
Know how you feel. I just discovered that unbeknownst to me, I've been paying 2.5% interest on a £3,000 student loan for the last eight years, since I've been below the threshold for payment, and now owe £4,500. And I didn't even get a qualification.
Now they get to take 9% of anything above £1,250 I earn in a month. Only found out I still had a loan because I had a good month sales-wise in December and went over the bar in January.
Call this supporting and assisting students? HA!
Quote:
Original post by EasilyConfused Know how you feel. I just discovered that unbeknowst to me, I've been paying 2.5% interest on a £3,000 student loan for the last eight years, since I've been below the threshold for payment, and now owe £4,500. And I didn't even get a qualification. Now they get to take 9% of anything above £1,250 I earn in a month. Only found out I still had a loan because I had a good month sales-wise in December and went over the bar in January. Call this supporting and assisting students? HA!
Ouch, that sucks [sad]. | 2017-09-25 13:36:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2462548017501831, "perplexity": 2062.1660444024833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691830.33/warc/CC-MAIN-20170925130433-20170925150433-00450.warc.gz"}
http://mapleprimes.com/tags/label | ## Labeling symbols in plot ...
Dear Maple users,
In Maple 18, I want to label the y-axis as $\hat{\sigma}_y$.
labels = [x, sigma[y]] works fine, but I have no idea how to put a hat on sigma[y].
regards,
How to add two algebraic equations? I have two equations, (1) and (2). I simply wrote (1) + (2); instead of adding the equations, the result I got is 3. I think there is some mistake in writing text for calling the equation that was in bold.
## How to label axes in Maple...
Please, how can I label my vertical axis in such a way that the label will be written vertically, e.g. "proportions of Susceptible"?
## How to put the independent variable on the vertica...
Hi!
If somebody could help me it would be awesome!
I would like to be able to switch the independent variable onto the vertical axis!
If that is not possible, what I'm trying to do is model, as a function, a set of points with multiple values for the same x but not more than one for the y; so the best option would be to switch the variables, to have x dependent on y. My graph has to be vertical though, so I can't just switch the points.
Thank you a lot
Charlotte
## 3D Plot - Labels outside figure margins?...
Hi!
I'm currently using Maple 17 and I'm trying to programmatically export .eps 3D plots using the following code:
plotsetup(eps, plotoutput = square_of_x, plotoptions = color=rgb , portrait, noborder, height=4in,width=4in, shrinkby=0.1)
The problem that I'm facing is:
1- The z-axis label is outside of my figure (see uploaded figure). This happens even if I try to save the figure in .png, and it seems that the 'shrinkby' option is not working properly;
Could you help me solve this?
Thanks!
## How do you add a superscript in maple?...
Hello,
I would like to know how one would add a superscript in Maple. For example, I need to label an axis with the unit cm^-1, but it shows 1/cm (like a fraction) when executed. How could I avoid this so the graph label would output cm^-1? Thank you for your help in advance!
Gambia Man
## Labeling solutions...
I have used solve to find the solution to an equation that has two solutions, and I want to give each solution a label so that I can use each individually in subsequent manipulations. How do I label each solution separately?
## Plotting labels without axes...
Hello,
I want a plot with labels = ["x values", "y values"] but without displaying the y-axis
## Labeling in matrixplot...
Hello,
I have used a sparsematrixplot to identify the patterns in a matrix.
In order to facilitate the readability of my sparsematrixplot, i would like to change the labels of the rows/ columns axis.
- for my rows, the labels : [eq1,eq2, ..., ]
- for my columns, the labels : [q1,q2,q3,...,]
Do you have ideas how I can do that ?
## Convert doesn't work with label?...
Hi guys,
I can not convert a series with label.
If anyone can confirm this?
Maybe-Bug.mw
## How to label contours in Maple 17? ...
hi friends
i have a question and i could find the answer in existing questions but it was not clear at all!!!
i want to label my contours and i know that i should use lebelledcontourplot command.
But how?!
"1- First download the files located on his webpage. Advisor6.zip It should be zipped with 3 files in it.
2 - Unzip them and copy them into a directory which you name, probably somewhere in the maple directory named advisor. c:\maple12\advisor
3 - Then create a maple.ini in the maple12 directory with the following line to match your directory location
libname:= c:/maple12/advisor, libname: just like on the instruction installation page."
these are my questions:
1-how can i create a maple.ini?!
2-what should i do with the file i will create?
please explain more about the third phrase and explain exactly what should i do step by step.
thanks a lot
## List of all labels...
How can I get a list of all label references of a worksheet? In fact I want to write a small piece of code which would produce the LaTeX output (written to a text file) for my indicated labels. Thanks
## Customise Virtual Solar System by Yi Xie...
I am trying to alter the Virtual Solar System Maple worksheet of Yi Xie: I added several objects to the eight planets and Pluto (e.g. Hale-Bopp, Sedna, 2012 VP113, etc.) and would like to adjust the array such that, when zooming out and the orbits and labels overlap to the point of being unreadable, I can switch specific parts (e.g. the inner solar system) on and off (respectively display/not display them). In the original file a single array was created from 1..18 (including 9 orbital entries and 9 label entries). What I did is create arrays for each part of the Solar System: e.g. Inner for the planets+Pluto, 1..18; an array for Hale-Bopp with an orbital entry and a label entry, so [1,2]; and an array with 6 entries for 3 additional objects, like Sedna, Planet X and 2012 VP113; as well as the Sun, which only has a single entry since no orbital elements are necessary and one just makes a 3dplot (I did not label it, so just one entry). All arrays are converted into lists in the end and displayed. Here is the code:
> with(linalg);
> with(plots);
> with(plottools);
> P1 := matrix([[cos(omega[j]), -sin(omega[j]), 0], [sin(omega[j]), cos(omega[j]), 0], [0, 0, 1]]); P2 := matrix([[1, 0, 0], [0, cos(i[j]), -sin(i[j])], [0, sin(i[j]), cos(i[j])]]); P3 := matrix([[cos(Omega[j]), -sin(Omega[j]), 0], [sin(Omega[j]), cos(Omega[j]), 0], [0, 0, 1]]);
> A:=matrix([[a[j]*(cos(E[j])-e[j])],[a[j]*sqrt(1-e[j]^2)*sin(E[j])],[0]]);
> R:=multiply(P3,P2,P1);
> B:=multiply(R,A);
> a := [.38709893, .72333199, 1.00000011, 1.52366231, 5.20336301, 9.53707032, 19.19126393, 30.06896348, 39.48168677, 1/0.5454e-2, 268.2509283, 532.7838156, 300];
> e := [.20563069, 0.677323e-2, 0.1671022e-1, 0.9341233e-1, 0.4839266e-1, 0.5415060e-1, 0.4716771e-1, 0.858587e-2, .24880766, .994920, .7005635, .8570973, .1];
> i := [7.00487, 3.39471, 0.5e-4, 1.85061, 1.30530, 2.48446, .76986, 1.76917, 17.14175, 89.5328, 24.01830, 11.92859, 10];
> Omega := [48.33167, 76.68069, -11.26064, 49.57854, 100.55615, 113.71504, 74.22988, 131.72169, 110.30347, 282.1476, 90.88303, 144.53190, 45];
> omega := [77.45645, 131.53298, 102.94719, 336.04084, 14.75385, 92.43194, 170.96424, 44.97135, 224.06676, 130.8038, 293.03200, 311.18311, 150];
> i := map(proc (x) convert(x, units, deg, rad) end proc, i);
> Omega := map(proc (x) convert(x, units, deg, rad) end proc, Omega);
> omega := map(proc (x) convert(x, units, deg, rad) end proc, omega);
> for j to 13 do omega[j] := arcsin(sin(omega[j]-Omega[j])/sin(arccos(sin(i[j])*cos(omega[j]-Omega[j])))) end do;
> x := array(1 .. 13);
> y := array(1 .. 13);
> z := array(1 .. 13);
> for j to 13 do x[j] := B[1, 1]; y[j] := B[2, 1]; z[j] := B[3, 1] end do;
> Sun := array([1]);
> Inner := array(1 .. 18); for j to 9 do Colors := [black, green, blue, red, black, yellow, green, violet, brown, aquamarine, black, black, red]; Linestyle := [solid, solid, solid, solid, solid, solid, solid, solid, solid, longdash, solid, solid, longdash]; Inner[j] := spacecurve([subs(E[j] = E, x[j]), subs(E[j] = E, y[j]), subs(E[j] = E, z[j])], E = 0 .. 2*Pi, color = Colors[j], linestyle = Linestyle[j]) end do;
> Comet := array([1, 2]); if j = 10 then Colors := [aquamarine]; Linestyle := [longdash]; Comet[1] := spacecurve([subs(E[j] = E, x[j]), subs(E[j] = E, y[j]), subs(E[j] = E, z[j])], E = 0 .. 2*Pi, color = Colors[j], linestyle = Linestyle[j]) end if;
> Oort := array(1 .. 6); for j from 11 to 13 do Colors := [black, black, red]; Linestyle := [solid, solid, longdash]; Inner[j] := spacecurve([subs(E[j] = E, x[j]), subs(E[j] = E, y[j]), subs(E[j] = E, z[j])], E = 0 .. 2*Pi, color = Colors[j], linestyle = Linestyle[j]) end do;
> Sun[1] := plot3d(0.1e-1, 0 .. 2*Pi, 0 .. Pi, style = PATCHNOGRID, coords = spherical, color = red);
> Inner[10] := textplot3d([subs(E[1] = 0, x[1]), subs(E[1] = 0, y[1]), subs(E[1] = 0, z[1]), "Mercury"]); Inner[11] := textplot3d([subs(E[2] = 0, x[2]), subs(E[2] = 0, y[2]), subs(E[2] = 0, z[2]), "Venus"]); Inner[12] := textplot3d([subs(E[3] = 0, x[3]), subs(E[3] = 0, y[3]), subs(E[3] = 0, z[3]), "Earth"]); Inner[13] := textplot3d([subs(E[4] = 0, x[4]), subs(E[4] = 0, y[4]), subs(E[4] = 0, z[4]), "Mars"]); Inner[14] := textplot3d([subs(E[5] = 0, x[5]), subs(E[5] = 0, y[5]), subs(E[5] = 0, z[5]), "Jupiter"]); Inner[15] := textplot3d([subs(E[6] = 0, x[6]), subs(E[6] = 0, y[6]), subs(E[6] = 0, z[6]), "Saturn"]); Inner[16] := textplot3d([subs(E[7] = 0, x[7]), subs(E[7] = 0, y[7]), subs(E[7] = 0, z[7]), "Uranus"]); Inner[17] := textplot3d([subs(E[8] = 0, x[8]), subs(E[8] = 0, y[8]), subs(E[8] = 0, z[8]), "Neptune"]); Inner[18] := textplot3d([subs(E[9] = 0, x[9]), subs(E[9] = 0, y[9]), subs(E[9] = 0, z[9]), "Pluto"]); Comet[2] := textplot3d([subs(E[10] = 0, x[10]), subs(E[10] = 0, y[10]), subs(E[10] = 0, z[10]), Hale-Bopp]); Oort[4] := textplot3d([subs(E[11] = 0, x[11]), subs(E[11] = 0, y[11]), subs(E[11] = 0, z[11]), "2012 VP113"]); Oort[5] := textplot3d([subs(E[12] = 0, x[12]), subs(E[12] = 0, y[12]), subs(E[12] = 0, z[12]), "Sedna"]); Oort[6] := textplot3d([subs(E[13] = 0, x[13]), subs(E[13] = 0, y[13]), subs(E[13] = 0, z[13]), "Planet X ?", color = red]);
> Sun1 := convert(Sun, 'list');
> Inner1 := convert(Inner, 'list');
> Comet1 := convert(Comet, 'list');
> Oort1 := convert(Oort, 'list');
> display(Sun1, Inner1, Comet1, Oort1, scaling = CONSTRAINED);
The first error message appears after the if-condition. Can you tell me where I am making a mistake? Beware: when copy-pasting the code from Maple to Word and from Word to here, the colons at the end of lines have changed into semicolons. I hope this is no problem in executing the code, despite the lines being in the same ">..." e.g. where the labels are defined.
## How to move curve labels?...
When you label the curve of a function, the label is put right next to it. Is there a way to move this label to another position on the same curve?
## labels on equations changed by dragging an equatio...
Hello people in mapleprimes,
I have a question of labels of equations.
For example, suppose that there is a following description in a worksheet.
>2*x+y=5;
2*x+y=5 (1)
>3*x+y=6;
3*x+y=6 (2)
>solve({(1),(2)},{x,y});
{x=1,y=3} (3)
subs((3),100+x+100*y)
400
And when I moved (2) next to (1) by dragging, and then clicked !!! to execute it, the worksheet
changed as follows, which is wrong, as ?? now appears:
>2*x+y=5;3*x+y=6;
2*x+y=5
3*x+y=6 (1)
>solve({(1),??},{x,y})
{x = x, y = -3*x+6} (2)
>subs({(1),??},100*x+100*y)
-200*x+600
Do you know some good way for ?? not to appear in the above situation?
Best wishes.
taro
| 2017-08-17 02:02:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4227198660373688, "perplexity": 7465.155786887214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102819.58/warc/CC-MAIN-20170817013033-20170817033033-00290.warc.gz"} |
http://fortranwiki.org/fortran/show/nint | Fortran Wiki nint
Description
nint(x) rounds its argument to the nearest whole number.
Standard
FORTRAN 77 and later; with kind argument, Fortran 90 and later
Class
Elemental function
Syntax
result = nint(x [, kind])
Arguments
• x - The type of the argument shall be real.
• kind - (Optional) An integer initialization expression indicating the kind parameter of the result.
Return value
Returns x with the fractional portion of its magnitude eliminated by rounding to the nearest whole number and with its sign preserved, converted to an integer of the default kind (or of the specified kind, if the kind argument is present).
Example
program test_nint
  real(4) :: x4
  real(8) :: x8
  x4 = 1.234E0_4
  x8 = 4.321_8
  print *, nint(x4), idnint(x8)   ! prints 1 and 4
end program test_nint | 2015-03-04 08:30:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5024157762527466, "perplexity": 8438.73778495169}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463460.96/warc/CC-MAIN-20150226074103-00262-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://homework.cpm.org/cpm-homework/homework/category/CCI_CT/textbook/Int3/chapter/Ch11/lesson/11.1.1/problem/11-8 | ### Home > INT3 > Chapter Ch11 > Lesson 11.1.1 > Problem11-8
11-8.
1. In Lesson 11.1.2 you will focus on multiplying and dividing rational expressions. Recall what you learned about multiplying and dividing fractions in a previous course as you answer the questions below. To help you, the following examples have been provided.
1. Without a calculator, multiply and reduce the result. Describe your method for multiplying fractions.
2. Without a calculator, divide and reduce the result. Then use a calculator to check your answer. Describe your method for dividing fractions.
Multiply the numerators together to find the numerator of the final answer, and multiply the denominators together to find the denominator of the final answer.
Do 2 and 14 have a common factor? Do 3 and 9 have a common factor? Is it possible to reduce before multiplying across? Use whichever method is easiest for you.
$\frac{3}{7}$
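For illustration, if the missing fractions in part (a) were, say, $\frac{2}{3}\cdot\frac{9}{14}$ (the original fractions did not survive extraction; this choice is only an assumption consistent with the hints about 2 and 14 and about 3 and 9), the method gives $\frac{2}{3}\cdot\frac{9}{14}=\frac{2\cdot 9}{3\cdot 14}=\frac{18}{42}=\frac{3}{7}$.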
Take the reciprocal of the divisor, and multiply the two resulting fractions. | 2019-09-16 07:11:03 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109571933746338, "perplexity": 711.7473038840305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572491.38/warc/CC-MAIN-20190916060046-20190916082046-00360.warc.gz"} |
http://blog.innodatalabs.com/viterbi-demystified/ | # Viterbi algorithm, demystified
When dealing with sequences, the Viterbi algorithm and Viterbi decoding pop up regularly. This algorithm is usually described in the context of Hidden Markov Models (HMMs). However, its application is not limited to HMMs. Besides, HMMs have lately fallen out of fashion as better Machine Learning techniques have been developed.
## Definitions
Bear with me here, please. This is the most tedious and boring part - explaining the notations.
We have a discrete sequence that starts at index $t=1$ and ends at index $t=T$. Here $T$ is the length of the sequence. At each index $t$ there are $S$ possible states. The sequence (sometimes called a path) is formed when we decide which state is chosen at each index. The state variable $s(t)$ can take any value in the range $1\dots S$ at each index $t$.
Thus, we describe a sequence as $s(t)$, $t=1\dots T$. The number of possible sequences (paths) grows exponentially with the length of the sequence $T$.
One example: we want to describe (or predict) weather for a week. Here the length of our sequence is 7 (number of days), and each day we have either rainy, cloudy or sunny state. We have total of three states. Let us encode rainy=0, cloudy=1, and sunny=2.
Here is one possible sequence:
sunny(2)
sunny(2)
cloudy(1)
rainy(0)
rainy(0)
cloudy(1)
sunny(2)
Or, in short, our s(t) is:
2 2 1 0 0 1 2
There are in total $3^7=2187$ possible weather sequences in this model ($T=7$, $S=3$).
Now, if we want to predict the weather for a week, we need to come up with an algorithm that finds the most likely sequence $s(t)$. In ML this task is cast as optimization of some objective function $L[s]$. This function depends on the sequence, and we want to find the $s(t)$ that minimizes $L$. When I write $L[s]$ I mean that $L$ is a real number that depends on the complete path, i.e. depends on the state choice at every index $t$. The same can be written as $L[s] = L(s(1), s(2), \dots s(T))$.
Practically, there are specific popular forms of the dependency of $L$ on $s$. The choice of $L$ is super important, because it defines the model. A good choice will predict weather reliably. A bad choice will fail to do so.
### Local loss
Somewhat trivial, but still a reasonable model:
$$L[s] = \sum_{t=1}^{T} l(t, s(t))$$
Here we introduced $l(t, s)$, which we will call the logit array (or matrix). For each index $t$ we have $S$ numbers that tell us how likely the corresponding state is (the smaller the logit, the more likely the state, since we minimize $L$). For example, for weather prediction we may have the following logits:
index rainy=0 cloudy=1 sunny=2
-----------------------------------------
t=1 -0.1 -3.5 2.3
t=2 -0.8 -2.5 1.3
t=3 -1.2 -1.0 4.3
t=4 -0.2 -3.0 0.1
t=5 0.15 0.2 -2.7
t=6 0.19 1.5 -2.8
t=7 0.7 3.5 -5.3
Where do these logits come from? Typically from the top Neural Network layer. But for the discussion of Viterbi this is not important. What is important is that our model $L[s]$ is fully described by a $7\times 3$ matrix of logits. Once we have these logits computed, we can find the optimal sequence $s^*(t)$ by minimizing $L$.
Even though we have 2187 possible paths to consider, finding the best sequence can be done by just picking the smallest logit at each index. Thus, we can efficiently compute the best sequence $s^* = [1, 1, 0, 1, 2, 2, 2]$ and predict this:
cloudy
cloudy
rainy
cloudy
sunny
sunny
sunny
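A minimal sketch of this local decoding in Python/NumPy (the array name and layout are my own assumptions; rows are the indices $t$, columns are the states):

import numpy as np

# Logits from the table above; rows t=1..7, columns rainy=0, cloudy=1, sunny=2.
logits = np.array([
    [-0.10, -3.5,  2.3],
    [-0.80, -2.5,  1.3],
    [-1.20, -1.0,  4.3],
    [-0.20, -3.0,  0.1],
    [ 0.15,  0.2, -2.7],
    [ 0.19,  1.5, -2.8],
    [ 0.70,  3.5, -5.3],
])

# With a purely local loss the objective decouples across indices,
# so the best path is just the per-index argmin.
best = logits.argmin(axis=1)
print(best)  # [1 1 0 1 2 2 2]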
### Pairwise loss
In the previous model, the solution was computed locally, for each index $t$. That was possible because the optimization objective $L$ was a sum of local objectives.
For the weather prediction problem one can notice that in practice weather does not switch from rainy to sunny or from sunny to rainy. It (almost) always goes via cloudy. To capture this observation, we can introduce a better loss:
$$L[s] = \sum_{t=1}^{T} l(t, s(t)) + \sum_{t=1}^{T-1} m(t, s(t), s(t+1))$$
Here the first term is the same as before - the sum of local losses. The second term allows us to model dependencies between today and tomorrow. We introduced a matrix $m(t, s, q)$ that controls the transitions. A physicist would say that we have “a potential with pairwise interactions”. In the context of HMMs, the term $m$ corresponds to the transition probabilities.
For a given $t$, $m$ is an $S\times S$ matrix that tells us how costly the transition from state $s$ to state $q$ is (the smaller the entry, the more likely the transition). In our weather prediction example, such a matrix may look like:
0.0 2.3 1000.0
5.3 1.5 4.2
1000.0 3.3 0.1
If we choose a path that switches from rainy(0) to cloudy(1) we will get an additional loss of $m(t, 0, 1)$, or 2.3. Note that for simplicity in this example $m$ is the same for all indices $t$.
Now, if we choose a path that switches from rainy(0) to sunny(2) we will be penalized by extra loss of 1000.
## Viterbi algoritm
The brute-force way of finding the optimal sequence in a pairwise loss model is prohibitively expensive. Just think about a problem with $S=10$ and sequence length $T=80$. The number of all possible sequences in this problem is $10^{80}$, which is how many atoms we have in the observable universe. Good luck computing this!
Yet we can efficiently find the best sequence by exploiting the pairwise structure of the model.
### Dynamic programming to the rescue
The Viterbi algorithm is an instance of the dynamic programming (DP) class of algorithms.
The first thing we do in DP is pretend we know the solution. To elaborate, let's first introduce a constrained optimization problem: we are asked to optimize the objective function $L[s]$ with the additional condition that the sequence $s(t)$ ends at a specific state $q$. I will write this optimization objective as:
$$L^*_T(q) = \min_{s\,:\,s(T)=q} L[s]$$
where $q$ can take any value in the range $1\dots S$.
If we know the solution for the constrained problem, we can find the full solution by just cycling through all values of $q$ and finding the one that yields the minimal value of $L$. Thus, knowing $L^*_T(q)$ for every $q$, we can easily provide the answer to the original problem.
Now, back to the Dynamic Programming. We pretend that we know the answer already for a bit shorter problem. Specifically, we pretend that we can easily compute the best constrained objective $L[s \vert s(T-1)=q]$. Assuming this, we notice that to find the solution to the original problem we just need to consider the last step of our sequence.
Let's express our pairwise objective function through the objective of a one-step-smaller problem: any path of length $T$ that ends in state $r$ consists of a path of length $T-1$ ending in some state $q$, plus the transition cost $m(T-1, q, r)$ and the local cost $l(T, r)$.
Now, if we know the solution $L^*_{T-1}(q)$ to the constrained problem of size $T-1$, we can find the solution to the constrained problem of size $T$, $L^*_T(r)$, because:
$$L^*_T(r) = \min_{q}\left[ L^*_{T-1}(q) + m(T-1, q, r) \right] + l(T, r)$$
To finish the hard part, let's note that the solution for a problem of size 1 is super easy (as there is no pairwise term in the optimization objective): $L^*_1(q) = l(1, q)$. Viterbi starts with a sequence of size 1 and grows the solution till it reaches the final length $T$. The complexity is $O(S^2T)$, and the space needed is $O(ST)$.
Again, returning to our imaginary problem that has 10 states and sequence of length 80, computational complexity is $O(10^2 \times 80) = O(8000)$, while space needed is $O(800)$ - very manageable!
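To make the recursion concrete, here is a compact sketch in Python/NumPy (all names are mine; for simplicity the transition cost $m$ is taken to be the same for every index $t$, as in the weather example):

import numpy as np

def viterbi(logits, trans):
    # logits: (T, S) array of local losses l(t, s).
    # trans:  (S, S) array of pairwise losses m(s, q), index-independent.
    # Returns the loss-minimizing state sequence of length T.
    T, S = logits.shape
    cost = logits[0].copy()              # L*_1(q) = l(1, q)
    back = np.zeros((T, S), dtype=int)   # best-predecessor pointers
    for t in range(1, T):
        # step[q, r] = L*_t(q) + m(q, r); minimize over the predecessor q.
        step = cost[:, None] + trans
        back[t] = step.argmin(axis=0)
        cost = step.min(axis=0) + logits[t]   # L*_{t+1}(r)
    # Drop the end-state constraint, then walk the back-pointers.
    path = np.zeros(T, dtype=int)
    path[-1] = cost.argmin()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

Each of the $T-1$ steps touches an $S\times S$ array, which is where the $O(S^2T)$ complexity comes from.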
## Viterbi and transition constraints
Traditionally, Viterbi is used in ML methods like HMMs and CRFs, where the transition matrix is learned as part of model training.
In the classical sequence labeling (e.g. POS tagging) only logits are used and local loss is used for label decoding.
However, we can still use Viterbi if we add an ad-hoc transition matrix that expresses some additional knowledge.
For example, if we want to enforce a rule that a sunny day is never followed by a rainy day, we just postulate that transition matrix $m$ is the same for all $t$ and has the following structure:
0 0 0
0 0 0
1000 0 0
Then, we can use the same logits to decode the sequence of weather predictions. It is guaranteed to obey this constraint!
If additionally we want to forbid transitions from rainy to sunny, we would use the following transition matrix:
0 0 1000
0 0 0
1000 0 0
Thus, we are taking the original solution to a sequence labeling problem and adding an external transition constraint. Then we find the best sequence that minimizes loss under the given constraint.
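Continuing the hypothetical viterbi sketch from above, plugging in such an ad-hoc matrix is all it takes:

BIG = 1000.0
trans = np.array([
    [0.0, 0.0, BIG],   # rainy -> sunny forbidden
    [0.0, 0.0, 0.0],   # cloudy -> anything allowed
    [BIG, 0.0, 0.0],   # sunny -> rainy forbidden
])
constrained_path = viterbi(logits, trans)  # never jumps rainy <-> sunny directly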
To recap, we can use Viterbi to find solutions to sequence labeling problems under additional transition constraints.
## IOB constraint
One important transition constraint arises when we use popular IOB label encoding.
Indeed, in IOB sequence there are some transitions that are not expected, given the meaning of IOB labeling. For example, well-formed IOB sequence uses B label to mark the start of an interval:
in O
South B-location
Korea I-location
and O
China B-location
But this one does not make sense:
in O
South I-location
Korea I-location
and O
China B-location
Formally, the rules are: label I can only be preceded by I or B. In other words, transition from O to I is not allowed.
This can readily be expressed in terms of a transition matrix, and Viterbi can be used to ensure that we always return a sensible IOB label sequence.
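One possible way to build such a matrix from a tag list (a sketch; the tag set below is just an example):

import numpy as np

def iob_transition_matrix(tags, big=1000.0):
    # Penalize any transition where I-x is not preceded by B-x or I-x.
    S = len(tags)
    m = np.zeros((S, S))
    for i, prev in enumerate(tags):
        for j, cur in enumerate(tags):
            if cur.startswith("I-") and prev not in ("B-" + cur[2:], cur):
                m[i, j] = big
    return m

m = iob_transition_matrix(["O", "B-location", "I-location"])
# m[0, 2] is large: the O -> I-location transition is effectively forbidden.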
## XML structure constraint
When doing prediction on a text of a structural document (think XML), there are additional constraints if we want to express our predictions as additional XML tags. The additional XML tags we are injecting should not contradict the hierarchical structure of XML document.
Again, these constraints can be expressed in terms of a transition matrix. This time for every position in our sequence we will have a different constraint - we can no longer use an $m$ that does not depend on the index $t$. So building the matrix $m$ becomes more complicated. But once it is built, we will employ the standard Viterbi decoder and get back a label sequence that is guaranteed to play well with the structure of the input XML document.
## Viterbi extensions
Apart from asking Viterbi algorithm to find the best sequence, one can ask: give me the next best sequence. A greedy client can even demand this: give me $N$ best sequences, sorted by their “bestness”.
Luckily, a simple modification to Viterbi algorithm allows one to efficiently compute next best and next next best, and so on.
This can be used to estimate confidence of the predicted sequence and to locate suspicious places where prediction is not so sure.
The informal reasoning is this: let's get the 10 best decodings and compare the very best decoding to the remaining 9 decodings.
If there is a significant drop in the value of the loss function when we go from the very best sequence to the next best, then the system is quite confident in the prediction. Conversely, if the losses of the very best and the next best decoding are similar, then the system thinks that both sequences are almost equally likely. Look where they differ - that would be a location where the machine is not sure of the prediction.
## Summary
When doing sequence labeling, Viterbi algorithm is very useful and broadly applicable - get to know it!
Written on March 29, 2017 | 2018-06-22 20:24:44 | {"extraction_info": {"found_math": true, "script_math_tex": 72, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7619489431381226, "perplexity": 550.9489040718777}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864795.68/warc/CC-MAIN-20180622201448-20180622221448-00597.warc.gz"} |
https://tex.stackexchange.com/questions/329255/break-line-in-if-condition-using-algpseudocode-while-having-nice-indentation | # break line in if condition using algpseudocode while having nice indentation
I am trying to split a long line in my algorithm (the line in my if condition) into two lines, while maintaining indentation. AND I don't want the line break to be anywhere but after my -or- in the code.
SO here is my code
\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{algorithm}
\usepackage{algpseudocode}
\algnewcommand\algorithmicor{\textbf{or}}
\newcommand{\pushcode}[1][1]{\hskip\dimexpr#1\algorithmicindent\relax}
\begin{algorithm}[h]
\caption{Core genome identification}
\begin{algorithmic}[1]
%I tried using this push
\While{$list$ is not empty \algorithmicor \\ \pushcode[1] second condition}
\EndWhile
\end{algorithmic}
\end{algorithm}
\end{document}
I have tried it with this push for example, but this gives me a new line number, which I don't want!
And other suggestions like defining
\newcommand{\myindent}[1]{
\newline\makebox[#1]{}
}
Also puts the indent on a spec. indentation, what is not what I am searching for. I want it to be at the same indentation as the condition before. So something like
if (this is my first condition OR
this is the second)
Redefine the loop and if statements in the preamble (after \usepackage{algpseudocode}) as follows:
\newcommand\CONDITION[2]%
{\begin{tabular}[t]{@{}l@{}l@{}}
#1 &#2
\end{tabular}%
}
\algdef{SE}[WHILE]{While}{EndWhile}[1]%
{\algorithmicwhile\ \CONDITION{#1}{\ \algorithmicdo}}%
{\algorithmicend\ \algorithmicwhile}
\algdef{SE}[FOR]{For}{EndFor}[1]%
{\algorithmicfor\ \CONDITION{#1}{\ \algorithmicdo}}%
{\algorithmicend\ \algorithmicfor}
\algdef{S}[FOR]{ForAll}[1]%
{\algorithmicforall\ \CONDITION{#1}{\ \algorithmicdo}}
\algdef{SE}[REPEAT]{Repeat}{Until}{\algorithmicrepeat}[1]%
{\algorithmicuntil\ \CONDITION{#1}{}}
\algdef{SE}[IF]{If}{EndIf}[1]%
{\algorithmicif\ \CONDITION{#1}{\ \algorithmicthen}}%
{\algorithmicend\ \algorithmicif}%
\algdef{C}[IF]{IF}{ElsIf}[1]%
{\algorithmicelse\ \algorithmicif\ \CONDITION{#1}{\ \algorithmicthen}}
The definitions are taken from the style file algpseudocode.sty and modified to handle multiline conditions.
Here is a complete example:
\documentclass{article}
\usepackage{algpseudocode}
... Code from above ...
\begin{document}
\begin{algorithmic}[1]
\While{short condition}
\EndWhile
\While{very long condition\\broken into two lines}
\EndWhile
\end{algorithmic}
\end{document} | 2021-07-31 09:27:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528648257255554, "perplexity": 4646.827625709678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00142.warc.gz"} |
http://www.gradesaver.com/textbooks/math/geometry/elementary-geometry-for-college-students-5th-edition/chapter-3-section-3-3-isosceles-triangles-exercises-page-151/19 | ## Elementary Geometry for College Students (5th Edition)
Published by Brooks Cole
# Chapter 3 - Section 3.3 - Isosceles Triangles - Exercises: 19
#### Answer
55$^{\circ}$
#### Work Step by Step
The answer is 55$^{\circ}$ because 180-70=110 and because you need two of the same angles divide 110 by 2 and you will get 55$^{\circ}$. Because the triangle is an isosceles triangle you need two angles to be congruent.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2017-02-26 10:00:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5830272436141968, "perplexity": 1478.7658275684435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00415-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://gmatclub.com/forum/emily-keeps-12-different-pairs-of-shoes-24-individual-shoes-in-total-276383.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
# Emily keeps 12 different pairs of shoes (24 individual shoes in total)
Math Expert
Joined: 02 Sep 2009
Posts: 52230
Emily keeps 12 different pairs of shoes (24 individual shoes in total)
16 Sep 2018, 22:20
Difficulty: 35% (medium)
Question Stats: 70% (01:03) correct, 30% (01:15) wrong, based on 132 sessions
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
_________________
RC Moderator
Joined: 24 Aug 2016
Posts: 635
Concentration: Entrepreneurship, Operations
GMAT 1: 630 Q48 V28
GMAT 2: 540 Q49 V16
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
17 Sep 2018, 12:04
Bunuel wrote:
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
No. of ways of selecting the 1st shoe = 24
No. of ways of selecting the 1st & 2nd shoes = 24*23
No. of ways of selecting a 2nd shoe that is the exact pair of the 1st = 1
No. of ways of selecting the 1st shoe & then its exact pair as the 2nd = 24*1
Hence P = (24*1)/(24*23) = 1/23 ....... Ans C
_________________
Please let me know if I am going in wrong direction.
Thanks in appreciation.
Intern
Joined: 26 Aug 2018
Posts: 7
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
17 Sep 2018, 12:10
The probability for the first shoe doesn't matter, so we should consider the probability that the second one is the pair of the first one.
1/23
Manager
Joined: 23 Aug 2016
Posts: 108
Location: India
Concentration: Finance, Strategy
GMAT 1: 660 Q49 V31
GPA: 2.84
WE: Other (Energy and Utilities)
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
17 Sep 2018, 20:11
Bunuel wrote:
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
Hi,
My approach-
Probability of selecting a shoe from one particular pair out of the 12 different pairs = 2/24
Now the probability of selecting the shoe matching the one already dragged out = 1/23 (remember, one shoe is already out)
Therefore the combined probability for one particular matching pair of shoes = 2/24 * 1/23
Now, there are 12 different pairs of shoes, so the probability goes up 12 times:
2/24 * 1/23 * 12 = 1/23.
Bunuel is it correct?
_________________
Thanks and Regards,
Honneeey.
In former years,Used to run for "Likes", nowadays, craving for "Kudos". :D
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 4527
Location: United States (CA)
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
18 Sep 2018, 17:50
Bunuel wrote:
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
The first shoe can be any shoe, so its probability is 24/24 = 1. However, since the second shoe must match the first shoe, its probability of being chosen is 1/23. Therefore, the probability of the two shoes forming a matching pair is 1 x 1/23 = 1/23.
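If you like to verify such results by brute force, here is a short Python check over all $C(24,2)=276$ unordered pairs (the names are arbitrary):

from itertools import combinations

shoes = [(pair, side) for pair in range(12) for side in ("L", "R")]
outcomes = list(combinations(shoes, 2))
matches = [o for o in outcomes if o[0][0] == o[1][0]]
print(len(matches), len(outcomes))     # 12 276
print(len(matches) / len(outcomes))    # 0.043478... = 1/23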
_________________
Scott Woodbury-Stewart
Founder and CEO
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
CEO
Joined: 11 Sep 2015
Posts: 3328
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
09 Jan 2019, 06:15
Top Contributor
Bunuel wrote:
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
P(dog selects matching pair) = P(dog chooses ANY shoe 1st AND 2nd shoe matches the 1st shoe)
= P(dog chooses ANY shoe 1st) x P(2nd shoe matches the 1st shoe)
= 24/24 x 1/23
= 1/23
Cheers,
Brent
_________________
Test confidently with gmatprepnow.com
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 616
Re: Emily keeps 12 different pairs of shoes (24 individual shoes in total)
09 Jan 2019, 08:05
Bunuel wrote:
Emily keeps 12 different pairs of shoes (24 individual shoes in total) under her bed. If her dog drags out two shoes at random, what is the probability that he drags out a matching pair of shoes?
A. 1/144
B. 1/66
C. 1/23
D. 1/12
E. 1/11
$$\left. \matrix{ {\rm{Total}}\,\,:\,\,\,C\left( {24,2} \right) = {{24 \cdot 23} \over 2} = 12 \cdot 23\,\,{\rm{equiprobable}}\,\,{\rm{pairs}}\,\,\,\, \hfill \cr {\rm{Favorable:}}\,\,\,12\,\,{\rm{real}}\,\,{\rm{pairs}}\,\,\left( {{\rm{matches}}} \right) \hfill \cr} \right\}\,\,\,\,\,\, \Rightarrow \,\,\,\,\,? = {{12} \over {12 \cdot 23}} = {1 \over {23}}$$
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
| 2019-01-17 17:54:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4460192918777466, "perplexity": 10426.446934341888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00297.warc.gz"}
https://www.zbmath.org/?q=an%3A0951.68038 | # zbMATH — the first resource for mathematics
Universal coalgebra: A theory of systems. (English) Zbl 0951.68038
In the semantics of programming, finite data types such as finite lists, have traditionally been modelled by initial algebras. Later final coalgebras were used in order to deal with infinite data types. Coalgebras, which are the dual of algebras, turned out to be suited, moreover, as models for certain types of automata and more generally, for (transition and dynamical) systems. An important property of initial algebras is that they satisfy the familiar principle of induction. Such a principle was missing for coalgebras until the work of P. Aczel [Non-Well-Founded sets, CSLI Lecture Notes, Vol. 14, center for the study of Languages and information, Stanford (1988)] on a theory of non-wellfounded sets, in which he introduced a proof principle nowadays called coinduction. It was formulated in terms of bisimulation, a notion originally stemming from the world of concurrent programming languages. Using the notion of coalgebra homomorphism, the definition of bisimulation on coalgebras can be shown to be formally dual to that of congruence on algebras. Thus, the three basic notions of universal algebra: algebra, homomorphism of algebras, and congruence, turn out to correspond to coalgebra, homomorphism of coalgebras, and bisimulation, respectively. In this paper, the latter are taken as the basic ingredients of a theory called universal coalgebra. Some standard results from universal algebra are reformulated (using the aforementioned correspondence) and proved for a large class of coalgebras, leading to a series of results on, e.g., the lattices of subcoalgebras and bisimulations, simple coalgebras and coinduction, and a covariety theorem for coalgebras similar to Birkhoff’s variety theorem.
##### MSC:
68Q10 Modes of computation (nondeterministic, parallel, interactive, probabilistic, etc.) 68Q55 Semantics in the theory of computing
##### Keywords:
universal coalgebra
Full Text:
##### References:
[1] Abramsky, S., A domain equation for bisimulation, Inform and comput., 92, 2, 161-218, (1991) · Zbl 0718.68057 [2] P. Aczel, Non-Well-Founded Sets, CSLI Lecture Notes, Vol. 14, Center for the Study of Languages and Information, Stanford, 1988. [3] Aczel, P., Final universes of processes, (), 1-28 [4] Aczel, P.; Mendler, N., A final coalgebra theorem, (), 357-365 [5] Arbib, M.A.; Manes, E.G., Machines in a category, J. pure appl. algebra, 19, 9-20, (1980) · Zbl 0441.18010 [6] Arbib, M.A.; Manes, E.G., Parametrized data types do not need highly constrained parameters, Inform. and control, 52, 2, 139-158, (1982) · Zbl 0505.68013 [7] America, P.; Rutten, J.J.M.M., Solving reflexive domain equations in a category of complete metric spaces, J. comput. system sci., 39, 3, 343-375, (1989) · Zbl 0717.18002 [8] de Bakker, J.W.; de Vink, E., Control flow semantics, foundations of computing series, (1996), The MIT Press Cambridge, MA · Zbl 0971.68099 [9] M. Barr, Terminal coalgebras in well-founded set theory, Theoret. Comput. Sci. 114 (2) (1993) 299-315. Some additions and corrections were published in Theoret. Comput. Sci. 124 (1994) 189-192. · Zbl 0779.18004 [10] Barr, M., Additions and corrections to “terminal coalgebras in well-founded set theory”, Theoret. comput. sci., 124, 1, 189-192, (1994) · Zbl 0788.18001 [11] J. Barwise, L.S. Moss, Vicious Circles, On the Mathematics of Non-wellfounded Phenomena, CSLI Lecture Notes, Vol. 60, Center for the Study of Language and Information, Stanford, 1996. · Zbl 0865.03002 [12] M.-P. Béal, D. Perrin, Symbolic dynamics and finite automata, Report IGM 96-18, Université de Marne-la-Vallée, 1996. [13] Borceux, F., Handbook of categorical algebra 1: basic category theory, encyclopedia of mathematics and its applications, vol. 50, (1994), Cambridge University Press Cambridge [14] van Breugel, F., Terminal metric spaces of finitely branching and image finite linear processes, Theoret. comput. sci., 202, 1-2, 223-230, (1998) · Zbl 0902.68118 [15] Cı̂rstea, C., Coalgebra semantics for hidden algebra: parameterised objects and inheritance, proc. 12th workshop on algebraic development techniques, (1998), Springer Berlin [16] Cohn, P.M., Universal algebra, mathematics and its applications, vol. 6, (1981), D. Reidel Publishing Company Dordrecht [17] Corradini, A., A complete calculus for equational deduction in coalgebraic specification, report SEN-R9723, (1997), CWI Amsterdam [18] Csákány, B., Completeness in coalgebras, Acta sci. math., 48, 75-84, (1985) · Zbl 0593.08004 [19] Davis, R.C., Universal coalgebra and categories of transition systems, Math. systems theory, 4, 1, 91-95, (1970) · Zbl 0191.01203 [20] Devaney, R.L., An introduction to chaotic dynamical systems, (1986), The Benjamin/Cummings Publishing Company Menlopark, CA [21] E.P. de Vink, J.J.M.M. Rutten, Bisimulation for probabilistic transition systems: a coalgebraic approach, in: P. Degano, R. Gorrieri, A. Marchetti-Spaccamela (Eds.), Proc. ICALP’97, Lecture Notes in Computer Science, Vol. 1256, 1997, pp. 460-470. Theoret. Comput. Sci., to appear. · Zbl 0930.68092 [22] Fiore, M.P., A coinduction principle for recursive data types based on bisimulation, Inform. and comput., 127, 2, 186-198, (1996) · Zbl 0868.68054 [23] Forti, M.; Honsell, F.; Lenisa, M., Processes and hyperuniverses, (), 352-363 [24] Goguen, J.A.; Thatcher, J.W.; Wagner, E.G., An initial algebra approach to the specification, correctness and implementation of abstract data types, (), 80-149 [25] R. 
Goldblatt, Logics of time and computation, CSLI Lecture Notes, Vol. 7, Center for the study of Language and Information, Standard, 1987. [26] Groote, J.F.; Vaandrager, F., Structured operational semantics and bisimulation as a congruence, Inform. and comput., 100, 2, 202-260, (1992) · Zbl 0752.68053 [27] Gumm, H.P.; Schröder, T., Covarieties and complete covarieties, () · Zbl 0973.68174 [28] T. Hagino, A categorical programming language, Ph.D. Thesis, University of Edinburgh, Edinburgh, 1987. · Zbl 0643.03010 [29] Hennessy, M.; Plotkin, G.D., Full abstraction for a simple parallel programming language, (), 108-120 · Zbl 0457.68006 [30] Hensel, U.; Huisman, M.; Jacobs, B.; Tews, H., Reasoning about classes in object-oriented languages: logical models and tools, (), 105-121 [31] C. Hermida, B. Jacobs, Structural induction and coinduction in a fibrational setting, Inform. and Comput. (1998) to appear. · Zbl 0941.18006 [32] Honsell, F.; Lenisa, M., Final semantics for untyped $$λ$$-calculus, (), 249-265 · Zbl 1063.03516 [33] Jacobs, B., Mongruences and cofree coalgebras, (), 245-260 [34] B. Jacobs, Behaviour-refinement of object-oriented specifications with coinductive correctness proofs, Report CSI-R9618, Computing Science Institute, University of Nijmegen, 1996. Also in the Proc. TAPSOFT’97. [35] Jacobs, B., Inheritance and cofree constructions, (), 210-231 [36] Jacobs, B., Object and classes, co-algebraically, () [37] Jacobs, B., Coalgebraic reasoning about classes in object-oriented languages, () · Zbl 0917.68127 [38] B. Jacobs, L. Moss, H. Reichel, J.J.M.M. Rutten (Eds.), Proc. 1st Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS) ’98), Electronic Notes in Theoretical Computer Science, Vol. 11, Elsevier Science B.V., Amsterdam, 1998. Available at URL: www.elsevier.nl/locate/entcs. · Zbl 0903.00067 [39] Jacobs, B.; Rutten, J., A tutorial on (co)algebras and (co)induction, Bull. EATCS, 62, 222-259, (1997) · Zbl 0880.68070 [40] B. Jacobs, J.J.M.M. Rutten (Eds.), Proc. 2nd Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS ’99), Electronic Notes in Theoretical Computer Science, Vol. 19, Elsevier Science B.V., Amsterdam, 1999. Available at URL: www.elsevier.nl/locate/entcs. [41] Joyal, A.; Nielsen, M.; Winskel, G., Bisimulation from open maps, Inform. and comput., 127, 2, 164-185, (1996) · Zbl 0856.68067 [42] Y. Kawahara, M. Mori, A small final coalgebra theorem, Theoret. Comput. Sci. (1998) to appear. · Zbl 0952.68101 [43] Keller, R.M., Formal verification of parallel programs, Comm. ACM, 19, 7, 371-384, (1976) · Zbl 0329.68016 [44] Kent, R.E., The metric closure powerspace construction, (), 173-199 · Zbl 0653.18008 [45] A. Kurz, Specifying coalgebras with modal logic, in: B. Jacobs, L. Moss, H. Reichel, J.J.M.M. Rutten (Eds.), Proc. 1st Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS) ’98), Electronic Notes in Theoretical Computer Science, Vol. 11, Elsevier Science B.V., Amsterdam, 1998. · Zbl 0917.68134 [46] Larsen, K.G.; Skou, A., Bisimulation through probabilistic testing, Inform. comput., 94, 1-28, (1991) · Zbl 0756.68035 [47] Lehmann, D.J.; Smyth, M.B., Algebraic specification of data types: a synthetic approach, Math. systems theory, 14, 97-139, (1981) · Zbl 0457.68035 [48] Lenisa, M., Final semantics for a higher-order concurrent language, (), 102-118 [49] M. Lenisa, Themes in final semantics, Ph.D. Thesis, University of Udine, Udine, Italy, 1998. 
[50] Mac Lane, S., Categories for the working Mathematician, Graduate texts in mathematics, Vol. 95, (1971), Springer New York [51] Manes, E.G., Algebraic theories, graduate texts in mathematics, vol. 26, (1976), Springer Berlin [52] Manes, E.G.; Arbib, M.A., Algebraic approaches to program semantics, texts and monographs in computer science, (1986), Springer Berlin [53] Marvan, M., On covarieties of coalgebras, Arch. math. (Brno), 21, 1, 51-64, (1985) · Zbl 0577.18004 [54] Meinke, K.; Tucker, J.V., Universal algebra, (), 189-411 [55] Milner, R., Processes: a mathematical model of computing agents, (), 157-173 [56] Milner, R., A calculus of communicating systems, Lecture notes in computer science, Vol. 92, (1980), Springer Berlin [57] Milner, R.; Tofte, M., Co-induction in relational semantics, Theoret. comput. sci., 87, 209-220, (1991) · Zbl 0755.68100 [58] L.S. Moss, Coalgebraic logic, Ann. Pure Appl. Logic $$(1998)$$ to appear. · Zbl 0969.03026 [59] Moss, L.S.; Danner, N., On the foundations of corecursion, Logic J. IGPL, 231-257, (1997) · Zbl 0872.03030 [60] Park, D.M.R., Concurrency and automata on infinite sequences, (), 15-32 [61] Paulson, L.C., Mechanizing coinduction and corecursion in higher-order logic, J. logic comput., 7, 175-204, (1997) · Zbl 0878.68111 [62] D. Pavlovic̀, Guarded induction on final coalgebras, in: B. Jacobs, L. Moss, H. Reichel, J.J.M.M. Rutten (Eds.), Proc. 1st Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS) ’98), Electronic Notes in Theoretical Computer Science, Vol. 11, Elsevier Science B.V., Amsterdam, 1998. [63] Pitts, A.M., A co-induction principle for recursively defined domains, Theoret. comput. sci., 124, 2, 195-219, (1994) · Zbl 0795.68129 [64] Pitts, A.M., Relational properties of domains, Inform. and comput., 127, 2, 66-90, (1996) · Zbl 0868.68037 [65] G.D. Plotkin, A structural approach to operational semantics, Report DAIMI FN-19, Aarhus University, Aarhus, September 1981. [66] R. Pöschel, M. Rößiger, A general Galois theory for cofunctions and corelations, Report MATH-AL-11-1997, Technische Universität Dresden, Dresden, 1997. · Zbl 1011.08008 [67] J. Power, H. Watanabe, An axiomatics for categories of coalgebras, in: B. Jacobs, L. Moss, H. Reichel, J.J.M.M. Rutten (Eds.), Proc. 1st Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS) ’98), Electronic Notes in Theoretical Computer Science, Vol. 11, Elsevier Science B.V., Amsterdam, 1998. · Zbl 0917.68124 [68] Reichel, H., An approach to object semantics based on terminal coalgebras, Math. struct. comput. sci., 5, 129-152, (1995) · Zbl 0854.18006 [69] M. Rößiger, From modal logic to terminal coalgebras, Report MATH-AL-3-1998, Technische Universität Dresden, Dresden, 1998. [70] Rutten, J.J.M.M., Processes as terms: non-well-founded models for bisimulation, Math. struct. comput. sci., 2, 3, 257-275, (1992) · Zbl 0798.68094 [71] J.J.M.M. Rutten, A calculus of transition systems (towards universal coalgebra), in: A. Ponse, M. de Rijke, Y. Venema (Eds.), Modal Logic and Process Algebra, a Bisimulation Perspective, CSLI Lecture Notes, Vol. 53, Stanford, CSLI Publications, 1995, pp. 231-256. FTP-available at ftp.cwi.nl as pub/CWIreports/AP/CS-R9503.ps.Z. [72] J.J.M.M. Rutten, Universal coalgebra: a theory of systems, Report CS-R9652, CWI, 1996. FTP-available at ftp.cwi.nl as pub/CWIreports/AP/CS-R9652.ps.Z. [73] J.J.M.M. Rutten, Automata and coinduction (an exercise in coalgebra), Report SEN-R9803, CWI, 1998. 
FTP-available at ftp.cwi.nl as pub/CWIreports/SEN/SEN-R9803.ps.Z. Also in the Proc. CONCUR ’98, Lecture Notes in Computer Science, Vol. 1466, Springer, Berlin, 1998, pp. 194-218. · Zbl 0940.68085 [74] J.J.M.M. Rutten, Relators and metric Bisimulations, in: B. Jacobs, L. Moss, H. Reichel, J.J.M.M. Rutten (Eds.), Proc. 1st Internat. Workshop on Coalgebraic Methods in Computer Science (CMCS) ’98), Electronic Notes in Theoretical Computer Science, Vol. 11, Elsevier Science B.V., Amsterdam, 1998. · Zbl 0917.68146 [75] Rutten, J.J.M.M.; Turi, D., On the foundations of final semantics: non-standard sets, metric spaces, partial orders, (), 477-530 [76] J.J.M.M. Rutten, D. Turi, Initial algebra and final coalgebra semantics for concurrency, in: J.W. de Bakker, W.-P. de Roever, G. Rozenberg (Eds.), Proc. REX School/ Symp. ‘A decade of concurrency’, Lecture Notes in Computer Science, Vol. 803, Springer, Berlin, 1994, pp. 530-582. FTP-available at ftp.cwi.nl as pub/CWIreports/AP/CS-R9409.ps.Z. [77] Schmidt, G.; Ströhlein, T., Relations and graphs, discrete mathematics for computer scientists, EATCS monographs on theoretical computer science, (1993), Springer-Verlag New York · Zbl 0900.68328 [78] Smyth, M.B.; Plotkin, G.D., The category-theoretic solution of recursive domain equations, SIAM J. comput., 11, 4, 761-783, (1982) · Zbl 0493.68022 [79] Székely, Z., Maximal clones of co-operations, Acta sci. math., 53, 43-50, (1989) · Zbl 0695.08008 [80] Tarski, A., A lattice-theoretical fixpoint theorem and its applications, Pacific J. math., 5, 285-309, (1955) · Zbl 0064.26004 [81] D. Turi, Functorial operational semantics and its denotational dual, Ph.D. Thesis, Vrije Universiteit, Amsterdam, 1996. [82] Turi, D., Categorical modelling of structural operational rules: case studies, (), 127-146 · Zbl 0881.18004 [83] D. Turi, G.D. Plotkin, Towards a mathematical operational semantics, Proc. 12th LICS Conf., IEEE Computer Society Press, Silverspring, MD, 1997, pp. 280-291. [84] R. van Glabbeek, The meaning of negative premises in transition system specifications II, Report STAN-CS-TN-95-16, Department of Computer Science, Stanford University, 1996. Extended abstract in: F. Meyer auf der Heide, B. Monien (Eds.), Automata, Languages and Programming, Proc. 23th International Colloquium, ICALP ’96, Paderborn, Germany, July 1996, Lecture Notes in Computer Science, Vol. 1099, Springer, Berlin, 1996, pp. 502-513. · Zbl 1046.68629 [85] van Glabbeek, R.J.; Smolka, S.A.; Steffen, B., Reactive, generative, and stratified models of probabilistic processes, Inform. and comput., 121, 59-80, (1995) · Zbl 0832.68042 [86] Winskel, G.; Nielsen, M., Models for concurrency, (), 1-148
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-07-27 05:15:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7525873780250549, "perplexity": 11227.551182477184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152236.64/warc/CC-MAIN-20210727041254-20210727071254-00343.warc.gz"} |
http://math.stackexchange.com/questions/108173/prove-frac23-pi-leq-int-2-pi3-pi-frac-sin-xx-dx-leq-frac1 | # Prove: $\frac{2}{3\pi} \leq \int_{2\pi}^{3\pi}\frac{\sin x}{x}\;dx \leq \frac{1}{\pi}$
I'd like your help proving that $$\frac{2}{3\pi} \leq \int_{2\pi}^{3\pi}\frac{\sin x}{x} \; dx \leq \frac{1}{\pi}.$$
I tried to bound both inequalities with Taylor series, other positive functions, but without any success.
Thanks a lot!
Hint: $\sin x$ is non-negative and $\frac 1{3\pi}\leq \frac 1x\leq \frac 1{2\pi}$. – Davide Giraudo Feb 11 '12 at 17:09
Note that, on $[2\pi, 3\pi]$, we have $$\frac{\sin x}{3\pi} \leq \frac{\sin x}{x} \leq \frac{\sin x}{2\pi}.$$ Now integrate both sides over $[2\pi, 3\pi]$.
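Spelling out that last step: since $\int_{2\pi}^{3\pi}\sin x \, dx = \big[-\cos x\big]_{2\pi}^{3\pi} = 2$, integrating the bounds gives $$\frac{2}{3\pi} \leq \int_{2\pi}^{3\pi}\frac{\sin x}{x}\,dx \leq \frac{2}{2\pi} = \frac{1}{\pi}.$$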
HINT: Because $$\int_{k\pi}^{(k+1)\pi}|\sin(t)|\;\mathrm{d}t=2\tag{1}$$ we have $$\frac{2}{(k+1)\pi}\le\int_{k\pi}^{(k+1)\pi}\left|\frac{\sin(t)}{t}\right|\;\mathrm{d}t\le\frac{2}{k\pi}\tag{2}$$ | 2015-05-03 17:58:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9920665621757507, "perplexity": 787.2634179805106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430448957257.55/warc/CC-MAIN-20150501025557-00047-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://gamedev.stackexchange.com/questions/164026/any-way-to-interface-methods-in-objects/164371 | # Any way to interface methods in objects?
Say I have object A, which has children B, C and D.
In OOP I would have A be an interface containing method action(), then I would have B, C and D implement A and, in their class definition, I would define the method in each of them in extremely different ways (as intended).
Is there a way to do something similar in GML? So far I can only think of making a script with an chain of if statements for every kind of action based on object_is_ancestor() which feels messy and unorganized to say the least.
Thank you for the help.
Yes, you can by using function pointers (or script indexes as game maker calls them).
You already know that you can have an object be a child object type of some other parent. You want parent A to have a general method "action()" that can be called in B, C, and D and have varying behavior. This can be done via function pointers, and is precisely the same way that the internals of most OOP languages do so. The reason you have to do this yourself is that game maker is an event oriented language (or at least that's how I have heard it phrased). Hence, objects don't have methods they have events and those are predefined or addable with triggers.
So how do you do function pointers in game maker? Suppose you have a few versions of the method written as scripts:
action()
actionB()
actionC()
etc.
Then just give the parent A and each of the children a variable called action_pointer. For A set action_pointer = action, and do similarly for each of the children. Then use the function script_execute to call the stored index.
You will have to keep the methods separate from the objects and write them as scripts (unless you wish to use the forbidden execute_string function, which I highly recommend not using), but that can be done with simple organization and folders for different objects. The nice thing here is that if you do all this in the create event, and call the parent's create event before doing the sub-object's initialization, then you get the effect of hierarchical behavior as an interface. I haven't had a particular use for it in game maker but it sounds like what you are wanting.
See here for more details on the script_execute function: http://gamemaker.info/en/manual/409_06_scripts
Game Maker doesn't support anything resembling OOP interfaces. On the other hand, you can use inheritance to override/overload actions defined in parent objects' events.
Also, you don't need to use object_is_ancestor() continuously, as long as you design your game object in a good way. You may create generic or ad-hoc scripts for different objects, or make a reference to a common user-defined event, and place your custom code in there as if you were implementing an interface method. | 2019-10-21 05:16:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17642678320407867, "perplexity": 774.7312350840426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00203.warc.gz"} |
https://leanprover-community.github.io/archive/stream/116395-maths/topic/What.20are.20the.20arrows.20in.20.60R.20.E2.9F.B6.20S.20induces.20S-Alg.20.E2.A5.A4.20R-Alg.60.html | ## Stream: maths
### Topic: What are the arrows in R ⟶ S induces S-Alg ⥤ R-Alg
#### Eric Wieser (Mar 05 2021 at 15:38):
We currently have docs#is_scalar_tower.restrict_base which is defined as (A →ₐ[S] B) → (algebra.comap R S A →ₐ[R] algebra.comap R S B), and documented as R ⟶ S induces S-Alg ⥤ R-Alg.
I want to add this for alg_equiv in #6548 (defined the obvious way), but I have no idea what arrows I should be using in the docstring for the new method. Are these the arrows from category_theory?
#### Yury G. Kudryashov (Mar 05 2021 at 15:46):
I see no docstring for docs#is_scalar_tower.comap. The arrows in your quote are indeed from category_theory.
#### Yury G. Kudryashov (Mar 05 2021 at 15:47):
I would prefer to have docstrings readable by people who are not fluent in category theory and its Lean implementation.
#### Yury G. Kudryashov (Mar 05 2021 at 15:49):
I mean, it's OK to mention the category theory interpretation but there should be an actual explanation of the meaning as well.
#### Eric Wieser (Mar 05 2021 at 15:50):
Maybe I'll just stick with "restrict_base but for alg_equiv instead of alg_hom" as my description for the new def
#### Yury G. Kudryashov (Mar 05 2021 at 15:53):
In linear algebra this is called restrict_scalars
#### Yury G. Kudryashov (Mar 05 2021 at 15:53):
I think, it should be renamed to alg_hom.restrict_scalars
#### Yury G. Kudryashov (Mar 05 2021 at 15:54):
Probably the current name is an artifact from the time when we used a type tag instead of [is_scalar_tower]
#### Yury G. Kudryashov (Mar 05 2021 at 15:55):
And your new def should be alg_equiv.restrict_scalars.
#### Eric Wieser (Mar 05 2021 at 16:02):
The names you suggest are exactly what the PR already renames them to :)
#### Anne Baanen (Mar 05 2021 at 16:20):
I see restrict_base instead of restrict_scalars in #6548?
Oh, whoops
#### Eric Wieser (Mar 05 2021 at 16:26):
I'll look at the docstring for docs#linear_map.restrict_scalars for inspiration on what to use as documentation instead of the arrows
#### Eric Wieser (Mar 05 2021 at 16:35):
Updated the PR to use the name restrict_scalars. Thanks for pointing out my blindness to the difference in name!
Last updated: May 14 2021 at 20:13 UTC | 2021-05-14 20:47:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5548948645591736, "perplexity": 5783.081459055974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00422.warc.gz"} |
https://stats.stackexchange.com/questions/147777/devising-a-mixed-strength-cost-function-for-clustering?noredirect=1 | # Devising a mixed strength cost function for clustering
I'm asking this question with a Computer Vision background (my stat background is limited). I have a set of data that measure the edge strength (based on color gradient) of a set of colors.
Since this was some sort of similarity measure, based on the answer to this question I posted earlier, I ran HAC and got some results. Then I calculated the mean edge strength of each cluster along with its std. deviation.
(mean) (#) (std. dev)
13.9970 11.0000 1.2536 --- medium strength edge strength cluster
21.6859 1.0000 0
22.3964 1.0000 0
23.1407 1.0000 0
25.6370 1.0000 0
26.1904 1.0000 0
19.5371 2.0000 0.2155
3.2880 7.0000 1.7849 --- very low edge strength cluster
25.3500 2.0000 1.4310
These results make sense naturally, as the ones with low edge strength are grouped together.
My actual requirement is to get a number of clusters (k) that would nicely mix the low-edge-strength colors with higher-edge-strength ones, so that the colors with low edge strength have a good chance of being recognized as edges. I was wondering whether there is a way to define some cost function to obtain this sort of cluster, instead of clusters grouped purely by overall edge strength.
Essentially, if I want 3 clusters, I'd like the best possible 3 clusters (within a certain number of iterations) such that the edge strengths are mixed to get the best possible results.
I think I understand that I would have to define my own cost function for this to pre-process the data. But I'm at a loss as to how to define such a cost function. Any help here is much appreciated.
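For example, here is just a sketch of one possible direction (in R; this is not a standard named method and all function names are mine): score a candidate partition by how closely each cluster's internal spread of edge strengths matches the global spread, then use that score to pick among random restarts of an ordinary clustering routine.
# Hypothetical mixing cost: small when every cluster's spread of edge
# strengths is close to the global spread, i.e. strengths are well mixed.
mixing.cost <- function(strength, labels) {
  global.sd   <- sd(strength)
  per.cluster <- tapply(strength, labels, sd)
  per.cluster[is.na(per.cluster)] <- 0   # singleton clusters have zero spread
  sum((per.cluster - global.sd)^2)
}
# pick the best of many random k-means restarts under this cost
best.mixed.clustering <- function(X, strength, k, restarts = 100) {
  best <- NULL; best.cost <- Inf
  for (r in 1:restarts) {
    labels <- kmeans(X, centers = k)$cluster
    cost   <- mixing.cost(strength, labels)
    if (cost < best.cost) { best <- labels; best.cost <- cost }
  }
  best
}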
Note: The actual data after being pre-processed to a symmetrical square matrix is available at this link, for anyone who's interested. I directly feed this to HAC through the matlab linkage function using the 'single' method. matlab hac documentation
http://clay6.com/qa/26256/a-transistor-is-connected-in-common-emitter-mode-in-the-collector-circuit-t | # A transistor is connected in common emitter mode. In the collector circuit the voltage drop across the resistor of $800 \; \Omega$ is $0.5\,V$. If the current gain $\alpha$ is $0.96$, what is the base current?
$(a)\;30\;\mu A \\ (b)\;20\;\mu A \\ (c)\;24\;\mu A \\ (d)\;26\;\mu A$
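A worked solution using the standard common-emitter relations $I_C = \alpha I_E$ and $I_B = I_E - I_C$:
$I_C = \frac{0.5\,\text{V}}{800\,\Omega} = 0.625\,\text{mA}, \qquad I_E = \frac{I_C}{\alpha} = \frac{0.625\,\text{mA}}{0.96} \approx 0.651\,\text{mA}, \qquad I_B = I_E - I_C \approx 0.026\,\text{mA} = 26\,\mu A$
So the answer is (d).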
https://tex.stackexchange.com/questions/203908/compiler-error-when-creating-a-macro-environment | # Compiler Error when Creating a Macro/Environment
I'm trying to create a new command or environment or something to format what is essentially a section, but it needs to have its own counter. I would like not to use any of \section, \subsection, etc., as I already go into the full depth that they provide.
The idea being I would type in something similar to...
\session{Hello World}
... and it would output something like (although centred) ...
Session 1 : Hello World
The command I've currently got is as follows:
\newcounter{sessioncounter}
\newcommand{\session}
{
\begin{center}
\begin{emph}
\begin{textbf}
\begin{Large}
Session \value{sessioncounter}\stepcounter{sessioncounter}:
}{
\end{Large}
\end{textbf}
\end{emph}
\end{center}
}
Trying this exactly, the LaTeX compiler throws the following error (note, this is before \begin{document} has even been called).
LaTeX Error: \begin{document} ended by \end{Large}
I've also tried creating a new environment by literally swapping out \newcommand{\session} for \newenvironment{session}. While this compiles, I get the following error on the line of \begin{session}.
Missing \endcsname inserted.
<to be read again>
\aftergroup
l.13 \begin{session}
Can anyone see where I'm going wrong? I presume by the error that newcommand can't be used exactly with this syntax for what I want; however, I'm also confused as to why the environment does not work either.
• \newcommand would define a command \session you are defining an environment so \newenvironment{session} (and unrelated but you want % at the end of all the lines in the definition). – David Carlisle Sep 30 '14 at 15:39
Your definition matches the format of an environment, not a command.
This is how environments are defined:
\newenvironment{example}{<starting commands>}{<ending commands>}
They are then used like this:
\begin{example}
<text>
\end{example}
But I think you want a command that takes one argument, as in this example. Also, \thesessioncounter gives you the number as text; \value is for use in other internal commands. (I've adjusted the formatting commands a bit as well.)
\documentclass{article}
\newcounter{sessioncounter}
\newcommand{\session}[1]{%
\stepcounter{sessioncounter}% step first so the first session is numbered 1
\hfil\bgroup\Large\itshape\bfseries
Session~\thesessioncounter: #1\egroup\par\bigskip
}
\begin{document}
\session{Hello World}
\session{Hello again}
\session{Hello for the last time}
\end{document}
• Thanks, I've got a few questions about your formatting though, if you don't mind. What do the %s do? I gather the \bgroup and \egroup are an alternative to a 'scope' (excuse my C++ terminology). What's the ~ used for? – AdmiralJonB Sep 30 '14 at 22:07
• % tells TeX to ignore the subsequent EOL; \bgroup and \egroup are the same as { and } and yes, limit the scope of the commands in between; ~ indicates a non-breaking space, typically used before the numeral in cases like this. – musarithmia Sep 30 '14 at 23:29
You can't do \begin{textbf} or \begin{emph} and you shouldn't do \begin{Large} either.
You want
\newcommand{\session}[1]{%
\begin{center}\Large\itshape\bfseries
\stepcounter{sessioncounter}%
Session \thesessioncounter: #1%
\end{center}%
}
in the preamble and use
\session{Hello world}
in the document.
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Ample_line_bundle.html | # Ample line bundle
In algebraic geometry, a very ample line bundle is one with enough global sections to set up an embedding of its base variety or manifold into projective space. An ample line bundle is one such that some positive power is very ample. Globally generated sheaves are those with enough sections to define a morphism to projective space.
## Introduction
### Inverse image of line bundle and hyperplane divisors
Given a morphism $f \colon X \to Y$, any vector bundle on Y, or more generally any sheaf in modules, e.g. a coherent sheaf, can be pulled back to X (see Inverse image functor). This construction preserves the condition of being a line bundle, and more generally the rank.
The notions described in this article are related to this construction in the case of a morphism to projective space
$f \colon X \to \mathbb{P}^n$ and $\mathcal{O}(1)$,
the line bundle corresponding to the hyperplane divisor, whose sections are the 1-homogeneous regular functions. See Algebraic geometry of projective spaces#Divisors and twisting sheaves.
### Sheaves generated by their global sections
Let X be a scheme or a complex manifold and F a sheaf on X. One says that F is generated by (finitely many) global sections $a_0, \ldots, a_n$, if every stalk of F is generated as a module over the stalk of the structure sheaf by the germs of the $a_i$. For example, if F happens to be a line bundle, i.e. locally free of rank 1, this amounts to having finitely many global sections, such that for any point x in X, there is at least one section not vanishing at this point. In this case a choice of such global generators $a_0, \ldots, a_n$ gives a morphism
$f \colon X \to \mathbb{P}^n, \qquad x \mapsto [a_0(x) : \cdots : a_n(x)],$
such that the pullback $f^*(\mathcal{O}(1))$ is F (note that this evaluation makes sense when F is a subsheaf of the constant sheaf of rational functions on X). The converse statement is also true: given such a morphism f, the pullback of $\mathcal{O}(1)$ is generated by its global sections (on X).
More generally, a sheaf generated by global sections is a sheaf F on a locally ringed space X, with structure sheaf $\mathcal{O}_X$, that is of a rather simple type. Assume F is a sheaf of abelian groups. Then it is asserted that if $A = \Gamma(X, F)$ is the abelian group of global sections, then for any open set U of X, $\rho(A)$ spans $F(U)$ as an $\mathcal{O}_U$-module. Here
$\rho \colon \Gamma(X, F) \to \Gamma(U, F)$
is the restriction map. In words, all sections of F are locally generated by the global sections.
An example of such a sheaf is that associated in algebraic geometry to an R-module M, R being any commutative ring, on the spectrum of a ring Spec(R). Another example: according to Cartan's theorem A, any coherent sheaf on a Stein manifold is spanned by global sections.
### Very ample line bundles
Given a scheme X over a base scheme S or a complex manifold, a line bundle (or in other words an invertible sheaf, that is, a locally free sheaf of rank one) L on X is said to be very ample, if there is an embedding i : X → $\mathbb{P}^n_S$, the n-dimensional projective space over S for some n, such that the pullback of the standard twisting sheaf $\mathcal{O}(1)$ on $\mathbb{P}^n_S$ is isomorphic to L:
$i^*\big(\mathcal{O}(1)\big) \cong L.$
Hence this notion is a special case of the previous one, namely a line bundle is very ample if it is globally generated and the morphism given by some global generators is an embedding.
Given a very ample sheaf L on X and a coherent sheaf F, a theorem of Serre shows that (the coherent sheaf) $F \otimes L^{\otimes n}$ is generated by finitely many global sections for sufficiently large n. This in turn implies that global sections and higher (Zariski) cohomology groups are finitely generated. This is a distinctive feature of the projective situation. For example, for the affine n-space $\mathbb{A}^n_k$ over a field k, global sections of the structure sheaf $\mathcal{O}$ are polynomials in n variables, thus not a finitely generated k-vector space, whereas for $\mathbb{P}^n_k$, global sections are just constant functions, a one-dimensional k-vector space.
## Definitions
The notion of ample line bundles L is slightly weaker than very ample line bundles: a line bundle L is ample if for any coherent sheaf F on X, there exists an integer n(F), such that $F \otimes L^{\otimes n}$ is generated by its global sections for n > n(F).
An equivalent, maybe more intuitive, definition of the ampleness of the line bundle L is its having a positive tensorial power that is very ample. In other words, for some $m > 0$ there exists a projective embedding $i \colon X \to \mathbb{P}^N$ such that $i^*(\mathcal{O}(1)) = L^{\otimes m}$, that is the zero divisors of global sections of $L^{\otimes m}$ are hyperplane sections.
This definition makes sense for the underlying divisors (Cartier divisors) $D$; an ample $D$ is one where $mD$ moves in a large enough linear system. Such divisors form a cone in the space of all divisors: those that are, in some sense, positive enough. The relationship with projective space is that the $D$ for a very ample $L$ corresponds to the hyperplane sections (intersection with some hyperplane) of the embedded $X$.
The equivalence between the two definitions is credited to Jean-Pierre Serre in Faisceaux algébriques cohérents.
## Criteria for ampleness of line bundles
### Intersection theory
To decide in practice when a Cartier divisor D corresponds to an ample line bundle, there are some geometric criteria.
For curves, a divisor D is very ample if and only if l(D) = 2 + l(D − A − B) whenever A and B are points. By the Riemann–Roch theorem every divisor of degree at least 2g + 1 satisfies this condition so is very ample. This implies that a divisor is ample if and only if it has positive degree. The canonical divisor of degree 2g − 2 is very ample if and only if the curve is not a hyperelliptic curve.
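For instance, on an elliptic curve (g = 1), any divisor of degree $3 = 2g + 1$ is very ample, and the associated embedding realizes the curve as a plane cubic in $\mathbb{P}^2$.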
The Nakai–Moishezon criterion (Nakai 1963, Moishezon 1964) states that a Cartier divisor D on a proper scheme X over an algebraically closed field is ample if and only if $D^{\dim(Y)} \cdot Y > 0$ for every closed integral subscheme Y of X. In the special case of curves this says that a divisor is ample if and only if it has positive degree, and for a smooth projective algebraic surface S, the Nakai–Moishezon criterion states that D is ample if and only if its self-intersection number D.D is strictly positive, and for any irreducible curve C on S we have D.C > 0.
The Kleiman condition states that for any projective scheme X, a divisor D on X is ample if and only if D.C > 0 for any nonzero element C in the closure of NE(X), the cone of curves of X. In other words a divisor is ample if and only if it is in the interior of the real cone generated by nef divisors.
Nagata (1959) constructed divisors on surfaces that have positive intersection with every curve, but are not ample. This shows that the condition D.D > 0 cannot be omitted in the Nakai–Moishezon criterion, and it is necessary to use the closure of NE(X) rather than NE(X) in the Kleiman condition.
Seshadri (1972, Remark 7.1, p. 549) showed that a line bundle L on a complete algebraic scheme is ample if and only if there is some positive ε such that deg(L|C) ≥ εm(C) for all integral curves C in X, where m(C) is the maximum of the multiplicities at the points of C.
### Sheaf cohomology
The theorem of Cartan-Serre-Grothendieck states that for a line bundle $L$ on a variety $X$, the following conditions are equivalent:
• $L$ is ample
• for m big enough, $L^{\otimes m}$ is very ample
• for any coherent sheaf $F$ on X, the sheaf $F \otimes L^{\otimes m}$ is generated by global sections, for m big enough
If $X$ is proper over some noetherian ring, this is also equivalent to:
• for any coherent sheaf $F$ on X, the higher cohomology groups $H^i(X, F \otimes L^{\otimes m})$ vanish for all $i > 0$ and m big enough
## Generalizations
### Vector bundles of higher rank
A locally free sheaf (vector bundle) $E$ on a variety is called ample if the invertible sheaf $\mathcal{O}(1)$ on the projectivization $\mathbb{P}(E)$ is ample (Hartshorne 1966).
Ample vector bundles inherit many of the properties of ample line bundles.
### Big line bundles
Main article: Iitaka dimension
An important generalization, notably in birational geometry, is that of a big line bundle. A line bundle $L$ on X is said to be big if it satisfies any one of several equivalent conditions, for which see the main article linked above.
The interest of this notion is its stability with respect to rational transformations.
http://math.wikia.com/wiki/Complementary_angle |
Two angles are complementary if their measures add up to 90 degrees or $\frac{\pi}{2}$ radians.
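For example, $30^\circ$ and $60^\circ$ are complementary, since $30^\circ + 60^\circ = 90^\circ$; in radians, $\frac{\pi}{6} + \frac{\pi}{3} = \frac{\pi}{2}$.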
https://maker.pro/forums/threads/21-monitor.48871/ | # 21" Monitor
#### David Kierzkowski
Thanks, that was it
I bought the $20 Belkin cable and had the same problems. Tried adding an extension cable to see what would happen; that made the problem much worse. Returned the Belkin for an $8 no-name cable with triple shielding.
It did the trick!
The Belkin cable was nice, it just didn't have any shielding.
#### David Kierzkowski
Just picked up a 21" Dell CRT monitor.
Only problem: there is a repeating shadow of all text, lines, etc., that goes
from the object to the right side of the screen.
Any ideas what's wrong with this thing and how it can be fixed, if possible?
thanks
-dk
#### David
If you want any ideas, how about posting some basic troubleshooting
information. It is not like we all have a crystal ball and can see inside
your monitor and make any kind of measurements through the internet.
www.repairfaq.org has some information for people with some electronics
knowledge and experience.
#### Hubert Littau
Just picked up a 21" Dell CRT monitor.
Only problem: there is a repeating shadow of all text, lines, etc., that goes
from the object to the right side of the screen.
Any ideas what's wrong with this thing and how it can be fixed, if possible?
thanks
-dk
My own experience has been that it's caused by a bad input (video)
cable. Hopefully it's detachable at the monitor. Get a good quality
one: $20-40 USD for a Belkin VGA cable. For a 21", it's worth it.
#### James Sweet
David Kierzkowski said:
Just picked up a 21" Dell CRT monitor.
Only problem: there is a repeating shadow of all text, lines, etc., that goes
from the object to the right side of the screen.
Any ideas what's wrong with this thing and how it can be fixed, if possible?
thanks
-dk
Bad capacitors on the neck board are the usual cause of this.
http://econ34430.lamadon.com/earnings-mixtures.html | In this homework we will write a gaussian mixture estimator and apply it to data from the US. As in the other homeworks we will be using R. The way this will work is that I will provide pieces of code that you will need to put together to solve the overall task.
The goal in this homework is to estimate a model of earnings using three consecutive observations $$(Y_1,Y_2,Y_3)$$. As we covered in class, we are going to use a latent heterogeneity model. We then write down a model for $$\{Y_t,\eta_t\}_{t=1,\ldots,3}$$.
In the first part of the homework we consider a normal mixture model. The latent variable $$\eta_i$$ is drawn from a discrete distribution with probabilities $$p_k$$; then the wages are drawn independently according to time-specific normal distributions centered at $$\mu_{kt}$$ with variances $$\sigma^2_{kt}$$. As we covered in class, the likelihood is given by:
$L(Y_{1},Y_{2},Y_{3};\theta)=Pr[Y_{1}{=}y_{1},Y_{2}{=}y_{2},Y_{3}{=}y_{3};\theta]=\sum_{k=1}^{K}p_{k}\prod_{t=1}^{3}\phi(Y_{t};\mu_{kt},\sigma_{kt})$
#some imports
require(gtools)
require(data.table)
require(ggplot2)
require(reshape)
require(readstata13)
## Expectation-maximization algorithm for Gaussian Mixture
The algorithm consists of the expectation step and the maximization step. I provide some guidance on how to code them. We are going to write an EM estimator for a Gaussian mixture with K components. Each component has its own mean $$\mu_k$$ and variance $$\sigma_k$$. Each component also has a proportion that we will call $$p_k$$. The data will be a sequence of wages; we only need 3 consecutive observations, so we will focus on that.
### The Expectation step
We can write the posterior probability for a given $$k$$ given $$Y_1,Y_2,Y_3$$ and taking our current parameters $$p_k,\mu_{kt},\sigma_{kt}$$ as given. The posterior probabilities $$\omega_{ik}$$ are given by
$\omega_{ik} = Pr[k|Y_{i1},Y_{i2},Y_{i3}] = \frac{p_k \prod_t \phi(Y_{it},\mu_{kt},\sigma_{kt}) }{ \sum_{l} p_{l} \prod_t \phi(Y_{it},\mu_{lt},\sigma_{lt})}$
this gives us the posterior probabilities that we can use in the maximization step. Some guidance on computing the likelihood on the computer.
• it is usually better to compute everything in logs, and to compute the sum over $$k$$ using a logsumexp function that subtracts off the highest value before exponentiating. This avoids underflow.
• I recommend using the closed-form expression of the log of the normal density directly.
lognormpdf <- function(Y,mu=0,sigma=1) {
-0.5 * ( (Y-mu) / sigma )^2 - 0.5 * log(2.0*pi) - log(sigma)
}
logsumexp <- function(v) {
vm = max(v)
log(sum(exp(v-vm))) + vm
}
And here is an example of how I would implement it where A are the means and S are the standard deviations:
tau = array(0,c(N,nk))
lpm = array(0,c(N,nk))
lik = 0
for (i in 1:N) {
ltau = log(pk)
lnorm1 = lognormpdf(Y1[i], A[1,], S[1,])
lnorm2 = lognormpdf(Y2[i], A[2,], S[2,])
lnorm3 = lognormpdf(Y3[i], A[3,], S[3,])
lall = ltau + lnorm1 + lnorm2 + lnorm3
lpm[i,] = lall
lik = lik + logsumexp(lall)
tau[i,] = exp(lall - logsumexp(lall))
}
### The maximization step
Given our $$\omega_{ik}$$ we can proceed to update our parameters using our first order conditions on the $$Q(\theta | \theta^{(t)})$$ function. I will let you write the code to update the $$p_k$$ term. For the mean and variance, my favorite way of implementing it is to stack up the $$Y_{it}$$ and duplicate them for each $$k$$. Something along these lines:
require(SparseM) # provides as.matrix.csr and slm.wfit used below
DY1 = as.matrix(kronecker(Y1 ,rep(1,nk)))
DY2 = as.matrix(kronecker(Y2 ,rep(1,nk)))
DY3 = as.matrix(kronecker(Y3 ,rep(1,nk)))
Dkj1 = as.matrix.csr(kronecker(rep(1,N),diag(nk)))
Dkj2 = as.matrix.csr(kronecker(rep(1,N),diag(nk)))
Dkj3 = as.matrix.csr(kronecker(rep(1,N),diag(nk)))
Then you can easily recover the means and variances using the posterior weights with the following expression:
rw = c(t(tau))
fit = slm.wfit(Dkj1,DY1,rw)
A[1,] = coef(fit)[1:nk]
fit_v = slm.wfit(Dkj1,resid(fit)^2/rw,rw)
S[1,] = sqrt(coef(fit_v)[1:nk])
where slm.wfit is in the SparseM package. Note how you have to scale the residuals when using this function. You can edit this code to recover all means and variances at once!
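For the $$p_k$$ update that I left to you, here is a minimal sketch (the first-order condition simply averages the posterior weights):
# M-step update for the mixture proportions: average the posteriors
pk = colMeans(tau)
pk = pk/sum(pk) # guard against rounding drift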
Question 1: Write a function that takes the data in, and estimates the parameters of the mixture model. To that end, you can use the supplied code if you want. You need to write a loop that alternates between the expectation and the maximization step, and then add a termination condition. You can for instance check that the likelihood changes by very little.
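To give an idea of the overall structure, here is a minimal sketch of such a loop; expectation.step and maximization.step are hypothetical names for the two steps described above, and the stopping rule monitors the change in the log-likelihood:
# sketch of the EM driver (expectation.step / maximization.step are placeholders)
em.mixture <- function(Y1,Y2,Y3,model,maxiter=1000,tol=1e-9) {
  lik.old = -Inf
  for (iter in 1:maxiter) {
    res   = expectation.step(Y1,Y2,Y3,model)   # returns tau, lpm and the log-likelihood lik
    model = maximization.step(Y1,Y2,Y3,res$tau,model)
    if (abs(res$lik - lik.old) < tol) break    # likelihood barely moved, stop
    lik.old = res$lik
  }
  list(model=model, lik=lik.old, taum=res$tau, lpm=res$lpm)
}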
### Checking things with H and Q functions
We now want to make sure that the code is working. However, there are, as usual, many sources of mistakes when writing the code. We know that the likelihood will always increase when running the EM. However, it is often the case that the likelihood increases even when the code is incorrect. A stronger check is to implement the $$Q$$ and $$H$$ functions of the EM algorithm, which both are increasing at each EM step.
Question 2: Extend your EM function to include the computation of the $$Q$$ and $$H$$ functions at every step. Note that this function needs to take in both the previous and new parameters (or at least the last $$\omega_{ik}$$). They are very simple expressions of lpm and taum. If you compute them under $$\theta^{(t+1)}$$ and $$\theta^{(t)}$$ they are given by:
Q1 = sum( ( (res1$taum) * res1$lpm ))
Q2 = sum( ( (res1$taum) * res2$lpm ))
H1 = - sum( (res1$taum) * log(res1$taum))
H2 = - sum( (res1$taum) * log(res2$taum))
## Testing your code
Here is a simple function that will generate random data for you to estimate from. Your code should take such a model structure as a starting point and update its parameters. This way we can easily check whether it matches in the end.
model.mixture.new <-function(nk) {
model = list()
# model for Y1,Y2,Y3|k
model$A = array(3*(1 + 0.8*runif(3*nk)),c(3,nk))
model$S = array(1,c(3,nk))
model$pk = rdirichlet(1,rep(1,nk))
model$nk = nk
return(model)
}
and here is code that will simulate from it:
model.mixture.simulate <-function(model,N,sd.scale=1) {
Y1 = array(0,sum(N))
Y2 = array(0,sum(N))
Y3 = array(0,sum(N))
K = array(0,sum(N))
A = model$A
S = model$S
pk = model$pk
nk = model$nk
# draw K
K = sample.int(nk,N,TRUE,pk)
# draw Y1, Y2, Y3
Y1 = A[1,K] + S[1,K] * rnorm(N) *sd.scale
Y2 = A[2,K] + S[2,K] * rnorm(N) *sd.scale
Y3 = A[3,K] + S[3,K] * rnorm(N) *sd.scale
data.sim = data.table(k=K,y1=Y1,y2=Y2,y3=Y3)
return(data.sim)
}
Here is code that simulates a data set:
model = model.mixture.new(3)
data = model.mixture.simulate(model,10000,sd.scale=0.5) # simulating with lower sd to see separation
datal = melt(data,id="k")
ggplot(datal,aes(x=value,group=k,fill=factor(k))) + geom_density() + facet_grid(~variable) + theme_bw()
Question 3: Simulate from this model, use your function to estimate the parameters from the data. Show that you do recover all the parameters (plot estimated values versus true values). Finally, also report the sequence of values of the likelihood, the $$H$$ function and the $$Q$$ function. When you update your estimator, make sure your function returns a list structure similar to the one presented right here. This way you can use the simulation code right away.
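One simple way to produce the requested plot, assuming your estimator returns a list with the same A, S and pk fields as the model (and remembering that mixture components are only identified up to relabeling, so reorder them, say by the first-period mean, before comparing):
# sketch: estimated versus true parameters, after aligning component labels
ot = order(model$A[1,]); oe = order(res$model$A[1,])
true.pars = c(model$A[,ot], model$S[,ot], model$pk[ot])
est.pars  = c(res$model$A[,oe], res$model$S[,oe], res$model$pk[oe])
plot(true.pars, est.pars, xlab="true value", ylab="estimated value")
abline(0,1) # points on the 45-degree line indicate good recovery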
## Estimating on PSID data
Get the prepared data from Blundell, Pistaferri and Saporta. To load this data you will need to install the package readstata13. You can do that by running:
install.packages('readstata13')
then you can load the data
require(readstata13)
data = data.table(read.dta13("~/Dropbox/Documents/Teaching/ECON-24030/lectures-laborsupply/homeworks/data/AER_2012_1549_data/output/data4estimation.dta"))
we start by computing the wage residuals
fit = lm(log(log_y) ~ year + marit + state_st ,data ,na.action=na.exclude)
data[, log_yr := residuals(fit)]
we then want to create a data-set in the same format as before. We can do this by selecting some given years and using the cast function.
# extract lags
setkey(data,person,year)
data[, log_yr_l1 := data[J(person,year-2),log_yr]]
data[, log_yr_l2 := data[J(person,year-4),log_yr]]
# compute difference from start
fdata = data[!is.na(log_yr*log_yr_l1*log_yr_l2)][,list(y1=log_yr_l2,y2=log_yr_l1,y3=log_yr)]
This gives 4941 observations to estimate your model!
Question 4: Use your function to estimate the mixture model on this data. Try to estimate for different numbers of mixture components (3, 4, 5). For each set of parameters, report how much of the cross-sectional dispersion in wages can be attributed to permanent heterogeneity $$\eta_i$$ and how much to the rest. Finally, we want to assess the fit of the model; to do that, simulate from your estimated model, then plot the quantiles from your simulated data in the cross-section versus those in the data.
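For the decomposition part, the law of total variance gives one possible calculation (a sketch, assuming the estimated model carries the A, S and pk fields used above):
# share of the period-t cross-sectional variance due to the latent type
var.share <- function(model,t) {
  mu.bar  = sum(model$pk * model$A[t,])               # overall mean
  between = sum(model$pk * (model$A[t,] - mu.bar)^2)  # Var( E[Y_t|k] )
  within  = sum(model$pk * model$S[t,]^2)             # E( Var(Y_t|k) )
  between/(between + within)
}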
## Estimate model with auto-correlation
Here we want to make things more interesting and consider the following model:
$Y_{it} - \rho Y_{it-1} | k \sim N(\mu_{kt},\sigma_{kt})$
The code does not need any modification if we do things conditional on $$\rho$$. You just need to feed in the wages already quasi-differenced by $$\rho$$. You run your algorithm on $$Y_2 - \rho Y_1$$, $$Y_3 - \rho Y_2$$ and $$Y_4 - \rho Y_3$$. Here we use one additional time period. Here is the code to prepare your data:
# extract lags
setkey(data,person,year)
data[, log_yr_l3 := data[J(person,year-6),log_yr]] # note: year-6, so the four observations are evenly spaced
# compute difference from start
fdata = data[!is.na(log_yr*log_yr_l1*log_yr_l2*log_yr_l3)][,list(y1=log_yr_l3,y2=log_yr_l2,y3=log_yr_l1,y4=log_yr)]
Question 5: Use your function to estimate the mixture model in differences using $$\rho=0.6$$. Next, run your code on a grid for $$\rho$$ between 0 and 1. Report the likelihood plot over the values of $$\rho$$ and report the maximum likelihood estimator of $$\rho$$.
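A possible sketch for the grid search, reusing the em.mixture driver sketched earlier (this assumes it returns the final log-likelihood in $lik):
# profile the likelihood over a grid of rho values
rhos = seq(0,1,by=0.05)
liks = sapply(rhos, function(rho) {
  d = fdata[,list(d1=y2-rho*y1, d2=y3-rho*y2, d3=y4-rho*y3)]
  em.mixture(d$d1, d$d2, d$d3, model.mixture.new(3))$lik
})
plot(rhos, liks, type="l", xlab="rho", ylab="log-likelihood")
rhos[which.max(liks)] # maximum likelihood estimate of rho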
## Bonus questions, NP estimation using joint diagonalization
In class we saw that one could try to directly estimate the mixture components using SVD and eigenvalue decompositions. Multiple values of $$y_3$$ can be pooled together into a joint diagonalization. The first step is to construct the $$A(y_3)$$ matrix for several values of $$y_3$$. Cut the values of $$y_3$$ into groups and construct the matrix within each group (use 10 groups). You also need to discretize $$y_1$$ and $$y_2$$; use 20 points of support. This should give you 10 different matrices.
Then compute the $$A(\infty)$$ matrix, which is the joint distribution of $$(Y_1,Y_2)$$ unconditional on $$y_3$$. Compute the SVD decomposition of this matrix such that $$A(\infty) = U S V^\intercal$$. We do have that $$A(\infty) = F D(\infty) F^\intercal$$.
Next construct the matrices $$\tilde{A}(y_3) = S^{-1/2} U^\intercal A(y_3) V S^{-1/2}$$. However, only keep a few of the vectors in $$U$$ and $$V$$: select the K vectors associated with the highest values in $$S$$. The $$\tilde{A}(y_3)$$ matrices should then be $$K \times K$$.
Note that if the model is stationary, then \begin{align} \tilde{A}(y_3) & = S^{-1/2} U^\intercal F D(y_3) F^\intercal V S^{-1/2} \\ & = S^{-1/2} U^\intercal F D(\infty) D(\infty)^{-1} D(y_3) F^\intercal V S^{-1/2} \\ & = Q D(\infty)^{-1} D(y_3) Q^{-1} \\ \end{align}
And so at this point, to gain efficiency, we can use a joint diagonalization of all the $$\tilde{A}(y_3)$$ matrices at once. Use the ffdiag command in the jointDiag package to perform this decomposition given a set of $$\tilde{A}(y_3)$$ matrices. It will return the $$Q$$ matrix.
Bonus question: Implement this procedure, run it on simulated data (impose the stationarity) and plot the true CDF versus the estimated CDF. Finally, run it on the data and report the estimated CDF!
http://theinfolist.com/html/ALL/s/algebra.html |
Algebra (from ar, الجبر, lit=reunion of broken parts, bonesetting, translit=al-jabr) is one of the broad areas of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it is a unifying thread of almost all of mathematics. It includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.
Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are either unknown or allowed to take on many values. For example, in $x + 2 = 5$ the letter $x$ is an unknown, but applying additive inverses can reveal its value: $x=3$. In $E = mc^2$, the letters $E$ and $m$ are variables, and the letter $c$ is a constant, the speed of light in a vacuum. Algebra gives methods for writing formulas and solving equations that are much clearer and easier than the older method of writing everything out in words.
The word ''algebra'' is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology. A mathematician who does research in algebra is called an algebraist.
# Etymology
The word ''algebra'' comes from the ar, الجبر, lit=reunion of broken parts, bonesetting, translit=al-jabr, from the title of the early 9th century book ''ʿIlm al-jabr wa l-muqābala'' "The Science of Restoring and Balancing" by the Persian mathematician and astronomer al-Khwarizmi. In his work, the term ''al-jabr'' referred to the operation of moving a term from one side of an equation to the other; المقابلة ''al-muqābala'' "balancing" referred to adding equal terms to both sides. Shortened to just ''algeber'' or ''algebra'' in Latin, the word eventually entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the surgical procedure of setting broken or dislocated bones. The mathematical meaning was first recorded (in English) in the sixteenth century.
# Different meanings of "algebra"
The word "algebra" has several related meanings in mathematics, as a single word or with qualifiers. * As a single word without an article, "algebra" names a broad part of mathematics. * As a single word with an article or in the plural, "an algebra" or "algebras" denotes a specific mathematical structure, whose precise definition depends on the context. Usually, the structure has an addition, multiplication, and scalar multiplication (see
Algebra over a field In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear map, bilinear product (mathematics), product. Thus, an algebra is an algebraic structure consisting of a set (mathematics), set to ...
). When some authors use the term "algebra", they make a subset of the following additional assumptions:
associative In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). ...
,
commutative In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers ( and ), formulas and related structures (), shapes and spaces in which they are contained (), and quantities and their changes ( and ). There is no ge ...
, unital, and/or finite-dimensional. In
universal algebra Universal algebra (sometimes called general algebra) is the field of mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spa ...
, the word "algebra" refers to a generalization of the above concept, which allows for n-ary operations. * With a qualifier, there is the same distinction: ** Without an article, it means a part of algebra, such as
linear algebra Linear algebra is the branch of mathematics concerning linear equations such as: :a_1x_1+\cdots +a_nx_n=b, linear maps such as: :(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n, and their representations in vector spaces and through matrix (mat ...
,
elementary algebra Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and builds on their understanding of arithmetic. Whereas arithmetic deals with spec ...
(the symbol-manipulation rules taught in elementary courses of mathematics as part of
primary Primary or primaries may refer to: Arts, entertainment, and media Music Groups and labels * Primary (band), from Australia * Primary (musician), hip hop musician and record producer from South Korea * Primary Music, Israeli record label Works * ...
and
secondary education Secondary education covers two phases on the International Standard Classification of Education The International Standard Classification of Education (ISCED) is a statistical framework for organizing information on education Education i ...
), or
abstract algebra In algebra, which is a broad division of mathematics, abstract algebra (occasionally called modern algebra) is the study of algebraic structures. Algebraic structures include group (mathematics), groups, ring (mathematics), rings, field (mathema ...
(the study of the algebraic structures for themselves). ** With an article, it means an instance of some abstract structure, like a
Lie algebra In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities an ...
, an
associative algebra In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). I ...
, or a
vertex operator algebra In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful ...
. ** Sometimes both meanings exist for the same qualifier, as in the sentence: ''
Commutative algebra Commutative algebra is the branch of algebra Algebra (from ar, الجبر, lit=reunion of broken parts, bonesetting, translit=al-jabr) is one of the areas of mathematics, broad areas of mathematics, together with number theory, geometry ...
is the study of
commutative ring In , a branch of , a commutative ring is a in which the multiplication operation is . The study of commutative rings is called . Complementarily, is the study of s where multiplication is not required to be commutative. Definition and first e ...
s, which are commutative algebras over the integers''.
# Algebra as a branch of mathematics
Algebra began with computations similar to those of arithmetic, with letters standing for numbers. This allowed proofs of properties that are true no matter which numbers are involved. For example, in the quadratic equation $ax^2+bx+c=0,$ $a, b, c$ can be any numbers whatsoever (except that $a$ cannot be $0$), and the quadratic formula can be used to quickly and easily find the values of the unknown quantity $x$ which satisfy the equation. That is to say, to find all the solutions of the equation.
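Concretely, the quadratic formula expresses those solutions in terms of the coefficients: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$ For instance, $x^2 - 5x + 6 = 0$ gives $x = 2$ or $x = 3$.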
Historically, and in current teaching, the study of algebra starts with the solving of equations such as the quadratic equation above. Then more general questions, such as "does an equation have a solution?", "how many solutions does an equation have?", "what can be said about the nature of the solutions?" are considered. These questions led to extending algebra to non-numerical objects, such as permutations, vectors, matrices, and polynomials. The structural properties of these non-numerical objects were then abstracted into algebraic structures such as groups, rings, and fields.
Before the 16th century, mathematics was divided into only two subfields, arithmetic and geometry. Even though some methods, which had been developed much earlier, may be considered nowadays as algebra, the emergence of algebra and, soon thereafter, of infinitesimal calculus as subfields of mathematics only dates from the 16th or 17th century. From the second half of the 19th century on, many new fields of mathematics appeared, most of which made use of both arithmetic and geometry, and almost all of which used algebra.
Today, algebra has grown until it includes many branches of mathematics, as can be seen in the Mathematics Subject Classification, where none of the first-level areas (two-digit entries) is called ''algebra''. Today algebra includes section 08-General algebraic systems, 12-Field theory and polynomials, 13-Commutative algebra, 15-Linear and multilinear algebra; matrix theory, 16-Associative rings and algebras, 17-Nonassociative rings and algebras, 18-Category theory; homological algebra, 19-K-theory and 20-Group theory. Algebra is also used extensively in 11-Number theory and 14-Algebraic geometry.
# History
## Early history of algebra
The roots of algebra can be traced to the ancient Babylonians, who developed an advanced arithmetical system with which they were able to do calculations in an algorithmic fashion. The Babylonians developed formulas to calculate solutions for problems typically solved today by using linear equations, quadratic equations, and indeterminate linear equations. By contrast, most Egyptians of this era, as well as Greek and Chinese mathematics in the 1st millennium BC, usually solved such equations by geometric methods, such as those described in the ''Rhind Mathematical Papyrus'', Euclid's ''Elements'', and ''The Nine Chapters on the Mathematical Art''. The geometric work of the Greeks, typified in the ''Elements'', provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations, although this would not be realized until mathematics developed in medieval Islam. By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them. (See ''Europe in the Middle Ages'', p. 258: "In the arithmetical theorems in Euclid's ''Elements'' VII–IX, numbers had been represented by line segments to which letters had been attached, and the geometric proofs in al-Khwarizmi's ''Algebra'' made use of lettered diagrams; but all coefficients in the equations used in the ''Algebra'' are specific numbers, whether represented by numerals or written out in words. The idea of generality is implied in al-Khwarizmi's exposition, but he had no scheme for expressing algebraically the general propositions that are so readily available in geometry.")
Diophantus was an Alexandrian Greek mathematician and the author of a series of books called ''Arithmetica''. These texts deal with solving algebraic equations, and have led, in number theory, to the modern notion of Diophantine equation. Earlier traditions discussed above had a direct influence on the Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī (c. 780–850). He later wrote ''The Compendious Book on Calculation by Completion and Balancing'', which established algebra as a mathematical discipline that is independent of geometry and arithmetic. The Hellenistic mathematicians Hero of Alexandria and Diophantus, as well as Indian mathematicians such as Brahmagupta, continued the traditions of Egypt and Babylon, though Diophantus' ''Arithmetica'' and Brahmagupta's ''Brāhmasphuṭasiddhānta'' are on a higher level. For example, the first complete arithmetic solution written in words instead of symbols, including zero and negative solutions, to quadratic equations was described by Brahmagupta in his book ''Brahmasphutasiddhanta'', published in 628 AD (Bradley, Michael, ''The Birth of Mathematics: Ancient Times to 1300'', p. 86, Infobase Publishing 2006). Later, Persian and Arabic mathematicians developed algebraic methods to a much higher degree of sophistication. Although Diophantus and the Babylonians used mostly special ''ad hoc'' methods to solve equations, Al-Khwarizmi's contribution was fundamental. He solved linear and quadratic equations without algebraic symbolism,
negative numbers or zero, and thus he had to distinguish several types of equations. In the context where algebra is identified with the theory of equations, the Greek mathematician Diophantus has traditionally been known as the "father of algebra", and in the context where it is identified with rules for manipulating and solving equations, the Persian mathematician al-Khwarizmi is regarded as "the father of algebra". (See page 263–277: "In a sense, al-Khwarizmi is more entitled to be called 'the father of algebra' than Diophantus because al-Khwarizmi is the first to teach algebra in an elementary form and for its own sake; Diophantus is primarily concerned with the theory of numbers.") A debate now exists over who (in the general sense) is more entitled to be known as "the father of algebra". Those who support Diophantus point to the fact that the algebra found in ''Al-Jabr'' is slightly more elementary than the algebra found in ''Arithmetica'' and that ''Arithmetica'' is syncopated while ''Al-Jabr'' is fully rhetorical. Those who support Al-Khwarizmi point to the fact that he introduced the methods of "reduction" and "balancing" (the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation), which the term ''al-jabr'' originally referred to. (See ''The Arabic Hegemony'', p. 229: "It is not certain just what the terms ''al-jabr'' and ''muqabalah'' mean, but the usual interpretation is similar to that implied in the translation above. The word ''al-jabr'' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word ''muqabalah'' is said to refer to 'reduction' or 'balancing' – that is, the cancellation of like terms on opposite sides of the equation.") They also point out that he gave an exhaustive explanation of solving quadratic equations, supported by geometric proofs, while treating algebra as an independent discipline in its own right. His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study". He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems".
Another Persian mathematician, Omar Khayyam, is credited with identifying the foundations of algebraic geometry and found the general geometric solution of the cubic equation. His book ''Treatise on Demonstrations of Problems of Algebra'' (1070), which laid down the principles of algebra, is part of the body of Persian mathematics that was eventually transmitted to Europe. Yet another Persian mathematician, Sharaf al-Dīn al-Tūsī, found algebraic and numerical solutions to various cases of cubic equations. He also developed the concept of a function. The Indian mathematicians Mahavira and Bhaskara II, the Persian mathematician Al-Karaji (see ''The Arabic Hegemony'', p. 239: "Abu'l Wefa was a capable algebraist as well as a trigonometer. ... His successor al-Karkhi evidently used this translation to become an Arabic disciple of Diophantus – but without Diophantine analysis! ... In particular, to al-Karkhi is attributed the first numerical solution of equations of the form ax^{2n} + bx^n = c (only equations with positive roots were considered)"), and the Chinese mathematician Zhu Shijie solved various cases of cubic, quartic, quintic and higher-order polynomial equations using numerical methods. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra. Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1486) took "the first steps toward the introduction of algebraic symbolism". He also computed ∑''n''², ∑''n''³ and used the method of successive approximation to determine square roots (illustrated in the sketch below).
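Al-Qalaṣādī's successive approximation is in the same family as the classical Babylonian (Heron's) square-root iteration. Here is a minimal Python sketch of that classical iteration; this is our illustration only, since the article does not specify his exact procedure, and the names and tolerance are ours:

```python
def sqrt_successive_approximation(a, tolerance=1e-12):
    """Approximate the square root of a >= 0 by successive approximation
    (the classical Babylonian/Heron iteration: x <- (x + a/x) / 2)."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0  # any positive starting guess converges
    while abs(x * x - a) > tolerance * max(1.0, a):
        x = (x + a / x) / 2  # average the guess with a/guess
    return x

print(sqrt_successive_approximation(2))  # 1.4142135623730951, approximately sqrt(2)
```

Each step roughly doubles the number of correct digits, which is why the method was practical even for hand computation.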
## Modern history of algebra
François Viète's work on new algebra at the close of the 16th century was an important step towards modern algebra. In 1637, René Descartes published ''La Géométrie'', inventing analytic geometry and introducing modern algebraic notation. Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Seki Kōwa in the 17th century, followed independently by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century. Permutations were studied by Joseph-Louis Lagrange in his 1770 paper ''Réflexions sur la résolution algébrique des équations'' devoted to solutions of algebraic equations, in which he introduced Lagrange resolvents. Paolo Ruffini was the first person to develop the theory of permutation groups, and like his predecessors, also in the context of solving algebraic equations. Abstract algebra was developed in the 19th century, deriving from the interest in solving equations, initially focusing on what is now called Galois theory, and on constructibility issues. George Peacock was the founder of axiomatic thinking in arithmetic and algebra. Augustus De Morgan discovered relation algebra in his ''Syllabus of a Proposed System of Logic''. Josiah Willard Gibbs developed an algebra of vectors in three-dimensional space, and Arthur Cayley developed an algebra of matrices (this is a noncommutative algebra; see the check below).
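Cayley's matrix algebra is a standard first example of a noncommutative algebra; a quick numerical check using NumPy (our illustration, with arbitrary sample matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

# Matrix products in the two orders generally differ.
print(np.array_equal(A @ B, B @ A))  # False: matrix multiplication is not commutative
print(A @ B)  # [[2 1], [4 3]]
print(B @ A)  # [[3 4], [1 2]]
```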
# Areas of mathematics with the word algebra in their name
Some areas of mathematics that fall under the classification abstract algebra have the word algebra in their name; linear algebra is one example. Others do not: group theory, ring theory, and field theory are examples. In this section, we list some areas of mathematics with the word "algebra" in the name.

* Elementary algebra, the part of algebra that is usually taught in elementary courses of mathematics.
* Abstract algebra, in which algebraic structures such as groups, rings and fields are axiomatically defined and investigated.
* Linear algebra, in which the specific properties of linear equations, vector spaces and matrices are studied.
* Boolean algebra, a branch of algebra abstracting the computation with the truth values ''false'' and ''true''.
* Commutative algebra, the study of commutative rings.
* Computer algebra, the implementation of algebraic methods as algorithms and computer programs.
* Homological algebra, the study of algebraic structures that are fundamental to study topological spaces.
* Universal algebra, in which properties common to all algebraic structures are studied.
* Algebraic number theory, in which the properties of numbers are studied from an algebraic point of view.
* Algebraic geometry, a branch of geometry, in its primitive form specifying curves and surfaces as solutions of polynomial equations.
* Algebraic combinatorics, in which algebraic methods are used to study combinatorial questions.
* Relational algebra: a set of finitary relations that is closed under certain operators.

Many mathematical structures are called algebras:

* Algebra over a field, or more generally algebra over a ring. Many classes of algebras over a field or over a ring have a specific name:
** Associative algebra
** Non-associative algebra
** Lie algebra
** Hopf algebra
** C*-algebra
** Symmetric algebra
** Exterior algebra
** Tensor algebra
* In measure theory,
** Sigma-algebra
** Algebra over a set
* In category theory,
** F-algebra and F-coalgebra
** T-algebra
* In logic,
** Relation algebra, a residuated Boolean algebra expanded with an involution called converse.
** Boolean algebra, a complemented distributive lattice.
** Heyting algebra
# Elementary algebra
Elementary algebra is the most basic form of algebra. It is taught to students who are presumed to have no knowledge of mathematics beyond the basic principles of arithmetic. In arithmetic, only numbers and their arithmetical operations (such as +, −, ×, ÷) occur. In algebra, numbers are often represented by symbols called variables (such as ''a'', ''n'', ''x'', ''y'' or ''z''). This is useful because:

* It allows the general formulation of arithmetical laws (such as ''a'' + ''b'' = ''b'' + ''a'' for all ''a'' and ''b''), and thus is the first step to a systematic exploration of the properties of the real number system.
* It allows the reference to "unknown" numbers, the formulation of equations and the study of how to solve these (see the sketch after this list). (For instance, "Find a number ''x'' such that 3''x'' + 1 = 10" or, going a bit further, "Find a number ''x'' such that ''ax'' + ''b'' = ''c''". This step leads to the conclusion that it is not the nature of the specific numbers that allows us to solve it, but that of the operations involved.)
* It allows the formulation of functional relationships. (For instance, "If you sell ''x'' tickets, then your profit will be 3''x'' − 10 dollars, or ''f''(''x'') = 3''x'' − 10, where ''f'' is the function, and ''x'' is the number to which the function is applied".)
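The "unknown number" examples above can be checked mechanically. A minimal sketch using the SymPy library follows; the library choice and the variable names are ours, not part of the article:

```python
from sympy import symbols, solve, Eq

x, a, b, c = symbols("x a b c")

# "Find a number x such that 3x + 1 = 10"
print(solve(Eq(3 * x + 1, 10), x))   # [3]

# Going a bit further: "Find a number x such that ax + b = c"
print(solve(Eq(a * x + b, c), x))    # [(c - b)/a]
```

The second result shows exactly the point made above: the solution depends only on the operations involved, not on the specific numbers.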
## Polynomials
A polynomial is an expression that is the sum of a finite number of non-zero terms, each term consisting of the product of a constant and a finite number of variables raised to whole number powers. For example, ''x''² + 2''x'' − 3 is a polynomial in the single variable ''x''. A polynomial expression is an expression that may be rewritten as a polynomial, by using commutativity, associativity and distributivity of addition and multiplication. For example, (''x'' − 1)(''x'' + 3) is a polynomial expression that, properly speaking, is not a polynomial. A polynomial function is a function that is defined by a polynomial, or, equivalently, by a polynomial expression. The two preceding examples define the same polynomial function. Two important and related problems in algebra are the factorization of polynomials, that is, expressing a given polynomial as a product of other polynomials that cannot be factored any further, and the computation of polynomial greatest common divisors. The example polynomial above can be factored as (''x'' − 1)(''x'' + 3). A related class of problems is finding algebraic expressions for the roots of a polynomial in a single variable. Both tasks are illustrated in the sketch below.
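Both factorization and polynomial greatest common divisors can be computed symbolically; a small SymPy sketch (again, the library choice is ours):

```python
from sympy import symbols, factor, gcd, expand

x = symbols("x")

p = x**2 + 2*x - 3
print(factor(p))                  # (x - 1)*(x + 3)
print(expand((x - 1)*(x + 3)))    # x**2 + 2*x - 3, the same polynomial function
print(gcd(p, x**2 - 1))           # x - 1, the shared factor
```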
## Education
It has been suggested that elementary algebra should be taught to students as young as eleven years old, though in recent years it is more common for public lessons to begin at the eighth grade level (approximately age 13) in the United States. However, in some US schools, algebra is started in ninth grade.
# Abstract algebra
Abstract algebra extends the familiar concepts found in elementary algebra and arithmetic of numbers to more general concepts. Here are the fundamental concepts in abstract algebra.

Sets: Rather than just considering the different types of numbers, abstract algebra deals with the more general concept of ''sets'': a collection of all objects (called elements) selected by a property specific to the set. All collections of the familiar types of numbers are sets. Other examples of sets include the set of all two-by-two matrices, the set of all second-degree polynomials (''ax''² + ''bx'' + ''c''), the set of all two-dimensional vectors in the plane, and the various finite groups such as the cyclic groups, which are the groups of integers modulo ''n''. Set theory is a branch of logic and not technically a branch of algebra.

Binary operations: The notion of addition (+) is abstracted to give a ''binary operation'', ∗ say. The notion of binary operation is meaningless without the set on which the operation is defined. For two elements ''a'' and ''b'' in a set ''S'', ''a'' ∗ ''b'' is another element in the set; this condition is called closure. Addition (+), subtraction (−), multiplication (×), and division (÷) can be binary operations when defined on different sets, as are addition and multiplication of matrices, vectors, and polynomials.

Identity elements: The numbers zero and one are abstracted to give the notion of an ''identity element'' for an operation. Zero is the identity element for addition and one is the identity element for multiplication. For a general binary operator ∗ the identity element ''e'' must satisfy ''a'' ∗ ''e'' = ''a'' and ''e'' ∗ ''a'' = ''a'', and is necessarily unique, if it exists. This holds for addition as ''a'' + 0 = ''a'' and 0 + ''a'' = ''a'' and multiplication ''a'' × 1 = ''a'' and 1 × ''a'' = ''a''. Not all sets and operator combinations have an identity element; for example, the set of positive natural numbers (1, 2, 3, ...) has no identity element for addition.

Inverse elements: The negative numbers give rise to the concept of ''inverse elements''. For addition, the inverse of ''a'' is written −''a'', and for multiplication the inverse is written ''a''⁻¹. A general two-sided inverse element ''a''⁻¹ satisfies the property that ''a'' ∗ ''a''⁻¹ = ''e'' and ''a''⁻¹ ∗ ''a'' = ''e'', where ''e'' is the identity element.

Associativity: Addition of integers has a property called associativity. That is, the grouping of the numbers to be added does not affect the sum. For example: (2 + 3) + 4 = 2 + (3 + 4). In general, this becomes (''a'' ∗ ''b'') ∗ ''c'' = ''a'' ∗ (''b'' ∗ ''c''). This property is shared by most binary operations, but not subtraction or division or octonion multiplication.

Commutativity: Addition and multiplication of real numbers are both commutative. That is, the order of the numbers does not affect the result. For example: 2 + 3 = 3 + 2. In general, this becomes ''a'' ∗ ''b'' = ''b'' ∗ ''a''. This property does not hold for all binary operations. For example, matrix multiplication and quaternion multiplication are both non-commutative. These properties are checked mechanically for a small example in the sketch below.
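On a small finite set, every one of these axioms can be tested exhaustively. The following Python sketch (our illustration, not part of the article) checks closure, identity, inverses, associativity, and commutativity for addition modulo 5 on the set {0, 1, 2, 3, 4}:

```python
from itertools import product

S = range(5)
op = lambda a, b: (a + b) % 5  # the binary operation: addition modulo 5

closure = all(op(a, b) in S for a, b in product(S, S))
identity = next(e for e in S if all(op(a, e) == a == op(e, a) for a in S))
inverses = all(any(op(a, b) == identity == op(b, a) for b in S) for a in S)
associative = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, S, S))
commutative = all(op(a, b) == op(b, a) for a, b in product(S, S))

print(closure, identity, inverses, associative, commutative)
# True 0 True True True -> addition modulo 5 satisfies all the listed properties
```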
## Groups
Combining the above concepts gives one of the most important structures in mathematics: a group. A group is a combination of a set ''S'' and a single binary operation ∗, defined in any way you choose, but with the following properties:

* An identity element ''e'' exists, such that for every member ''a'' of ''S'', ''e'' ∗ ''a'' and ''a'' ∗ ''e'' are both identical to ''a''.
* Every element has an inverse: for every member ''a'' of ''S'', there exists a member ''a''⁻¹ such that ''a'' ∗ ''a''⁻¹ and ''a''⁻¹ ∗ ''a'' are both identical to the identity element.
* The operation is associative: if ''a'', ''b'' and ''c'' are members of ''S'', then (''a'' ∗ ''b'') ∗ ''c'' is identical to ''a'' ∗ (''b'' ∗ ''c'').

If a group is also commutative – that is, for any two members ''a'' and ''b'' of ''S'', ''a'' ∗ ''b'' is identical to ''b'' ∗ ''a'' – then the group is said to be abelian. For example, the set of integers under the operation of addition is a group. In this group, the identity element is 0 and the inverse of any element ''a'' is its negation, −''a''. The associativity requirement is met, because for any integers ''a'', ''b'' and ''c'', (''a'' + ''b'') + ''c'' = ''a'' + (''b'' + ''c''). The non-zero rational numbers form a group under multiplication. Here, the identity element is 1, since 1 × ''a'' = ''a'' × 1 = ''a'' for any rational number ''a''. The inverse of ''a'' is 1/''a'', since ''a'' × 1/''a'' = 1. The integers under the multiplication operation, however, do not form a group. This is because, in general, the multiplicative inverse of an integer is not an integer. For example, 4 is an integer, but its multiplicative inverse is ¼, which is not an integer. The theory of groups is studied in group theory. A major result in this theory is the classification of finite simple groups, mostly published between about 1955 and 1983, which separates the finite simple groups into roughly 30 basic types. Semi-groups, quasi-groups, and monoids are structures similar to groups, but more general. They comprise a set and a closed binary operation but do not necessarily satisfy the other conditions. A semi-group has an ''associative'' binary operation but might not have an identity element. A monoid is a semi-group which does have an identity but might not have an inverse for every element. A quasi-group satisfies a requirement that any element can be turned into any other by either a unique left-multiplication or right-multiplication; however, the binary operation might not be associative. All groups are monoids, and all monoids are semi-groups (see the example below).
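A familiar programming example of a monoid that is not a group (our illustration): strings under concatenation are associative and have an identity, the empty string, but non-empty strings have no inverses.

```python
# Strings under concatenation: associative, identity "" exists,
# but no non-empty string has an inverse, so this is a monoid, not a group.
a, b, c = "al", "ge", "bra"
assert (a + b) + c == a + (b + c)   # associativity
assert a + "" == "" + a == a        # "" is the identity element
# There is no string s with a + s == "", hence no inverse for "al".
```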
## Rings and fields
Groups just have one binary operation. To fully explain the behaviour of the different types of numbers, structures with two operators need to be studied. The most important of these are rings and fields. A ring has two binary operations (+) and (×), with × distributive over +. Under the first operator (+) it forms an ''abelian group''. Under the second operator (×) it is associative, but it does not need to have an identity, or inverse, so division is not required. The additive (+) identity element is written as 0 and the additive inverse of ''a'' is written as −''a''. Distributivity generalises the ''distributive law'' for numbers. For the integers, (''a'' + ''b'') × ''c'' = ''a'' × ''c'' + ''b'' × ''c'' and ''c'' × (''a'' + ''b'') = ''c'' × ''a'' + ''c'' × ''b'', and × is said to be ''distributive'' over +. The integers are an example of a ring. The integers have additional properties which make it an integral domain. A field is a ''ring'' with the additional property that all the elements excluding 0 form an ''abelian group'' under ×. The multiplicative (×) identity is written as 1 and the multiplicative inverse of ''a'' is written as ''a''⁻¹. The rational numbers, the real numbers and the complex numbers are all examples of fields (the sketch below contrasts the integers and the rationals in this respect).
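A small Python sketch (our illustration) of the contrast drawn above: every non-zero rational has a multiplicative inverse among the rationals, while an integer such as 4 does not among the integers.

```python
from fractions import Fraction

a = Fraction(4)
print(a * Fraction(1, 4))   # 1: the inverse 1/4 exists among the rationals

# Distributivity, the law that links the two ring operations:
b, c = Fraction(2, 3), Fraction(5)
assert (a + b) * c == a * c + b * c

# Among the integers, 4 has no integer multiplicative inverse,
# so the integers form a ring (indeed an integral domain) but not a field.
```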
* Outline of algebra
* Outline of linear algebra
* Algebra tile
| 2022-08-09 08:49:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7526748180389404, "perplexity": 2483.259120458513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00687.warc.gz"}
https://ec.gateoverflow.in/618/gate2015-1-51 |
In the system shown in Figure (a), $m(t)$ is a low-pass signal with bandwidth $W$ Hz. The frequency response of the band-pass filter $H(f)$ is shown in Figure (b). If it is desired that the output signal $z(t)=10x(t)$, the maximum value of $W$ (in Hz) should be strictly less than _____________.
| 2021-04-11 09:05:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823689818382263, "perplexity": 357.4881023128563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00061.warc.gz"}
https://atb.nrel.gov/electricity/2021/changes_in_2021 |
# Changes in 2021
The Electricity ATB provides a transparent set of technology cost and performance data for electric sector analysis. The update of the 2020 ATB to the 2021 ATB includes general updates to all technologies as well as technology-specific updates—both of which are described below. Use the following charts to explore the changes from 2020 to 2021.
Parameter value projections by ATB projection year
The interactive chart compares the 2020 ATB and the 2021 ATB by parameter (LCOE, CAPEX, fixed operation and maintenance [FOM] costs, capacity factor, and fixed charge rate [FCR]) and other filters. A simplified combination of these parameters is sketched below.
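For orientation only, the chart's parameters combine into a levelized cost along the lines of the following sketch. This is an illustrative simplification in Python, not the ATB's exact methodology; the function name and the sample values are ours.

```python
def simple_lcoe(capex, fom, fcr, capacity_factor, vom=0.0, fuel=0.0):
    """Illustrative levelized cost of energy in $/MWh.

    capex: capital expenditures in $/kW
    fom: fixed O&M in $/kW-yr
    fcr: fixed charge rate (fraction per year)
    capacity_factor: fraction of the year at rated output
    vom, fuel: variable O&M and fuel costs in $/MWh
    """
    annual_mwh_per_kw = capacity_factor * 8760 / 1000  # MWh/yr per kW of capacity
    return (fcr * capex + fom) / annual_mwh_per_kw + vom + fuel

# Example: $1,300/kW CAPEX, $43/kW-yr FOM, 5.8% FCR, 40% capacity factor
print(round(simple_lcoe(1300, 43, 0.058, 0.40), 1))  # ~33.8 $/MWh
```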
## General Updates to All Technologies
• The assumptions in each of the two financial assumptions cases are modified to reflect current assessments.
• Fixed O&M costs have been updated to ensure property taxes and insurance costs are included for all technologies.
• The Base Year is updated from 2018 to 2019 using new market data or analysis where applicable.
• The dollar year is updated from 2018 to 2019 with a 1.8% inflation rate (BLS, 2020).
• Historical data are updated to include data reported through year end 2019.
• Land-Based Wind: Projections are based on bottom-up technology analysis and cost modeling plus learning rates, with innovations that increase wind turbine size, improve controls, and enhance science-based modeling.
• Offshore Wind: Projections are based on experiential learning curves derived from market data and cost reductions associated with economies of size and scale.
• Photovoltaics: Projections are based on bottom-up techno-economic analysis of effects of improved module efficiency, inverters, installation efficiencies from assembly and design, all attributable to technological innovation. Resource categorization is split into 10 resource classes by irradiance instead of by representative location.
• Concentrating Solar Power: Component and system cost estimates for Base Year reference a 2017 industry survey, and a 2018 cost analysis of recent market developments.
• Geothermal: New data are consistent with the GeoVision Study.
• Hydropower: Non-powered dam data are based on a new 2020 cost analysis.
• Battery Storage: Cost data are available broken down by grid-scale, commercial, and residential technologies, and are updated with bottom-up cost modeling for current costs.
• Pumped-Storage Hydropower: This technology is new to the 2021 ATB.
### Land-Based Wind

• Base Year: Capital expenditures (CAPEX) associated with wind plants installed in the interior of the country are used to characterize CAPEX for hypothetical wind plants with average annual wind speeds that correspond with the median conditions for recently installed wind facilities based on the 2019 Cost of Wind Energy Review (Stehly et al., 2020). The operations and maintenance (O&M) cost of $43/kW-yr is estimated (Stehly et al., 2020); no variation of FOM with wind speed class is assumed. Capacity factors align with performance in Wind Speed Classes 2–7, where most installations are located.
• Projections: Specific technology innovations are associated with each scenario. In the Moderate Technology Innovation Scenario (Moderate Scenario), large, segmented blades are transported by truck, enabling larger rotors. Segmentation enables higher hubs and larger turbines, and advanced controls enable higher capacity factors and lower CAPEX. In the Advanced Technology Innovation Scenario (Advanced Scenario), even larger turbines and advanced rotor configurations increase turbine capacity, on-site manufacturing further increases hub heights, and high-fidelity modeling and advanced controls are fully implemented.

### Offshore Wind

• Base Year: CAPEX and O&M costs are calculated with a combination of bottom-up techno-economic cost modeling (Beiter et al., 2016) and experiential learning effects with economies of size and scale from higher turbine and plant ratings (Beiter et al., 2020). Bottom-up estimates from the 2020 ATB are brought forward one year (2018 to 2019) using the learning methodology. Capacity factors are determined using a representative power curve for a generic NREL-modeled 6-MW offshore wind turbine (Beiter et al., 2016), and they include geospatial estimates of gross capacity factors for the entire resource (Musial et al., 2016).
• Projections: Instead of projecting costs with literature estimates of cost reductions induced by specific technological innovations in each future year (Valpy et al., 2017; Hundleby et al., 2017), the 2021 ATB uses experiential learning curves derived from empirical market data (Musial et al., 2019) along with economies of size and scale to project future costs (Beiter et al., 2020). As the learning curve predicts future costs as a function of future offshore wind deployment, future costs in each of the ATB technology innovation scenarios are driven by different levels of deployment based on literature estimates.

### Photovoltaics (PV): Utility-Scale, Commercial, and Residential

• Base Year: CAPEX for 2019 and 2020 are based on new bottom-up modeling and market data from (Feldman et al., 2021), which focuses on larger systems to align with market trends. The O&M costs are based on modeled pricing for a 100-MWDC, one-axis tracking system (Feldman et al., 2021).
• Projections: Projections that were based on literature surveys are now based on bottom-up CAPEX benchmarks. The Moderate Scenario is based on module efficiency gains consistent with PERC (passivated emitter and rear contact) n-type mono modules, improved inverter systems, and installation efficiencies that are due to automation, preassembly, and improved design. The Advanced Scenario assumes additional innovations, such as continuation of the historical rate of module efficiency improvement, simplification of inverter design and automation of inverter manufacturing, and greater installation efficiency from preassembly, automation, and materials innovations. Estimates for energy yield gain for utility-scale and commercial PV systems are also included.

### Concentrating Solar Power (CSP)

• Base Year: Estimates are based on bottom-up cost modeling from (Turchi et al., 2019) for the updates to the System Advisor Model (SAM) cost components. Future year projections are informed by the literature, NREL expertise, and technology pathway assessments for reductions in capital expenditures.
• Projections: The Moderate Scenario assumes a transition to a supercritical CO2 cycle in the power block, advanced coatings on the receiver, improved tanks, pumps, and component configurations for the thermal storage unit, and improved heliostat installation and learning due to deployment in the solar field. The Advanced Scenario assumes higher-temperature supercritical CO2, a higher-temperature receiver, advanced storage compatible with higher temperatures, and low-cost, modular solar fields with increased efficiency.

### Geothermal

• Base Year: As before, estimates are based on bottom-up cost modeling using the Geothermal Electricity Technology Evaluation Model (GETEM) and inputs from the GeoVision Business-as-Usual (BAU) scenario (DOE, 2019). The Base Year is updated to the 2019 dollar year based on the consumer price index and producer price indices.
• Projections: The projection of future geothermal plant CAPEX for the Advanced Scenario is based on the Technology Improvement scenario from the GeoVision Study (DOE, 2019; Augustine et al., 2019). The Moderate Scenario is based on the Intermediate 1 Drilling Curve detailed as part of the GeoVision report to 2030, and a minimum learning rate to 2050, which is implemented in AEO2015 (EIA, 2015) as a 10% CAPEX reduction by 2035. The Conservative Technology Innovation Scenario (Conservative Scenario) retains all cost and performance assumptions equivalent to the Base Year and assumes a minimum learning rate to 2050.
### Hydropower
• Base Year: The 2021 ATB data for non-powered dams (NPD) are based on bottom-up modeling of reference sites using site-specific data (Oladosu, G. et al., 2021), whereas the 2020 data were based on econometric cost equations with assumed capacity factor estimates (DOE, 2016); thus, NPD categories for the 2020 ATB and the 2021 ATB are not directly comparable. NSD data are updated to the 2019 dollar year based on the consumer price index.
• Projections: New cost analysis is used to update NPD data in the 2021 ATB. The analysis involved identification of 20 reference sites for U.S. NPD hydropower and detailed bottom-up design and cost simulations under baseline and near-term innovation cases. The near-term innovation case is judged to be applicable in the next 5–10 years (Oladosu, G. et al., 2021). New stream-reach development (NSD) data in the 2021 ATB retain previous years' data, which are based on projections developed for the Hydropower Vision study (DOE, 2016) using technological learning assumptions and bottom-up analysis of process and/or technology improvements to provide a range of future cost outcomes (O'Connor et al., 2015). The NSD projections use a mix of U.S. Energy Information Administration (EIA) technological learning assumptions, input from a technical team of Oak Ridge National Laboratory researchers, and the experience of expert hydropower consultants.
### Utility-Scale PV-Plus-Battery
• This technology is new to the 2021 ATB.
• Base Year: CAPEX for 2019 is based on new bottom-up modeling and market data from (Feldman et al., 2021). The chosen configuration reflects recent and proposed utility-scale PV-plus-battery projects. Capacity factors and tax credits assume 75% of the energy used to charge the battery component is derived from the coupled PV.
• Projections: PV-plus-battery projections in the 2021 ATB are driven primarily by CAPEX cost improvements, but also by improvements in energy yield, operational cost, and cost of capital (for the Market+Policies Financial Assumptions Case).
### Battery Storage
• Base Year: CAPEX for 2019 is based on new bottom-up modeling and market data from (Feldman et al., 2021).
• Projections: Battery projections in the 2021 ATB are represented for utility-scale, commercial-scale and residential-scale battery systems. Cost improvements are driven by a literature survey as described by (Cole et al., 2021). This literature survey incorporates more-rapid reductions in battery pack and cell costs while soft costs and costs related to other factors decline more slowly.
### Pumped-Storage Hydropower (PSH)
• This technology is new to the 2021 ATB. Resource characterizations including capital costs are forthcoming and will accompany the national closed-loop PSH resource assessment.
### Natural Gas and Coal
• The 2021 ATB represents the first time the U.S. Department of Energy (DOE) Office of Fossil Energy and Carbon Management directly contributed to an ATB update. One notable change is the inclusion of assumptions for property taxes and insurance (PT&I) as a component of fixed operation and maintenance costs. PT&I are not included in prior ATB cost and performance estimates matched to EIA's Annual Energy Outlook (AEO).
### Nuclear and Biopower
• Cost and performance estimates are updated to match AEO2021 (EIA, 2021).
• Information about current published costs in the literature is updated.
## References
The following references are specific to this page; for all references in this ATB, see References.
Beiter, Philipp, Walt Musial, Patrick Duffy, Aubryn Cooperman, Matt Shields, Donna Heimiller, and Mike Optis. “The Cost of Floating Offshore Wind Energy in California between 2019 and 2032.” NREL Technical Report. Golden, CO, November 2020. https://www.nrel.gov/docs/fy21osti/77384.pdf.
Feldman, David, Vignesh Ramasamy, Ran Fu, Ashwin Ramdas, Jal Desai, and Robert Margolis. “U.S. Solar Photovoltaic System and Energy Storage Cost Benchmark: Q1 2020.” Golden, CO: National Renewable Energy Laboratory, January 27, 2021. https://doi.org/10.2172/1764908.
Cole, Wesley, Will A. Frazier, and Chad Augustine. “Cost Projections for Utility-Scale Battery Storage: 2021 Update.” Technical Report. Golden, CO: National Renewable Energy Laboratory, 2021. https://www.nrel.gov/docs/fy21osti/79236.pdf.
Oladosu, G., George, L., and Wells, J. “2020 Cost Analysis of Hydropower Options at Non-Powered Dams.” Oak Ridge, TN: Oak Ridge National Laboratory, 2021.
EIA. “Annual Energy Outlook 2021.” Energy Information Administration, January 2021. https://www.eia.gov/outlooks/aeo/.
Stehly, Tyler, Philipp Beiter, and Patrick Duffy. “2019 Cost of Wind Energy Review.” Technical. National Renewable Energy Laboratory, December 2020. https://www.nrel.gov/docs/fy21osti/78471.pdf.
Turchi, Craig, Matthew Boyd, Devon Kesseli, Parthiv Kurup, Mark Mehos, Ty Neises, Prashant Sharan, Michael Wagner, and Timothy Wendelin. “CSP Systems Analysis: Final Project Report.” Technical Report. Golden, CO: National Renewable Energy Laboratory, May 2019. https://doi.org/10.2172/1513197.
O’Connor, Patrick W., Scott T. DeNeale, Dol Raj Chalise, Emma Centurion, and Abigail Maloof. “Hydropower Baseline Cost Modeling, Version 2.” Oak Ridge, TN: Oak Ridge National Laboratory, 2015. https://doi.org/10.2172/1244193.
Musial, Walter, Philipp Beiter, Paul Spitsen, and Jake Nunemaker. “2018 Offshore Wind Technologies Market Report.” Technical Report. Golden, CO: National Renewable Energy Laboratory, December 2019. https://doi.org/10.2172/1226783.
DOE. “GeoVision: Harnessing the Heat Beneath Our Feet.” Washington, D.C.: U.S. Department of Energy, May 2019. https://www.energy.gov/sites/prod/files/2019/06/f63/GeoVision-full-report-opt.pdf.
BLS. “CPI for All Urban Consumers (CPI-U).” U.S. Bureau of Labor Statistics, 2020. https://beta.bls.gov/dataViewer/view/timeseries/CUSR0000SA0.
Beiter, Philipp, Walter Musial, Aaron Smith, Levi Kilcher, Rick Damiani, Michael Maness, Senu Sirnivas, et al. “A Spatial-Economic Cost-Reduction Pathway Analysis for U.S. Offshore Wind Energy Development from 2015-2030.” Technical Report. Golden, CO: National Renewable Energy Laboratory, 2016. https://doi.org/10.2172/1324526.
Augustine, Chad, Jonathan Ho, and Nate Blair. “GeoVision Analysis Supporting Task Force Report: Electric Sector Potential to Penetration.” Technical Report. Golden, CO: National Renewable Energy Laboratory, 2019. https://doi.org/10.2172/1524768.
Musial, Walt, Donna Heimiller, Philipp Beiter, George Scott, and Caroline Draxl. “2016 Offshore Wind Energy Resource Assessment for the United States.” Technical Report. Golden, CO: National Renewable Energy Laboratory, September 2016. https://doi.org/10.2172/1324533.
Hundleby, Giles, Kate Freeman, Andy Logan, and Ciaran Frost. “Floating Offshore: 55 Technology Innovations That Will Have Greater Impact on Reducing the Cost of Electricity from European Floating Offshore Wind Farms.” KiC InnoEnergy, and BVG Associates, 2017. http://www.innoenergy.com/new-floating-offshore-wind-report-55-technology-innovations-that-will-impact-the-lcoe-in-floating-offshore-wind-farms/.
Valpy, Bruce, Giles Hundleby, Kate Freeman, Alun Roberts, and Andy Logan. “Future Renewable Energy Costs: Offshore Wind: 57 Technology Innovations That Will Have Greater Impact on Reducing the Cost of Electricity From European Offshore Wind Farms.” KiC InnoEnergy, and BVG Associates, 2017. https://bvgassociates.com/wp-content/uploads/2017/11/InnoEnergy-Offshore-Wind-anticipated-innovations-impact-2017_A4.pdf.
DOE. “Hydropower Vision: A New Chapter for America’s Renewable Electricity Source.” Washington, D.C.: U.S. Department of Energy, 2016. https://www.energy.gov/sites/prod/files/2018/02/f49/Hydropower-Vision-021518.pdf.
EIA. “Annual Energy Outlook 2015 with Projections to 2040.” Annual Energy Outlook. Washington, D.C.: U.S. Energy Information Administration, 2015. https://www.eia.gov/outlooks/archive/aeo15/.
https://math.stackexchange.com/questions/601870/best-fit-circle-to-planetary-elliptical-orbit

# Best fit circle to “planetary” elliptical orbit?
I considered posting this to astronomy.stackexchange.com, but I've bugged them enough for today...
Let $p(t)$ be a parametric function that traverses an ellipse such that it sweeps out equal areas in equal time, as per the diagram above. In other words, $p(t)$ is the ideal elliptical orbit (no 3rd party perturbations). Additionally:
• The ellipse's semimajor axis has length $m$.
• The ellipse's eccentricity is $e$.
• $p(t)$'s period is $T$. In other words, $p(T) = p(0)$
Question: what circle best fits this ellipse? More specifically, if we parametrize a circle as:
$$c(t) = \big(x_0+r\cos(bt-c), y_0+r\sin(bt-c)\big)$$
what values of $x_0$, $y_0$, $r$, $b$, and $c$ would minimize:
$$\int_0^T d(c(t),p(t)) \, dt$$
where $d$ is the linear distance between $c(t)$ and $p(t)$?
By introducing $x_0$, $y_0$, $b$, and $c$, I'm allowing for the possibility that:
• The best fit circle's center is different from the ellipse's center.
• The best fit circle is "out of phase" with the ellipse.
• The best fit circle's period is not $T$, the ellipse's period.
These all seem unlikely (especially the last 2), but I want to allow for the most general parametric circle.
This ultimately answers the question: if we ARE going to pretend a planet's orbit is circular, what's the best circle to use?
(as a humorous note, stackexchange suggested my question was subjective and would most likely be closed, possibly because I used the word "best". Of course, in mathematics "best fit" is a perfectly valid, non-subjective concept)
• If you were to minimize $\int_0^T d(c(t),p(t))^2\,\mathrm dt$ instead, it would make the problem more separable in $x$ and $y$. Then the Fourier transform might be useful. – Rahul Dec 10 '13 at 21:33
• Are you willing to entertain nonuniform motion on the circle? – hardmath Dec 10 '13 at 21:36
• @RahulNarain OK, I'm willing to do that (ie, accept an answer that minimizes d^2, not d). It seems traditional to minimize the distance squared, even though it's different from minimizing the distance itself. – barrycarter Dec 10 '13 at 21:38
• @hardmath I'd be interested in seeing a solution like that, but the idea of using a circle is to make the math easy. If the nonuniform motion were ugly enough, it would defeat the purpose. However, I'll upvote (but not approve) an answer like that, just to see what it looks like. – barrycarter Dec 10 '13 at 21:40
• According to your requirements it is indeed possible that the best “circle” does not share period with the ellipse. Two possible scenarios: the circle is open in the aphelion or the circle is open in the perihelion. – Carlos Eugenio Thompson Pinzón Dec 10 '13 at 21:46
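Following the comments' suggestion to minimize the squared distance, here is a minimal numerical sketch (my own illustration, not from the thread): sample $p(t)$ by solving Kepler's equation, then fit $(x_0, y_0, r, b, c)$ by least squares with SciPy. The orbit parameters are arbitrary example values.

```python
import numpy as np
from scipy.optimize import minimize

m, e, T = 1.0, 0.3, 1.0            # semimajor axis, eccentricity, period
t = np.linspace(0.0, T, 400, endpoint=False)
M = 2 * np.pi * t / T              # mean anomaly

# Solve Kepler's equation E - e*sin(E) = M by Newton iteration.
E = M.copy()
for _ in range(20):
    E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))

# Ellipse with the attracting focus at the origin.
px = m * (np.cos(E) - e)
py = m * np.sqrt(1 - e**2) * np.sin(E)

def cost(params):
    """Mean squared distance between the circle c(t) and the orbit p(t)."""
    x0, y0, r, b, c = params
    cx = x0 + r * np.cos(b * t - c)
    cy = y0 + r * np.sin(b * t - c)
    return np.mean((cx - px)**2 + (cy - py)**2)

guess = [0.0, 0.0, m, 2 * np.pi / T, 0.0]
res = minimize(cost, guess, method="Nelder-Mead")
print(res.x)   # fitted x0, y0, r, b, c
```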
https://socratic.org/questions/how-do-you-simplify-3-sqrt26

# How do you simplify 3 sqrt26?
May 2, 2016
This mixed radical is already as simplified as it gets.
#### Explanation:
To show this, we need to factor the components into their prime factors:
$3 \cdot \sqrt{2 \cdot 13}$
There are no repeated factors in the square root for us to remove, so the simplest form is
$3 \sqrt{26}$
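A quick check with SymPy (my addition, not part of the original answer) confirms that the radicand has no repeated prime factors, so nothing can be pulled out of the root:

```python
import sympy

print(sympy.factorint(26))                  # {2: 1, 13: 1}: no squared factor
print(sympy.simplify(3 * sympy.sqrt(26)))   # stays 3*sqrt(26)
```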
https://stats.stackexchange.com/questions/531303/how-to-calculate-degrees-of-freedom-in-a-chi-squared-test

How to calculate degrees of freedom in a Chi-Squared Test? [closed]
I'm looking to calculate the degrees of freedom (df) for a chi squared test which has one dependent group with 3 categories and one dependent group with 4 categories. What would the df be?
When the data are in a contingency table, the formula to use is $$df = (r-1)(c-1)$$ where $$r$$ is the number of rows and $$c$$ is the number of columns. So in this particular case the answer will be $$(3-1)(4-1) = 6$$.
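As a sanity check (my addition), `scipy.stats.chi2_contingency` reports the degrees of freedom for any contingency table; a 3×4 table of example counts gives dof = 6:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Example 3x4 contingency table (arbitrary illustrative counts).
observed = np.array([[10, 20, 30, 15],
                     [12, 18, 25, 20],
                     [ 8, 22, 28, 17]])

chi2, p, dof, expected = chi2_contingency(observed)
print(dof)   # 6 == (3 - 1) * (4 - 1)
```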
https://mmath.dev/publication/MP18a

# Practical Low-Dimensional Halfspace Range Space Sampling
Published in European Symposium on Algorithms Track b, 2018
Recommended citation: Michael Matheny and Jeff M. Phillips. Practical low-dimensional halfspace range space sampling. In European Symposium on Algorithms (arXiv:1804.11307), 2018. https://arxiv.org/abs/1804.11307
We develop, analyze, implement, and compare new algorithms for creating $\varepsilon$-samples of range spaces defined by halfspaces which have size sub-quadratic in $\frac{1}{\varepsilon}$, and have runtime linear in the input size and near-quadratic in $\frac{1}{\varepsilon}$. The key to our solution is an efficient construction of partition trees. Despite not requiring any techniques developed after the early 1990s, apparently such a result was not ever explicitly described. We demonstrate that our implementations, including new implementations of several variants of partition trees, do indeed run in time linear in the input, appear to run in time linear in the output size, and observe smaller error for the same size sample compared to the ubiquitous random sample (which requires size quadratic in $\frac{1}{\varepsilon}$). This result has direct applications in speeding up discrepancy evaluation, approximate range counting, and spatial anomaly detection.
https://physics.stackexchange.com/questions/436154/definitions-of-operators-and-commutativity-in-quantum-mechanics

# Definitions of operators and commutativity in quantum mechanics
If $$[\hat A,\hat B] = 0$$, where $$\hat A$$ and $$\hat B$$ are operators, then the operators commute. This also means that, when applied to a wavefunction, one can measure observables $$A$$ and $$B$$ in any order one would like without getting a different answer. However, if one computes:
$$[\hat x, \hat p]\psi = i\hbar \ \psi$$
This implies that momentum and position do not commute, and this then implies that they cannot be measured together. However, this doesn't seem obvious to me. All it seems to do is multiply the eigenstate by $$i \hbar$$: it multiplies it by a constant, just as an observable like $$\hat p$$ does. It makes sense to me that, for the order of measurement not to matter, they should commute, but it doesn't seem obvious to me that this particular answer implies that they are incompatible, if that makes sense.
I've been trying to understand the answer to my confusion in terms of linear algebra, since I feel relatively comfortable approaching learning this discipline in as much linear algebra recourse possible, but I haven't worked it out yet.
I suppose the exact question(s) I have is (are):
$$1)$$ Why does $$[\hat x, \hat p]\psi = i\hbar \ \psi$$ directly imply that $$\hat x$$ and $$\hat p$$ are incompatible? It just seems like a linear transformation with eigenvalue $$i\hbar$$ to me, just like $$\hat p$$ is a linear transformation with eigenvalue $$p$$.
$$2)$$ What kind of linear mappings are quantum mechanics operators, anyway? I understand that they map vectors to the same Hilbert space they all live in, but are observable operators changes of basis for the Hilbert space? I am told that some eigenstates can be eigenfunctions of more than one observable -- does this mean they are members of a subspace of the Hilbert space that describes eigenstates that are eigenfunctions of those two operators? I'm used to understanding linear transformations as a matrix detailing the changes the bases take, and so trying to imagine observable operators in this respect seems obscure to me. What exactly does applying an operator to a wavefunction do to it in terms of a transformation, other than mapping the vector to the same space?
• A general operator $A$ acting on a state $| \alpha \rangle$ gives $| \beta \rangle$, where $| \beta \rangle$ is a new state, in general. Only when $| \alpha \rangle$ is an eigenvector of $A$, does $A |\alpha \rangle$ end up being equal to its eigenvalue $\times | \alpha \rangle$, and then $A$ is called an observable. – Avantgarde Oct 22 '18 at 15:41
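A quick numerical illustration (my addition, not from the thread): discretize $\hat x$ and $\hat p = -i\hbar\,d/dx$ on a grid and apply them in both orders. Away from the grid boundaries, $(\hat x \hat p - \hat p \hat x)\psi \approx i\hbar\,\psi$ holds for an arbitrary $\psi$, which is the point: the commutator relation is an operator identity on every state, not an eigenvalue equation that only special states satisfy.

```python
import numpy as np

hbar = 1.0
N = 2000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * (1 + 0.3 * x)   # an arbitrary test wavefunction

def p_op(f):
    """Momentum operator -i*hbar*d/dx via central differences."""
    return -1j * hbar * np.gradient(f, dx)

lhs = x * p_op(psi) - p_op(x * psi)       # (x p - p x) psi
interior = slice(50, -50)                 # ignore boundary artifacts
# Maximum deviation from i*hbar*psi: small, and -> 0 as the grid is refined.
print(np.max(np.abs(lhs[interior] - 1j * hbar * psi[interior])))
```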
https://windowsontheory.org/seminar/
# Theory Seminar – MSR-Silicon Valley – RIP
### Organizers (by order of their service): Omer Reingold, Parikshit Gopalan, Guy Rothblum and Raghu Meka
09/18/2014 Madhu Sudan (Microsoft Research) (Note unusual time: Thursday 2:30 – 3:30)
Communication with Imperfectly Shared Randomness
A common feature in most natural communication is that it relies on a large shared context between speaker and listener, where the context is shared vaguely rather than precisely. Knowledge of language, technical terms, socio-political events, history etc. all combine to form this shared context; and it should be obvious that no one can expect any part of this context to be shared perfectly by the two parties. This shared context helps in compressing communication (else I would have to include an English dictionary, a book on grammar etc. to this abstract); but the imperfection of the sharing potentially leads to misunderstanding and ambiguity. The challenge of achieving the benefits provided by the shared context without leading to new errors due to the imperfection leads to a host of new mathematical questions.
In this talk we will focus on one specific setting for this tension between shared context and imperfection of sharing, namely in the use of shared randomness in communication complexity. It is widely known that shared randomness between speaker and listener can lead to immense savings in communication complexity for certain communication tasks. What happens when this randomness is not perfectly shared? We model this as saying that the sender has access to a sequence of uniform random bits and the receiver has access to a noisy version of the same sequence where each bit is flipped independently with some probability p. While most known communication protocols fail when the randomness is imperfectly shared, it turns out that many of the effects of shared randomness can be recovered with a slight loss by more carefully designed protocols. Among other results we will describe a general one which shows that any k-bit one-way communication protocol with perfectly shared randomness can be “simulated” with 2^k bits of imperfectly shared randomness, and this is essentially tight.
Based on joint work with Clément Canonne (Columbia), Venkatesan Guruswami (CMU), and Raghu Meka (MSR).
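As a toy illustration of the phenomenon (my addition, not from the talk): the classic one-way equality sketch sends a few parities of x under shared random vectors. If the receiver's copy of the randomness has each bit flipped independently with probability p, the parities start disagreeing even when x = y, so the naive protocol breaks down:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 100, 32, 1000   # input length, parities sent, experiments

def equality_sketch_agrees(x, y, p):
    """Naive one-way equality test when the receiver's copy of the
    shared randomness has each bit flipped independently with prob. p."""
    r = rng.integers(0, 2, size=(k, n))              # sender's randomness
    noise = (rng.random((k, n)) < p).astype(int)
    r_noisy = r ^ noise                              # receiver's corrupted copy
    return np.array_equal((r @ x) % 2, (r_noisy @ y) % 2)

x = rng.integers(0, 2, n)
for p in (0.0, 0.01, 0.05):
    acc = np.mean([equality_sketch_agrees(x, x, p) for _ in range(trials)])
    print(f"flip prob {p}: equal inputs accepted {acc:.1%} of the time")
```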
09/17/2014 Tim Roughgarden (Stanford University)
Barriers to Near-Optimal Equilibria
We explain when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worst-case quality of equilibria in games derived from the problem. The most straightforward use of our lower bound framework is to harness an existing computational or communication lower bound to derive a lower bound on the worst-case price of anarchy (POA) in a class of games. This is a new approach to POA lower bounds, which relies on reductions in lieu of explicit constructions. Our lower bounds are particularly significant for problems of game design, ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks, where the goal is to design a game that has only near-optimal equilibria.
09/10/2014 Dawn Woodard (Cornell, visiting MSR)
Characterizing the Efficiency of Markov Chain Monte Carlo
The introduction of simulation-based computational methods has allowed the field of Bayesian statistics to dramatically expand, achieving widespread use in areas from finance to information technology. Despite this, the biggest challenge for the adoption of Bayesian approaches is still the computational methods used in implementation: understanding of the error associated with these methods has lagged far behind their use, and for some statistical models all available methods take too long to converge. I will describe work characterizing the efficiency of particular computational methods used for Bayesian statistical inference, including adaptive Markov chain Monte Carlo and a Gibbs sampler for Bayesian motif discovery in genomics.
I will highlight in particular our recent results on the efficiency of approximate Bayesian computation (ABC), which has become a fundamental tool in population genetics and systems biology. ABC is used when evaluation of the likelihood function is prohibitively expensive or impossible; it instead approximates the likelihood function by drawing pseudo-samples from the model. We address both the rejection sampling and Markov chain Monte Carlo versions of ABC, presenting the surprising result that multiple pseudo-samples typically do not improve the efficiency of the algorithm as compared to employing a high-variance estimate computed using a single pseudo-sample. This result means that it is unnecessary to tune the number of pseudo-samples, and is in contrast to particle MCMC methods, in which many particles are often required to provide sufficient accuracy.
09/03/2014 Omri Weinstein (Princeton University)
From honest but curious to malicious: An Interactive Information Odometer and Applications to Privacy
We introduce a novel technique which enables two players to maintain an estimate of the internal information cost of their conversation in an online fashion without revealing much extra information. We use this construction to obtain new results about information-theoretically secure computation and interactive compression.
More specifically, we show that the information odometer can be used to achieve information-theoretic secure communication between two untrusted parties: If the players’ goal is to compute a function f(x,y), and f admits a protocol with information cost I and communication cost C, then our odometer can be used to produce a “robust” protocol which:
(i) Assuming both players are honest, computes f with high probability, and (ii) Even if one party is malicious/adversarial, then for any k, the probability that the honest player reveals more than O(k(I+ log C)) bits of information to the other player is at most 2^{-Omega(k)}.
We also outline a potential approach which uses our odometer as a proxy for breaking state-of-the-art interactive compression results: any progress on interactive compression in the regime where I = O(log C) will lead to new *general* compression results in all regimes, and hence to new direct sum theorems in communication complexity.
Joint work with Mark Braverman
08/27/2014 Amit Daniely (Hebrew University)
From average case complexity to improper learning complexity
It is presently still unknown how to show hardness of learning problems. There are huge gaps between our upper and lower bounds in the area. The main obstacle is that standard NP-reductions do not yield hardness of learning. All known lower bounds rely on (unproved) cryptographic assumptions.
We introduce a new technique to this area, using reductions from problems that are hard on average. We put forward a natural generalization of Feige’s assumption about the complexity of refuting random K-SAT instances. Under this assumption we show:
1. Learning DNFs is hard.
2. Learning an intersection of a super-logarithmic number of halfspaces is hard.
In addition, the same assumption implies the hardness of virtually all learning problems that were previously shown hard under cryptographic assumptions.
Joint with Nati Linial and Shai Shalev-Shwartz.
08/20/2014 Robi Krauthgamer (Weizmann Institute)
Adaptive Metric Dimensionality Reduction
I plan to discuss data-adaptive dimensionality reduction in the context of supervised learning in general metric spaces. Our contribution is twofold. Statistically, we present a generalization bound for Lipschitz functions in a metric space that is doubling, or nearly doubling. Consequently, we provide a new theoretical explanation for empirically reported improvements gained by preprocessing Euclidean data by PCA (Principal Components Analysis) prior to constructing a linear classifier.
On the algorithmic front, we introduce a PCA-analogue for general metric spaces, namely, an efficient procedure to approximate the data’s intrinsic dimension, which is often much lower than the ambient dimension. Our results thus leverage the dual benefits of low dimensionality:
(1) more efficient algorithms, e.g., for proximity search, and
(2) more optimistic generalization bounds.
Joint work with Lee-Ad Gottlieb and Aryeh Kontorovich.
08/13/2014 Moni Naor (Weizmann Institute)
Physical Zero-Knowledge Proofs of Physical Properties
Is it possible to prove that two DNA-fingerprints match, or that they do not match, without revealing any further information about the fingerprints? Is it possible to prove that two objects have the same design without revealing the design itself?
In the digital domain, zero-knowledge is an established concept where a prover convinces a verifier of a statement without revealing any information beyond the statement’s validity. However, zero-knowledge is not as well-developed in the context of problems that are inherently physical.
In this talk, we are interested in protocols that prove physical properties of physical objects without revealing further information. The literature lacks a unified formal framework for designing and analyzing such protocols. We suggest the first paradigm for formally defining, modeling, and analyzing physical zero-knowledge (PhysicalZK) protocols, using the Universal Composability framework. We demonstrate applications of physical zero-knowledge to DNA profiling and neutron radiography. Finally, we explore an analog of public-coin proofs in the context of PhysicalZK that we call publicly observable proofs.
Joint work with Ben Fisch and Daniel Freund
08/06/2014 Lyle Ramshaw
Stråhle’s equal-tempered triumph
In 1743, the Swedish organ builder Daniel P. Stråhle gave an elegant geometric construction for determining the precise pitches of musical notes — for example, for locating frets along the neck of a guitar. Stråhle chose 24/17 as the tilt ratio of the perspectivity in his construction. That ratio needs to be roughly the square root of 2; but why did he choose 24/17 over the continued-fraction convergent 17/12? Stråhle’s choice turns out to have the musical advantage that the frets that are furthest from being equal-tempered appear higher up on the fretboard.
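To see the quality of the construction numerically, here is a small sketch (my addition; the fractional-linear form below is the approximation usually attributed to Stråhle's construction in later mathematical analyses, so treat its exact coefficients as an assumption rather than something stated in the talk):

```python
import math

def strahle(k):
    """Fractional-linear approximation to 2**(k/12) commonly attributed
    to Stråhle's construction (assumed form: (24 + 10t)/(24 - 7t))."""
    t = k / 12
    return (24 + 10 * t) / (24 - 7 * t)

for k in range(13):
    exact = 2 ** (k / 12)      # equal-tempered frequency ratio
    approx = strahle(k)
    cents = 1200 * math.log2(approx / exact)
    print(f"semitone {k:2d}: approx {approx:.5f}, exact {exact:.5f}, "
          f"error {cents:+.2f} cents")
```

With these coefficients the endpoints are exact (ratios 1 and 2) and the intermediate errors stay below a couple of cents, far below what a listener can resolve on a fretted instrument.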
07/30/2014 Ryan Williams (Stanford University)
Faster all-pairs shortest paths via circuit complexity
The good old all-pairs shortest path problem in dense n-node directed graphs with arbitrary edge weights (APSP) has been known for 50 years (by Floyd and Warshall) to be solvable in O(n^3) time on the real RAM, where additions and comparisons of reals take unit cost (but all other operations have typical logarithmic cost). Faster algorithms (starting with Fredman in 1975) were known to take O(n^3/(log^c n)) time for various constants c <= 2. It’s a longstanding open problem if APSP can be solved in n^{3-e} time for some constant e > 0. A first step towards a positive answer would be to determine if even an n^3/(log^c n) time algorithm is possible, for *every* constant c.
I will outline a new randomized method for computing the min-plus product (a.k.a. tropical product) of two n by n matrices, yielding a faster algorithm for APSP. On the real RAM, the algorithm runs in time O~(n^3 / 2^{(log n)^{1/2}}) and works with high probability on any pair of matrices. The algorithm applies tools from low-level circuit complexity.
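For context, here is the classical reduction this line of work speeds up (my addition; this is the textbook cubic-time baseline, not the talk's algorithm): APSP reduces to repeated min-plus squaring of the weight matrix:

```python
import numpy as np

INF = float("inf")

def min_plus(A, B):
    """Naive O(n^3) min-plus (tropical) product: C[i][j] = min_k A[i][k] + B[k][j]."""
    n = A.shape[0]
    C = np.full((n, n), INF)
    for k in range(n):
        C = np.minimum(C, A[:, k][:, None] + B[k, :][None, :])
    return C

def apsp(W):
    """All-pairs shortest paths by repeated min-plus squaring: O(n^3 log n)."""
    D = W.copy()
    steps = 1
    while steps < W.shape[0] - 1:   # paths need at most n-1 edges
        D = min_plus(D, D)
        steps *= 2
    return D

W = np.array([[0, 3, INF, 7],
              [8, 0, 2, INF],
              [5, INF, 0, 1],
              [2, INF, INF, 0]], dtype=float)
print(apsp(W))
```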
07/23/2014 Shachar Lovett (University of California, San Diego)
List decoding Reed-Muller codes over small fields
The list decoding problem for a code asks for the maximal radius up to which any ball of that radius contains only a constant number of codewords. The list decoding radius is not well understood even for well-studied codes, like Reed-Solomon or Reed-Muller codes.

Fix a finite field $\mathbb{F}$. The Reed-Muller code $\mathrm{RM}_{\mathbb{F}}(n,d)$ is defined by $n$-variate degree-$d$ polynomials over $\mathbb{F}$. In this work, we study the list decoding radius of Reed-Muller codes over a constant prime field $\mathbb{F}=\mathbb{F}_p$, constant degree $d$ and large $n$. We show that the list decoding radius is equal to the minimal distance of the code. That is, if we denote by $\delta(d)$ the normalized minimal distance of $\mathrm{RM}_{\mathbb{F}}(n,d)$, then the number of codewords in any ball of radius $\delta(d)-\varepsilon$ is bounded by $c=c(p,d,\varepsilon)$ independent of $n$. This resolves a conjecture of Gopalan-Klivans-Zuckerman [STOC 2008], who among other results proved it in the special case of $\mathbb{F}=\mathbb{F}_2$; and extends the work of Gopalan [FOCS 2010] who proved the conjecture in the case of $d=2$.

We also analyse the number of codewords in balls of radius exceeding the minimal distance of the code. For $e \leq d$, we show that the number of codewords of $\mathrm{RM}_{\mathbb{F}}(n,d)$ in a ball of radius $\delta(e) - \varepsilon$ is bounded by $\exp(c \cdot n^{d-e})$, where $c=c(p,d,\varepsilon)$ is independent of $n$. The dependence on $n$ is tight. This extends the work of Kaufman-Lovett-Porat [IEEE Inf. Theory 2012] who proved similar bounds over $\mathbb{F}_2$.

The proof relies on several new ingredients: an extension of the Frieze-Kannan weak regularity to general function spaces, higher-order Fourier analysis, and an extension of the Schwarz-Zippel lemma to compositions of polynomials.
Joint work with Abhishek Bhowmick.
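A brute-force toy illustration of the quantity in question (my addition): for the tiny code $\mathrm{RM}_{\mathbb{F}_2}(4,1)$, whose codewords are evaluations of affine functions and whose normalized minimum distance is 1/2, one can simply count codewords inside Hamming balls of varying radius around a random word:

```python
import itertools
import numpy as np

m = 4                                   # number of variables
pts = np.array(list(itertools.product([0, 1], repeat=m)))   # 16 eval points

# RM_{F_2}(4,1): evaluations of all 32 affine functions a0 + <a, x>.
codewords = []
for a0 in (0, 1):
    for a in itertools.product([0, 1], repeat=m):
        codewords.append((a0 + pts @ np.array(a)) % 2)
codewords = np.array(codewords)

rng = np.random.default_rng(1)
center = rng.integers(0, 2, 16)
dists = np.sum(codewords != center, axis=1)   # Hamming distances to center
for radius in (3, 5, 7, 9):
    print(f"radius {radius}: {np.sum(dists <= radius)} codewords in ball")
```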
07/16/2014 Noam Nisan (Hebrew University of Jerusalem)
Economic Efficiency Requires Interaction
We study the necessity of interaction between individuals for obtaining approximately efficient allocations. The role of interaction in markets has received significant attention in economic thinking, e.g. in Hayek’s 1945 classic paper.
We consider this problem in the framework of simultaneous communication complexity. We analyze the amount of simultaneous communication required for achieving an approximately efficient allocation. In particular, we consider two settings: combinatorial auctions with unit demand bidders (bipartite matching) and combinatorial auctions with sub-additive bidders. For both settings we first show that non-interactive systems have enormous communication costs relative to interactive ones. On the other hand, we show that limited interaction enables us to find approximately efficient allocations.
Joint work with Shahar Dobzinski and Sigal Oren.
07/09/2014 Anupam Gupta (CMU)
The Power of Recourse: Online Spanning Trees with Alterations
(UNUSUAL TIME: Talk will start at 3 PM)
In the online spanning (or Steiner) tree problem, we are given a metric space, vertices from this metric space arrive online, and we want to buy connections to maintain a tree spanning all arrived points at the least possible cost. It is known that the greedy algorithm maintains an O(log n) competitive spanning tree, and this is optimal.
But suppose decisions of the online algorithm are not irrevocable. When a new vertex arrives, in addition to adding an edge to connect the newly arrived terminal, we are allowed to exchange a small number of previously bought edges for other edges. Can we maintain a better solution? We show a positive answer to this problem: even allowing us to change a single edge at each point in time can give us a constant competitive tree.
Joint work with Amit Kumar and Albert Gu.
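A small experiment for intuition (my addition, not the paper's algorithm): compare the plain greedy online tree, which connects each arrival to its nearest predecessor and performs no swaps, against the offline minimum spanning tree that recourse tries to approach:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist

rng = np.random.default_rng(2)
pts = rng.random((200, 2))                      # arrival order = row order
D = squareform(pdist(pts))

# Greedy online tree: connect each new point to the closest earlier point.
greedy_cost = sum(D[i, :i].min() for i in range(1, len(pts)))

mst_cost = minimum_spanning_tree(D).sum()       # offline optimum benchmark
print(f"greedy online: {greedy_cost:.3f}, offline MST: {mst_cost:.3f}, "
      f"ratio {greedy_cost / mst_cost:.2f}")
```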
07/02/2014 David Zuckerman (UT Austin)
Pseudorandomness from Shrinkage
One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don’t know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs that are essentially best possible without in turn improving the lower bounds.
More specifically, say that a circuit family has shrinkage exponent Gamma if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Gamma + o(1)}. Our PRG uses a seed of length s^{1/(Gamma + 1) + o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths:
1. For de Morgan formulas, seed length s^{1/3+o(1)};
2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)};
3. For read-once de Morgan formulas, seed length s^{.234…};
4. For branching programs of size s, seed length s^{1/2+o(1)}.
The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s=O(n).
Joint work with Russell Impagliazzo and Raghu Meka.
6/25/2014 Rocco Servedio (Columbia University)
A complexity theoretic perspective on unsupervised learning
We introduce and study a new type of learning problem for probability distributions over the n-dimensional Boolean hypercube. A learning problem in our framework is defined by a class C of Boolean functions over the hypercube; in our model the learning algorithm is given uniform random satisfying assignments of an unknown function in C and its goal is to output a high-accuracy approximation of the uniform distribution over the space of satisfying assignments for f. We discuss connections between the existence of efficient learning algorithms in this framework and evading the “curse of dimensionality” for more traditional density estimation problems.
As our main results, we show that the two most widely studied classes of Boolean functions in computational learning theory — linear threshold functions and DNF formulas — have efficient distribution learning algorithms in our model. Our algorithm for linear threshold functions runs in time poly(n,1/epsilon) and our algorithm for polynomial-size DNF runs in quasipolynomial time. On the other hand, we also prove complementary hardness results which show that under cryptographic assumptions, learning monotone 2-CNFs, intersections of 2 halfspaces, and degree-2 PTFs are all hard. This shows that our algorithms are close to the limits of what is efficiently learnable in this model.
Joint work with Anindya De and Ilias Diakonikolas.
6/18/2014 Haim Kaplan (Tel-Aviv University)
Adjacency labeling schemes and induced-universal graphs
We describe a way of assigning *labels* to the vertices of any undirected graph on up to $n$ vertices, each composed of $n/2+O(1)$ bits, such that given the labels of two vertices, and no other information regarding the graph, it is possible to decide whether or not the vertices are adjacent in the graph. This is optimal, up to an additive constant, and constitutes the first improvement in almost 50 years of an $n/2+O(\log n)$ bound of Moon. As a consequence, we obtain an *induced-universal* graph for $n$-vertex graphs containing only $O(2^{n/2})$ vertices, which is optimal up to a multiplicative constant, solving an open problem of Vizing from 1968. We obtain similar tight results for directed graphs, tournaments and bipartite graphs.
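For intuition, a sketch of the naive scheme this improves on (my addition, not the paper's construction): label vertex i with its ID plus one adjacency bit per lower-numbered vertex; two labels alone then decide adjacency, at an average label length of roughly n/2 bits (but worst case n - 1, which is what an n/2 + O(1) worst-case bound beats):

```python
def make_labels(adj):
    """Naive adjacency labels: label of vertex i = (i, bits for all j < i)."""
    n = len(adj)
    return [(i, [adj[i][j] for j in range(i)]) for i in range(n)]

def adjacent(label_u, label_v):
    """Decide adjacency from two labels alone, no other graph access."""
    (u, bits_u), (v, bits_v) = label_u, label_v
    if u == v:
        return False
    hi, lo = (u, v) if u > v else (v, u)
    bits = bits_u if hi == u else bits_v
    return bool(bits[lo])

# Path graph 0-1-2-3 as an adjacency matrix.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
labels = make_labels(adj)
print(adjacent(labels[0], labels[1]), adjacent(labels[0], labels[2]))  # True False
```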
6/17/2014 Amit Sahai (UCLA)
NOTE UNUSUAL DATE AND TIME: Tuesday, 2:30-3:30 in Titan
Advances in Program Obfuscation
Is it possible for someone to keep a secret, when an adversary can see how every neuron in her brain behaves, even while she is thinking about her secret? The problem of Program Obfuscation asks the analogue of this question for software: can we write software that uses a secret built into its code, and yet keep this secret safe from an adversary that obtains the entire source code of the software? The secret should remain hidden no matter what the adversary does with the code, including modifying or executing it. For decades, achieving this goal for general programs remained elusive.
This changed with our recent work (FOCS 2013), which gave the first candidate constructions of secure general-purpose program obfuscation. These new constructions are based on novel mathematical ideas. However, especially because these ideas are new, we must ask the question: How can we gain confidence in the security of these new constructions? This talk will discuss recent works that address this question.
6/11/2014 Bernhard Haeupler (MSR)
Making Conversations Robust to Noise Made Easy and Rate Optimal
Interactive Coding schemes add redundancy to any interactive protocol (= conversation) such that it can be correctly executed over a noisy channel which can corrupt any eps fraction of the transmitted symbols.
The fact that coding schemes that tolerate a constant fraction of errors even exist is a surprising result of Schulman. His and most subsequent coding schemes achieve this feat by using tree-codes, intricate structures proven to exist by Schulman but for which no efficient constructions or encoding/decoding procedures are known. Until the recent work of [Kol, Raz, STOC13], all these schemes required a large (unspecified) constant factor overhead in their communication. Kol and Raz showed that if errors are random then a rate approaching one can be achieved. In particular they prove a 1 - Theta(sqrt(H(eps))) upper and lower bound.
This talk will show that one can break this bound even for adversarial errors. In particular, we give coding schemes that achieve a rate of 1 – Theta(\sqrt{\eps}) for random or oblivious channels, and a rate of 1 – Theta(\sqrt{eps log log 1/eps}) for fully adversarial channels. We conjecture these bounds to be tight. The coding schemes are extremely natural and simple in their design and essentially consist of both parties having their original conversation as-is (no coding!!), interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack.
This will be an interactive board talk. 😉 Come and participate!
6/4/2014 No Theory Seminar (STOC week)
5/28/2014 Paris Siminelakis (Stanford University)
Convex Random Graph Models
Realistic models of random graphs are important both for design purposes (predicting the average performance of different protocols/algorithms) as well as for network inference (extracting latent group membership, clustering, etc.). There are by now thousands of papers defining different random graph models. I will present a principled framework for deriving random graph models by dramatically generalizing the approach of Erdos-Renyi in defining their classic G(n,m) model. Our central principle is to study the uniform measure over symmetric sets of graphs, i.e., sets that are invariant under a group of transformations. Our main contribution is to derive natural sufficient conditions under which the uniform measure over a symmetric set of graphs (i) collapses asymptotically to a distribution where edges appear independently, and (ii) allows the probability of each edge to be computed from the property through the solution of an optimization problem. Time permitting I will also present an application of this work in resolving the fragility of Navigable Graphs under Kleinberg's augmentation scheme.
Based on joint work with Dimitris Achlioptas.
5/21/2014 Chen Avin (Ben-Gurion University. Visiting Professor at Brown and ICERM)
Homophily and the Emergence of a Glass Ceiling Effect in Social Networks
The glass ceiling may be defined as “the unseen, yet unbreakable barrier that keeps minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements”. Although undesirable, it is well documented that many societies and organizations exhibit a glass ceiling.
In this paper we formally define and study the glass ceiling effect in social networks and provide a natural mathematical model that (partially) explains it. We propose a biased preferential attachment model that has two types of nodes, and is based on three well known social phenomena: i) minority of females in the network, ii) rich get richer (preferential attachment) and iii) homophily (liking those who are the same). We prove that our model exhibits a strong glass ceiling effect and that all three conditions are necessary, i.e., removing any one of them will cause the model not to exhibit a glass ceiling.
Additionally we present empirical evidence of a student-mentor network of researchers (based on DBLP data) that exhibits all the above properties: female minority, preferential attachment, homophily and glass ceiling.
Joint work with: Barbara Keller, Zvi Lotker, Claire Mathieu, David Peleg, Yvonne-Ann Pignolet
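A rough simulation sketch in the spirit of the model described above (my addition; the minority fraction, homophily strength, and acceptance rule are illustrative guesses, not the paper's exact definitions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, h = 5000, 0.3, 0.7     # nodes, minority fraction, homophily (illustrative)

color = (rng.random(n) < r).astype(int)     # 1 = minority
deg = np.ones(n)
for t in range(2, n):
    while True:
        # Preferential attachment: pick an existing node with prob ~ degree.
        j = rng.choice(t, p=deg[:t] / deg[:t].sum())
        # Homophily: accept same-color links always, cross-color w.p. 1 - h.
        if color[j] == color[t] or rng.random() > h:
            break
    deg[j] += 1
    deg[t] += 1

top = np.argsort(deg)[-100:]                # the "upper rungs"
print(f"minority share overall: {color.mean():.2f}, "
      f"among top-100 degrees: {color[top].mean():.2f}")
```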
5/14/2014 Swastik Kopparty (Rutgers University)
Simultaneous approximation for Constraint Satisfaction Problems
Given k collections of 2SAT clauses on the same set of variables V, can we find one assignment to the variables in V that satisfies a large fraction of clauses from each collection? We consider such simultaneous constraint satisfaction problems, and design the first nontrivial approximation algorithms for them.
Our main result is that for every CSP F, when k = log^{O(1)}(n), there is a polynomial time constant factor “Pareto” approximation algorithm for k simultaneous MAX-F-CSP instances. In contrast, we show that for every nontrivial Boolean CSP, when k = log^{\Omega(1)}(n), no nonzero approximation factor for k simultaneous MAX-F-CSP instances can be achieved in polynomial time (assuming the Exponential Time Hypothesis).
Joint work with Amey Bhangale and Sushant Sachdeva
5/7/2014 Shubhangi Saraf (Rutgers University)
Lower bounds for bounded depth arithmetic circuits
In the last few years there have been several exciting results related to depth reduction of arithmetic circuits and strong lower bounds for bounded depth arithmetic circuits. I will survey these results and highlight some of the main challenges and open directions in this area.
4/23/2014 Motty Perry (Hebrew University of Jerusalem)
Implementing the “Wisdom of the Crowd”
We study a novel mechanism design model in which agents each arrive sequentially and choose one action from a set of actions with unknown rewards. The information revealed by the principal affects the incentives of the agents to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare. One interpretation of our result is the implementation of what is known as the “wisdom of the crowd”. This topic has become increasingly relevant with the rapid spread of the Internet over the past decade.
Joint with Ilan Kremer and Yishay Mansour
4/16/2014 Greg Valiant (Stanford)
An Automated Inequality Prover and Instance Optimal Identity Testing
This talk will have two sections. In the first section, I'll discuss the problem of verifying the identity of a distribution: Given the description of a distribution, p, over a discrete support, how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p=q from the case that the total variation distance (L1 distance) is at least eps? In joint work with Paul Valiant, we resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p,eps) on distributions and error parameters, such that our tester distinguishes the two cases using f(p,eps) samples with success probability 2/3, but no tester can distinguish the case that p=q from the case that the total variation distance is at least c*eps when given c' f(p,eps) samples. The function f(p,eps) is upper bounded by the 2/3-norm of p, divided by eps^2, but is more complicated. This result significantly generalizes and tightens previous results: since distributions of support at most n have 2/3-norm bounded by sqrt(n), this result immediately shows that for such distributions, O(sqrt(n)/eps^2) samples suffice, matching the (tight) results for the case that p is the uniform distribution over support n.
The second part of the talk will focus on the main analysis tool that we leverage to obtain the testing results. The analysis of our simple testing algorithm involves several hairy inequalities; to enable this analysis, we give a complete characterization of a general class of inequalities—generalizing Cauchy-Schwarz, Holder’s inequality, and the monotonicity of L_p norms. Our characterization is of a, perhaps, non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. We do not believe such a characterization has appeared in the literature, and have found its computational nature extremely useful.
4/02/2014 Thomas Vidick (Simons Institute)
Fully device independent quantum key distribution
Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted?
This central question in quantum cryptography dates back to the early nineties when the challenge of achieving device independent quantum key distribution, or DIQKD, was first formulated.
In this talk I will give a positive answer to this challenge by exhibiting a robust protocol for DIQKD and rigorously proving its security. The proof of security is based on a fundamental property of quantum entanglement, called monogamy. The resulting protocol is robust: while assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary's laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices.
The talk will be at an introductory level and accessible without prior knowledge of quantum information or cryptography.
Based on joint work with Umesh Vazirani.
3/26/2014 No Seminar
3/19/2014 No Seminar
3/12/2014 Gilad Tsur (TAU)
Fast Affine Template Matching
In this work we consider approximately matching a template to a grayscale image under affine transformations. We give theoretical results and find that the algorithm is surprisingly successful in practice.
Given a grayscale template M_1 of dimensions n_1 x n_1 and a grayscale image M_2 of dimensions n_2 x n_2, our goal is to find an affine transformation that maps pixels from M_1 to pixels in M_2 minimizing the sum-of-absolute-differences error. We present sublinear algorithms that give an approximate result for this problem, that is, we perform this task while querying as few pixels from both images as possible, and give a transformation that comes close to minimizing the difference.
Our major contribution is an algorithm for a natural family of images, namely, smooth images. We consider an image smooth when the total difference between neighboring pixels is O(n). For such images we provide an approximation of the distance between the images to within an additive error of epsilon using a number of queries depending polynomially only on 1/epsilon and on n_2/n_1.
The implementation of this sublinear algorithm works surprisingly well. We performed several experiments on three different datasets, and got very good results, showing resilience to noise and the ability to match real-world templates to images.
Joint work with Simon Korman (TAU), Daniel Reichman (WIS) and Shai Avidan(TAU).
3/5/2014 No Seminar
2/26/2014 No Seminar
2/21/2014 Anup Rao (University of Washington)
NOTE UNUSUAL DAY AND TIME (Thursday, 10:30 AM)
Circuits with Large Fan-In
We consider boolean circuits where every gate in the circuit may compute an arbitrary function of k other gates, for a parameter k. We give an explicit function f : {0, 1}^n → {0, 1} that requires at least Ω(log^2 n) non-input gates in this model when k = 2n/3. When the circuit is restricted to being depth 2, we prove a stronger lower bound of n^Ω(1), and when it is restricted to being a formula, our lower bound is strengthened to Ω(n^2/(k log n)) gates.
Our model is connected to some well known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the Number-On-Forehead model in communication complexity, or for bounded depth circuits in AC0, or for bounded depth monotone circuits, or extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for the best known time-space tradeoff for oblivious branching programs.
Joint work with Pavel Hrubes.
2/5/2014 Mariana Raykova (SRI)
Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits
In this work, we study indistinguishability obfuscation and functional encryption for general circuits:
Indistinguishability obfuscation requires that given any two equivalent circuits $C_0$ and $C_1$ of similar size, the obfuscations of $C_0$ and $C_1$ should be computationally indistinguishable.
In functional encryption, ciphertexts encrypt inputs $x$ and keys are issued for circuits $C$. Using the key $SK_C$ to decrypt a ciphertext $CT_x = \mathrm{Enc}(x)$ yields the value $C(x)$ but does not reveal anything else about $x$. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.
We give constructions for indistinguishability obfuscation and functional encryption that supports all polynomial-size circuits. We accomplish this goal in three steps:
We describe a candidate construction for indistinguishability obfuscation for $NC^1$ circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.
We show how to use indistinguishability obfuscation for $NC^1$ together with Fully Homomorphic Encryption (with decryption in $NC^1$) to achieve indistinguishability obfuscation for all circuits.
Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.
Joint work with Sanjam Garg, Craig Gentry, Shai Halevi, Amit Sahai, Brent Waters
1/22/2014 George Varghese (Microsoft Research)
Reconciling Differences
If you and I both have a million song titles, of which almost all are the same, how can we efficiently communicate the songs that are different? I will describe a new and practical algorithm (joint work with D. Eppstein, M. Goodrich, and F. Uyeda) for computing the set difference using communication proportional to the difference, linear computation, and small latency. A key component is a new estimator for the set difference that outperforms earlier estimators such as MinWise sketches for small values of the set difference. Potential applications include trading blocks in a peer-to-peer environment, link state routing and data deduplication. I will show that the similarity to the “peeling algorithm” used in say Tornado codes is not surprising because there is a reduction from set difference to coding.
In the second part I will describe generalizations to reconciling sequences under the edit distance metric (joint work with J. Ullman), and to reconciling sets on graphs (a generalization of the celebrated rumor spreading problem, joint work with N. Goyal and R. Kannan). A “Steiner” version of the graph problem suggests new network coding problems. If time permits, I will describe a simple connection that shows that the basic data structure can be used to design new error correction codes that are efficiently decodable (joint work with M. Mitzenmacher). I will describe some open problems in this space.
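A minimal sketch of the peeling idea behind such difference digests (my addition; a simplified invertible-Bloom-lookup-table-style structure, not necessarily the paper's exact construction): each party folds its set into a small table of (count, XOR, checksum) cells; subtracting the two tables cancels all shared elements, and any cell left holding a single item can be peeled out to recover the difference:

```python
import hashlib

M, K = 64, 3   # table cells and hash copies (sized for a small difference)

def hval(x, salt):
    return int(hashlib.sha256(f"{salt}:{x}".encode()).hexdigest(), 16)

def positions(x):
    return [hval(x, i) % M for i in range(K)]

def digest(items):
    """Each cell holds (count, xor of values, xor of value checksums)."""
    t = [[0, 0, 0] for _ in range(M)]
    for x in items:
        for pos in positions(x):
            t[pos][0] += 1
            t[pos][1] ^= x
            t[pos][2] ^= hval(x, "chk")
    return t

def peel(ta, tb):
    """Subtract digests; repeatedly extract cells containing one item."""
    cells = [[a0 - b0, a1 ^ b1, a2 ^ b2]
             for (a0, a1, a2), (b0, b1, b2) in zip(ta, tb)]
    diff, progress = set(), True
    while progress:
        progress = False
        for cnt, x, chk in list(cells):
            if abs(cnt) == 1 and chk == hval(x, "chk"):   # pure cell
                diff.add(x)
                for pos in positions(x):                  # peel it out
                    cells[pos][0] -= cnt
                    cells[pos][1] ^= x
                    cells[pos][2] ^= hval(x, "chk")
                progress = True
                break
    return diff

A = set(range(1, 1001))
B = (A - {17, 99}) | {5000}
print(peel(digest(A), digest(B)))   # expected: {17, 99, 5000}
```

Note that the digest size scales with the difference, not with the set sizes, which is exactly the communication behavior described above.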
1/15/2014 Iftach Haitner (Tel-Aviv University)
Coin Flipping of Any Constant Bias Implies One-Way Functions
We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499), implies the existence of one way functions. This improves upon a recent result of Haitner and Omri [FOCS ’11], who proved this implication for protocols with bias ~.207. Unlike the result of Haitner and Omri, our result holds also for weak coin-flipping protocols. Joint work with Itay Berman and Aris Tentes.
*** Winter Break 11/21/2013-1/14/2014***
11/20/2013 Daniel Reichman (Weizmann Institute, Israel)
Smoothed analysis on connected graphs
The main paradigm of smoothed analysis on graphs suggests that for any large graph G in a certain class of graphs, perturbing the edges of G slightly at random (usually adding a few random edges to G) typically results in a graph having much nicer properties.
In this talk we discuss smoothed analysis on trees, or equivalently on connected graphs. A connected graph G on n vertices can be a very bad expander, can have very large diameter and very high mixing time, and possibly has no long paths. The situation changes dramatically when epsilon n random edges are added on top of G: the resulting graph G* has with high probability the following properties:
– its edge expansion is at least c/log n;
– its diameter is O(log n);
– its vertex expansion is at least c/log n;
– it contains a path of linear length;
– its mixing time is O(log^2 n)
(the last three items assume the base graph G has bounded degree). All of the above estimates are asymptotically tight. Joint work with Michael Krivelevich (Tel Aviv) and Wojciech Samotij (Tel Aviv/Cambridge).
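A small numerical experiment (ours, not from the paper) illustrating the diameter item: the base graph is a path, the worst case for diameter, and the perturbation adds \epsilon n random edges. The graph size and \epsilon are arbitrary choices.

```python
# A path on n vertices has diameter n-1; adding epsilon*n random edges
# typically collapses distances to O(log n), as the abstract states.
import random
from collections import deque

def eccentricity(adj, s):
    """Largest BFS distance from s; satisfies ecc <= diameter <= 2*ecc."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

n, eps = 10_000, 0.1
adj = {i: set() for i in range(n)}
for i in range(n - 1):                  # the base tree G: a path
    adj[i].add(i + 1); adj[i + 1].add(i)
for _ in range(int(eps * n)):           # the perturbation: eps*n random edges
    u, v = random.sample(range(n), 2)
    adj[u].add(v); adj[v].add(u)

print("diameter of G:", n - 1)
print("eccentricity of vertex 0 in G*:", eccentricity(adj, 0))
```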
11/13/2013 Ankit Sharma (CMU)
Multiway Cut
Multiway cut is a generalization of the min-cut problem to the setting of more than two terminals. Formally, given a set of terminals in a graph, the objective is to assign each vertex to some terminal while minimizing the number of ‘cut’ edges — edges whose endpoints are assigned to different terminals. The special case of this problem with two terminals is the “max-flow/min-cut problem”. With three or more terminals, the problem is NP-hard.
The problem has a rich history in approximation algorithms. Starting with a 2-approximation by Dahlhaus et al. in 1994, the problem received a major improvement in a paper by Calinescu et al., who presented a relaxation of the problem and a 1.5-approximation to it. It was subsequently shown that it is UGC-hard to beat the integrality gap of this relaxation. The rounding schemes for the relaxation were also subsequently improved, and in a recent result (STOC 2013), Buchbinder et al. introduced a new rounding scheme that gave a 1.32388-approximation. In work with Jan Vondrak, we first consider the best combination of the rounding schemes used by Buchbinder et al., and show that it is limited to achieving a factor of 1.30902 (= (3+sqrt(5))/4). We then introduce a new rounding scheme and show that the new combination of rounding schemes achieves an approximation factor close to 1.297. Under the UGC, it is NP-hard to go below 1.14.
This is a joint work with Jan Vondrak.
Bio: Ankit Sharma is a graduate student at Carnegie Mellon University. He is advised by Avrim Blum and Anupam Gupta. His research focuses on approximation algorithms and algorithmic game theory.
11/6/2013 Ashwinkumar Badanidiyuru Varadaraja (Cornell)
Bandits with Knapsacks
Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in machine learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising, to dynamic pricing. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called “bandits with knapsacks”, that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors.
Joint work with Robert Kleinberg and Alex Slivkins. Appeared at FOCS 2013.
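To make the model concrete, here is a toy deterministic instance (an illustration of the problem setup, not of the paper's algorithms). The arms, rewards, and costs are made up; the point is the feature highlighted in the abstract, that a policy mixing two arms can beat the best fixed arm.

```python
# Bandits with knapsacks, in miniature: each pull of an arm yields a reward
# and consumes budget, and play stops when time T or budget B runs out.
T, B = 100, 50
arms = {"A": (1.0, 1.0),   # (reward per pull, budget consumed per pull)
        "B": (0.4, 0.0)}

def run(policy):
    """Play the arm chosen by policy(t) until time or budget is exhausted."""
    budget, total = B, 0.0
    for t in range(T):
        r, c = arms[policy(t)]
        if c > budget:
            break
        budget -= c
        total += r
    return total

best_fixed = max(run(lambda t, a=a: a) for a in arms)
mixed = run(lambda t: "A" if t % 2 == 0 else "B")  # alternate the two arms
print(f"best fixed arm: {best_fixed}, mixed policy: {mixed}")  # 50.0 vs 70.0
```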
10/30/2013 NO SEMINAR (FOCS)
10/25/2013 Adam Smith (Penn State)
NOTE UNUSUAL LOCATION (TELSTAR) AND TIME (1:30 PM)
Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical Data Privacy
In this talk, I will present a new framework for defining privacy in statistical databases that enables reasoning about and exploiting adversarial uncertainty about the data. Roughly, our framework requires indistinguishability of the real world in which a mechanism is computed over the real dataset, and an ideal world in which a simulator outputs some function of a “scrubbed” version of the dataset (e.g., one in which an individual user’s data is removed). In each world, the underlying dataset is drawn from the same distribution in some class (specified as part of the definition), which models the adversary’s uncertainty about the dataset.
We argue that our framework provides meaningful guarantees in a broader range of settings as compared to previous efforts to model privacy in the presence of adversarial uncertainty. We present several natural, “noiseless” mechanisms that satisfy our definitional framework under realistic assumptions on the distribution of the underlying data.
Joint work with Raef Bassily, Adam Groce and Jonathan Katz, to appear at FOCS 2013.
10/24/2013 Omri Weinstein (Princeton)
NOTE UNUSUAL LOCATION (LUNA) and TIME (10:30 AM)
Information complexity and applications
Over the past three decades, communication complexity has found applications in nearly every area of computer science, and constitutes one of the few known techniques for proving unconditional lower bounds. Developing tools in communication complexity is therefore a promising approach for making progress in other computational models such as circuit complexity, streaming, data structures, and privacy, to mention a few.
One striking example of such a tool is information theory, introduced by Shannon in the late 1940’s in the context of the one-way data transmission problem. Shannon’s work revealed the intimate connection between information and communication, namely, that the amortized transmission cost of a random message is equal to the amount of information it contains. This compression theory, however, does not readily carry over to interactive setups, where two (or more) parties must engage in a multi-round conversation to accomplish a task.
The goal of our ongoing research is to extend this theory, develop the right tools, and understand how information behaves in interactive setups, such as the communication complexity model. In this introductory talk, I will give an overview of Information Complexity, an interactive analogue of Shannon’s theory. I will describe some of the main open problems in this emerging field, and some of the interesting applications we found, including an exact bound on the communication complexity of the Set Disjointness function (~0.48n), and how information has helped us understand the limits of parallel computation.
10/23/2013 Jonathan Ullman (Harvard)
NOTE UNUSUAL LOCATION (LUNA)
(time is unchanged)
Fingerprinting Codes and the Price of Approximate Differential Privacy
We show new lower bounds on the sample complexity of (eps, delta)-differentially private algorithms that accurately answer large sets of counting queries. A counting query on a database D in ({0,1}^d)^n has the form “What fraction of the individual records in the database satisfy the property q?” We show that in order to answer an arbitrary set of >> nd counting queries, Q, on D to within error +/- alpha it is necessary that n > Omega~(d^{1/2} log|Q| / alpha^2 eps). This bound is optimal up to poly-logarithmic factors, as demonstrated by the Private Multiplicative Weights algorithm of Hardt and Rothblum (FOCS’10). In particular, our lower bound is the first to show that the sample complexity required for accuracy and (eps, delta)-differential privacy is asymptotically larger than what is required merely for accuracy, which is O(log|Q| / alpha^2). In addition, we show that our lower bound holds for the specific case of k-way marginals (where |Q|~(2d)^k) when alpha is a constant.
Our results rely on the existence of short fingerprinting codes (Boneh-Shaw, CRYPTO’95), which we show are closely connected to the sample complexity of differentially private data release. We also give a new method for combining certain types of sample complexity lower bounds into stronger lower bounds.
Joint work with Mark Bun and Salil Vadhan.
10/16/2013 Justin Thaler (Simons Institute, UC Berkeley)
Time-Optimal Interactive Proofs for Circuit Evaluation
Considerable attention has recently been devoted to the development of protocols for verifiable computation. These protocols enable a computationally weak verifier to offload computations to a powerful but untrusted prover, while providing the verifier with a guarantee that the prover performed the computations correctly. Despite substantial progress, existing implementations fall short of full practicality, with the main bottleneck typically being the extra effort required by the prover to return an answer with a guarantee of correctness.
I will describe very recent work that addresses this bottleneck by substantially reducing the prover’s runtime in a powerful interactive proof protocol originally due to Goldwasser, Kalai, and Rothblum (GKR), and previously refined and implemented by Cormode, Mitzenmacher, and Thaler.
This talk will include a detailed technical overview of the GKR protocol and the algorithmic techniques underlying its efficient implementation.
10/9/2013 Nikhil Srivastava (MSR India)
(This will be a two-hour talk with a 15-minute break in between. The first part will be mostly self-contained.)
Interlacing Families, Ramanujan Graphs, and the Kadison-Singer Problem
We introduce a new type of existence argument based on random polynomials and use it to prove the following two results.
(1) Expander graphs are very sparse graphs which are nonetheless very well-connected, in the sense that their adjacency matrices have large spectral gap. There is a limit to how large this gap can be for a d-regular graph, and graphs which achieve the limit are called Ramanujan graphs. A beautiful number-theoretic construction of Lubotzky-Phillips-Sarnak and Margulis shows that infinite families of Ramanujan graphs exist for every d=p+1 where p is prime, leaving open the question of whether they exist for other degrees. We prove that there exist infinite families of bipartite Ramanujan graphs of every degree greater than 2 (a numerical illustration of the spectral threshold follows this abstract). We do this by proving a variant of a conjecture of Bilu and Linial about the existence of good 2-lifts of every graph.
(2) The Kadison-Singer problem is a question in operator theory which arose while trying to make certain assertions in Dirac’s formulation of quantum mechanics mathematically rigorous. Over several decades, this question was shown to be equivalent to several discrepancy-type conjectures about finite matrices, with applications in signal processing, harmonic analysis, and computer science. We prove a strong variant of the conjecture due to Nik Weaver, which says that every set of vectors satisfying some mild conditions can be divided into two sets each of which approximates the whole set spectrally.
Both proofs are based on two significant ingredients: a new existence argument, which reduces the existence of the desired object to bounding the roots of the expected characteristic polynomial of a certain random matrix, and systematic techniques for proving sharp bounds on the roots of such polynomials. The techniques are mostly elementary and based on tools from the theory of real stable polynomials.
Joint work with Adam Marcus and Dan Spielman.
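As a quick numerical companion to item (1) above: the sketch below builds a random d-regular bipartite graph from d random matchings and compares its second-largest eigenvalue to the Ramanujan threshold 2*sqrt(d-1). Random graphs typically land near the threshold; the paper's contribution is proving that graphs meeting it exist for every degree, via 2-lifts rather than this random model. Sizes and seeds here are arbitrary.

```python
# Random d-regular bipartite graph as a union of d random perfect matchings;
# the eigenvalues of the full bipartite adjacency matrix are +/- the singular
# values of the n x n block B, so we inspect those.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 4

B = np.zeros((n, n))
for _ in range(d):                  # add one random perfect matching per pass
    B[np.arange(n), rng.permutation(n)] += 1

svals = np.linalg.svd(B, compute_uv=False)  # sorted in decreasing order
print(f"d = {d}: top singular value {svals[0]:.3f} (= d), "
      f"second {svals[1]:.3f}, Ramanujan threshold {2*np.sqrt(d-1):.3f}")
```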
10/2/2013 Eli Gafni (UCLA)
Adaptive Register Allocation with a Linear Number of Registers
We give an adaptive algorithm in which processes use multi-writer, multi-reader registers to acquire exclusive write access to their own single-writer, multi-reader registers. It is the first such algorithm that uses a number of registers linear in the number of participating processes. Previous adaptive algorithms require Omega(n^{3/2}) registers.
Joint work with: Carole Delporte-Gallet (Paris 7), Hugues Fauconnier (Paris 7), and Leslie Lamport (MSR-SVC).
10/1/2013 Milan Vojnovic (MSR Cambridge)
NOTE UNUSUAL DAY (Tuesday) AND TIME (10:30AM)
Cooperation and Efficiency in Utility Maximization Games
We consider a framework for studying the effect of cooperation on the quality of outcomes in utility maximization games. This is a class of games that accommodates as a special case a game where individuals make strategic investments of effort across a set of available projects. A key feature of interest in such environments is the effect of strategic coalitional deviations on the value produced. In the talk, we shall discuss how the recently developed framework of smooth games allows one to derive price of anarchy bounds for utility maximization games. Specifically, we shall discuss a novel concept of coalitional smoothness and show how it implies strong price of anarchy bounds in utility maximization games.
This talk is based on joint works with Yoram Bachrach, Vasilis Syrgkanis and Eva Tardos.
9/25/2013 Vijay V. Vazirani (Georgia Tech)
Dichotomies in Equilibrium Computation: Markets Provide a Surprise
Equilibrium computation is among the most significant additions to the theory of algorithms and computational complexity in the last decade — it has its own character, quite distinct from the computability of optimization problems.
Our contribution to this evolving theory can be summarized in the following sentence: Natural equilibrium computation problems tend to exhibit striking dichotomies. The dichotomy for Nash equilibrium, showing a qualitative difference between 2-Nash and k-Nash for k > 2, has been known for some time. We establish a dichotomy for market equilibrium.
For this purpose, we need to define the notion of Leontief-free functions, which help capture the joint utility of a bundle of goods that are substitutes, e.g., bread and bagels. We note that when goods are complements, e.g., bread and butter, the classical Leontief function does a splendid job. Surprisingly enough, for the former case, utility functions had been defined only for special cases in economics, e.g., the CES utility function. A new min-max relation supports the claim that our notion is well-founded.
We were led to our notion from the high vantage point provided by an algorithmic approach to market equilibria.
Note: Joint work with Jugal Garg and Ruta Mehta.
9/18/2013 Jelani Nelson (Harvard)
OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings
An “oblivious subspace embedding” (OSE) is a distribution over matrices S such that for any low-dimensional subspace V, with high probability over the choice of S, ||Sx||_2 approximately equals ||x||_2 (up to 1+eps multiplicative error) for all x in V simultaneously. Sarlos in 2006 pioneered the use of OSE’s for speeding up algorithms for several numerical linear algebra problems. Problems that benefit from OSE’s include: approximate least squares regression, low-rank approximation, l_p regression, approximating leverage scores, and constructing good preconditioners.
We give a class of OSE distributions we call “oblivious sparse norm-approximating projections” (OSNAP) that yield matrices S with few rows that are also extremely sparse, yielding improvements over recent work in this area by Clarkson and Woodruff (STOC ’13). In particular, we show S can have O(d^2) rows and 1 non-zero entry per column, or even O(d^{1+gamma}) rows and poly(1/gamma) non-zero entries per column for any desired constant gamma>0. When applying the latter bound, for example, to the approximate least squares regression problem of finding x to minimize ||Ax – b||_2 up to a constant factor, where A is n x d for n >> d, we obtain an algorithm with running time O(nnz(A) + d^{omega + gamma}). Here nnz(A) is the number of non-zero entries in A, and omega is the exponent of square matrix multiplication.
Our main technical result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^{n x d} with orthonormal columns and random sparse S with appropriately chosen entries and sufficiently many rows, all singular values of SU lie in the interval [1-eps, 1+eps] with good probability.
Joint work with Huy Lê Nguyễn (Princeton).
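A minimal numerical sketch of the kind of speedup described above, using the simplest regime quoted in the abstract (O(d^2) rows, one nonzero per column). The dimensions and noise level are arbitrary demo choices, and this illustrates sparse subspace embeddings generally, not the paper's code.

```python
# Sketch-and-solve least squares: S has m rows and a single random +/-1
# entry per column, so forming S @ A costs O(nnz(A)) time.
import numpy as np

rng = np.random.default_rng(1)
n, d = 100_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

m = d * d                                  # O(d^2) rows, per the first bound
rows = rng.integers(0, m, size=n)          # the nonzero row index per column of S
signs = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, rows, signs[:, None] * A)    # accumulate S @ A without forming S
np.add.at(Sb, rows, signs * b)

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative residual blow-up:",
      np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```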
9/11/2013 Graham Cormode (University of Warwick)
Small summaries for Big Data
In dealing with big data, we often need to look at a small summary to get the big picture. Over recent years, many novel techniques have been developed which allow important properties of large distributions to be extracted from compact and easy-to-build summaries. This talk highlights some examples of different types of summarization: sampling, sketching, and special-purpose. It concludes by outlining the road ahead for further development and adoption of such summaries.
9/4/2013 Aleksander Madry (EPFL)
Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back
We describe a new way of employing electrical flow computations to solve the maximum flow and minimum s-t cut problems. This approach draws on ideas underlying path-following interior-point methods (IPMs) – a powerful tool in convex optimization – and exploits a certain interplay between maximum flows and bipartite matchings.
The resulting algorithm provides improvements over some long-standing running time bounds for the maximum flow and minimum s-t cut problems, as well as the closely related bipartite matching problem. Additionally, we establish a connection between the primal-dual structure of electrical flows and the convergence behavior of IPMs when applied to flow problems. This connection enables us to overcome the notorious Omega(sqrt(m))-iterations convergence barrier that all known interior-point methods suffer from.
8/28/2013 Shai Vardi (TAU)
Local Computation Algorithms and Local Mechanism Design
Abstract: The talk is divided into two parts. In the first part I will give an introduction to Local Computation Algorithms (LCAs). LCAs implement query access to a global solution to computational problems, using polylogarithmic time and space. I will also discuss how to construct LCAs using a reduction from online algorithms.
In the second part I will explore Local Mechanism Design – designing truthful mechanisms that run in polylogarithmic time and space. I will focus on local scheduling algorithms.
The talk is based on joint works with Noga Alon, Yishay Mansour, Ronitt Rubinfeld, Aviad Rubinstein, and Ning Xie.
8/26/2013 Kai-Min Chung (Institute of Information Science, Academia Sinica, Taiwan)
Interactive Coding, Revisited
How can we encode a communication protocol between two parties to become resilient to adversarial errors on the communication channel? This question dates back to the seminal works of Shannon and Hamming from the 1940’s, initiating the study of error-correcting codes (ECC). But even if we encode each message in the communication protocol with a “good” ECC, the error rate of the encoded protocol becomes poor (namely O(1/m) where m is the number of communication rounds). Towards addressing this issue, Schulman (FOCS’92, STOC’93) introduced the notion of interactive coding. We argue that whereas the method of separately encoding each message with an ECC ensures that the encoded protocol carries the same amount of information as the original protocol, this may no longer be the case when using interactive coding. In particular, the encoded protocol may completely leak a player’s private input, even if it would remain secret in the original protocol. Towards addressing this problem, we introduce the notion of knowledge-preserving interactive coding, where the interactive coding protocol is required to preserve the “knowledge” transmitted in the original protocol. Our main results are as follows.
• The method of separately applying ECCs to each message is essentially optimal: No knowledge-preserving interactive coding scheme can have an error rate of 1/m, where m is the number of rounds in the original protocol.
• If restricting to computationally-bounded (polynomial-time) adversaries, then assuming the existence of one-way functions (resp. subexponentially-hard one-way functions), for every eps > 0 there exists a knowledge-preserving interactive coding scheme with constant error rate and information rate n^eps (resp. 1/polylog(n)), where n is the security parameter; additionally, achieving an error rate of even 1/m requires the existence of one-way functions.
• Finally, even if we restrict to computationally-bounded adversaries, knowledge-preserving interactive coding schemes with constant error rate can have an information rate of at most o(1/log n). This result applies even to non-constructive interactive coding schemes.
Joint work with Rafael Pass and Sidharth Telang.
8/21/2013 Thomas Steinke (Harvard)
Pseudorandomness for Regular Branching Programs via Fourier Analysis
We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of $s^{1/2+o(1)}$ for arbitrary branching programs of size $s$). Our techniques also give seed length $n^{1/2+o(1)}$ for general oblivious, read-once branching programs of width $2^{n^{o(1)}}$, which is incomparable to the results of Impagliazzo et al.
Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $w$ has Fourier mass at most $(2w^2)^k$ at level $k$, independent of the length of the program.
Joint work with Omer Reingold and Salil Vadhan. See http://eccc.hpi-web.de/report/2013/086/
8/14/2013 Renato Paes Leme (MSR SV)
Efficiency Guarantees in Auctions with Budgets
In settings where players have limited access to liquidity, represented in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, the social welfare cannot be approximated by a factor better than the number of players. Therefore, the literature has mainly resorted to Pareto-efficiency as a way to achieve efficiency in such settings. While successful in some important scenarios, in many settings it is known either that there is exactly one incentive-compatible auction that always outputs a Pareto-efficient solution, or that no truthful mechanism can always guarantee a Pareto-efficient outcome. Traditionally, impossibility results can be avoided by considering approximations. However, Pareto-efficiency is a binary property (it is either satisfied or not), which does not allow for approximations.
In this paper we propose a new notion of efficiency, called \emph{liquid welfare}. This is the maximum amount of revenue an omniscient seller would be able to extract from a certain instance. We explain the intuition behind this objective function and show that it can be 2-approximated by two different auctions. Moreover, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare, and provide a truthful auction that attains this bound in a special case.
Importantly, the liquid welfare benchmark also overcomes impossibilities for some settings. While it is impossible to design Pareto-efficient auctions for multi-unit auctions where players have decreasing marginal values, we give a deterministic $O(\log n)$-approximation for the liquid welfare in this setting.
(Joint work with Shahar Dobzinski)
arxiv: http://arxiv.org/abs/1304.7048
8/7/2013 Abhradeep Guha Thakurta (Stanford and MSR SV)
(Near) Dimension Independent Differentially Private Learning
In this talk I will present some of the recent developments in the area of differentially private machine learning. More specifically, I will present results which provide exponential improvement (in terms of dimensions) over the error guarantees of existing differentially private learning algorithms (namely, output perturbation, objective perturbation and private follow the perturbed leader). In fact, for some of these algorithms the error bounds will be independent of any explicit dependence on dimensions.
I will also provide experimental results which support these error bounds.
Joint work with Prateek Jain from Microsoft Research India.
7/31/2013 Amitabh Trehan (Technion)
Networks that Fix Themselves aka Self-Healing Networks
Given a connected graph, two players play a turn-based game: First, the red guy removes a node (and therefore its adjoining edges too); then the blue guy adds edges between the remaining nodes. What edges should the blue guy add so that, over a whole run of the game, the network remains connected, no node gets too many new edges, and the distance between any pair of nodes (i.e. the network stretch) does not blow up by much? Now, imagine that the nodes in the graph are computers and the graph is a distributed network; the nodes themselves are the blue guys, but they do not know anybody beyond the nodes they share an edge with. Solving such problems is the essence of self-healing distributed networks.
We shall present the distributed self-healing model, which is especially applicable to reconfigurable networks such as peer-to-peer and wireless mesh networks, and present fully distributed algorithms that can ‘heal’ certain global and topological properties using only local information. ForgivingTree [PODC 2008] and Forgiving Graph [PODC 2009; DC 2012] use a ‘virtual graphs’ approach maintaining connectivity, low degree increase, and closeness of nodes (i.e. diameter, stretch). Xheal [PODC 2011; Xheal: localized self-healing using expanders] further maintains expansion and spectral properties of the network. We present a fully distributed implementation in the LOCAL message passing model. However, we are working on ideas to allow even more efficient implementations and stronger guarantees.
Joint works with Thomas P Hayes, Jared Saia, Navin Rustagi and Gopal Pandurangan.
7/24/2013 Haim Kaplan (Tel Aviv University)
Submatrix maximum queries in Monge matrices and Monge partial matrices, and their applications
We describe a data structure for submatrix maximum queries in Monge matrices or partial Monge matrices, where a query seeks the maximum element in a contiguous submatrix of the given matrix. The structure, for an $n \times n$ Monge matrix, takes $O(n \log n)$ space, $O(n \log^2 n)$ preprocessing time, and answers queries in $O(\log^2 n)$ time. For partial Monge matrices the space and preprocessing grow by a factor of $\alpha(n)$ (the inverse Ackermann function), and the query time remains $O(\log^2 n)$. Our design exploits an interpretation of the column maxima in a Monge (resp., partial Monge) matrix as an upper envelope of pseudo-lines (resp., pseudo-segments).
This data structure has already found a few applications: for dynamic distance oracles in planar graphs, for maximum flow in planar graphs, and for some geometric problems on empty rectangles.
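The envelope interpretation rests on a classical monotonicity property of Monge-type matrices: the row achieving each column's maximum moves monotonically with the column index. The sketch below is a standard divide-and-conquer warm-up exploiting that property, far simpler than the data structure in the talk; the example matrix is an arbitrary inverse-Monge function chosen for illustration.

```python
# Find the argmax row of every column of a totally monotone matrix in
# O((n + m) log m) entry evaluations, assuming leftmost column argmaxima
# are nondecreasing (the Monge-type property).
def column_maxima(M, n_rows, n_cols):
    """M(i, j) evaluates the matrix entry; returns the argmax row per column."""
    arg = [0] * n_cols

    def solve(c_lo, c_hi, r_lo, r_hi):
        if c_lo > c_hi:
            return
        mid = (c_lo + c_hi) // 2
        best = max(range(r_lo, r_hi + 1), key=lambda i: M(i, mid))
        arg[mid] = best
        solve(c_lo, mid - 1, r_lo, best)   # argmaxima left of mid are <= best
        solve(mid + 1, c_hi, best, r_hi)   # argmaxima right of mid are >= best

    solve(0, n_cols - 1, 0, n_rows - 1)
    return arg

# M(i, j) = -(i - j)**2 + i is inverse Monge, so the recursion is valid.
print(column_maxima(lambda i, j: -(i - j) ** 2 + i, 8, 8))  # [0, 1, ..., 7]
```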
7/17/2013 Moni Naor (The Weizmann Institute)
Cryptography and Data Structures: A Match Made in Heaven
The developments of cryptography and complexity theory often go hand in hand. In this talk I will survey the connection of cryptography with a different area of computer science: data structures. There are numerous cases where developments in one area have been fruitfully applied in the other. Early examples include Hellman’s Time/Space Tradeoffs from 1980, and there are developments to this day.
7/10/2013 Toniann Pitassi (University of Toronto)
Average Case Lower Bounds for Monotone Switching Networks
(Joint work with Yuval Filmus, Robert Robere and Stephen Cook)
7/3/2013 Andrew V. Goldberg (MSR SV)
The Hub Labeling Algorithm
Given a weighted graph, a distance oracle takes as an input a pair of vertices and returns the distance between them. The labeling approach to distance oracle design is to precompute a label for every vertex so that the distances can be computed from the corresponding labels, without looking at the graph. We survey results on hub labeling (HL), a labeling algorithm that received a lot of attention recently.
HL query time and memory requirements depend on the label size. While some graphs admit small labels, one can prove that there are graphs for which the labels must be large. Computing optimal hub labels is hard, but in polynomial time one can approximate them up to a factor of O(log(n)). This can be done for the total label size (i.e., memory required to store the labels), the maximum label size (which determines the worst-case query time), and in general for an Lp norm of the vector induced by the vertex label sizes. One can also simultaneously approximate Lp and Lq norms.
Hierarchical labels are a special class of HL. For networks with small highway dimension, one can compute provably small hierarchical labels in polynomial time. On the other hand, one can prove that for some graphs hierarchical labels are significantly larger than the general ones. A heuristic for computing hierarchical labels leads to the fastest implementation of distance oracles for road networks. One can use label compression to trade off time for space, making the algorithm practical for a wider range of applications. We give experimental results showing that the heuristic hierarchical labels work well on road networks as well as some other graph classes, but not on all graphs. We also discuss efficient implementations of the provably good approximation algorithms and give experimental results.
Finally, we show that the labels can be stored in a database and HL queries can be implemented in SQL, making the algorithm accessible to SQL programmers.
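For intuition, here is a toy sketch of the query side of hub labeling. The labels are written by hand for a three-vertex example; computing small valid labels is precisely the hard part the talk addresses.

```python
# Each vertex stores distances to a small set of hubs; a query takes the
# minimum over hubs common to the two labels (no graph access needed).
INF = float("inf")

# Valid HL labels must guarantee that some hub on a shortest v-w path
# appears in both label[v] and label[w]; these are hand-made for the demo.
label = {
    "a": {"a": 0, "h": 2},
    "b": {"b": 0, "h": 3},
    "h": {"h": 0},
}

def hl_query(u, v):
    common = label[u].keys() & label[v].keys()
    return min((label[u][h] + label[v][h] for h in common), default=INF)

print(hl_query("a", "b"))  # 5: the shortest a-b path goes through hub h
```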
6/26/2013 David Woodruff (IBM Almaden)
Low Rank Approximation and Regression in Input Sparsity Time
We improve the running times of algorithms for least squares regression and low-rank approximation to account for the sparsity of the input matrix. Namely, if nnz(A) denotes the number of non-zero entries of an input matrix A:
– we show how to solve approximate least squares regression given an n x d matrix A in nnz(A) + poly(d log n) time
– we show how to find an approximate best rank-k approximation of an n x n matrix in nnz(A) + n*poly(k log n) time
All approximations are relative error. Previous algorithms based on fast Johnson-Lindenstrauss transforms took at least nd log d or nnz(A)*k time.
Joint work with Ken Clarkson.
6/19/2013 Siu On Chan (UC Berkeley)
Approximation Resistance from Pairwise Independent Subgroups
6/10/2013 Prateek Jain (MSR)
1:30-2:30PM. Please note unusual day (Monday)!
Provable Alternating Minimization methods for Low-rank Matrix Estimation Problems
Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However, the overall problem is non-convex, and there has been almost no theoretical understanding of when this approach yields a good result.
In this talk, we present one of the first theoretical analyses of alternating minimization methods for several different low-rank matrix estimation problems, e.g., matrix completion and inductive matrix completion. For both of these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.
This is a joint work with Praneeth Netrapalli, Sujay Sanghavi, Inderjit Dhillon.
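A compact sketch of the alternating pattern described above, for the basic matrix completion setting. The rank, sampling rate, and iteration count are illustrative assumptions, and the observed entries are noiseless; this shows the alternation itself, not the paper's analysis or conditions.

```python
# Alternating minimization for matrix completion: write X ~ U V^T and
# alternate exact least-squares solves for the rows of U and of V, using
# only the observed entries.
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 60, 3, 0.4
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank-k truth
mask = rng.random((n, n)) < p                                  # observed entries

U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
for _ in range(30):
    for i in range(n):                       # fix V, solve for each row of U
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], X[i, obs], rcond=None)[0]
    for j in range(n):                       # fix U, solve for each row of V
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], X[obs, j], rcond=None)[0]

err = np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)
print(f"relative recovery error: {err:.2e}")
```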
6/5/2013 *No Talk* (STOC@Stanford)
5/28/2013 *No Talk* (Visions Symposium@Berkeley)
5/21/2013 Daniel Weitzner (MIT)
1:30-2:30PM. Please note unusual day (Tuesday)!
Real Privacy: Context and Individual Control as the Path to Genuine Privacy Protection for the 21st Century.
Daniel J. Weitzner
Director, Decentralized Information Group
MIT Computer Science and Artificial Intelligence Laboratory
We hear that “your privacy matters to us,” but does anyone really know what that means? As public awareness of privacy grows there is also a growing divergence about what privacy really is. Whether we call privacy a fundamental human right, something that exists in the penumbra of other constitutional rights, or a matter of consumer fairness, 20th century implementations of privacy are both unsatisfying for individuals and burdensome for innovators. This is a talk about rediscovering the foundations of privacy: freedom of association, protection from discrimination and limiting the tyranny of large institutions, be they public or private. In returning to privacy basics we see that real privacy depends little on ‘notice’ but places a high value on respect for context and individual control. Pseudo-contractual notions of ‘choice’ are unhelpful to users, but respect for the context of relationships matters. Formalistic misunderstandings about the differences between US and European privacy frameworks are based on the simplistic view that Europe has ‘more’ privacy and the US has ‘less’, leaving us to believe we can choose privacy levels along a linear scale. In reality, we face a more complex set of technical and legal choices about how to enforce highly nuanced rules in complex information relationships. Finally, secrecy through encryption or barriers to the free flow of information are generally fig-leaves over privacy risks, but accountability for how information is actually used can build a world in which real privacy exists alongside a vibrant, open information society.
5/15/2013 Anupam Gupta (CMU)
How to Run your Chores (and Get to Dinner on Time)
In the orienteering problem, we are given a metric space (the distances are supposed to represent travel times between the locations), a start vertex (“home”), and a deadline B, and want to visit as many points as possible using a tour of length at most B. Constant-factor approximation algorithms for this problem have been known since the work of Blum et al. in 2002.
However, suppose it is not enough for us to visit the nodes: upon reaching a location, we also have to wait for some (random) time at each location before we can get the reward. Each such waiting time is drawn from a known probability distribution. What can we do then? In this talk, we will discuss adaptive and non-adaptive approximation algorithms for this stochastic orienteering problem.
This is based on work with Ravi Krishnaswamy, Viswanath Nagarajan, and R.Ravi.
5/8/2013 Vitaly Feldman (IBM Research, Almaden)
Statistical Algorithms and a Lower Bound for Detecting Planted Cliques
We introduce a framework for proving lower bounds on computational problems over distributions, based on a class of algorithms called statistical algorithms. For such algorithms, access to the input distribution is limited to obtaining an estimate of the expectation of any given function on a sample drawn randomly from the input distribution, rather than directly accessing samples. Most natural algorithms of interest in theory and in practice, e.g., moments-based methods, local search, standard iterative methods for convex optimization, MCMC and simulated annealing, are statistical algorithms or have statistical counterparts. Our framework is inspired by and generalizes the statistical query model in learning theory.
Our main application is a nearly optimal lower bound on the complexity of any statistical algorithm for detecting planted bipartite clique distributions (or planted dense subgraph distributions) when the planted clique has size O(n^(1/2-\delta)) for any constant \delta > 0. Variants of these problems have been assumed to be hard in order to prove hardness of other problems and for cryptographic applications. Our lower bounds provide concrete evidence of hardness, thus supporting these assumptions.
Joint work with Elena Grigorescu, Lev Reyzin, Santosh Vempala and Ying Xiao
5/1/2013 Chandra Chekuri (University of Illinois, Urbana-Champaign)
Large-Treewidth Graph Decompositions and Applications
Treewidth is a graph parameter that plays a fundamental role in many structural and algorithmic results. We study the problem of decomposing a given graph $G$ into several node-disjoint subgraphs, where each subgraph has sufficiently large treewidth. We prove two theorems on the tradeoff between the number of the desired subgraphs $h$, and the desired lower bound $r$ on the treewidth of each subgraph. The theorems assert that, given a graph $G$ with treewidth $k$, a decomposition with parameters $h,r$ is feasible whenever $hr^2 \le k/\polylog(k)$, or $h^3r \le k/\polylog(k)$ holds.
The decomposition theorems are inspired by the breakthrough work of Chuzhoy on the maximum edge-disjoint paths problem in undirected graphs and follow-up work that extended the ideas to node-disjoint paths. The goal of the talk is to explain the background for the theorems and their application to routing, to fixed-parameter tractability, and to Erdos-Posa-type theorems.
The latter applications allow one to bypass the well-known Grid-Minor theorem of Robertson and Seymour. No prior knowledge of treewidth will be assumed.
The decomposition theorems are from a joint paper with Julia Chuzhoy that is to appear in STOC 2013, but the talk is also based on several previous papers on the maximum disjoint paths problem.
4/24/2013 Robert Krauthgamer (Weizmann Institute of Science)
Cutting corners cheaply, or how to remove Steiner points
The main result I will present is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which resolves in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, for every edge-weighted graph $G=(V,E,w)$ and a subset of terminals $T\subset V$, there is a graph only on the terminals, denoted $G’=(T,E’,w’)$, which is a minor of $G$ and the shortest-path distance between any two terminals is approximately equal in $G’$ and in $G$, namely within factor $O(\log^6|T|)$. Our existence proof actually gives a randomized polynomial-time algorithm.
Our proof features a new variant of metric decomposition. It is well-known that every $n$-point metric space $(X,d)$ admits an $O(\log n)$-separating decomposition, which roughly speaking says there is a randomized partitioning of $X$, with a certain bound on the probability of separating any two points $x,y \in X$. We introduce an additional requirement, which is a tail bound on the following random variable $Z_P$: the number of clusters of the partition that meet any shortest-path $P$ whose length is not too large.
Joint work with Lior Kamma and Huy L. Nguyen
4/17/2013 Yaron Singer (Google / Harvard)
Adaptive Seeding in Social Networks
The rapid adoption of social networking technologies throughout the past decade is bringing special attention to algorithmic and data mining techniques designed for maximizing information cascades in social networks. Despite the immense progress made in the past decade, the application of state-of-the-art techniques often results in poor performance, due to limited data access and the structure of social networks.
In this talk we will introduce a new framework we call adaptive seeding. The framework is a two-stage model which leverages a phenomenon known as the “friendship paradox” in social networks in order to dramatically increase the spread of information cascades. Our main result shows that constant-factor approximations are achievable for the most well-studied models of information spreading in social networks. The result follows from new techniques and concepts that may be of independent interest to those interested in stochastic optimization and machine learning.
Joint work with Lior Seeman
4/10/2013 Yoram Moses (Technion, on sabbatical at Stanford)
Knowledge as a Window into Distributed Coordination
This talk will review the knowledge-based approach in distributed systems, show a few fundamental connections between knowledge and multi-party coordination, and illustrate how these can provide insight into the interplay between time and communication in enabling coordination. The talk will be self-contained, intended for a general CS audience. The latter part of the talk is based on joint work with Ido Ben Zvi.
3/13/2013 Anupam Datta (CMU)
Naturally Rehearsing Passwords
We introduce quantitative usability and security models to guide the design of password management schemes — systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user’s visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks.
Observing that current password management schemes are either insecure or unusable, we present Shared Cues — a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem in a non-standard manner to achieve these competing goals.
Joint work with Jeremiah Blocki and Manuel Blum at CMU
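The abstract notes that the Chinese Remainder Theorem is used in a non-standard manner; the construction itself is in the paper, but for reference, here is the standard CRT reconstruction primitive that it builds on.

```python
# Standard CRT: residues modulo pairwise-coprime moduli determine a unique
# value modulo the product of the moduli.
from math import prod

def crt(residues, moduli):
    """Return x with x = r_i (mod m_i), for pairwise-coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(a, -1, m) is the modular inverse
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # 23: 23%3=2, 23%5=3, 23%7=2
```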
2/20-3/6/2013: BREAK, no theory seminars are currently planned for these dates.
2/13/2013 Sergiu Hart (Hebrew University)
Dynamics and Equilibrium
An overview of a body of work on dynamical systems in multi-player environments. On the one hand, the natural informational restriction that each participant does not know the payoff functions of the other participants — “uncoupledness” — severely limits the possibilities to converge to Nash equilibria. On the other hand, there are simple adaptive heuristics — such as “regret matching” — that lead in the long run to correlated equilibria, a concept that embodies full rationality. Connections to behavioral economics, neurobiological studies, and engineering are also mentioned.
1/16/2013 Konstantin Makarychev (MSR Redmond)
Notice Unusual Room: Telstar (usual building, SVC6)
Sorting Noisy Data with Partial Information
I will talk about semi-random models for the Minimum Feedback Arc Set Problem. In the Minimum Feedback Arc Set Problem, we are given a directed graph, and our goal is to remove as few edges as possible to make the graph acyclic. This is a classical optimization problem. The best known approximation algorithm, due to Seymour, gives an O(log n loglog n) approximation in the worst case. I will discuss whether we can do better in “real life” than in the worst case. To this end, I will present two models that try to capture “real-life” instances of Minimum Feedback Arc Set. Then, I will talk in more detail about one of the models. I will give an approximation algorithm that finds a solution of cost (1+epsilon) OPT + n polylog n, where OPT is the cost of the optimal solution.
Joint work with Yury Makarychev (TTIC) and Aravindan Vijayaraghavan (CMU).
1/9/2013 Shafi Goldwasser (MIT and Weizmann Institute)
Pseudo Deterministic Algorithms
We will present a new type of probabilistic algorithm, which we call pseudo-deterministic: such algorithms cannot be distinguished from deterministic algorithms by a probabilistic polynomial-time observer with black-box access.
We will show a necessary and sufficient condition for the existence of such an algorithm, and several examples of Bellagio algorithms which improve on deterministic solutions.
The notion of pseudo-deterministic computations extends beyond sequential polynomial-time algorithms, to other domains where the use of randomization is essential, such as distributed algorithms and sub-linear algorithms. We will discuss these extensions.
12/5/2012 Ashwin Badanidiyuru Varadaraja (Cornell)
Fast algorithms for maximizing submodular functions
There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this work we develop general techniques to get very fast approximation algorithms for maximizing submodular functions subject to various constraints. These include speeding up greedy and continuous-greedy-based algorithms, and a new potential-function-based local search algorithm to handle multiple constraints.
(Based on joint work with Jan Vondrak)
11/28/2012 Ilan Lobel (NYU)
Intertemporal Price Discrimination: Structure and Computation of Optimal Policies
We consider the question of how a firm should optimally set a sequence of prices in order to maximize its long-term average revenue, given a continuous flow of strategic customers. In particular, customers arrive over time, are strategic in timing their purchases, and are heterogeneous along two dimensions: their valuation for the firm’s product and their willingness to wait before purchasing or leaving.
The customers’ patience and valuation may be correlated in an arbitrary fashion. For this general formulation, we prove that the firm may restrict attention to short cyclic pricing policies, which have length twice the maximum willingness to wait of the customer population. We further establish results on the suboptimality of monotone pricing policies in general, and illustrate the structure of optimal policies. These are, in a typical scenario, characterized by nested sales, where the firm offers partial discounts throughout each cycle, offers a significant discount halfway through the cycle, with the largest discount offered at the end of the cycle. From a computational perspective, we exploit the structure of the underlying problem to develop a novel dynamic programming formulation for the problem that computes an optimal pricing policy in polynomial time (in the maximum willingness-to-wait). We further establish a form of equivalence between the problem of pricing for a stream of heterogeneous strategic customers and pricing for a pool of heterogeneous customers who may stockpile units of the product. Joint work with Omar Besbes (Columbia).
11/9/2012 Elchanan Mossel (U C Berkeley)
Some new proofs of Gaussian and discrete noise stability
I will discuss some new proofs of Borell’s result on Gaussian noise stability and of the “Majority is Stablest” theorem, and new applications that follow from these proofs in hardness of approximation and social choice theory.
11/2/2012 Aravind Srinivasan (University of Maryland)
The Lovasz Local Lemma — constructive and nonconstructive
Abstract: The Lovasz Local Lemma is a powerful probabilistic tool. We start by reviewing the breakthrough by Moser and Tardos on its constructive aspects, connections to other fields, and extensions due to Haeupler, Saha, and the speaker. We will then outline some very recent further extensions, due to David Harris and the speaker, which are nonconstructive thus far.
10/24/2012 No Talk (FOCS)
10/17/2012 Sanjeev Arora (Princeton)
Is Machine Learning Tractable? — Three Vignettes
Please note unusual time (1-2PM at Titan)
Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-hard or worse. Nevertheless, researchers have developed heuristic algorithms to try to solve these tasks in practice. In most cases, these algorithms are heuristics with no provable guarantees on their running time or on the quality of solutions they return. Can we change this state of affairs?
This talk will suggest that the answer is yes, and describe three of our recent works as illustration. (a) A new algorithm for learning topic models. (It applies to the Latent Dirichlet Allocation of Blei et al. and also to more general topic models. It provably works under some reasonable assumptions and in practice is up to 50 times faster than existing software like Mallet. It relies upon a new procedure for nonnegative matrix factorization.) (b) What classifiers are worth learning? (Can theory illuminate the contentious question of which binary classifier to learn: SVM, decision tree, etc.?) (c) Provable ICA with unknown Gaussian noise. (An algorithm to provably learn a “manifold” with a small number of parameters but exponentially many “interesting regions.”)
(Based upon joint works with Rong Ge, Ravi Kannan, Ankur Moitra, Sushant Sachdeva.)
10/10/2012 Grigory Yaroslavtsev (Penn State)
Learning and testing submodular functions
Submodular functions capture the law of diminishing returns and can be viewed as a generalization of convexity to functions over the Boolean cube. Such functions arise in different areas, such as combinatorial optimization, machine learning and economics. In this talk we will focus on positive results about learning such functions from examples and testing whether a given function is submodular with a small number of queries.
For the class of submodular functions taking values in a discrete integral range of size R, we show a structural result giving a concise representation for this class. The representation can be described as a maximum over a collection of threshold functions, each expressed by an R-DNF formula. This leads to efficient PAC-learning algorithms for this class, as well as testing algorithms with running time independent of the size of the domain.
Joint work with Sofya Raskhodnikova and Rocco Servedio
10/5/2012, 11AM, Fred Cate (Indiana University Maurer School of Law)
Big Data in Healthcare: The Future of Healthcare Innovation and the Regulations that Will Kill It
Please note unusual day (Friday) and time (11AM)
Personal information is increasingly recognized as the most critical resource for healthcare treatment, research, and management. While healthcare providers generate large amounts of information associated with patient interactions, far more health information, including genetic and behavioral data, is now being generated directly by individuals and as a result of individuals’ interactions with home healthcare and mobile devices, social media sites, and personal health records. These data are critical to the transformation of healthcare and the evolution of truly personalized medicine, yet privacy law imposes an inexplicably restrictive and inconsistent approach to their use. For the past three years, a blue-ribbon panel of medical practitioners and researchers, ethicists, lawyers, technologists, and privacy experts, funded by the NIH, have worked to craft an alternative approach to protecting privacy while facilitating medical research, to try to eliminate the threats posed by privacy regulations to health and privacy today and to the future of medical innovation tomorrow.
Fred H. Cate is a Distinguished Professor and C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law. He is managing director of the Center for Law, Ethics, and Applied Research in Health Information and director of the Center for Applied Cybersecurity Research (a National Center of Academic Excellence in both Information Assurance Research and Information Assurance Education). A member of Microsoft’s Trustworthy Computing Academic Advisory Board, Cate serves on numerous government and industry advisory and oversight committees, and testifies frequently before congressional committees on information privacy and security issues. He is the author of more than 150 articles and books and is one of the founding editors of the Oxford University Press journal, International Data Privacy Law. He is the PI on the NIH grant “Protecting Privacy in Health Research.”
9/19/2012 Zvika Brakerski (Stanford)
Efficient Interactive Coding Against Adversarial Noise
We study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS ’92, STOC ’93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\mathrm{CC})}$. Prior works could not achieve efficient simulation in the adversarial case.
Joint work with Yael Tauman Kalai
9/12/2012 Moritz Hardt (IBM Almaden)
Incoherence and privacy in spectral analysis of data
Matrix incoherence is a frequently observed property of large real-world matrices. Intuitively, the coherence of a matrix is low if the singular vectors of the matrix bear little resemblance to the individual rows of the matrix. We show that this property is quite useful in the design of differentially private approximate singular vector computation and low-rank approximation.
Our algorithms for these tasks turn out to be significantly more accurate under a low coherence assumption than what a well known worst-case lower bound would suggest. While not straightforward to analyze, our algorithms are highly efficient and easy to implement. We complement our theoretical results with several experiments on real and synthetic data.
Based on joint works with Aaron Roth
9/5/2012 Brendan Lucier (MSR-NE)
Stable Pricing and Partitions
In a combinatorial auction, a seller has m items to sell and n buyers willing to purchase, where each buyer has a value for each subset of the items. In general such auctions are complex, with specification of bids and determination of optimal allocations having exponential complexity in n and/or m. In some special cases (e.g. the “gross substitutes” condition) there exist ways to set prices on the items for sale so that a socially efficient outcome occurs when each buyer takes his most-preferred set. Such a wonderful pricing outcome is known as a Walrasian Equilibrium (WE), but unfortunately a WE does not always exist in general.
In this talk I will present some new results (joint with Michal Feldman and Nick Gravin) in which we relax the concept of a pricing equilibrium to the combinatorial setting. The essential feature of our relaxation is the ability for the seller to pre-partition the items prior to sale, then price the resulting bundles. I will describe some of the properties of this notion and some of the algorithmic problems that arise and solutions we provide. In particular, for general buyer values, we give a black-box reduction that converts an arbitrary allocation into such a “stable partition pricing” outcome with only a constant factor loss in social welfare.
8/29/2012 Noam Nisan (MSR SV)
Should Auctions Be Complex?
We consider the menu-size of an auction as a complexity measure and ask whether simple auctions suffice for obtaining high revenue. For the case of one item and IID bidders, Myerson shows that the answer is “yes”: the revenue-maximizing auction has a single deterministically chosen reserve price. For two (or more) items we show that the answer is “no”: complex auctions can yield infinitely more revenue than simple ones, even for a single bidder with an additive valuation. However, when the bidder’s values for the two items are independently distributed, the answer is “approximately yes”: selling each of two items simply and separately yields at least half the revenue of any auction.
Joint work with Sergiu Hart
8/22/2012 Shiri Chechik (MSR SV)
Fully Dynamic Approximate Distance Oracles for Planar Graphs via Forbidden-Set Distance Labels
A distance oracle is a data structure that provides fast answers to distance queries. Recently, the problem of designing distance oracles capable of answering restricted distance queries, that is, estimating distances on a subgraph avoiding some forbidden vertices, has attracted a lot of attention. In this talk, we will consider forbidden-set distance oracles for planar graphs. I’ll present an efficient compact distance oracle that is capable of handling any number of failures.
In addition, we will consider a closely related notion of fully dynamic distance oracles. In the dynamic distance oracle problem, instead of receiving the failures at query time, we need to handle an adversarial online sequence of update and query operations. Each query operation involves two vertices s and t whose distance needs to be estimated. Each update operation involves inserting/deleting a vertex/edge from the graph.
I’ll show that our forbidden-set distance oracle can be tweaked to give a fully dynamic distance oracle with improved bounds compared to the previously known fully dynamic distance oracles for planar graphs.
Based on a joint work with Ittai Abraham and Cyril Gavoille
8/15/2012 Raghu Meka (IAS)
Constructive discrepancy minimization by walking on the edges.
Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): in any system of n sets over a universe of size n, there always exists a coloring which achieves discrepancy $6\sqrt{n}$. The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure.
In this work we give a new randomized algorithm to find a coloring as in Spencer’s result, based on a restricted random walk we call “Edge-Walk”. Our algorithm and its analysis use only basic linear algebra and are “truly” constructive in that they do not appeal to existential arguments, giving a new proof of Spencer’s theorem and of the partial coloring lemma.
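As a concrete illustration of the quantity being minimized, here is a small sketch that evaluates the discrepancy of a coloring and compares a random coloring against Spencer's bound; it illustrates the problem, not the Edge-Walk algorithm itself, and the random set system is a hypothetical example.

```python
import numpy as np

# Illustrative sketch: evaluate the discrepancy of a coloring and compare a
# random coloring against Spencer's 6*sqrt(n) guarantee. The incidence matrix
# A (n sets over a universe of n elements) is a hypothetical random example.

rng = np.random.default_rng(0)
n = 256
A = rng.integers(0, 2, size=(n, n))         # A[i, j] = 1 iff element j is in set i

def discrepancy(A, x):
    """Discrepancy of coloring x in {-1,+1}^n: max over sets of |signed sum|."""
    return np.abs(A @ x).max()

x_random = rng.choice([-1, 1], size=n)      # random coloring: O(sqrt(n log n)) w.h.p.
print("random coloring:", discrepancy(A, x_random))
print("Spencer's bound:", 6 * np.sqrt(n))   # some coloring achieving this always exists
```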
Joint work with Shachar Lovett.
8/8/2012 Rocco Servedio (Columbia)
Inverse Problems for Power Indices in Weighted Voting Games
Suppose we must design a weighted voting scheme to be used by n voters to choose between two candidates. We want the i-th voter to have a certain prescribed amount of “influence” over the final outcome of the vote — for example, the voters may correspond to states with different populations, or shareholders who hold different numbers of shares in a company. How can we design a weighted voting scheme that gives each voter the prescribed amount of influence?
Of course, in order to even hope to answer such a question we need a well-defined notion of the influence of a voter in a weighted voting scheme. Many such measures of influence have been studied in the voting theory literature; such measures are sometimes called “power indices.” In this talk we’ll consider two of the most popular power indices: the “Banzhaf indices” (known in theoretical computer science as “Chow Parameters”) and the “Shapley-Shubik indices.” These are two quite different natural ways of quantifying how much influence each voter has in a given weighted voting scheme.
As our main results, we’ll describe algorithms that solve the inverse problem of designing a weighted voting scheme for each of these power indices. Specifically,
(1) Given a vector of desired Banzhaf indices for the n voters, our first algorithm efficiently constructs a weighted voting scheme which has Banzhaf indices very close to the target indices (if any such weighted voting scheme exists). Our result gives an almost doubly exponential (in terms of the closeness parameter) running time improvement over the only previous provably correct solution.
(2) Given a vector of desired Shapley-Shubik indices, our second algorithm efficiently constructs a weighted voting scheme which has Shapley-Shubik indices very close to the target indices (if any such weighted voting scheme exists). This is the first algorithm for this problem with a poly(n) as opposed to exp(n) runtime.
A common algorithmic ingredient underlies these two results, but the structural results used to prove correctness are very different for the two indices. Our results for Banzhaf indices are based on structural properties of linear threshold functions and geometric and linear-algebraic arguments about how hyperplanes interact with the Boolean hypercube. Our results for Shapley-Shubik indices are based on anticoncentration bounds for sums of non-independent random variables.
No background in voting theory is required for the talk.
Based on joint works with Anindya De, Ilias Diakonikolas and Vitaly Feldman.
8/1/2012 Avi Wigderson (IAS Princeton)
Restriction Access, Population Recovery, and Partial Identification
We study several natural problems in which an unknown distribution over an unknown population of vectors needs to be recovered from partial or noisy samples. Such problems naturally arise in a variety of contexts in learning, clustering, statistics, data mining and database privacy, where loss and error may be introduced by nature, inaccurate measurements, or on purpose. We give fairly efficient algorithms to recover the data under fairly general assumptions, when loss and noise are close to the information theoretic limit (namely, nearly completely obliterate the original data).
Underlying one of our algorithms is a new structure we call a partial identification (PID) graph. While standard IDs are subsets of features (vector coordinates) that uniquely identify an individual in a population, partial IDs allow ambiguity (and “imposters”), and thus can be shorter. PID graphs capture this imposter-structure. PID graphs yield strategies for dimension reduction of recovery problems, and for the re-assembly of these local pieces of statistical information into a global one. The combinatorial heart of this work is proving that every set of vectors admits partial IDs with “cheap” PID graphs (and hence efficient recovery). We further show how to find such near-optimal PIDs efficiently.
Time permitting, I will also describe our original motivation for studying the recovery problems above: a new learning model we call “restriction access”. This model aims at generalizing prevailing “black-box” access to functions when trying to learn the “device” (e.g. circuit, decision tree, polynomial, …) which computes them. We propose a “grey-box” access that allows certain partial views of the device, obtained from random restrictions. Our recovery algorithms allow positive learning results for the PAC-learning analog of our model, for such devices as decision trees and DNFs, which are currently beyond reach in the standard “black-box” version of PAC-learning.
Based on joint works with Zeev Dvir, Anup Rao and Amir Yehudayoff.
7/25/2012 Abhradeep Guha Thakurta (Pennsylvania State University)
Differentially Private Empirical Risk Minimization and High-dimensional Regression
In the recent past there have been several high-profile privacy breaches of machine-learning-based systems which satisfied various ad-hoc notions of privacy. Examples include the attack on the Amazon recommendation system by Calandrino et al. in 2011 and the attack on the Facebook advertisement system by Korolova in 2011. In light of these breaches, an obvious question arises: “How can we design learning algorithms with rigorous privacy guarantees?”
In this talk, I will focus on designing convex empirical risk minimization (ERM) algorithms (a special class of learning algorithms) with differential privacy guarantees. In the recent past, differential privacy has emerged as one of the most commonly used rigorous notions of privacy.
My talk will be logically segregated into two parts:
Part a) Private ERM on offline data sets: In this part, I will discuss various approaches for differentially private ERM when the complete data set is available at once (as opposed to the online setting, which I will consider in the next part). One of my main focuses in this part will be two of our new approaches towards private ERM in the offline setting: i) an improved objective perturbation algorithm (which improves on the initial algorithm proposed by Chaudhuri et al. in 2008 and 2011) and ii) an online convex programming (OCP) based algorithm. Additionally, I will discuss our first private ERM algorithms in high dimensions (i.e., where the number of data elements is much smaller than the dimensionality of the underlying model parameter).
Part b) Private Online Learning: Online learning involves learning from private data in real time, so the learned model as well as its predictions change continuously. We study the problem in the framework of online convex programming (OCP) while preserving differential privacy. For this problem, we provide a generic framework that can be used to convert any given OCP algorithm into its private variant while preserving privacy as well as sub-linear regret, provided that the given OCP algorithm satisfies the following two criteria: 1) linearly decreasing sensitivity, i.e., the effect of new data points on the learned model decreases linearly, and 2) sub-linear regret. We instantiate our framework with two commonly used OCP algorithms: i) Generalized Infinitesimal Gradient Ascent (GIGA) and ii) Implicit Gradient Descent (IGD).
Joint work with Prateek Jain [MSR, India], Daniel Kifer [Penn State], Pravesh Kothari [University of Texas, Austin] and Adam Smith [Penn State].
7/18/2012 Alex Samorodnitsky (Hebrew University, Jerusalem)
Discrete tori and bounds for linear codes.
Let C be a linear subspace of the Hamming cube H. Let C’ be the dual code. Following Friedman and Tillich, we try to estimate the rate of growth of metric balls in the discrete “torus” T = H/C’ and use this to upper-bound the cardinality of T, and therefore of C.
A notion of discrete Ricci curvature of metric spaces, as defined by Ollivier, turns out to be useful in the cases where C’ has local structure (that is C is locally correctable / locally testable).
This approach leads to different (and, we would like to think, easier) proofs of some known upper bounds and to some occasional improvements in the bounds.
Joint work with Eran Iceland.
7/11/2012 Aviv Zohar (MSR SVC)
Critiques of Game Theory
Can game theorists predict social and political outcomes successfully? Should we listen when a game theorist argues that Israel should attack Iran to prevent it from obtaining nuclear weapons? How useful has game theory been in designing interactions? Do computers help somehow? In this talk I’d like to survey criticisms of Game Theory’s practical applicability. The talk will be non-technical and will assume only basic knowledge of concepts from Game Theory.
7/4/2012 No seminar (holiday)
6/27/2012 No seminar (intern-mentor baseball game)
6/20/2012 Piotr Indyk (MIT)
Faster Algorithms for Sparse Fourier Transform
The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas.
In many applications (e.g., audio, image or video compression), most of the Fourier coefficients of a signal are “small” or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, there are algorithms that enable computing the non-zero coefficients faster than the FFT. However, in practice, the exponents in the runtimes of these algorithms and their complex structure have limited their applicability to very sparse signals.
In this talk, I will describe a new set of algorithms for sparse Fourier Transform. Their key feature is simplicity, which leads to efficient running times with low overhead, both in theory and in practice. One of those algorithms achieves a runtime of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n).
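The O(k log n) algorithm itself is beyond a short sketch, but the sparsity assumption it exploits is easy to illustrate with a naive baseline that computes the full FFT and keeps the k largest coefficients; the test signal below is a hypothetical example.

```python
import numpy as np

# Naive baseline illustrating the sparsity assumption (NOT the O(k log n)
# algorithm from the talk): compute the full FFT and keep the k largest
# coefficients. The test signal below is a hypothetical example.

n, k = 1024, 4
rng = np.random.default_rng(1)
freqs = rng.choice(n, size=k, replace=False)
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t / n) for f in freqs)  # exactly k-sparse spectrum

X = np.fft.fft(x)                       # O(n log n) full transform
top_k = np.argsort(np.abs(X))[-k:]      # indices of the k largest coefficients
print(sorted(top_k), sorted(freqs))     # recovered support matches the true one
```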
Joint work with Haitham Hassanieh, Dina Katabi and Eric Price
6/13/2012 Guy Rothblum (MSR SVC)
How to Compute in the Presence of Leakage
We address the following problem: how to execute any algorithm, for an unbounded number of executions, in the presence of an attacker who gets to observe partial information on the internal state of the computation during executions. This general problem has been addressed in the last few years with varying degrees of success. It is important for running cryptographic algorithms in the presence of side-channel attacks, as well as for running non-cryptographic algorithms, such as a proprietary search algorithm or a game, on a cloud server where parts of the execution’s internals might be observed.
In this work, we view algorithms as running on a leaky CPU. In each (sub)-computation run on the CPU, we allow the adversary to observe the output of an arbitrary and adaptively chosen length-bounded function on the CPU’s input, output, and randomness.
Our main result is a general compiler for transforming any algorithm into one that is secure in the presence of this family of partial observation attacks (while maintaining the algorithm’s functionality). This result is unconditional, it does not rely on any secure hardware components or cryptographic assumptions.
Joint work with Shafi Goldwasser
6/6/2012 Jasmin Fisher (MSR Cambridge)
From Coding the Genome to Algorithms Decoding Life
The decade of genomic revolution following the sequencing of the human genome has produced significant medical advances, and yet again revealed how complicated human biology is, and how much more remains to be understood. Biology is an extraordinarily complicated puzzle; we may know some of its pieces but have no clue how they are assembled to orchestrate the symphony of life, which renders the comprehension and analysis of living systems a major challenge. Recent efforts to create executable models of complex biological phenomena – an approach we call Executable Biology – entail great promise for new scientific discoveries, shedding new light on the puzzle of life. At the same time, this new wave of the future forces computer science to stretch far and beyond, and in ways never considered before, in order to deal with the enormous complexity observed in biology. This talk will focus on our recent success stories in using formal methods to model cell fate decisions during development and cancer, and ongoing efforts to develop dedicated tools for biologists to model cellular processes in a visual-friendly way.
5/29/2012 Madhu Sudan (MSR New England)
TBA. Note Unusual Day and Time (Tuesday 11am-12pm)
5/23/2012 No Seminar (stoc week)
5/16/2012 Edith Cohen (AT&T)
How to Get the Most out of Your Sampled Data
Random sampling is an important tool for retaining the ability to query data under resource limitations. It is used to summarize data too large to store or manipulate and meet resource constraints on bandwidth or battery power. Estimators that are applied to the sample provide fast approximate answers to queries posed over the original data and the value of the sample hinges on the quality of these estimators.
We are interested in queries, such as maximum or range, that span multiple data points. Sum aggregates of these queries correspond to distinct counts and difference norms and are used for planning or change/anomaly detection over traffic logs and measurement data. Unbiased estimators are particularly effective: while the estimate of each basic query inevitably has high variance, the relative error decreases with aggregation.
The sample may provide no information, the exact value, or partial information on the queried value. The Horvitz-Thompson estimator, known to minimize variance for sampling with “all or nothing” outcomes (which reveal either the exact value or no information on the queried value), is suboptimal for, or altogether inapplicable to, such queries.
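As background, the Horvitz-Thompson estimate is simple to state: each sampled item contributes its weight divided by its inclusion probability, which makes the subset-sum estimate unbiased. A minimal sketch, with hypothetical weights and inclusion probabilities:

```python
import numpy as np

# Sketch of the Horvitz-Thompson estimator for a sum query over a Poisson
# sample: item i is included independently with probability p[i], and an
# included item contributes w[i] / p[i]. Weights and inclusion probabilities
# below are hypothetical example data.

rng = np.random.default_rng(2)
w = rng.exponential(size=10_000)           # true item weights
p = np.clip(w / w.max(), 0.05, 1.0)        # inclusion probabilities (PPS-style)

sampled = rng.random(w.size) < p           # Poisson sampling
ht_estimate = np.sum(w[sampled] / p[sampled])
print(ht_estimate, w.sum())                # unbiased: E[estimate] = true sum
```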
We present a general principled methodology for the derivation of (Pareto) optimal nonnegative unbiased estimators over sampled data and aim to understand its potential. We demonstrate significant improvement in estimation accuracy.
This work is joint with Haim Kaplan (Tel Aviv University).
5/9/2012 Andrew Drucker (MIT)
New Limits to Instance Compression for Hard Problems
Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. Studying the power and limits of instance compression involves an intriguing interplay between computational and information-theoretic ideas.
As a representative problem, say we are given a Boolean formula Psi over n variables, and of size m >> n, and we want to determine if Psi is satisfiable. Can we efficiently reduce this question to an equivalent problem instance of size poly(n), independent of m? Harnik and Naor (FOCS ’06) and Bodlaender et al. (ICALP ’08) showed that this question has important connections to cryptography and to fixed-parameter tractability theory. Fortnow and Santhanam (STOC ’08) gave a negative answer for deterministic compression, assuming that NP is not contained in coNP/poly.
We will describe new and improved evidence against efficient instance compression schemes. Our method applies to probabilistic compression for SAT, and also gives the first evidence against deterministic compression for a number of problems. To prove our results, we exploit the information bottleneck of an instance compression scheme, using a new method to “disguise” information being fed into a compressive mapping.
5/2/2012 Matthew Weinberg (MIT)
Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization
We provide a reduction from revenue maximization to welfare maximization in multi-dimensional Bayesian combinatorial auctions with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson’s single-dimensional result [Myerson81] to this setting. We show that every feasible Bayesian auction can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: every bidder’s bid vector v_i is transformed into a virtual bid vector f_i(v_i), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black-box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule.
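To illustrate the shape of a virtual VCG allocation rule, here is a minimal sketch for additive bidders without demand constraints, where maximizing virtual welfare decomposes item by item; the linear transforms f_i below are hypothetical stand-ins for the functions produced by the actual reduction.

```python
import numpy as np

# Minimal sketch of a "virtual VCG allocation rule": each bidder's bid vector
# v_i is mapped through a bidder-specific virtual value function f_i, and the
# allocation maximizing virtual welfare is chosen. For additive bidders with
# no demand constraints this awards each item to the bidder with the highest
# virtual value. The linear f_i below are hypothetical examples, not the
# functions the reduction actually derives.

rng = np.random.default_rng(3)
n_bidders, m_items = 3, 5
v = rng.uniform(0, 1, size=(n_bidders, m_items))        # reported bid vectors

def f(i, v_i):
    """Hypothetical bidder-specific virtual value transform."""
    return 2.0 * v_i - (0.3 + 0.1 * i)                  # shifted/scaled values

virtual = np.stack([f(i, v[i]) for i in range(n_bidders)])
winner = virtual.argmax(axis=0)                         # virtual-welfare maximizer
allocation = [(item, int(winner[item])) for item in range(m_items)
              if virtual[winner[item], item] > 0]       # unsold if virtual value <= 0
print(allocation)
```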
We obtain our reduction from revenue to welfare optimization via two algorithmic results on reduced form auctions in settings with arbitrary feasibility and demand constraints. First, we provide a separation oracle for determining feasibility of a reduced form auction. Second, we provide a geometric algorithm to decompose any feasible reduced form into a distribution over virtual VCG allocation rules. In addition, we show how to approximately execute both algorithms computationally efficiently given only black box access to an implementation of the VCG allocation rule, providing two fully polynomial-time randomized approximation schemes (FPRASs). With high probability, the separation oracle is correct on all points that are eps-away from the boundary of the set of feasible reduced forms (in the infinity-norm), and the decomposition algorithm returns a distribution over virtual VCG allocation rules whose reduced form is within eps (in the infinity-norm) of any given feasible reduced form that is eps-away from the boundary.
Our mechanisms run in time polynomial in the number of bidder types and not type profiles. This running time is always polynomial in the number of bidders, and scales with the cardinality of the support of each bidder’s value distribution. The running time can be improved to polynomial in both the number of bidders and number of items in item-symmetric settings by making use of results from [Daskalakis-Weinberg 12].
Joint work with Yang Cai and Costis Daskalakis.
4/25/2012 Or Meir (Stanford)
Combinatorial Construction of Locally Testable Codes
An error correcting code is said to be locally testable if there is a test that can check whether a given string is a codeword of the code, or rather far from the code, by reading only a constant number of symbols of the string. Locally Testable Codes (LTCs) were first explicitly studied by Goldreich and Sudan (J. ACM 53(4)) and since then several constructions of LTCs were suggested.
While the best known construction of LTCs achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. We present a new and arguably simpler construction of LTCs that matches the parameters of the best known construction while not relying on either heavy algebra or PCP machinery. However, our construction is a probabilistic one.
4/18/2012 Shayan Oveis Gharan (Stanford).
Multi-way Spectral Partitioning and Higher-Order Cheeger Inequalities.
A basic fact in algebraic graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue zero in the Laplacian matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger’s inequality and its variants provide an approximate version of the latter fact; they state that a graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero.
In this talk I will show that an analogous characterization holds for higher multiplicities, i.e., there are k eigenvalues close to zero if and only if the vertex set can be partitioned into k subsets, each defining a sparse cut. Our result also provides a theoretical justification for clustering algorithms that use the bottom k eigenvectors to embed the vertices into R^k, and then apply geometric considerations to the embedding. Our techniques also yield a nearly optimal tradeoff between the expansion of sets of size n/k and the k-th smallest eigenvalue of the normalized Laplacian matrix.
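The clustering recipe this result justifies can be sketched directly; in the sketch below the planted-partition graph is a hypothetical example, and k-means stands in for the geometric step applied to the spectral embedding.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Sketch of the spectral clustering recipe the talk analyzes: embed vertices
# via the bottom k eigenvectors of the normalized Laplacian, then cluster the
# embedding geometrically. The planted-partition graph below is a
# hypothetical example.

rng = np.random.default_rng(4)
k, block = 3, 30
n = k * block
A = np.zeros((n, n))
for c in range(k):                            # k dense blocks...
    s = slice(c * block, (c + 1) * block)
    A[s, s] = rng.random((block, block)) < 0.5
A += rng.random((n, n)) < 0.01                # ...plus a few cross-edges
A = (np.triu(A, 1) > 0).astype(float)
A = A + A.T                                   # symmetric, no self-loops

d = np.maximum(A.sum(axis=1), 1e-9)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian I - D^{-1/2} A D^{-1/2}
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, :k]                       # bottom k eigenvectors -> R^k
_, labels = kmeans2(embedding, k, minit="++", seed=0)
print(np.round(vals[:k + 1], 3))              # k small eigenvalues, then a gap
print(labels)
```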
Based on a joint work with James R. Lee, and Luca Trevisan.
4/11/2012 Jan Vondrak (IBM Almaden).
Hardness of randomized truthful mechanisms for combinatorial auctions.
The problem of combinatorial auctions is one of the basic questions in algorithmic mechanism design: how can we allocate m items to n agents with private valuations of different combinations of items, so that the agents are motivated to reveal their true valuations and the outcome is (approximately) optimal in terms of social welfare? While approximation algorithms exist for several non-trivial classes of valuations, they typically do not motivate agents to report truthfully. The classical VCG mechanism, although truthful, is not computationally efficient. Thus the main question is whether the requirements of truthfulness and computational efficiency can be combined, or whether they are incompatible.
We identify a class of explicit (succinctly represented) submodular valuations, for which it is known that combinatorial auctions without the requirement of truthfulness admit a (1-1/e)-approximation; however, we prove that unless NP \subset P/poly, there is no truthful (even truthful-in-expectation) mechanism for this class providing approximation better than polynomial in the number of agents. (Previous work by Dobzinski already ruled out deterministic and universally truthful mechanisms for submodular valuations given by a value oracle.)
Joint work with Shaddin Dughmi and Shahar Dobzinski.
4/4/2012 No Seminar.
3/28/2012 1:30-2:30 Justin Thaler (Harvard)
Practical Verified Computation with Streaming Interactive Proofs
A potential problem in outsourcing work to commercial cloud computing services is trust. If we store a large data set with a service provider, and ask them to perform a computation on that data set — for example, to compute the eigenvalues of a large graph, or to compute a linear program on a large matrix derived from a database — how can we know the computation was performed correctly? Obviously we don’t want to compute the result ourselves, and we might not even be able to store all the data locally. This leads to new problems in the streaming paradigm: we consider a streaming algorithm (modeling a user with limited memory and computational resources) that can be assisted by a powerful helper (the service provider). The goal of the service provider is to not only provide the user with an answer, but to convince the user that the answer is correct.
In this talk, I will give a unified overview of a recent line of work exploring the application of proof systems to problems that are streaming in nature. In all of these protocols, an honest service provider can always convince the data owner that the answer is correct, while a dishonest prover will be caught with high probability. The protocols I will discuss utilize and extend powerful ideas from communication complexity and the theory of interactive proofs, and I will argue that many are highly practical, achieving millions of updates per second and requiring little space and communication.
Joint work with Amit Chakrabarti, Graham Cormode, Andrew McGregor, Michael Mitzenmacher, and Ke Yi
3/21/2012 1:30-2:30 Mehrdad Nojoumian (University of Waterloo)
Secret Sharing Based on the Social Behaviors of Players
Initially, a mathematical model for “trust computation” in social networks is provided. Subsequently, the notion of a “social secret sharing scheme” is introduced in which shares are allocated based on a player’s reputation and the way she interacts with other parties. In other words, this scheme renews shares at each cycle without changing the secret, and allows trusted parties to gain more authority.
Finally, a novel “socio-rational secret sharing” scheme is proposed in which rational foresighted players have long-term interactions in a social context, i.e., players run secret sharing while founding and sustaining a trust network. To motivate this, consider a repeated game such as sealed-bid auctions. If we assume each party has a reputation value, we can then penalize (or reward) players who are selfish (or unselfish) from game to game. This social reinforcement stimulates players to be cooperative.
3/14/2012 1:30-2:30 Venkatesan Guruswami (Carnegie Mellon University)
Lasserre hierarchy, higher eigenvalues, and graph partitioning
Partitioning the vertices of a graph into two (roughly) equal parts to minimize the weight of edges cut is a fundamental optimization problem, arising in diverse applications. Despite intense research, there remains a huge gap in our understanding of the approximability of these problems: the best algorithms achieve a super-constant approximation factor, whereas even a factor 1.1 approximation is not known to be NP-hard.
We describe an approximation scheme for various graph partitioning problems such as sparsest cut, minimum bisection, and small set expansion. Specifically, we give an algorithm running in time $n^{O_\epsilon(r)}$ with approximation ratio $(1+\epsilon)/\min(1,\lambda_r)$, where $\lambda_r$ is the r’th smallest eigenvalue of the normalized graph Laplacian matrix. This perhaps indicates why even showing very weak hardness for these problems has been elusive, since the reduction must produce hard instances with slowly growing spectra.
Our algorithm is based on a rounding procedure for semidefinite programming relaxations from a strong class called the Lasserre hierarchy. The analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix.
Our methods apply more broadly to optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes other notorious problems such as Unique Games, which we again show to be easy when the normalized Laplacian doesn’t have too many small eigenvalues.
Joint work with Ali Kemal Sinop.
3/7/2012 – Canceled (TechFest)
2/29/2012 1:30-2:30 Parikshit Gopalan (MSR-SVC)
The Short Code
The Long Code plays a crucial role in our understanding of the approximability of NP-hard problems. True to its name, however, it is rather long. We construct a shorter code which enjoys many of the desirable properties of the Long Code and can replace it in some scenarios.
The Short Code is derived from an explicit construction of a graph with several large eigenvalues which is nevertheless a good small-set expander. This answers a question raised by Arora, Barak and Steurer. We present a general recipe for constructing small-set expanders from certain locally testable codes.
Joint work with Boaz Barak, Johan Hastad, Raghu Meka, Prasad Raghavendra and David Steurer.
2/22/2012 1:30-2:30 Aleksander Madry (MSR New-England)
Online Algorithms and the K-server Conjecture
Traditionally, in the problems considered in optimization, one produces the solution only after the whole input is made available. However, in many real-world scenarios the input is revealed gradually, and one needs to make irrevocable decisions along the way while having only partial information on the whole input. This motivates us to develop models that allow us to address such scenarios.
In this talk, I will consider one of the most popular approaches to dealing with uncertainty in optimization: the online model and competitive analysis; and focus on a central problem in this area: the k-server problem. This problem captures many online scenarios – in particular, the widely studied caching problem – and is considered by many to be the “holy grail” problem of the field.
I will present a new randomized algorithm for the k-server problem that is the first online algorithm for this problem that achieves polylogarithmic competitiveness.
Based on joint work with Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor.
2/15/2012 1:30-2:30 Dorothea Wagner (Karlsruhe Institute of Technology)
Algorithm Engineering for Graph Clustering
Graph clustering has become a central tool for the analysis of networks in general, with applications ranging from the field of social sciences to biology and to the growing field of complex systems. The general aim of graph clustering is to identify dense groups in networks. Countless formalizations thereof exist, among them the widespread measure modularity. However, the overwhelming majority of algorithms for graph clustering rely on heuristics, e.g., for some NP-hard optimization problem, and do not allow for any structural guarantee on their output. Moreover, most networks in the real world are not static but evolve over time, and so do their group structures.
The lecture will focus on algorithmic aspects of graph clustering, especially on quality measures and algorithms that are based on the intuition of identifying as clusters dense subgraphs that are loosely connected among one another. We will discuss different quality measures, in particular the quality index modularity, and present an algorithm engineering approach for modularity maximization and related problems.
2/9/2012 10:30-11:30 Michael Kapralov (Stanford)
Algorithms for Bipartite Matching Problems with Connections to Sparsification and Streaming
The need to process massive modern data sets necessitates rethinking some classical algorithmic solutions from the point of view of modern data processing architectures. Over the past years, sparsification has emerged as an important primitive in the algorithmic toolkit for graph algorithms that allows one to obtain a small-space representation that approximately preserves some useful properties of the graph. This talk is centered around two topics. First, we give new algorithms for some bipartite matching problems, which use both sparsification and random walks in novel ways. Second, we give efficient algorithms for constructing sparsifiers on modern computing platforms.
In the first part of the talk we consider the problem of finding perfect matchings in regular bipartite graphs, a classical problem with applications to edge-colorings, routing and scheduling. A sequence of improvements over the years has culminated in a linear-time algorithm. We use both sparsification and random walks to obtain efficient sublinear-time algorithms for the problem. In particular, we give an algorithm that recovers a perfect matching in O(n log n) time, where n is the number of vertices in the graph, when the graph is given in adjacency array representation. The runtime is within O(log n) of the output complexity, essentially closing the problem. Our approach also yields extremely efficient and easy-to-implement algorithms for edge-coloring bipartite multigraphs and computing the Birkhoff-von Neumann decomposition of a doubly stochastic matrix.
In the second part of the talk we describe an efficient algorithm for single pass graph sparsification in distributed stream processing systems such as Twitter’s recently introduced Storm. We also present a novel approach to obtaining spectral sparsifiers, based on a new notion of distance between nodes in a graph related to shortest path distance on random samples of the graph.
Finally, in the last part of the talk we introduce and study a notion of sparsification relevant to matching problems in general graphs, and show applications to the problem of approximating maximum matchings in a single pass in the streaming model.
1/26/2012 10:30-11:30 Gregory Valiant (UC Berkeley)
Algorithmic Solutions to Some Statistical Questions
I will discuss three classical statistical problems for which the computational perspective unlocks insights into the fundamental nature of these tasks, and suggests new approaches to coping with the increasing size of real-world datasets.
The first problem is recovering the parameters of a mixture of Gaussian distributions. Given data drawn from a single Gaussian distribution, the sample mean and covariance of the data trivially yield good estimates of the parameters of the true distribution; if, however, some of the data points are drawn according to one Gaussian, and the rest of the data points are drawn according to a different Gaussian, how can one recover the parameters of each Gaussian component? This problem was first proposed by Pearson in the 1890’s, and, in the last decade, was revisited by computer scientists. In a pair of papers with Adam Kalai and Ankur Moitra, we established that both the sample complexity, and computational complexity of this problem are polynomial in the relevant parameters (the dimension, and the inverse of the desired accuracy).
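For contrast, the standard EM heuristic for the one-dimensional two-component case is easy to state; the provable algorithms in these papers are instead based on a variant of the method of moments, and the data below are a hypothetical example.

```python
import numpy as np

# Standard EM heuristic for a 1-D mixture of two Gaussians, shown only as a
# point of contrast: unlike the method-of-moments approach from the talk, EM
# carries no global guarantee. The data below are a hypothetical example.

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])

pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point came from component 0
    p0 = pi * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p0 / (p0 + p1)
    # M-step: re-estimate mixing weight, means, and standard deviations
    pi = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=r),
                              np.average((x - mu[1]) ** 2, weights=1 - r)]))
print(pi, mu, sigma)   # should approach the true parameters (0.5, [-2, 3], [1, 0.5])
```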
The second problem, investigated in a series of papers with Paul Valiant, considers the tasks of estimating a broad class of statistical properties, which includes entropy, L_k distances between pairs of distributions, and support size. There are several implications of our results, including resolving the sample complexity of the “distinct elements problem” (i.e. given a data table with n rows, how many random rows must one query to accurately estimate the number of distinct rows?). We show that on the order of n/log n rows is both necessary and sufficient, improving significantly on both the prior upper and lower bounds for this problem.
Finally, I’ll describe some new bounds for the problem of learning noisy juntas (and parities). Roughly, this problem captures the task of determining the “relevant” variables: for example, given a large table with columns representing the expression of many different genes, and one final column representing the incidence of some medical condition, how can one efficiently find the (possibly small) subset of genes that is relevant to predicting the condition?
1/25/2012 1:30-2:30 Virginia Vassilevska Williams (UC Berkeley)
Multiplying Matrices Faster than Coppersmith-Winograd
In 1987 Coppersmith and Winograd presented an algorithm to multiply two n by n matrices using O(n^{2.3755}) arithmetic operations. This algorithm has remained the theoretically fastest approach for matrix multiplication for 24 years. We have recently been able to design an algorithm that multiplies n by n matrices and uses at most O(n^{2.3727}) arithmetic operations, thus improving the Coppersmith-Winograd running time.
The improvement is based on a recursive application of the original Coppersmith-Winograd construction, together with a general theorem that reduces the analysis of the algorithm running time to solving a nonlinear constraint program. The final analysis is then done by numerically solving this program. To fully optimize the running time we utilize an idea from independent work by Stothers who claimed an O(n^{2.3737}) runtime in his Ph.D. thesis.
The aim of the talk will be to give some intuition and to highlight the main new ideas needed to obtain the improvement.
1/24/2012 10:30-11:30 Roy Schwartz (Technion Israel Institute of Technology)
Submodular Maximization
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. In addition to the fact that it is common for utility functions in economics and algorithmic game theory to be submodular, such functions also play a major role in combinatorics, graph theory and combinatorial optimization. A partial list of well known problems captured by submodular maximization includes: Max-Cut, Max-DiCut, Max-k-Cover, Generalized-Assignment, several variants of Max-SAT and some welfare and scheduling problems.
Classical works on submodular maximization problems are mostly combinatorial in nature.
Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation for the submodular problem at hand. A simple and elegant method, called “continuous greedy”, successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. We present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for various applications. Some notable immediate implications are information-theoretic tight approximations for Submodular Max-SAT and Submodular-Welfare with k players, for any number of players k, and an improved (1/e)-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints.
We show that continuous methods can be further used to obtain improved results in other settings. Perhaps the most basic submodular maximization problem is the problem of Unconstrained Submodular Maximization, which captures some well studied problems, such as: Max-Cut, Max-DiCut, and some variants of maximum facility location and Max-SAT. Exploiting some symmetry properties of the problem, we present a simple information-theoretic tight (1/2)-approximation algorithm, which unlike previous known algorithms keeps a fractional inner state, i.e., it is based on a continuous approach. We note that our algorithm can be further simplified to obtain a purely combinatorial algorithm which runs only in linear time.
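The linear-time combinatorial algorithm alluded to above is, presumably, the randomized “double greedy”; a minimal sketch, using the cut function of a small hypothetical graph as the value oracle:

```python
import random

# Sketch of the randomized "double greedy" for unconstrained submodular
# maximization (presumably the linear-time combinatorial algorithm alluded to
# above). f is accessed as a value oracle; the cut function of a small
# hypothetical graph serves as the example objective.

random.seed(0)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
ground = range(5)

def f(S):
    """Cut function: edges with exactly one endpoint in S (non-monotone submodular)."""
    return sum((u in S) != (v in S) for u, v in edges)

X, Y = set(), set(ground)          # X grows from empty, Y shrinks from everything
for u in ground:
    a = f(X | {u}) - f(X)          # marginal gain of adding u to X
    b = f(Y - {u}) - f(Y)          # marginal gain of removing u from Y
    a, b = max(a, 0), max(b, 0)
    if a + b == 0 or random.random() < a / (a + b):
        X.add(u)                   # keep u with probability proportional to a
    else:
        Y.discard(u)               # drop u with probability proportional to b
print(X, f(X))                     # X == Y at the end; E[f(X)] >= OPT / 2
```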
1/18/2012 1:30-2:30 Paul Ohm (University of Colorado Law School)
Computer Science and the Law
Today more than ever, computer science shapes law and law shapes computer science. Computer scientists pave the way for new computing and networking platforms and applications that then set the stage for new forms of collaboration and conflict. These create opportunities for lawyers and lawmakers to respond. The relationship works in the reverse order, too, when lawyers and lawmakers enact laws in ways that constrain or (less often) expand what computer scientists are allowed to do. This interdisciplinary interplay has taken place in many different specific conflicts, from network neutrality to the crypto wars and from battles over copyright to information privacy, but there is value in looking at these more generally, examining the messy but necessary relationship between computer science and law (and policy too).
In this talk, Paul Ohm, a law professor at the University of Colorado, will examine the relationship between computer science and law and policy. He will draw from his experiences as an undergraduate computer science major, professional systems administrator, Department of Justice computer crimes prosecutor, and leading scholar in cyberlaw and information privacy, to talk about whether and how computer science can influence law and policy and vice versa. He will draw from specific examples including debates over anonymization, domain name seizure, and deep packet inspection. This is intended to be a dynamic and interactive session, one in which the audience will help shape the direction of discussion.
1/12/2012 10:30-11:30 Shiri Chechik (Weizmann Institute of Science)
Compact Routing with Failures
Routing is one of the most fundamental problems in distributed networks. A routing scheme is a mechanism that allows delivering messages between the nodes of the graph.
In this talk I’ll discuss a natural extension of compact routing schemes – routing in the presence of failures. Suppose that some of the links crash from time to time and it is still required to deliver messages between the nodes of the graph, if possible. I’ll present a compact routing scheme capable of handling multiple edge failures.
1/11/2012 1:30-2:30 Balasubramanian Sivan (University of Wisconsin-Madison)
Bayesian Multi-Parameter Scheduling
We study the makespan minimization problem with unrelated selfish machines. Specifically, our goal is to schedule a given set of jobs on n machines; the machines are strategic agents who hold as private information the run-time of each job on them. The goal is to design a mechanism that minimizes the makespan — the time taken for the last job to complete. In the strategic setting, strong impossibility results show that no truthful (anonymous) mechanism can achieve better than a factor-n approximation. We show that under mild Bayesian assumptions it is possible to circumvent such negative results and obtain a constant approximation.
Joint work with Shuchi Chawla, Jason Hartline and David Malec.
1/10/2012 10:30-11:30 Christian Sommer (MIT)
Exact and Approximate Shortest-Path Queries
We discuss the problem of efficiently computing a shortest path between two nodes of a network — a problem with numerous applications. The shortest-path query problem occurs in particular in transportation (route planning and navigation, as well as logistics and traffic simulations), in packet routing, in social networks, and in many other scenarios. Furthermore, shortest-path problems occur as subproblems in various optimization problems.
Strategies for computing answers to shortest-path queries may involve the use of pre-computed data structures (also called distance oracles) in order to improve the query time. Designing a shortest-path-query processing method raises questions such as: How can these data structures be computed efficiently? What amount of storage is necessary? How much improvement of the query time is possible? How good is the approximation quality (also termed stretch) of the query result? And, in particular, what are the tradeoffs between pre-computation time, storage, query time, and approximation quality?
The talk provides answers to these questions for static networks. In particular, we consider the tradeoff between storage and query time, both from a theoretical and from an experimental perspective. We focus on two application scenarios: First, we discuss shortest-path query methods for planar graphs, motivated by route planning in road networks. Second, we discuss distance oracles and shortest-path query methods for complex networks, motivated by small-world phenomena in social networks and Internet routing. We also outline which methods and techniques can or cannot be extended to more general networks.
Joint work with Takuya Akiba, Wei Chen, Ken-ichi Kawarabayashi, Philip Klein, Shay Mozes, Shang-Hua Teng, Mikkel Thorup, Elad Verbin, Yajun Wang, Wei Yu, and others.
12/14/2011 1:30-2:30 Dan Spielman (Yale)
Algorithms, Graph Theory and the Solution of Laplacian Linear Equations
We survey several fascinating concepts and algorithms in graph theory that arise in the design of fast algorithms for solving linear equations in the Laplacian matrices of graphs. We will begin by explaining why linear equations in these matrices are so interesting.
The problem of solving linear equations in these matrices motivates a new notion of what it means for one graph to approximate another. This leads to a problem of graph sparsification–the approximation of a graph by a sparser graph. Our algorithms for solving Laplacian linear equations will exploit surprisingly strong approximations of graphs by sparse graphs, and even by trees.
We will survey the roles that spectral graph theory, random matrix theory, graph sparsification, low-stretch spanning trees and local clustering algorithms play in the design of fast algorithms for solving Laplacian linear equations.
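As a baseline, a Laplacian system can already be solved with off-the-shelf conjugate gradient; the fast solvers surveyed in the talk replace generic iteration with graph-theoretic preconditioners such as sparsifiers and low-stretch trees. A sketch on a hypothetical cycle graph:

```python
import numpy as np
from scipy.sparse.linalg import cg

# Sketch: solve a Laplacian system L x = b with plain conjugate gradient.
# The cycle graph below is a hypothetical example. L is singular (constant
# vectors span its kernel), so b must sum to zero and the solution is
# determined only up to an additive constant.

n = 100
edges = [(i, (i + 1) % n) for i in range(n)]      # cycle graph
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

b = np.zeros(n)
b[0], b[n // 2] = 1.0, -1.0                       # unit demand, sums to zero
x, info = cg(L, b, atol=1e-10)
print(info, np.linalg.norm(L @ x - b))            # info == 0 means converged
```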
12/13/2011 1:30-2:30 Dan Spielman (Yale)
Fitting a Graph to Vectors
We ask “What is the right graph to fit to a set of vectors?”
We would like to associate one vertex with each vector, and choose the edges in a natural way.
We propose one solution that provides good answers to standard Machine Learning problems such as classification and regression, that has interesting combinatorial properties, and that we can compute efficiently.
Joint work with Jonathan Kelner and Samuel Daitch.
12/7/2011 1:30-2:30 Salil Vadhan (Harvard)
Computational Entropy
Shannon’s notion of entropy measures the amount of “randomness” in a process. However, to an algorithm with bounded resources, the amount of randomness can appear to be very different from the Shannon entropy. Indeed, various measures of “computational entropy” have been very useful in computational complexity and the foundations of cryptography.
In this talk, I will describe two new measures of computational entropy (“next-bit pseudoentropy” and “inaccessible entropy”) that have enabled much simpler and more efficient constructions of cryptographic primitives from one-way functions. In particular, I will present ideas underlying a construction of pseudorandom generators of seed length O(n^3) from a one-way function on n bits, improving the seed length of O(n^8) in the classic construction of Hastad, Impagliazzo, Levin, and Luby.
Joint works with Iftach Haitner, Thomas Holenstein, Omer Reingold, Hoeteck Wee, and Colin Jia Zheng.
12/6/2011 1:30-2:30 Aleksandar Nikolov (Rutgers)
Optimal Private Halfspace Counting via Discrepancy
In the range counting problem we are given a set $P$ of $n$ points in $d$-dimensional Euclidean space, an integer weight $x_p$ for each point $p$ in $P$, and a collection ${\cal R}$ of ranges, i.e. subsets of $P$. Given a query range, the task is to output the sum of weights of the points belonging to that range. Range counting is a fundamental problem in computational geometry.
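As background, the simplest differentially private answer to a single counting query adds Laplace noise calibrated to the query's sensitivity; this gives pure epsilon-differential privacy for one query, and is much weaker than answering all halfspace queries as in the results below. A sketch with hypothetical points, weights, and query:

```python
import numpy as np

# Background sketch: the basic Laplace mechanism for one counting query.
# Changing one point's weight by 1 changes the count by at most 1
# (sensitivity 1), so Laplace noise of scale 1/epsilon yields
# epsilon-differential privacy for this single query. The algorithm in the
# talk answers ALL halfspace queries far more accurately than repeating this
# naively. Points, weights, and the query halfspace are hypothetical examples.

rng = np.random.default_rng(6)
d, n, epsilon = 2, 1000, 0.5
P = rng.normal(size=(n, d))                  # point set
w = np.ones(n)                               # integer weights x_p
a, t = np.array([1.0, -0.5]), 0.2            # halfspace {p : <a, p> >= t}

true_count = w[(P @ a) >= t].sum()
private_count = true_count + rng.laplace(scale=1.0 / epsilon)
print(true_count, private_count)
```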
We study $(\epsilon, \delta)$-differentially private algorithms for range counting. Our main results are for range spaces given by halfspaces, i.e. the halfspace counting problem. We present an $(\epsilon, \delta)$-differentially private algorithm for halfspace counting in $d$ dimensions which achieves $O(n^{1-1/d})$ average squared error. We also show a matching lower bound of $\Omega(n^{1-1/d})$ for any $(\epsilon, \delta)$-differentially private algorithm.
Both bounds are obtained using discrepancy theory. Our lower bound approach also yields a lower bound of $\Omega((\log n)^{d-1})$ average squared error for $(\epsilon, \delta)$-differentially private orthogonal range counting, the first known lower bound for this problem. Our upper bound methods yield $(\epsilon, \delta)$-differentially private algorithms for range counting with polynomially bounded shatter function range spaces.
Joint work with S. Muthukrishnan.
11/30/2011 1:30-2:30 Cynthia Dwork (MSR Silicon Valley)
Lipschitz Mappings, Differential Privacy, and Fairness Through Awareness
We study fairness in classification, where individuals are classified, e.g., admitted (or not) to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). We present a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly; and (3) an adaptation for achieving the complementary goal of “fair affirmative action,” which guarantees statistical parity (the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Our approach handles arbitrary classifiers, with arbitrary utilities. We also establish a connection to differential privacy, where similar databases give rise to similar output distributions.
Joint work with Moritz Hardt, Toni Pitassi, Omer Reingold, and Richard Zemel.
11/16/2011 1:30-2:30 Renato Paes Leme (Cornell)
Polyhedral Clinching Auctions and the Adwords Polytope
A central problem in mechanism design is how to deal with budget-constrained agents; the goal is to produce auctions that are incentive compatible, individually rational, budget feasible and Pareto-optimal. Variations of Ausubel’s clinching auction have produced successful results for multi-unit auctions (Dobzinski et al.) and certain matching markets (Fiat et al.). In this paper, we extend Ausubel’s clinching auction to the setting where the allocation is a general polymatroid given to us by a separation oracle. We also show that the clinching step can be calculated efficiently using submodular minimization. Moreover, we show that polymatroids are the most general setting in which clinching auctions can be applied. Many settings of interest in sponsored search can be expressed as polymatroids. In particular, we define the AdWords polytope, which can be of independent interest.
Furthermore, for the case where only one player is budget constrained, we define an auction for any generic environment satisfying incentive compatibility, individual rationality, budget feasibility and Pareto optimality. The technique here involves approximating a convex set by smooth surfaces, designing an auction for the smooth environments, and taking its limit.
For the case of two players with budget constraints and a general polyhedral (but not polymatroidal) environment, we show an impossibility result. As a byproduct, we get an impossibility result for multi-unit auctions with decreasing marginal utilities. It was conjectured by Ausubel that a variation of the clinching auction with budgets would work for this setting; this conjecture was later reinforced by Dobzinski, Lavi and Nisan. We resolve the conjecture negatively, via a characterization of Pareto-optimal truthful auctions for polyhedral environments.
This is a joint work with Vahab Mirrokni and Gagan Goel.
11/9/2011 1:30-2:30 Moshe Babaioff (MSR Silicon Valley)
Peaches, Lemons, and Cookies: Designing Auction Markets with Dispersed Information
This paper studies the role of information asymmetries in second price, common value auctions. Motivated by information structures that arise commonly in applications such as online advertising, we seek to understand what types of information asymmetries lead to substantial reductions in revenue for the auctioneer. One application of our results concerns online advertising auctions in the presence of “cookies”, which allow individual advertisers to recognize advertising opportunities for users who, for example, are customers of their websites. Cookies create substantial information asymmetries both ex ante and at the interim stage, when advertisers form their beliefs. The paper proceeds by first introducing a new refinement, called “tremble robust equilibrium” (TRE), which overcomes the problem of multiplicity of equilibria in many domains of interest.
Second, we consider a special information structure, where only one bidder has access to superior information, and show that the seller’s revenue in the unique TRE is equal to the expected value of the object conditional on the lowest possible signal, no matter how unlikely it is that this signal is realized. Thus, if cookies identify especially good users, revenue may not be affected much, but if cookies can (even occasionally) be used to identify very poor users, the revenue consequences are severe. In the third part of the paper, we study the case where multiple bidders may be informed, providing additional characterizations of the impact of information structure on revenue. Finally, we consider richer market designs that ensure greater revenue for the auctioneer, for example by auctioning the right to participate in the mechanism.
This is joint work with Ittai Abraham, Susan Athey and Michael Grubb.
11/3/2011 1:30-2:30 Karthik Chandrasekaran (Georgia Tech)
Discrete entropy and the complexity of random integer programming
We consider integer feasibility of random polytopes specified by m random tangential hyperplanes to a ball of radius R centered around an arbitrary point in n-dimensional space. We show that with high probability, the random polytope defined by m=O(n) such constraints contains an integer point provided the radius R is larger than a universal constant.
For the random polytope with m constraints in n-dimensional space, where n < m < 2^O(sqrt(n)), a ball of radius about Omega(sqrt(log (2m/n))) suffices. Moreover, if the polytope contains a ball of radius Omega(log (2m/n)), then we can find an integer solution with high probability (over the input) in randomized polynomial time. Our work provides a connection between integer programming and discrepancy of set systems – in particular, we use the entropy technique from Spencer’s classical result on discrepancy of set systems and build on Bansal’s recent algorithm for finding low-discrepancy solutions efficiently.
This is joint work with Santosh Vempala.
10/28/2011 1:30-2:30 Mihai Patrascu (AT&T Labs Research)
Hashing for Linear Probing
Hash tables are ubiquitous data structures solving the dictionary problem, and they often show up in inner loops, making performance critical.
A hash table algorithm relies crucially on a hash function, which quasirandomly maps a large domain (the input keys) to a small domain (the memory space available). Many “hash tables” (in effect, algorithms for dealing with collisions) have been proposed, the best known being collision chaining, linear probing and cuckoo hashing.
Among these, linear probing is ideally suited for modern computer architectures, which tend to favor linear scans. However, linear probing is quite sensitive to the quality of the hash function and, traditionally, good performance was only guaranteed by using highly independent (but slow) hash functions.
Our finding is that tabulation hashing, despite its low degree of independence, can actually guarantee very robust performance in linear probing. This function is both easy to implement and extremely fast on current hardware (faster than 2 multiplications), offering an ideal solution both in theory and in practice.
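Simple tabulation is easy to implement; a sketch of the hash function together with a linear-probing table follows (key width and table sizes are hypothetical choices):

```python
import random

# Sketch of simple tabulation hashing with linear probing. A 32-bit key is
# split into four 8-bit characters; each character indexes its own table of
# random words, and the lookups are XORed together. Key width and table sizes
# here are hypothetical choices.

random.seed(7)
TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

def tab_hash(key):
    """XOR of four random-table lookups, one per byte of the key."""
    h = 0
    for i in range(4):
        h ^= TABLES[i][(key >> (8 * i)) & 0xFF]
    return h

M = 1 << 12                       # power-of-two table size
slots = [None] * M

def insert(key):
    i = tab_hash(key) % M
    while slots[i] is not None and slots[i] != key:
        i = (i + 1) % M           # linear probing: scan to the next free slot
    slots[i] = key

def contains(key):
    i = tab_hash(key) % M
    while slots[i] is not None:
        if slots[i] == key:
            return True
        i = (i + 1) % M
    return False

for k in range(1000):
    insert(k * 2654435761 % (1 << 32))
print(contains(2654435761), contains(12345))   # inserted key found; absent key not
```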
10/20/2011 1:30-2:30 Roy Schwartz (Technion)
Continuous Methods for Submodular Maximization
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged.
We present several new techniques and algorithmic tools which are based on the continuous approach, yielding improved approximations for various applications and in some cases information-theoretic tight approximations.
1. The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation for the submodular problem at hand. A simple and elegant method, called “continuous greedy”, successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. We present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for various applications. Some notable immediate implications are information-theoretic tight approximations for Submodular Max-SAT and Submodular-Welfare with $k$ players, for {\em any} number of players $k$, and an improved $1/e$-approximation for maximizing a non-monotone submodular function subject to a matroid or $O(1)$-knapsack constraints.
2. Consider the Unconstrained Submodular Maximization problem in which we are given a non-negative (and possibly non-monotone) submodular function $f$ over a domain $N$, and the objective is to find a subset $S\subseteq N$ maximizing $f(S)$. This is considered one of the basic submodular optimization problems, generalizing well known problems such as Max-CUT, Max-DICUT and variants of maximum facility location and Max-SAT. We present the first rounding based algorithm for this problem, providing an improved approximation of roughly $0.45$ (a hardness of $0.5+\varepsilon$ for every constant $\varepsilon >0$ was given by Feige et al.). To achieve this goal we present a new algorithmic tool, which given a suboptimal solution $S$ whose value is small in comparison to $OPT$ but is “structurally” similar to $OPT$, improves the value of $S$ based solely on this information. When two coupled executions of the unified continuous greedy algorithm are carefully combined with the above algorithmic tool, one can achieve the stated approximation of $0.45$.
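For readers new to the area, the continuous greedy of item 1 generalizes the classical discrete greedy for monotone submodular maximization under a cardinality constraint (the 1-1/e baseline). A minimal C sketch of that baseline, with the value oracle supplied by the caller as a hypothetical function pointer:

#include <string.h>

/* Greedy for max f(S) s.t. |S| <= k over ground set {0,...,n-1}.
   f is a monotone submodular value oracle on indicator vectors.
   Each round adds the element with the largest marginal gain. */
void greedy(int n, int k, double (*f)(const int *s, int n), int *S) {
    memset(S, 0, n * sizeof(int));
    double cur = f(S, n);
    for (int t = 0; t < k; t++) {
        int best = -1;
        double best_val = cur;
        for (int e = 0; e < n; e++) {
            if (S[e]) continue;
            S[e] = 1;                 /* try adding e        */
            double v = f(S, n);
            S[e] = 0;                 /* undo the trial add  */
            if (v > best_val) { best_val = v; best = e; }
        }
        if (best < 0) break;          /* no improving element */
        S[best] = 1;
        cur = best_val;
    }
}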
Joint work with Moran Feldman and Seffi Naor.
10/19/2011 1:30-2:30 Aranyak Mehta (Google)
Online Matching and Ad Allocation
The spectacular success of search and display advertising and its huge growth potential has attracted the attention of researchers from many aspects of computer science. A core problem in this area is that of Ad allocation, an inherently algorithmic and game theoretic question – that of matching ad slots to advertisers, online, under demand and supply constraints. Put very simply: the better the matching, the more efficient the market.
The seminal algorithmic work on online matching, by Karp, Vazirani and Vazirani, was done over two decades ago, well before this motivation existed. In this talk, I will present an overview of several key algorithmic papers in this area, from its purely academic beginnings to the general problem motivated by the application. The theory behind these algorithms involves new combinatorial, probabilistic and linear programming techniques. Besides the analytical results, I will also touch upon how these algorithmic ideas can be applied in practice.
10/11/2011 1:30-2:30 Lorenzo Orecchia (UC Berkeley)
A \tilde{O}(m)-time Spectral Algorithm for Balanced Cut
In the Balanced Cut problem, on input an undirected graph G with m edges, a balance parameter b and a target conductance value \gamma, we are asked to decide whether G has any b-balanced cut of conductance at most \gamma.
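For reference, with the usual conventions (exact normalizations vary slightly across papers), the two notions in the problem statement are: the conductance of a cut $(S, \bar{S})$ is $\phi(S) = |E(S,\bar{S})| / \min\{\mathrm{vol}(S), \mathrm{vol}(\bar{S})\}$, and a cut is b-balanced if $\min\{|S|, |\bar{S}|\} \ge b \cdot n$.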
Approximation algorithms for Balanced Cut are a fundamental primitive in the design of recursive graph algorithms for a variety of combinatorial and numerical problems. In many of these practical applications, it is of crucial importance for the underlying Balanced-Cut algorithm to run in time as close to linear as possible and have an efficient implementation. For this reason, researchers have focused on spectral methods that guarantee speed and ease of implementation, albeit at the loss of approximation quality on some graphs.
The simplest spectral algorithm for this problem computes the graph’s slowest-mixing eigenvector, removes any unbalanced low-conductance cut it finds and recurses on the rest of the graph. This algorithm achieves an approximation guarantee that is asymptotically optimal for spectral algorithms, but unfortunately, it may need \Omega(n) recursive calls and runs in worst-case quadratic time.
In recent work with Nisheeth Vishnoi and Sushant Sachdeva, we give a spectral algorithm that achieves the same approximation guarantee, by designing a more principled recursion that allows our algorithm to remove unbalanced sparse cuts in O(\log n) stages and to run in time \tilde{O}(m). The main idea behind this improvement is the use of certain random walks as a regularized, more robust analogue of the graph’s slowest-mixing eigenvector. Our second novel contribution is a method to compute the required random-walk vectors in time \tilde{O}(m) that combines ideas in Approximation theory, Lanczos methods, and the linear equation solver by Spielman and Teng.
Our algorithm is thus the first spectral algorithm for Balanced Cut that runs in time \tilde{O}(m) and achieves the asymptotically optimal approximation guarantee for spectral methods, i.e. is able to distinguish between conductance \gamma and \Omega(\sqrt{\gamma}).
10/5/2011 1:30-2:30 Arnab Bhattacharyya (Princeton)
Tight lower bounds for 2-query locally correctable codes over finite fields
A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes) which have received a lot of attention recently due to their many applications and surprising constructions.
In this work, we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form p^Ω(δd) on the length of linear 2-query LCCs over F_p, that encode messages of length d. Our bound improves over the known bound of 2^Ω(δd) which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in Theoretical Computer Science.
We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields and the second is a new analog of Beck’s theorem over finite fields.
Joint work with Zeev Dvir, Shubhangi Saraf, and Amir Shpilka.
9/7/2011 1:30-2:30 Aditya Bhaskara (Princeton)
Delocalization Properties of Eigenvectors of Random Graphs
A lot is known about the spectrum and the distribution of eigenvalues of G(n,p) random graphs. However, many elementary questions regarding the eigenvectors remain open: for instance, do they ‘behave’ like random vectors in $\mathbb{R}^n$? In particular, can we say that each coordinate (in the normalized eigenvector) is at most $C/\sqrt{n}$ in magnitude with high probability? Does an eigenvector have ‘significant’ mass on a $\delta n$ fraction of the coordinates w.h.p.?
We use recent techniques due to Erdos, Schlein and Yau, and Tao and Vu to answer questions of this nature. Using these results, we answer an open question of Dekel, Lee and Linial on the number of ‘nodal domains’ in the eigenvectors of G(n,p) random graphs, when $p$ is ‘reasonably large’ (earlier such results were not known even when $p=1/2$).
This is joint work with Sanjeev Arora.
8/31/2011 1:30-2:30 Shahar Dobzinski (Cornell)
(Approximating) Optimal Auctions with Correlated Bidders
We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We show that one can find in poly time a deterministic revenue-maximizing auction that guarantees at least 3/5 of the revenue of the best truthful-in-expectation auction. The proof consists of three steps:
(1) Showing that the optimal truthful-in-expectation auction for a constant number of bidders can be computed in polynomial time.
(2) A “derandomization” algorithm that implements every 2-player truthful-in-expectation mechanism as a universally truthful mechanism.
(3) An analysis of three new auctions that have complementary properties.
Joint work with Hu Fu and Bobby Kleinberg.
8/24/2011 1:30-2:30 Yevgeniy Dodis (NYU)
Leftover Hash Lemma, Revisited
The famous Leftover Hash Lemma (LHL) states that (almost) universal hash functions are good randomness extractors. Despite its numerous applications, LHL-based extractors suffer from the following two drawbacks:
(1) Large Entropy Loss: to extract v bits from a distribution X of min-entropy m which are e-close to uniform, one must set v <= m - 2*log(1/e), meaning that the entropy loss L = m - v >= 2*log(1/e).
(2) Large Seed Length: the seed length n of universal hash function required by the LHL must be linear in the length of the source.
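(For reference, drawback (1) is forced by the standard quantitative form of the lemma, stated here in its textbook form rather than from this paper: for a universal hash family h with seed S, the statistical distance satisfies $\Delta((h_S(X), S), (U_v, S)) \le \frac{1}{2}\sqrt{2^{v-m}}$, so driving the distance below e forces v <= m - 2*log(1/e) up to an additive constant.)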
Quite surprisingly, we show that both limitations of the LHL — large entropy loss and large seed — can often be overcome (or, at least, mitigated) in various quite general scenarios. First, we show that entropy loss could be reduced to L=log(1/e) for the setting of deriving secret keys for most cryptographic applications, including signatures/MACs, CPA/CCA-secure encryption, etc. Specifically, the security of these schemes gracefully degrades from e to at most e + sqrt(e * 2^{-L}). (Notice that, unlike standard LHL, this bound is meaningful even for negative entropy loss, when we extract more bits than the min-entropy we have!) Second, we study the soundness of the natural *expand-then-extract* approach, where one uses a pseudorandom generator (PRG) to expand a short “input seed” S into a longer “output seed” S’, and then uses the resulting S’ as the seed required by the LHL (or, more generally, any randomness extractor). Unfortunately, we show that, in general, the expand-then-extract approach is not sound if the Decisional Diffie-Hellman assumption is true.
Despite that, we show that it is sound either: (1) when extracting a “small” (logarithmic in the security of the PRG) number of bits; or (2) in *minicrypt*. Implication (2) suggests that the expand-then-extract approach is likely secure when used with “practical” PRGs, despite lacking a reductionist proof of security!
The paper can be found at http://eprint.iacr.org/2011/088.
8/23/2011 1:30-2:30 Amos Fiat (Tel-Aviv University)
Beyond Myopic Best Response (For Cournot Competition)
A Nash Equilibrium is a joint strategy profile at which each agent myopically plays a best response to the other agents’ strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating incentive to act non-myopically, so as to take advantage of others’ responses.
To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether it wants to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs).
The key observation is that profit may actually be higher when acting to maximize revenue, (1) which will depress market prices, (2) which will reduce the production of other firms, (3) which will gain market share for the revenue maximizing firm, (4) which will, overall, increase profits for the revenue maximizing firm. Implicit in this line of thought is that one might take other firms’ responses into account when choosing a market strategy.
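To make the incentive concrete under linear demand (an illustrative textbook specification; the symbols here are not taken from the paper): if inverse demand is $P(Q) = \alpha - \beta Q$ and firm $i$ has marginal cost $c_i$, its best response to the others' total output $Q_{-i}$ is $q_i = (\alpha - \beta Q_{-i} - c_i)/(2\beta)$ when maximizing profit, and $q_i = (\alpha - \beta Q_{-i})/(2\beta)$ when maximizing revenue. A revenue maximizer thus behaves exactly like a profit maximizer with $c_i = 0$ and produces more, which is the masquerading described above.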
The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic manipulative play in markets. We study the properties of Nash Equilibria of non-myopic Cournot competition with linear demand functions and show existence of pure Nash Equilibria, that simple best response dynamics will produce such an equilibrium, and that for some natural dynamics this convergence is within linear time. This is in contrast to the well known fact that best response dynamics need not converge in the standard myopic Cournot competition.
Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic standard Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower and the firms, in total, produce more and have a lower aggregate utility.
Joint work with Elias Koutsoupias, Katrina Ligett, Yishay Mansour, and Svetlana Olonetsky.
8/17/2011 1:30-2:30 Rocco Servedio (Columbia)
Learning and Testing k-Modal Distributions
A k-modal probability distribution over the domain {1,…,N} is one whose histogram has at most k “peaks” and “valleys”. Such distributions are a natural generalization of the well-studied class of monotone increasing (or monotone decreasing) probability distributions.
We study the problem of learning an unknown k-modal distribution from samples. We also study a related problem in property testing: given access to samples from an unknown k-modal distribution p, determine whether p is identical to q or p is “far” from q. Here q is a k-modal distribution which may be given explicitly or may be available via sample access.
We give algorithms for these problems that are provably close to the best possible in terms of sample and time complexity. An interesting feature of our approach is that our learning algorithms use ingredients from property testing and vice versa.
Joint work with Costis Daskalakis and Ilias Diakonikolas.
8/10/2011 1:30-2:30 Vasilis Gkatzelis (NYU)
Inner Product Spaces for Minsum Coordination Mechanisms
In this work we study the machine scheduling problem of minimizing the weighted sum of completion times of jobs on unrelated machines. We begin by studying the selfish scheduling setting in which each job is controlled by a selfish agent who can strategically choose the machine that will process its job, aiming to minimize its completion time. In order to reduce the inefficiency caused by selfishness, we use local scheduling policies (coordination mechanisms) and analyze their approximation ratio at equilibrium points. Finally, using the intuition acquired from these coordination mechanisms, we develop a simple local search constant-factor approximation algorithm imitating best response dynamics for an appropriately chosen local scheduling policy.
Joint work with: R. Cole, J. Correa, V. Mirrokni, and N. Olver.
8/3/2011 1:30-2:30 Adel Javanmard (Stanford)
Localization from Incomplete Noisy Distance Measurements
We consider the problem of positioning a cloud of points in the Euclidean space R^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. Also, it is closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: In the noiseless case, we find a radius r0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
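As a hedged sketch of the flavor of such SDP relaxations (the paper's exact formulation may differ): with $X_{ij} = \langle x_i, x_j \rangle$ the Gram matrix of the unknown positions, one can minimize $\sum_{(i,j)} |X_{ii} + X_{jj} - 2X_{ij} - d_{ij}^2|$ over $X \succeq 0$, since $\|x_i - x_j\|^2 = X_{ii} + X_{jj} - 2X_{ij}$, and then read the positions off a rank-$d$ factorization of the optimizer.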
This is joint work with Andrea Montanari.
7/27/2011 1:30-2:30 Jon Kleinberg (Cornell)
Which Networks Are Least Susceptible to Cascading Failures?
Perhaps the most basic model of contagion in a network draws its motivation from epidemic disease: when a node is affected, it has a given probability of affecting its neighbors as well. In recent prior work with Blume, Easley, Kleinberg, and Tardos, we considered how strategic agents should form links when there is a danger that failures can spread through the resulting network according to such a process.
This type of probabilistic contagion, however, represents just a small portion of a much broader space of “threshold cascade models”. This more general class of threshold models has a simple formulation: each node v picks a threshold t(v) according to an underlying distribution, and it fails if at any point in time it has at least t(v) failed neighbors. The expressive power of this framework lies in the way it can capture a range of fundamentally different types of failure processes, including those where exposure to failed nodes increases one’s chance of failing in the future. Despite the simplicity of the formulation, however, it has been very challenging to analyze the failure processes that arise from arbitrary thresholds; even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve.
Here we consider the full space of threshold cascade models, and develop a set of new techniques for comparing arbitrary networks with respect to their failure resilience under different distributions of thresholds. We find that the space has a surprisingly rich structure: small shifts in the behavior of the thresholds can favor clustered clique-like graph structures, branching tree-like ones, or even intermediate hybrids.
Joint work with Larry Blume, David Easley, Bobby Kleinberg, and Eva Tardos (to appear at FOCS 2011).
7/20/2011 1:30-2:30 Sigal Oren (Cornell)
Mechanisms for (Mis)allocating Scientific Credit
Scientific communities confer many forms of credit — both implicit and explicit — on their successful members, and it has long been argued that the motivation provided by these forms of credit helps to shape a community’s collective attention toward different lines of research. The allocation of scientific credit, however, has also been the focus of long-documented pathologies: certain research questions are said to command too much credit, at the expense of other equally important questions; and certain researchers (in a version of Robert Merton’s Matthew Effect) seem to receive a disproportionate share of the credit, even when the contributions of others are similar.
Here we show that the presence of each of these pathologies can in fact increase the collective productivity of a community. We consider a model for the allocation of credit, in which individuals can choose among projects of varying levels of importance and difficulty, and they compete to receive credit with others who choose the same project. Under the most natural mechanism for allocating credit, in which it is divided among those who succeed at a project in proportion to the project’s importance, the resulting selection of projects by self-interested, credit-maximizing individuals will in general be socially sub-optimal. However, we show that there exist ways of allocating credit out of proportion to the true importance of the projects, as well as mechanisms that assign credit out of proportion to the relative contributions of the individuals, that lead credit-maximizing individuals to collectively achieve social optimality. These results therefore suggest how well-known forms of misallocation of scientific credit can in fact serve to channel self-interested behavior into socially optimal outcomes.
This is joint work with Jon Kleinberg.
7/13/2011 1:30-2:30 Elette Boyle (MIT)
Leakage-Resilient Coin Tossing
The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve STOC ’86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common unbiased coins follow from general completeness theorems on multi-party secure protocols in the secure channels model (e.g., BGW, CCD STOC ’88).
However, in the protocols with honest majority, parties must generate and hold local secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets.
In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate t ≤ (1/3 − ϵ)n computationally-unbounded Byzantine faults and, in addition, an Ω(1)-fraction leakage on each (honest) party’s secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan ’08) adapted to the distributed setting.
This is joint work with Shafi Goldwasser and Yael Tauman Kalai.
7/6/2011 1:30-2:30 Ohad Shamir (MSR New-England)
Information Trade-offs in Learning
When originally formulated in the 1980’s, the classic PAC model of learning focused on how one can learn with a small amount of high-quality data. In contrast, much of machine learning today is done on huge amounts of low-quality data. While in some sense the “total” amount of information provided remains the same, the question is how one can trade-off between the data size and the amount of information provided by any individual example. In this talk, I will discuss two concrete settings where this trade-off occurs. The first deals with learning linear predictors, where we can only obtain a few attributes from any individual example. The second deals with multi-armed-bandits problems, where one can obtain varying amounts of side-information on actions not chosen. The resulting algorithms are theoretically-grounded and completely practical.
Based on joint works with Nicolò Cesa-Bianchi, Shai Shalev-Shwartz and Shie Mannor.
6/22/2011 1:30-2:30 Huy L. Nguyen (Princeton)
Near linear lower bound for dimension reduction in L1
Given a set of n points in L1, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the L2 norm, where O((log n)/epsilon^2) dimensions suffice to achieve 1+epsilon distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in L1. A recent result shows that distortion 1+epsilon can be achieved with n/epsilon^2 dimensions. On the other hand, the only lower bounds known are that distortion delta requires n^{\Omega(1/delta^2)} dimensions and that distortion 1+epsilon requires n^{1/2-O(epsilon*log(1/epsilon))} dimensions. In this work, we show the first near linear lower bounds for dimension reduction in L1. In particular, we show that 1+epsilon distortion requires at least n^{1-O(1/log(1/epsilon))} dimensions.
Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP based proof of Brinkman-Charikar for lower bounds on dimension reduction in L1.
This is joint work with Alexandr Andoni, Moses Charikar, and Ofer Neiman.
6/15/2011 1:30-2:30 Giuseppe F. Italiano (University of Rome “Tor Vergata”)
Finding Strong Bridges and Strong Articulation Points in Linear Time
Given a directed graph $G$, an edge is a strong bridge if its removal increases the number of strongly connected components of $G$. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of $G$. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in [BFL05].
Joint work with Luigi Laura and Federico Santaroni.
6/14/2011 1:30-2:30 Robert Kleinberg (Cornell)
Network Formation in the Presence of Contagious Risk
There are a number of domains where agents must collectively form a network in the face of the following trade-off: each agent receives benefits from the direct links it forms to others, but these links expose it to the risk of being hit by a cascading failure that might spread over multi-step paths. Financial contagion, epidemic disease, and the exposure of covert organizations to discovery are all settings in which such issues have been articulated.
Here we formulate the problem in terms of strategic network formation, and provide asymptotically tight bounds on the welfare of both optimal and stable networks. We find that socially optimal networks are, in a precise sense, situated just beyond a phase transition in the behavior of the cascading failures, and that stable graphs lie slightly further beyond this phase transition, at a point where most of the available welfare has been lost. Our analysis enables us to explore such issues as the trade-offs between clustered and anonymous market structures, and it exposes a fundamental sense in which very small amounts of “over-linking” in networks with contagious risk can have strong consequences for the welfare of the participants.
Joint work with Larry Blume, David Easley, Jon Kleinberg, and Eva Tardos.
6/9/2011 2:30-3:30 Rasmus Pagh (IT University of Copenhagen)
A New Data Layout For Set Intersection on GPUs
Set intersection is at the core of a variety of problems, e.g. frequent itemset mining and sparse boolean matrix multiplication. It is well-known that large speed gains can, for some computational problems, be obtained by using a graphics processing unit (GPU) as a massively parallel computing device. However, GPUs require highly regular control flow and memory access patterns, and for this reason previous GPU methods for intersecting sets have used a simple bitmap representation. This representation requires excessive space on sparse data sets. In this paper we present a new data layout, BATMAP, that is particularly well suited for parallel processing, and is compact even for sparse data. We also describe experiments on the data structure, compared against CPU alternatives.
Joint work with Rasmus Resen Amossen.
6/1/2011 1:30-2:30 Vinod Vaikuntanathan (MSR Redmond)
New Paradigms for Efficient Fully Homomorphic Encryption
We present a fully homomorphic encryption scheme that is based solely on the standard learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the classical worst-case hardness of short vector problems on arbitrary lattices. As icing on the cake, our scheme is quite efficient, and has very short ciphertexts.
Our construction introduces a new paradigm and two new techniques for the construction of homomorphic encryption, and improves on previous works in two aspects.
1) We show that “somewhat homomorphic” encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
2) More importantly, we deviate from the “squashing paradigm” used in all previous works. We introduce a new dimension reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).
A by-product of our work is a new type of fully homomorphic identity-based encryption scheme.
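For orientation, the basic symmetric LWE encryption that such constructions start from can be sketched as follows (parameters and message encoding vary across papers, so this is illustrative only): with secret $s \in \mathbb{Z}_q^n$, a message bit $m$ is encrypted as $(a, \; b = \langle a, s \rangle + 2e + m \bmod q)$ for uniform $a$ and small noise $e$, and decrypted as $((b - \langle a, s \rangle) \bmod q) \bmod 2$. Homomorphic operations grow the noise $e$, which is what the re-linearization and dimension reduction techniques above keep in check.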
Joint work with Zvika Brakerski (Weizmann).
5/25/2011 1:30-2:30 Amin Saberi (Stanford)
A Randomized Rounding Approach to the Traveling Salesman Problem
For some positive constant c, we give a 3/2-c approximation algorithm for the following problem: given a graph G(V, E), find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem when the underlying metric is defined by shortest path distances in G. The result improves on the 3/2-approximation algorithm due to Christofides.
Joint work with M. Singh and S. Oveis Gharan.
5/18/2011 1:30-2:30 Gil Segev (MSR Silicon Valley)
Public-Key Cryptographic Primitives Provably as Secure as Subset Sum
We present a public-key encryption scheme whose security is equivalent to the hardness of solving random instances of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC ’97), Regev (STOC ’03, STOC ’05), and Peikert (STOC ’09). Our proof of security is simple and direct.
Joint work with Vadim Lyubashevsky and Adriana Palacio.
5/11/2011 1:30-2:30 Mukund Sundararajan (Google)
Axiomatic Attribution
We study the attribution problem, that is, the problem of attributing a change in the value of a characteristic function to its independent variables. We make three contributions. First, we propose a formalization of the problem based on a standard cost-sharing model. Second, in our most important technical contribution, we show that there is a unique path attribution method that satisfies Affine Scale Invariance and Anonymity if and only if the characteristic function is multilinear. We term this the Aumann-Shapley-Shubik method. Third, we study multilinear characteristic functions in detail; we describe a computationally efficient implementation of the Aumann-Shapley-Shubik method and discuss practical applications to portfolio analysis, e-commerce, and sports statistics.
5/4/2011 1:30-2:30 Paul Valiant (UC Berkeley)
Central Limit Theorems and Tight Lower Bounds for Entropy Estimation
In joint work with Gregory Valiant, we have been investigating a new approach to estimating symmetric properties of distributions, including the standard and fundamental properties of entropy and support size. In a previous talk, Greg introduced the first explicit sublinear-sample estimators for these problems, which estimate these quantities to constant additive error using O(n/log n) samples. In this independent but complementary talk, we show that this result is in fact optimal up to constant factors. The analysis makes crucial use of two new multivariate central limit theorems that appear quite natural and general. The first is proven directly via Stein’s method; the second is proven in terms of the first using a recent generalization of Newton’s inequalities. The talk will include a high level overview of these techniques, and their application both in our context and more generally.
4/27/2011 1:30-2:30 Mihai Budiu and others (MSR Silicon Valley)
DryadLINQ and Theory Challenges
4/20/2011 1:30-2:30 Gregory Valiant (UC Berkeley)
Estimating the Unseen: A Sublinear-Sample Estimator of Distributions
In joint work with Paul Valiant, we introduce a new approach to characterizing the unobserved portion of a distribution, which provides sublinear-sample additive estimators for a class of properties that includes entropy and distribution support size (the distinct elements problem). Together with our new lower bounds—which Paul will discuss in a separate talk—this settles the longstanding question of the sample complexities of these estimation problems, up to constant factors. Our algorithm estimates these properties up to an arbitrarily small additive constant, using O(n/log n) samples, where n is a bound on the support size (or in the case of estimating the support size, 1/n is a lower bound on the probability of any element of the domain). Previously, no explicit sublinear-sample algorithms for either of these problems were known.
3/30/2011 1:30-2:30 Yaron Singer (UC Berkeley)
Budget Feasible Mechanism Design
In this talk we will discuss the budget feasibility framework where the goal is to design incentive compatible mechanisms under a budget constraint on the mechanisms’ payments. Such a challenge typically arises in markets where the mechanism designer wishes to procure items or services from strategic agents.
While in many cases the budget limitation can render mechanisms with poor performance in terms of the utility of the buyer, there are broad classes of utility functions for which desirable approximation guarantees are achievable. We will present some of the positive results for submodular, subadditive and certain non-monotone utilities, and discuss computational and strategic intricacies that arise in this setting.
We will also highlight the relevance of this framework to privacy, crowdsourcing and social networks. In particular, we will show the framework’s application to the influence maximization problem in social networks and give some evidence that suggests that beyond provable guarantees, the mechanisms perform well in practice.
3/16/2011 1:30-2:30 Jason Hartline (Northwestern University)
The Theory of Crowdsourcing
Crowdsourcing contests have been popularized by the Netflix challenge and websites like TopCoder and Taskcn. What is crowdsourcing? Imagine you are designing a new web service, you have it all coded up, but the site looks bad because you haven’t got any graphic design skills. You could hire an artist to design your logo, or you could post the design task as a competition to crowdsourcing website Taskcn with a monetary reward of $100. Contestants on Taskcn would then compete to produce the best logo. You then select your favorite logo and award that contestant the $100 prize.
In this talk, I discuss the theory of crowdsourcing contests. First, I will show how to model crowdsourcing contests using auction theory. Second, I will discuss how to solve for contestant strategies. I.e., suppose you were entering such a programming contest on TopCoder, how much work should you do on your entry to optimize your gains from winning less the cost of doing the work? Finally, I will discuss inefficiency from the fact that the effort of losing contestants is wasted (e.g., every contestant has to do work to design a logo, but you only value your favorite logo). I will show that this wasted effort is at most half of the total amount of effort. A consequence is that crowdsourcing is approximately as efficient a means of procurement as conventional methods (e.g., auctions or negotiations).
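For a sense of what "solving for contestant strategies" looks like in the simplest textbook benchmark (not necessarily the exact model of the talk): in a symmetric all-pay contest where $n$ contestants have values drawn i.i.d. uniform on $[0,1]$ and the highest effort wins, the unique symmetric equilibrium effort is $b(v) = \frac{n-1}{n}\, v^{\,n}$, so a contestant's effort is sharply discounted by the chance that someone else values the prize more.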
Joint work with Shuchi Chawla and Balu Sivan.
2/23/2011 1:30-2:30 Yael Tauman Kalai (MSR New-England)
Leaky Pseudo Entropy Functions
Pseudo-random functions (PRFs), introduced by Goldreich, Goldwasser, and Micali (FOCS 1984), are functions that look truly random! These functions are associated with a short random seed s, and the guarantee is that no efficient adversary can tell the difference between getting oracle access to a random PRF function f_s, and getting oracle access to a truly random function.
The question we ask in this work is: Can we generate randomness (or even entropy) from a short random seed s that is partially leaked? Unfortunately, as deterministic extractors do not exist, leaky PRFs do not exist, even if a single bit about the secret seed s is leaked. Thus, we consider the following relaxation: Instead of requiring that for each input x, the value f_s(x) looks random, we require that it looks like it has high min-entropy, even given oracle access to f_s everywhere except at point x. We call such a function family a pseudo-entropy function (PEF) family. We construct such a leakage-resilient PEF family under standard cryptographic assumptions (such as DDH).
https://crypto.stackexchange.com/questions/80837/is-it-possible-to-construct-a-prng-where-the-output-numbers-have-a-certain-distr | # Is it possible to construct a PRNG where the output numbers have a certain distribution of hamming weights?
I am in need of a non-uniform random number generator where each n-bit output has a hamming weight with a certain binomial distribution.
For example, I would like a non-uniform PRNG which generates 32-bit outputs whose hamming weight follows a binomial distribution with n=32, p=0.1. For instance, 0xFF should be output with significantly less probability than 0x200, which in turn should have the same probability as 0x1.
Perhaps I can modify the output of a PRNG like xorshift or an LFSR to accommodate this? I thought about rejection sampling the output, but the distribution of hamming weights for a uniform PRNG does not necessarily envelope a given binomial distribution with a variable parameter p, especially when p << 0.5.
I am not concerned about the cryptographic quality of the output. However, I am working on a 8 bit microcontroller with 2 KB SRAM, so memory and speed are both my primary concern. In the most naive case, I would just generate an array of random numbers and convert each element to 0 and 1 given a threshold probability, and finally convert this resulting array of 0's and 1's to an integer. But I would really, really like to avoid this memory overhead of an n-element array.
• You don't need to store an N-element array, you can update your integer bit by bit on the fly. Since order doesn't matter you can just do this: output = (output << 1) | (1 or 0), 32 times or as many times as needed, shifting the bits in as you go. – Thomas May 20 '20 at 17:17
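A minimal C sketch of the comment's shift-register idea (illustrative only: rand() and the floating-point threshold stand in for whatever the platform provides; an 8-bit microcontroller would use its own RNG and an integer compare):

#include <stdint.h>
#include <stdlib.h>

/* 32 i.i.d. Bernoulli(p) bits, shifted in one at a time;
   no intermediate array is ever stored. */
uint32_t random_word(double p) {
    uint32_t out = 0;
    for (int i = 0; i < 32; i++) {
        int bit = ((double)rand() / ((double)RAND_MAX + 1.0)) < p;
        out = (out << 1) | (uint32_t)bit;
    }
    return out;
}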
The obvious way to do this is to generate N words, and use logical operations to combine them in a single word such that each bit of the output word is a 1 with probability approximately 0.1 (and the individual bits are uncorrelated).
In the simplest case, you could generate 3 words, and just AND them together into a single one. In C, this would be:
r1 = rand();
r2 = rand();
r3 = rand();
return r1 & r2 & r3;
This gives each bit set with probability 0.125, which is close to 0.1
If that's not quite close enough, you can get a closer approximation by using more bits; for example, r1 & r2 & r3 & ~(r4 & r5) results with bits set with probability $$3/32 = 0.09375$$
With this technique, you use $$n$$ random words to generate bits set with probability $$k 2^{-n}$$ for some integer $$k$$; this can be made arbitrarily close to 0.1.
This obviously uses minimal memory; the computation time isn't too bad (assuming your rand implementation is cheap), unless you insist on a quite good approximation to your target probability.
And, while I said 'words', your implementation would use whatever size it finds most convenient; for an 8 bit CPU, each word might be 8 bits (and you just do it 4 times to generate the required 32 bits).
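The k*2^{-n} construction generalizes mechanically: scan the bits of k from least to most significant, OR in a fresh random word for a 1 bit and AND in a fresh random word for a 0 bit. A hedged C sketch (rand() again stands in for the platform's word generator and is assumed to supply enough random bits):

#include <stdint.h>
#include <stdlib.h>

/* Each bit of the result is 1 with probability k / 2^n, using exactly
   n random words.  Invariant: after processing bit j of k, each bit of
   acc is 1 with probability (k mod 2^(j+1)) / 2^(j+1). */
uint32_t biased_word(uint32_t k, int n) {
    uint32_t acc = 0;
    for (int j = 0; j < n; j++) {
        uint32_t r = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
        acc = ((k >> j) & 1) ? (acc | r) : (acc & r);
    }
    return acc;
}

/* Example: biased_word(3, 5) sets each bit with probability 3/32,
   matching the r1 & r2 & r3 & ~(r4 & r5) construction above. */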
• An approximate probability for each Bernoulli trial is perfectly fine for my application. Interesting technique that can scale in accuracy with the number of words - Thanks! – Ollie May 20 '20 at 18:56
• @Ollie: you just need to make sure that adjacent calls to the underlying rng don't have strong bit correlations; an LFSR-based rng might, a linear congruential (state = a*state + b mod m for m odd) one would be less likely to cause problems – poncho May 20 '20 at 19:04
http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=271&p=6850 | ## Schmelzer's and Gill's mathematical nonsense
Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues
### Re: Schmelzer's and Gill's mathematical nonsense
Joy Christian wrote:In other words, anyone who is familiar with the Bell literature, as the "expert" Richard D. Gill should be, should not have any difficulty in understanding the model.
***
LOL! Well, I don't think he does understand, so for him and others you might want to consider making the time parameter explicit. For most people that know EPR it is no problem, but some people seem to be having trouble with it. Bottom line, the math doesn't happen all at once. For someone not familiar with EPR, such as a mathematician, that fact is not clear. They look at the math without taking anything else you have said in the paper into consideration and say it is nonsense.
Most likely in Gill's case he knows the truth but is ignoring it on purpose because otherwise he has no case.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
Is there a complete, numbered list of Richard Gill's criticisms of Joy Christian's paper: Local Causality in a Friedmann-Robertson-Walker Spacetime, version 4?
I note that Richard had another five or so criticisms (Ref: https://pubpeer.com/publications/06814F ... 6B37BB4EF1) on an earlier version of this paper, some of which were in connection with simulations which are still referred to in the later versions of the paper, and these criticisms are not in the list of four criticisms noted by Joy at viewtopic.php?f=6&t=271#p6808 above. But these extra criticisms are presumably extant. A complete and numbered list (presumably with only a finite number of bullet points) might be useful to overcome:
"But year after year Gill keeps coming back with new objections, while endlessly repeating the already debunked ones. So I am highly sceptical that he will ever stop. He will keep coming back like the Terminator. " as quoted from Joy at viewtopic.php?f=6&t=271#p6833
Is version 4 of Joy's paper the same as the version 'retracted'? Having a new version 5 while all this discussion wrt retracted paper, on version 4 or earlier, is still ongoing may be a complicating factor. The new text in the new version(s) will undoubtedly attract new wordings of criticisms.
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
Ben6993 wrote:Is version 4 of Joy's paper the same as the version 'retracted'? Having a new version 5 while all this discussion wrt retracted paper, on version 4 or earlier, is still ongoing may be a complicating factor. The new text in the new version(s) will undoubtedly attract new wordings of criticisms.
Hi Ben,
Version 5 of my paper is exactly the same as version 4, with the same text, apart from the 8 new equations in a new paragraph whose snapshot I have posted above.
Should the confusions about my model be called criticisms? If my model is X, but someone insists on misrepresenting it as Y, and then criticises Y, should we take that seriously? I have of course responded to all kinds of confusions and straw-man arguments. But should I have bothered to do so for nine painful years? Why is there never any peer-review of the critics and their so-called criticisms? It is quite evident that Annals of Physics, for example, did not bother to recognise that Gill's "criticisms" are actually not those of my model at all, but of his misrepresentations of it. That is not difficult to see once you spend some time peer-reviewing Gill's "criticisms."
***
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: Schmelzer's and Gill's mathematical nonsense
Hi Joy
Sorry if the word 'criticisms' was unpleasant for you. I did not intend offence.
I still think you should draw a line under version 4 and say that is the paper which was retracted and that is the paper which needs to be defended and have Richard Gill's confusions countered one by one, item by item. Making new versions of the retracted paper gives more scope for confusion, with the potential implication (for any 'hostile' opponents) that the retracted version was somehow inadequate. Making more ammunition for the opponents when all you intended to do was add clarifications to overcome the misunderstandings of readers you consider to be not versed well enough in Bell literature. Anyway, that is my humble opinion. As you know, I am not an expert in this, or in any other (!), matter.
BTW Jay seems to have made an excellent start at a 'clarification of confusions' concerning the paper.
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
Ben6993 wrote:Hi Joy
Sorry if the word 'criticisms' was unpleasant for you. I did not intend offence.
I still think you should draw a line under version 4 and say that is the paper which was retracted and that is the paper which needs to be defended and have Richard Gill's confusions countered one by one, item by item. Making new versions of the retracted paper gives more scope for confusion, with the potential implication (for any 'hostile' opponents) that the retracted version was somehow inadequate. Making more ammunition for the opponents when all you intended to do was add clarifications to overcome the misunderstandings of readers you consider to be not versed well enough in Bell literature. Anyway, that is my humble opinion. As you know, I am not an expert in this, or in any other (!), matter.
BTW Jay seems to have made an excellent start at a 'clarification of confusions' concerning the paper.
Hi Ben,
No offence taken. I was just venting my frustration at the unfair "system", which seems to favour mediocrity and ambidexterity over creativity and originality.
Anyway, I think your suggestions are good, and I should concentrate on defending version 4, which, as you note, is the retracted, withdrawn, or removed paper.
***
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: Schmelzer's and Gill's mathematical nonsense
Joy Christian wrote:Hi Ben,
No offence taken. I was just venting my frustration at the unfair "system", which seems to favour mediocrity and ambidexterity over creativity and originality.
Anyway, I think your suggestions are good, and I should concentrate on defending version 4, which, as you note, is the retracted, withdrawn, or removed paper.
***
I agree. I think eqs. (57) to (64) just add more confusion because if you take the limit of both s1's and s2's, the results are respectively $\lambda^k$ and $- \lambda^k$. Which is actually OK because it proves the A and B outcomes are $\pm 1$ thus proving eqs (54) and (55).
Now that Gill has been shot down by the fact that a is only equal to b if the experimenter sets them that way, he is back on the -1 thing. Poor guy doesn't seem to realize the -1 result is the $R^3$ result not the $S^3$ result. Of course all the -1 $R^3$ result shows is that A and B are anti-correlated. If the $S^3$ properties of the model are to be maintained, the correlation calculation must be done as Joy shows.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
Hi Joy and Fred
I woke in the night with the following eureka idea in the spirit of clarifying confusions about version 4 of your paper. My idea may just illustrate my own confusion, but I hope the idea is useful.
Two items on my incomplete list of Richard Gill's 'confusions' with respect to the paper and the computer simulations are RGC5 and RGC6. (If Richard is reading this I hope he does not counter-object to my use of the word 'confusions' ... no offence intended.) Item RGC5 is that your simulations have missing valid data and hence the simulations are not loophole-free. Item RGC6 is that invalid operations in geometric algebra are being performed in the computer simulations by using left-hand and right-hand background torsions within 'one and the same' calculation. These are my terse re-wordings of points on a Pubpeer website.
I think that your very short answer to RGC5 could be that only invalid data are excluded and to RGC6 could be that the geometric algebra operations are valid. But in the spirit of clarifying confusions I have the following suggestion for an interactive display of new simulation results which may throw some light on RGC5 and RGC6.
First, am I correct in assuming that if in nature all the particle pairs were produced in a background of left-hand torsion then a simulation based on these pairs would produce only a saw-tooth Bell curve? And ditto for an exclusively right-hand torsion background. But when you have an equal number of pairs with left- and right-handed backgrounds then you get the cosine curve?
This assumes that by restricting the simulation background to only one type of handedness, you would be modelling an artificially constrained nature somehow acting more like having an R^3 space. (Although R^3 does not actually have a background handedness and you cannot actually constrain nature in this way.)
So I envisage an interactive display being made and posted somewhere (eg here http://einstein-physics.org/) showing the effects of having various mixes of left- and right- handednesses. One example of such a type of display is http://nilesjohnson.net/hopf.html . But perhaps there could be a control panel on screen where the user could control the proportion of left over right backgrounds of pairs. It would mean running the simulation a hundred times or so in advance to get smooth transitions of curves from saw-toothed to cosine and back to saw-toothed as the proportion varied from 0 to 100%. The interactive display would use these pre-prepared outputs so the interactiveness could be speedy. And it would mean adding a subroutine of new code to the simulation which controlled the background torsion of a particle pair without compromising the random nature of generation of data with respect to all the other factors.
What I hope is that such an interactive display could help show that the transition from saw-tooth to cosine was not so much dependent on a data loophole as on having an equal mix of the background torsions. So as the slider is moved from 0% to 50% to 100%, it replicates pseudo R^3 then S^3 then pseudo R^3 respectively.
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
***
Hi Everyone,
I have cleaned up Jay's analysis so that there is no "backtracking" (which could be subject to objection). Here it is, with only "front-tracking":
***
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: Schmelzer's and Gill's mathematical nonsense
Yeah, the argument is better like that. It is only possible for a to equal b if the experimenters set them equal.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
Now that Gill has been shot down by that, he is back on the -1 thing. Poor guy doesn't seem to realize the -1 result is the $R^3$ result not the $S^3$ result. Of course all the -1 $R^3$ result shows is that A and B are anti-correlated. If the $S^3$ properties of the model are to be maintained, the correlation calculation must be done as Joy shows.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
FrediFizzx wrote:Now that Gill has been shot down by that, he is back on the -1 thing. Poor guy doesn't seem to realize the -1 result is the $R^3$ result not the $S^3$ result. Of course all the -1 $R^3$ result shows is that A and B are anti-correlated. If the $S^3$ properties of the model are to be maintained, the correlation calculation must be done as Joy shows.
I replied to him on Retraction Watch and explained the same thing, but my reply is still waiting for moderator's approval. Here is what I explained:
Perhaps I should elaborate.
We live in Einstein’s Universe. One of the solutions of Einstein’s theory of spacetime for the physical 3D space is a 3-sphere. My local model is based on the assumption that we live in this 3-sphere, S^3, not in the flat Euclidean space, R^3, as is usually assumed. Therefore the EPRB correlations we observe in Nature are correlations among the points of this 3-sphere, not among the points of a Euclidean R^3. Now the functions A(a, lambda) and B(b, lambda) defined by my equations (54) and (55) represent points of this Einsteinian 3-sphere. What is then being calculated in equations (67) to (75) are correlations among the points of this 3-sphere. lambda on the other hand is the orientation of this 3-sphere. Therefore it makes no sense to calculate correlations between the two values of lambda, which in any case is just -1. It is a gross misrepresentation of the physics being considered to write A(a, lambda) = +lambda and B(b, lambda) = -lambda. That is like writing Cat = Dog. It means nothing.
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: Schmelzer's and Gill's mathematical nonsense
Hi Fred
I couldn't find online anything Richard Gill posted in the last two days.
In the interests of clarity is it one of the following six points of 'confusion'?
RGC1. According to (55) and (56), A(a, lambda) = lambda and B(b, lambda) = - lambda where lambda = +/-1. This should lead to E(a, b), computed in (60)-(68), equal to -1. But instead the author gets the result - a . b. How is it done?
RGC2. Notice formula (58) where s_1 and s_2 are argued to be equal, leading in (59) to L(s_1, lambda)L(s_2, lambda) = -1. This result is then substituted inside a double limit as s_1 converges to a and s_2 converges to b in the transition from equation (62) to (63).
So s_1 and s_2 are equal yet converge to different limits a and b.
RGC3. But that is not enough. A second trick is put into play a few lines later. According to (57) we should have L(a, lambda)L(b, lambda) = D(a)D(b) independent of lambda, which means that the step from (65) to (66) can't be correct.
(For Item RGC1 to RGC3 the source is https://pubpeer.com/publications/AEF49D ... B4#fb53868 , July 5th, 2016)
RGC4. To take that step he uses (50), but this contradicts (51) and (52). If L(a, lambda) = lambda I a and L(b, lambda) = lambda I b then L(a, lambda)L(b, lambda) = -ab independent of whether lambda = -1 or +1 (lambda and I both commute with a and b; lambda^2 = 1, I^2 = -1). (I can't find a source for this point, except in a post by Joy.)
RGC5 is that your simulations have missing valid data and hence the simulations are not loophole-free.
RGC6 is that invalid operations in geometric algebra are being performed in the computer simulations by using left-hand and right hand background torsions within 'one and the same' calculation.
(RGC5 and RGC6 are my terse re-wordings of points on a Pubpeer website.)
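The arithmetic premise of RGC1, restated numerically (a sketch; this encodes the flat R^3 reading disputed in this thread, not the S^3 calculation of equations (67)-(75)): with A = +lambda and B = -lambda, every product AB equals -lambda^2 = -1, so the average is -1 whatever the settings a and b.

import random

# Naive reading behind RGC1: A(a, lambda) = +lambda, B(b, lambda) = -lambda,
# with the hidden variable lambda = +/-1 drawn afresh on every run.
lams = [random.choice([-1, +1]) for _ in range(100000)]
E = sum((+lam) * (-lam) for lam in lams) / len(lams)
print(E)  # exactly -1.0 on every run: the product is -lambda**2 = -1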
If you do not keep detailed track of the points of confusion, you will not be able to tell accurately if and when the points of confusion are cleared up. Or is it just me that has problems finding old posts on the www?
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
Ben6993 wrote:Hi Fred
I couldn't find online anything Richard Gill posted in the last two days.
Sorry, I should have included the link.
http://retractionwatch.com/2016/09/30/p ... nt-1137639
It is all settled now. Gill is completely shot down once again.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
Joy Christian wrote:
FrediFizzx wrote:Now that Gill has been shot down by that, he is back on the -1 thing. Poor guy doesn't seem to realize the -1 result is the $R^3$ result not the $S^3$ result. Of course all the -1 $R^3$ result shows is that A and B are anti-correlated. If the $S^3$ properties of the model are to be maintained, the correlation calculation must be done as Joy shows.
I replied to him on Retraction Watch and explained the same thing, but my reply is still waiting for moderator's approval. Here is what I explained:
Perhaps I should elaborate.
We live in Einstein’s Universe. One of the solutions of Einstein’s theory of spacetime for the physical 3D space is a 3-sphere. My local model is based on the assumption that we live in this 3-sphere, S^3, not in the flat Euclidean space, R^3, as is usually assumed. Therefore the EPRB correlations we observe in Nature are correlations among the points of this 3-sphere, not among the points of a Euclidean R^3. Now the functions A(a, lambda) and B(b, lambda) defined by my equations (54) and (55) represent points of this Einsteinian 3-sphere. What is then being calculated in equations (67) to (75) are correlations among the points of this 3-sphere. lambda on the other hand is the orientation of this 3-sphere. Therefore it makes no sense to calculate correlations between the two values of lambda, which in any case is just -1. It is a gross misrepresentation of the physics being considered to write A(a, lambda) = +lambda and B(b, lambda) = -lambda. That is like writing Cat = Dog. It means nothing.
Yeah, Gill got a double whammy from us on Retraction Watch. Of course he will never admit that you are right; he will find something else to erroneously complain about.
...
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
FrediFizzx wrote:Yeah, Gill got a double whammy from us on Retraction Watch. Of course he will never admit that you are right; he will find something else to erroneously complain about.
...
When the physics community finally wakes up from the sirenic Bell-spell, there will be gasps of amazement: Why did no one see something so simple, so elegant, and so natural as the 3-sphere model of locally explicable correlations even after decades of efforts to explain it by its proponents? Why couldn't people see that quantum correlations --- at least the EPRB ones --- are a natural consequence of Einstein's geometric theory of gravity?
***
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
### Re: Schmelzer's and Gill's mathematical nonsense
Fred,
Thanks for the reference to http://retractionwatch.com/2016/09/30/p ... nt-1137639 .
When I previously tried to load Retraction Watch, my computer failed to reach the site. The site may have been overloaded at that time (!?), which is why I did not find the comment previously.
Richard Gill's latest post at Retraction Watch, on October 12, 2016 at 8:01 am, might be paraphrased as 'the correlation clearly equals minus one', which is very similar to Item RGC1 on my list of "Richard Gill's confusions". However, the equation now referred to is not identical to that in Item RGC1, as the former includes the expression ^k throughout. So I will call the new confusion RGC7.
Item RGC7 (not using an exact quote):
From definitions (54) and (55), it follows that A(a, lambda^k) = lambda^k and B(b, lambda^k) = – lambda^k.
The measurement outcomes are equal and opposite (and do not depend on the measurement settings).
Why compute the correlation by a roundabout route if it is now already clear that it equals minus one?
Jay has subsequently posted what seems to me to be an excellent clarification of that confusion on Retraction Watch on October 12, 2016 at 10:09 pm.
However, Fred, you appear in your post of October 13, 2016 at 12:01 am at Retraction Watch to have taken (mild?) offence at Jay's use of the words "fancy dressing"? When I read Jay's post I simply took it all as positive for Joy's paper. Perhaps if Jay had used the words: "is just a creative and original way of transforming +lambda" or the like instead of "is just a fancy way of dressing up the +lambda" the words might not have grated on you? I agree that there could be negative implications of both "fancy" [implying unnecessarily complicated?] and "dressing" [implying not the real essence?], but how you interpret that maybe depends on your personality. As I say, I took it as a positive comment for the paper.
BTW did you read Jay's own recent paper at http://vixra.org/pdf/1609.0387v1.pdf , section 10 page 33 where Jay writes: " ... the number “1” constructed in ... is useful in a variety of circumstances ...". To be fair Jay did not say that his own usage was "fancy dressing".
Further, you have later commented at viewtopic.php?f=6&t=271&start=20#p6857
"It is all settled now. Gill is completely shot down once again."
You may be privy to personal posts that I have not seen, but afaik we are waiting for an update by Jay, or Richard, on how clarification of that particular point is progressing?
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
Ben6993 wrote:Jay has subsequently posted what seems to me to be an excellent clarification of that confusion on Retraction Watch on October 12, 2016 at 10:09 pm.
However, Fred, you appear in your post of October 13, 2016 at 12:01 am at Retraction Watch to have taken (mild?) offence at Jay's use of the words "fancy dressing"? When I read Jay's post I simply took it all as positive for Joy's paper. Perhaps if Jay had used the words: "is just a creative and original way of transforming +lambda" or the like instead of "is just a fancy way of dressing up the +lambda" the words might not have grated on you? I agree that there could be negative implications of both "fancy" [implying unnecessarily complicated?] and "dressing" [implying not the real essence?], but how you interpret that maybe depends on your personality. As I say, I took it as a positive comment for the paper.
BTW did you read Jay's own recent paper at http://vixra.org/pdf/1609.0387v1.pdf , section 10 page 33 where Jay writes: " ... the number “1” constructed in ... is useful in a variety of circumstances ...". To be fair Jay did not say that his own usage was "fancy dressing".
Well, I will admit to being a little colorful to try to make a point. So I am willing to say that I am also doing a "fancy dressing" with my 1 as well. But think about it: Every equation ever written can be rewritten in the form
"$something = 0$."
Just move everything onto one side and you have a zero. (And to be really clear, the zero may have all sorts of finite or infinite structure to it; think tensors, and think SU(N), and think Hilbert spaces, and think Heisenberg matrices.) Richard said "Why compute the correlation by a roundabout route if it is now already clear that it equals minus one?" That would be analogous to saying "Why compute anything about the 'something' if it is now already clear that it equals 0?" That is a slippery slope which degenerates into an argument that mathematical calculation serves no purpose.
Yablon
Independent Physics Researcher
Posts: 355
Joined: Tue Feb 04, 2014 10:39 pm
Location: New York
### Re: Schmelzer's and Gill's mathematical nonsense
Ben6993 wrote:Further, you have later commented at viewtopic.php?f=6&t=271&start=20#p6857
"It is all settled now. Gill is completely shot down once again."
You may be privy to personal posts that I have not seen, but afaik we are waiting for an update by Jay, or Richard, on how clarification of that particular point is progressing?
Everything has been on the public table for quite some time now; there is nothing "privy". I wouldn't hold your breath whilst you are waiting but I will explain some more. Gill is stuck in R^3 "flatland". He has been making that -1 result argument for years now and always falls back on it when his other criticisms are shot down. But Joy's model is an S^3 theory so the correlations must be calculated via a method that respects S^3. Not R^3. Gill wants to always use an R^3 theory. So does Schmelzer. It is just plain nonsense as represented by the title of this thread. So basically they are rejecting Joy's S^3 postulate. But that is OK as it doesn't matter. Joy's S^3 model is still a valid counter-example to Bell's junk physics theory because it predicts the same result as quantum mechanics in a local-realistic way. There are no flaws in Joy's model. It has been numerically validated by the computer program GAViewer to extraordinary precision. So quite frankly we don't really care what Gill and his cronies believe as we know what the truth is. They can stay stuck in flatland if they wish as the truth ultimately and eventually wins.
FrediFizzx
Independent Physics Researcher
Posts: 1699
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA
### Re: Schmelzer's and Gill's mathematical nonsense
Hi Jay
I wish I had written my final thought in my last email. I left it out for brevity's sake. I was going to add the same point about " = 0".
I have already followed about 200 hours of Susskind's Theoretical Minimum online courses, with only about 40 hours left to watch.
Susskind likes to move all the terms to one side asap, so as to set them equal to zero. I think this is because he gets worried about the signs, and it's easier to get the signs right if all terms stay on one side of the equation.
That's a different reason than yours though, but it does show lots of different ways you can get zero.
Talking about signs, I think that a sign flip is at the heart of one of the confusions wrt the simulations, but it hasn't come up explicitly yet. Can't think of a good previous reference post for this, but the issue will certainly come up.
Also, I may be imagining this but I thought one page of your report had a whole string of half a dozen or more different structures for '1'. But I couldn't find that page, if it exists.
I have thought more about my idea for an interactive display. I might even try to make one myself, when I have finished tiling the kitchen. I am of the view that spinors are fundamental to creating the space metric and I think that the counterbalancing LH and RH torsions are very important.
All the best
Ben
Ben6993
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
### Re: Schmelzer's and Gill's mathematical nonsense
***
Ben,
When I wrote above that the so-called "criticisms" of Gill are in fact his own confusions about, and misrepresentations of, my model, I was being polite and charitable.
The truth is that Gill is mathematically quite incompetent. He himself has admitted on this very forum that he cannot do algebra. I have proven his incompetence in mathematics time and again, by debunking his fallacious claims about my local model. Most recently Jay has skilfully disproved at least two of his silly claims. Let me give you one very clear example of how Gill makes false claims with such an authoritative panache that even the editors of Annals of Physics are fooled by them. Very recently he made a foolish claim on PubPeer about version 5 of my paper that is not only manifestly wrong, but that Jay has now proven to be mathematically wrong:
Richard D. Gill wrote:
The expression concerned is continuous in s_1, a and lambda and the limit can therefore be computed by simply evaluating it with s_1 set equal to a. That results in lambda, as already claimed in (54).
So the step from (59) to (60) is non-sense, and the final result moreover contradicts (54).
What Gill is questioning here is the following elementary equality:
One does not have to be a genius to see that (at least in the present context) this equality is trivially true. Gill is either unable to see that, or being disingenuous.
***
Joy Christian
Research Physicist
Posts: 2131
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom
| 2019-12-11 22:33:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6318334341049194, "perplexity": 1886.4154874346664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00210.warc.gz"}
http://mathoverflow.net/questions/40944/ranks-of-iterated-extensions-of-a-group-by-free-groups?sort=votes | # Ranks of iterated extensions of a group by free groups.
Let $G_0$ be a finitely generated group, and suppose there are groups $G_i$ and $K_i$ as in the following short exact sequences
$1\to K_i\to G_{i+1}\to G_i\to 1$
with $K_i$ free and nonabelian (you may assume finitely generated), and $G_i$ commutative transitive. (If $a$ is nontrivial and $b$ and $c$ both commute with $a$, then $b$ and $c$ commute.) Does it follow that $\mathrm{rank}(G_i)\to\infty$ as $i\to\infty$? Are there examples of extensions of this sort where the rank doesn't increase?
What's your definition of rank? – HJRW Oct 3 '10 at 18:17
Minimal number of elements in a generating set. – nolte Oct 3 '10 at 18:30
I assume that you consider the infinite cyclic group to be free. Then take the free nilpotent group $G_n$ of class $c\gg 1$ with 2 generators. It has an infinite cyclic central subgroup $K_n$, the factor-group $G_{n-1}=G_n/K_n$ is nilpotent and torsion-free, it has an infinite cyclic subgroup $K_{n-1}$, and so on. Every group $G_i$ is 2-generated, and the chain can be arbitrarily long ($n$ depends on $c$).
Edit 1: Since you do not consider $\mathbb Z$ free enough, here is another example. Take the Baumslag-Solitar group $BS(2,3)=\langle a,t | ta^2t^{-1}=a^3\rangle$. It is non-Hopfian, and has a free normal subgroup $K$ such that $BS(2,3)/K$ is isomorphic to $BS(2,3)$. So all $G_i$ are isomorphic to $BS(2,3)$, and all $K_i$ are infinitely generated free groups. Is that what you want?
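For concreteness, here is the classical witness for that non-Hopfian property (a sketch; the freeness of the kernel follows from Bass-Serre theory because the kernel meets every conjugate of $\langle a\rangle$ trivially):

% The standard non-Hopfian endomorphism of $BS(2,3)=\langle a,t | ta^2t^{-1}=a^3\rangle$:
\[
  \varphi(t)=t, \qquad \varphi(a)=a^{2}.
\]
% Well defined: \varphi(t a^2 t^{-1}) = t a^4 t^{-1} = (t a^2 t^{-1})^2 = a^6 = \varphi(a^3).
% Surjective: a^3 = t a^2 t^{-1} and a^2 lie in the image, hence so does a = a^3 a^{-2}.
% Non-injective: [t a t^{-1}, a] is nontrivial by Britton's lemma, yet it maps to
% [a^3, a^2] = 1; so K = ker(\varphi) is a nontrivial free normal subgroup with
% BS(2,3)/K isomorphic to BS(2,3).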
Edit 2: Since you have another condition now, "commutative transitive", here is another example. Take a non-elementary torsion-free hyperbolic group $G$. It has a free normal subgroup $N$ such that $G/N$ is still non-elementary, torsion-free and hyperbolic (that is proved by Olshanskii, and also by several others, including Delzant). You can continue as long as you wish. All $G_i$ will be hyperbolic and torsion-free (hence commutative-transitive), and all $K_i$ will be infinitely generated free groups.
Just to anticipate a future change in the formulation of the question: if you really insist that $K_i$ are finitely generated, the question becomes harder and I am not sure the answer is still the same.
Edit 3: Since you want to have an infinite sequence, here is what to do. For every hyperbolic group $G$ with 2 generators, there exists another 2-generated hyperbolic group $G'$ and a free normal subgroup $N \le G'$ such that $G'/N=G$. This can be done using Rips' construction. In the original Rips' construction (and in all modifications) $N$ was not free, because he wanted $N$ to be finitely generated. But if you do not want $N$ to be finitely generated, it is easy to modify Rips' construction to make $N$ free. Using this you can construct your sequence $G_0=G, G_1=G', G_2=G_1', ...$.
Edit 4: In fact Rips' construction does not quite work because the number of generators increases. Certainly if $G$ is free of rank 2, $G'$ cannot be of rank 2. But here is another construction. Take a (torsion-free) lacunary hyperbolic group given by an infinite presentation satisfying a small cancelation condition (see http://front.math.ucdavis.edu/0701.5365). Let $r_1,r_2,...$ be the presentation of $G$. Then $G$ is commutative transitive (it is easily deduced from the fact that $G$ is an inductive limit of hyperbolic groups and surjective homomorphisms). Now the group $G'$ given by the same presentation but without $r_1$ is again lacunary hyperbolic, and $G$ is a factor-group of $G'$ over the normal subgroup $N$ generated by $r_1$. It is possible to prove that $N$ is free. Indeed, if some product of conjugates of $r_1$ is equal to 1 in $G$, consider the corresponding van Kampen diagram. The boundary of that diagram has parts labeled by $r_1$ and parts labeled by the conjugators. By the Greendlinger lemma, if the diagram has cells, it must have a cell with more than, say, $90\%$ of its boundary common with the boundary of the diagram (take the small cancelation condition $C'(1/300)$). Then more than a half of that part of the boundary must be inside a conjugator, the conjugator can be shortened, and a shorter product of conjugates of $r_1$ is equal to 1 in $G'$. Since $G'$ is lacunary hyperbolic again and satisfies the same small cancelation condition as $G$, we can repeat the construction. Since the presentation is infinite, the process will continue indefinitely.
Edit 5: A cleaner way to prove that $N$ is free in Edit 4 is the following. Suppose that some product of conjugates of $r_1$ is equal to 1 in $G'$. Consider the corresponding van Kampen diagram $\Delta$. Its boundary label is equal to 1 in the group given by the one relator $r_1$. Consider the diagram $\Psi$ corresponding to that equality. Now identify the boundaries of $\Psi$ and $\Delta$. We get a diagram over the presentation of $G$ on a sphere: $\Delta$ occupies the northern hemisphere, $\Psi$ occupies the southern hemisphere, and the product of conjugates of $r_1$ labels the equator. Reduce that diagram. Since we can assume that $\Psi$ is reduced, and the $r_1$-cells are only in the south, the $r_1$-cells won't cancel. Hence we shall have a reduced non-empty diagram over the presentation of $G$ on a sphere, which is impossible because of the Greendlinger lemma (the boundary of the spherical diagram is empty).
I should have been more specific. $K_i$ should be nonabelian. I apologize. – nolte Oct 3 '10 at 18:36
Again, I should learn to be more specific. The $G_i$ should be commutative transitive. Editing to reflect. Thanks. – nolte Oct 3 '10 at 19:08
Regarding Edit 2, I am trying to extend $G_0$ to $G_1$, $G_1$ to $G_2$, etc., not take further and further quotients of $G_0$. I think examples where $\mathrm{rank}(G_i)$ is bounded independently of $i$, with the ranks of $K_i$ not necessarily finite, would be very interesting as well. – nolte Oct 3 '10 at 19:49
The example with hyperbolic groups gives you the sequence in the opposite order: $G_n, G_{n-1},...$ where $G=G_n$. Here $G_n/\text{(free group)}=G_{n-1}$, $G_{n-1}/\text{(free group)}=G_{n-2}$, etc. – Mark Sapir Oct 3 '10 at 19:55
You are supposed to go the other way, and build $G_{n+1}$, $G_{n+2}$, $G_{n+3}$, etc. – nolte Oct 3 '10 at 20:01 | 2014-07-25 01:54:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113273620605469, "perplexity": 195.85025263362857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00199-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://preexistingconditioninsuranceplan.com/docs/2ml3h.php?page=891d78-calculate-the-enthalpy-of-formation-of-ethylene | calculate the enthalpy of formation of ethylene
# Calculating the enthalpy of polymerisation of ethylene given the bond strengths

Given the average bond dissociation enthalpies of a $\ce{C-C}$ bond (say $x$) and a $\ce{C=C}$ bond (say $y$), find the enthalpy of the following polymerisation reaction (per mole of ethylene): $n\,\ce{CH2=CH2} \to \ce{(-CH2-CH2-)}_n$, where $n$ is a large integer and $x, y$ are in $\pu{kJ/mol}$. The source book states the answer to be $y-2x$, claiming that for every double bond broken, 2 single bonds are formed. According to me, this should simply be $y-x$, as for every double bond broken, a single bond is formed. Why is the book right?

Answer: when the double bond is completely broken, 2 single bonds are formed in its place. In the chain $\ce{-CH2-CH2-CH2-CH2-}$, each monomer unit contributes one $\ce{C-C}$ bond within the unit (replacing the broken $\ce{C=C}$) and one $\ce{C-C}$ bond linking it to the next unit, so per mole of ethylene $\Delta H = y - 2x$.
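A numerical sanity check of the book's count, using illustrative textbook bond enthalpies (the values 348 and 612 kJ/mol are assumptions, not given in the question):

x = 348  # average C-C bond enthalpy in kJ/mol (illustrative)
y = 612  # average C=C bond enthalpy in kJ/mol (illustrative)

# Per mole of ethylene: break one C=C (+y), form two C-C (-2x).
dH = y - 2 * x
print(dH, "kJ/mol")  # -84 kJ/mol: exothermic, as polymerisations are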
A second question on the same page: if the heats of formation of $\ce{CO2}$ and $\ce{H2O}$ are 393.… and ….2 kJ respectively, calculate the heat of formation of ethylene, given that ethylene on combustion gives carbon dioxide and water. The quoted answer points out that the given values relate to combustion reactions (they are combustion enthalpies, not formation enthalpies); combining the three combustion equations in the only possible way yields the formation reaction $\ce{2 C(s) + 2 H2(g) -> C2H4(g)}$, and hence the formation enthalpy of ethylene. | 2021-01-18 07:05:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2263881266117096, "perplexity": 3838.114713601921}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514423.60/warc/CC-MAIN-20210118061434-20210118091434-00086.warc.gz"}
https://ncatlab.org/nlab/show/periodic%20map | # nLab periodic map
A periodic map is a self-map of a spectrum $X$ of the form
$\Sigma^d X \xrightarrow{f} X$
for some $d \geq 0$; the condition of periodicity is that, when we iterate $f$, no composite $\Sigma^{t d} X \to X$ is null homotopic (note that each iterate raises the dimension by $d$):
$... \to \Sigma^{2d} X \xrightarrow{\Sigma^d f} \Sigma^d X \xrightarrow{f} X$
A related concept: if there exists a $t$ such that the $t$-fold composite $\Sigma^{t d} X \to X$ is null homotopic, then $f$ is instead called a nilpotent map.
## A verbose introduction to periodic maps
Let us take the mindset of a harmonic analyst as we are handed one period of an interesting function $S^d X \xrightarrow{v} X$. Our first inclination is to shift this map a step $d$ to the left and lay them next to each other, ad infinitum.
$... \xrightarrow{S^{3d} v} S^{3d} X \xrightarrow{S^{2d}v} S^{2d} X \xrightarrow{S^d v} S^d X \xrightarrow{v} X$
for notational simplicity:
$... \xrightarrow{v^4} S^{3d} X \xrightarrow{v^3} S^{2d} X \xrightarrow{v^2} S^d X \xrightarrow{v} X$
If, when we iterate $v$ enough times, we find ourselves looking at a sad contractible space, tired by our repetitive antics, shriveled to a point, we quite naturally call $v$ nilpotent. But we are harmonic analysts – seekers of periodicity! We wish to look at $v$ which do not die when we suspend them over and over, those that are periodic with period $d$. So…in what case does $v$ not die?
With $v$ on our mind we look at an element $f$ in the homotopy classes of maps from $S^k X$ to $Y$, represented by the map:
$S^k X\xrightarrow{f} Y$
How do we fit this together with our self map $v$?
$S^d X \xrightarrow{v} X$
We shift $v$
$S^d(S^k X) \xrightarrow{S^k v} (S^k X)$
and lay it in place next to $f$.
$... \xrightarrow{S^k v^3} S^{2d}(S^k X) \xrightarrow{S^k v^2} S^d(S^k X) \xrightarrow{S^k v}(S^k X) \xrightarrow{f} Y$
We might look at this collection of maps,
$S^{j d}(S^k X) \xrightarrow{f \circ S^k v^j} Y$
In slightly more palatable notation, we're looking at the collection $v^j f \in [S^{j d+k} X, Y]$ as $j$ ranges over the natural numbers – forbidding ourselves from looking at $v$ if $v^j f$ is constant for any $j$.
Why limit ourselves to $S^{j d+k}$? Let us look at all $S^* := \{S^n\}_{n \in \mathbb{Z}}$; that is, we look at the set of $v^j f \in [S^* X, Y]$ (still demanding that all $v^j f$ are nontrivial):
$\{f, v f, v^2f, ...\}$
We name this set of $v$-periodic elements in $[S^*X, Y]$ a “$v$-periodic family”
Note: The $v_n$-periodic families are not actually periodic in $\pi_*^S$, they are maps induced by periodic maps between CW-complexes.
## Examples of periodic maps
An example motivating the study of periodic maps: the Bott element in $ku$. (I'm not sure that this is entirely correct.) Smashing with $\beta$ and composing with the multiplication of $ku$ gives the self-map:
$S^2 \xrightarrow{\beta} ku$
$S^2 \wedge ku \to ku \wedge ku \to ku$
$\Sigma^2 ku \xrightarrow{\times \beta} ku$
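Another classical family of examples, standard in the literature: Adams' self-maps of mod-$p$ Moore spectra, which realise $v_1$-periodicity.

\[
  v_1 \colon \Sigma^{2p-2}\, S/p \to S/p \quad (p \text{ odd}),
  \qquad
  v_1^{4} \colon \Sigma^{8}\, S/2 \to S/2 .
\]

These maps induce isomorphisms in $K$-theory, so no iterate is null homotopic: they are periodic maps of period $2p-2$ (resp. $8$) in the sense above.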
Last revised on June 10, 2016 at 14:06:28. See the history of this page for a list of all contributions to it. | 2021-10-22 16:25:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 49, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7440224885940552, "perplexity": 509.0343656028979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00346.warc.gz"} |
https://pos.sissa.it/282/153/ | Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Beyond the Standard Model
BSM physics at CLIC
R. Simoniello* On behalf of the CLICdp collaboration
*corresponding author
Full text: pdf
Pre-published on: 2017 February 06
Published on: 2017 April 19
Abstract
The Compact Linear Collider (CLIC) is an option for a future electron-positron collider operating at centre-of-mass energies from a few hundred GeV up to 3 TeV. The search for phenomena beyond the Standard Model through direct observation of new particles and precision measurements is one of the main motivations for the high-energy stages of CLIC. An overview of physics benchmark studies assuming different new physics scenarios is given in this contribution. These studies are based on full detector simulations.
New particles can be discovered in most of the considered scenarios almost up to the kinematic limit ($\sqrt{s} / 2$ for pair production).
The low background conditions at CLIC provide extended discovery potential compared to hadron colliders, for example in the case of non-coloured TeV-scale SUSY particles. In addition to direct particle searches, BSM models can be probed up to scales of tens of TeV through precision measurements. Examples, including recent results on the reaction $e^+e^-\to\gamma\gamma$, are given. Beam polarisation makes it possible to constrain the underlying theory further in many cases. A discussion of LHC results relevant for the CLIC physics case is also included.
DOI: https://doi.org/10.22323/1.282.0153
Open Access
Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | 2018-05-20 10:13:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5558180212974548, "perplexity": 2215.9408647664154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863277.18/warc/CC-MAIN-20180520092830-20180520112830-00122.warc.gz"} |
https://www.albert.io/ie/trigonometry/triangle-law-of-sines-area-non-included-angle | [Figure of triangle ABC with the given measurements is not reproduced.]
Moderate
Triangle, Law of Sines, Area, Non-included Angle
TRIG-QCSZR3
Given $\text{ triangle ABC}$ with the given measurements, what is the area of $\text{ triangle ABC}$, rounded to a tenth of a sq. cm?
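The measurements are in the figure, but the intended method is standard: use the Law of Sines with the non-included angle to find a second angle (minding the ambiguous case), deduce the third, and apply the sine area formula:

$\dfrac{\sin A}{a} = \dfrac{\sin B}{b} = \dfrac{\sin C}{c}, \qquad \text{Area} = \dfrac{1}{2}\,ab\sin C$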
A. $123.5\; sq.\; cm$
B. $122.1\; sq.\; cm$
C. $58.4\; sq.\; cm$
D. $102.1\; sq.\; cm$ | 2016-12-10 10:55:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8273226618766785, "perplexity": 3369.345176766467}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543035.87/warc/CC-MAIN-20161202170903-00118-ip-10-31-129-80.ec2.internal.warc.gz"}
https://imathworks.com/tex/tex-latex-biblatex-still-wont-print-bibliography/ | # [Tex/LaTex] Biblatex still won’t print bibliography
biblatex
I have read all the related questions and taken all the suggested steps (well, the ones that had anything to do with my situation, since, for example, I am not using XeLaTeX). As far as I can see, this file ought to produce a pdf with one citation in it and a bibliography with that entry.
\documentclass{amsart}
\usepackage{biblatex}
\addbibresource{refs.bib}% the bib resource described below
\begin{document}
\cite{FefCF}
\printbibliography
\end{document}
Instead it produces a pdf with the name FefCF printed in boldface and no bibliography.
I am using TeXworks and have run both BibTeX and pdfLaTex+MakeIndex+BibTeX repeatedly in many combinations.
To be clear, the reference FefCF is indeed in the file refs.bib (and has been used many times with BibTeX), and it is in the same folder as this file, namely C:/Users/Colin/Documents/TeX Files. I have also tried the full location name C:/Users/Colin/Documents/TeX Files/refs.bib to specify the bib resource. That makes no difference.
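Point 1 below is the most common culprit: biblatex defaults to the biber backend, so running BibTeX never produces the .bbl file that biblatex expects. For reference, a sketch of the intended sequence, assuming the file is called doc.tex:

% Intended biblatex workflow (doc.tex is an assumed filename):
%   pdflatex doc
%   biber    doc      % biber, not bibtex
%   pdflatex doc
% In TeXworks, a "biber" typeset tool may need to be added under
% Preferences -> Typesetting if it is not already listed.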
1. You are not calling biber at all but bibtex (check whether the .blg file starts with "biber" or "bibtex").
2. biber fails due to a problem with the cache files. Run biber --cache on the command line and delete the folder you get as output.
3. biber fails due to an error in your bib file: Check the blg file. | 2023-03-29 09:41:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087451100349426, "perplexity": 2622.719414247541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00757.warc.gz"} |
https://email.esm.psu.edu/pipermail/macosx-tex/2018-February/055883.html | # [OS X TeX] Beamer question: interaction between columns and overlays
Nitecki, Zbigniew H. Zbigniew.Nitecki at tufts.edu
Mon Feb 26 16:14:33 EST 2018
I am trying to create a slide in two columns: the left column has text in several overlays, and the right column is a figure which is to stay constant.
The following code
\frame{
\begin{columns}
\column{5 cm}
\only<1->
{
First comment.
\pause
Second comment.
\pause
Third comment.
}
\column{5 cm}
\only<1->
{
Constant comment.
}
\end{columns}
}
results in a set of slides where the left column behaves as desired, but the right column only appears at the end (i.e., only on the third slide).
What is the right syntax to get the right column to appear in all overlays?
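One standard fix (a sketch, not the only possibility): avoid \pause inside the columns environment and give every fragment an explicit overlay specification, so the right column is shown on all slides:

\frame{
\begin{columns}
\column{5 cm}
\onslide<1->{First comment.}
\onslide<2->{Second comment.}
\onslide<3->{Third comment.}
\column{5 cm}
\onslide<1->{Constant comment.}
\end{columns}
}

The reason the original fails: \pause advances a global counter, so everything typeset after it, including the second column, is only shown from that slide onward; explicit <n-> specifications sidestep that.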
Zbigniew Nitecki
Department of Mathematics
Tufts University
Medford, MA 02155
telephones:
Office (617)627-3843
Dept. (617)627-3234
Dept. fax (617)627-3966
http://www.tufts.edu/~znitecki/
| 2023-02-02 08:52:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392266631126404, "perplexity": 8499.559319880223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00011.warc.gz"}
https://www.elastic.co/blog/elasticsearch-6-0-0-beta1-released | Elasticsearch 6.0.0-beta1 released | Elastic Blog
# Elasticsearch 6.0.0-beta1 released
We are excited to announce the release of Elasticsearch 6.0.0-beta1, based on Lucene 7-SNAPSHOT. This is the third in a series of pre-6.0.0 releases designed to let you test out your application with the features and changes coming in 6.0.0, and to give us feedback about any problems that you encounter. Open a bug report today and become an Elastic Pioneer.
IMPORTANT: This is a beta release and is intended for testing purposes only. Indices created in this version will not be compatible with Elasticsearch 6.0.0 GA. Upgrading 6.0.0-beta1 to any other version is not supported.
DO NOT DEPLOY IN PRODUCTION
You can read about all the changes in the release notes linked above, but there are some big changes worth mentioning below.
## Sequence numbers and fast recovery
While synced flush has greatly improved shard recovery times for indices that are not being written to, recovery of active indices is still a slow and heavy operation. An active replica on a node that leaves the cluster for a brief period still needs to copy over all or most of the files in the primary shard in order to bring itself up to date.
The new Sequence Numbers infrastructure assigns an incremental operation ID to every index, update, and delete operation. This new infrastructure allows a replica to ask the primary for all operations from X onwards. If these operations are found in the primary’s translog, an older replica can bring itself up to date by just replaying the transaction log, avoiding the need to copy files.
This release features:
• fast operation-based recovery for active indices after a node rejoins the cluster or is restarted,
• a custom translog retention policy (defaulting to 12 hours or 512 MB) to make fast recovery more likely (see the sketch after this list),
• cleanup of old transaction logs on idle indices, and
• primary-to-replica sync when a replica is promoted to be the new primary.
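A sketch of tuning that retention policy per index over the REST API (endpoint, index name, and values are illustrative; the setting names are assumed from this release's defaults of 512 MB / 12 hours):

import requests

# Lengthen per-index translog retention so operation-based recovery
# stays possible for longer ("logs-2017.09" and localhost are placeholders).
requests.put(
    "http://localhost:9200/logs-2017.09/_settings",
    json={
        "index.translog.retention.size": "1gb",
        "index.translog.retention.age": "24h",
    },
)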
We also have the infrastructure we need to start developing the cross data centre replication (xDCR) X-Pack feature. We will continue to use the new infrastructure to tackle more complex correctness problems, specifically the rollback of unacknowledged operations in replicas and the usage of sequence numbers for optimistic locking.
## Search scalability
A search against an index pattern like logstash-* can fan out to a huge number of shards. Usually, queries like these include a date range filter which means that the majority of shards won’t contain any matching documents. We already include an optimization to abort search requests on these shards early, before any real work is done, but this is not enough. Imagine a multi-search request containing 10 search requests, each of which target 2,000 shards. That’s 20,000 shard-level search requests which are added to the search threadpool queue. This could easily result in rejections, even though the majority of these requests are very quick.
Previously, Kibana used the _field_stats API with a date range filter to figure out which indices might contain matching documents, and then ran the search request against only those indices. We wanted to remove this API because it was much heavier than users expected and open to abuse. Instead, a search request now has a light shard prefiltering phase which is triggered if a search request targets at least 128 shards (by default). These prefilter requests are not added to the search queue and so cannot be rejected because the queue is full. The prefilter request rewrites the query at the shard level and determines whether the query has any chance of matching any documents at all. The full search request is then sent only to those shards which have a chance of matching.
But what if the user actually does want to search all 2,000 shards, or searches all indices by mistake? These wide-ranging requests should not overwhelm the cluster nor get in the way of search requests from other users. In order to solve this, we introduced the max_concurrent_shard_requests parameter whose default value depends on the number of nodes in the cluster, but which has a fixed upper limit of 256. This may make a single search request that targets many shards slower, but it makes for fairer concurrent searches by many users.
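A sketch of capping a single wide search over the REST API (endpoint, index pattern, and the chosen cap are illustrative; the parameter name is the one introduced here):

import requests

# A wildcard search that could fan out to thousands of shards, explicitly
# capping how many shard-level requests may run at once.
resp = requests.get(
    "http://localhost:9200/logstash-*/_search",
    params={"max_concurrent_shard_requests": 64},
    json={"query": {"range": {"@timestamp": {"gte": "now-1h"}}}},
)
print(resp.json()["hits"]["total"])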
## Preventing full disks
We have long had the cluster.routing.allocation.disk.watermark.low and cluster.routing.allocation.disk.watermark.high settings which prevent shards being assigned to full disks and actively move shards away from full disks. However, if all of the disks in your cluster are full, then there is nowhere to move shards to and eventually you will run out of disk space. Now, we have added the cluster.routing.allocation.disk.watermark.flood_stage setting. When a disk passes this level, indices that have shards on this node will be set to read only. No more writes will be accepted. Instead, you will need to either delete the index or add more space and set it back to read-write.
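Once space has been freed, the block has to be lifted by hand. The flood stage applies the index.blocks.read_only_allow_delete block, which can be cleared by nulling the setting (a sketch; "myindex" and the endpoint are placeholders):

import requests

# After adding disk capacity, clear the block that the flood stage applied.
requests.put(
    "http://localhost:9200/myindex/_settings",
    json={"index.blocks.read_only_allow_delete": None},  # null removes the block
)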
To prevent a persistent logged failure from filling up the disk, Elasticsearch is switching to the following out-of-the-box logging config:
• Roll logs every 128MB
• Compress rolled logs
• Maintain a sliding window of logs
• Remove the oldest logs to keep all compressed logs under 2GB
In 5.x, X-Pack Security set the passwords of internal users to changeme by default, in order to make the getting-started experience easier, but it is never a good idea to ship with default passwords. In 6.0, we have added a bootstrap.password setting which can be added to the secure keystore before startup. When the cluster starts up, any node with this setting will try to set the password for the elastic user unless that user already has a password, so that the cluster will start in a secure state. On top of that, we’ve added a setup-passwords command line tool which will generate and set strong passwords for all of the internal users.
• The RPM and Debian packages put the config directory in the appropriate location for those distributions. Scripts that need access to the config directory (such as the plugin script and the secure-settings script) previously required that the CONF_DIR environment variable be set by the user, which frequently led to confusion. Now, all scripts use the new elasticsearch-env include script, which sets the CONF_DIR variable correctly for each package. Custom locations can still be set by the user using the CONF_DIR variable. For consistency, the bin/elasticsearch script no longer accepts the --path.conf setting, but relies on CONF_DIR instead.
https://www.physicsforums.com/threads/changing-electric-field-and-refractive-index.801559/ | # Changing electric field and refractive index
1. Mar 5, 2015
I am learning sky wave propagation, and in my book a relation between refractive index, dielectric constant and electric field strength is given.
$$\mu=\mu_0\sqrt{1-\frac{Ne^2}{\epsilon_0m\omega^2}}$$
Is this a form of the Kerr electro-optic effect? How do you get this expression? If you think I cannot understand the derivation, can you explain its meaning?
2. Mar 5, 2015
### blue_leaf77
What is $\mu$ in that expression?
3. Mar 5, 2015
Refractive index
4. Mar 5, 2015
### Vagn
That looks like the refractive index for a plasma, where the plasma frequency is given by $\omega_p^2=\frac{Ne^2}{\epsilon_0 m}$. In the context of the atmosphere, this would be referring to the ionosphere.
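As a numerical illustration of that plasma frequency (a sketch; the density is a typical daytime F-layer value, not taken from the thread):

import math

# f_p = (1/2pi) * sqrt(N e^2 / (eps0 m)) for electron number density N.
e, m, eps0 = 1.602e-19, 9.109e-31, 8.854e-12  # SI values
N = 1e12                                      # electrons per m^3 (illustrative)
f_p = math.sqrt(N * e**2 / (eps0 * m)) / (2 * math.pi)
print(f"{f_p / 1e6:.1f} MHz")  # ~9 MHz; lower frequencies are reflected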
5. Mar 5, 2015
### blue_leaf77
Yes it is refractive index of plasma, derived using Drude model for free electrons motion.
6. Mar 5, 2015
Yes. It is for the ionosphere. It's about reflection of waves from the ionosphere. As N, the number density of electrons, increases, the value of $\mu$ decreases, hence the critical angle increases.
I am studying in 12th standard. I know single-variable integration and I know how to solve linear first order differential equations. Can you explain the relation between $\mu$ and the electric field?
7. Mar 5, 2015
The Drude model explains the drift of electrons in a conductor when an electric field is applied, right?
8. Mar 6, 2015
### DelcrossA
There's a few different approaches to the derivation. The most straight-forward, in my opinion, is to start by solving the equation of motion of a free electron under the influence of some incident plane-wave. This will allow you to calculate the polarization of the free electron gas and then find the dielectric function and then the refractive index.
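Filling in that outline as a sketch (damping and the ion background are neglected; a time dependence $e^{-i\omega t}$ is assumed): for a free electron of mass $m$ and charge $-e$ driven by the field $E$,
$$m\ddot{x} = -eE \;\Rightarrow\; x = \frac{eE}{m\omega^2}, \qquad P = -Nex = -\frac{Ne^2}{m\omega^2}E,$$
$$\epsilon_r = 1 + \frac{P}{\epsilon_0 E} = 1 - \frac{Ne^2}{\epsilon_0 m\omega^2}, \qquad \mu = \sqrt{\epsilon_r} = \sqrt{1 - \frac{Ne^2}{\epsilon_0 m\omega^2}}.$$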
9. Mar 6, 2015
### blue_leaf77
Agree with DelcrossA.
10. Mar 6, 2015 | 2018-03-17 23:26:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5810876488685608, "perplexity": 1140.095259574734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645362.1/warc/CC-MAIN-20180317214032-20180317234032-00052.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-with-applications-10th-edition/chapter-2-nonlinear-functions-chapter-review-review-exercises-page-113/30 | ## Calculus with Applications (10th Edition)
$(-\infty, -4)\cup(4, \infty)$
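A quick symbolic check of that interval (a sketch using sympy):

import sympy as sp

x = sp.Symbol('x', real=True)
# The argument of the logarithm must be positive:
print(sp.solve_univariate_inequality(x**2 - 16 > 0, x))
# ((-oo < x) & (x < -4)) | ((4 < x) & (x < oo))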
$y=\ln(x^{2}-16)$. For $y$ to be defined, the argument of the logarithm must be positive: $x^{2}-16>0$, so $x^{2}>16$, giving $x>4$ or $x<-4$. | 2019-12-10 19:20:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522228837013245, "perplexity": 432.41474779307583}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528490.48/warc/CC-MAIN-20191210180555-20191210204555-00033.warc.gz"}
http://harvard.voxcharta.org/2012/03/07/photoionization-cross-section-calculations-for-the-halogen-like-ions-kr-and-xe/ | Photoionization cross-section calculations for the halogen-like ions Kr$^+$ and Xe$^+$ have been performed for a photon energy range from each ion threshold to 15 eV, using large-scale close-coupling calculations within the Dirac-Coulomb R-matrix approximation. The results from our theoretical work are compared with recent measurements made at the ASTRID merged-beam set-up at the University of Aarhus in Denmark, from the Fourier transform ion cyclotron resonance (FT-ICR) trap method at the SOLEIL synchrotron radiation facility in Saint-Aubin, France, and at the Advanced Light Source (ALS). For each of these complex ions, our theoretical cross-section results over the photon energy range investigated are seen to be in excellent agreement with experiment. Resonance energy positions and quantum defects of the prominent Rydberg resonance series identified in the spectra are compared with experiment for these complex halogen-like ions. | 2015-07-04 01:38:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4797767400741577, "perplexity": 2809.4695563745554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096293.72/warc/CC-MAIN-20150627031816-00026-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.mpim-bonn.mpg.de/de/node/11878 | # On some counting questions for integral matrices
Speaker:
Igor Shparlinski
Affiliation:
University of New South Wales/MPIM
Date:
Wed, 25/01/2023 - 14:30 - 15:30
Parent event:
Number theory lunch seminar
Hybrid.
Contact: Pieter Moree (moree@mpim-bonn.mpg.de)
We discuss two questions related to counting $n\times n$-matrices with integer elements of size at most $H$ which satisfy various arithmetic conditions such as:
A. Multiplicative dependence (due to non-commutativity there are two natural and equally interesting definitions), which in turn leads to counting such matrices with a given characteristic polynomial.
B. Avoidance of being a square of another matrix modulo a prime p, which leads us to various questions on character sums with determinants (so far only in dimension 2).
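As a toy illustration of the counting problem behind point A (a sketch only; the talk concerns asymptotics in $H$ and $n$, not brute-force enumeration):

from itertools import product

H = 3

# Count 2x2 integer matrices [[a, b], [c, d]] with |entries| <= H whose
# characteristic polynomial is x^2 - trace*x + det.
def count_with_charpoly(trace, det):
    rng = range(-H, H + 1)
    return sum(1 for a, b, c, d in product(rng, repeat=4)
               if a + d == trace and a * d - b * c == det)

print(count_with_charpoly(0, 1))  # characteristic polynomial x^2 + 1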
A is a joint project with Alina Ostafe, B is a joint project with Etienne Fouvry, both are in progress and both lead us to counting solutions to various Diophantine equations associated with determinants.
For these we need upper bounds which seem to be beyond the capabilities of the determinant method, so we use different approaches.
Several open problems will be discussed as well.
| 2023-04-01 21:07:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48183974623680115, "perplexity": 1785.1625074998828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00505.warc.gz"}
https://independency.askdefine.com/ | # Dictionary Definition
independency n : freedom from control or influence of another or others [syn: independence]
# Extensive Definition
In mathematics and computer science, a dependency relation is a binary relation that is finite, symmetric, and reflexive. That is, it is a finite set of ordered pairs D, such that
• If $(a,b)\in D$ then $(b,a) \in D$ (symmetric)
• If $a$ is an element of the set on which the relation is defined, then $(a,a) \in D$ (reflexive)
In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity.
Let $\Sigma$ denote the alphabet of all the letters of $D$. Then the independency induced by $D$ is the binary relation $I$
$I = \Sigma \times \Sigma - D$
That is, the independency is the set of all ordered pairs that are not in D. Clearly, the independency is symmetric and irreflexive.
The pairs $(\Sigma, D)$ and $(\Sigma, I)$, or the triple $(\Sigma, D, I)$ (with $I$ induced by $D$), are sometimes called the concurrent alphabet or the reliance alphabet.
The pairs of letters in an independency relation induce an equivalence relation on the free monoid of all possible strings of finite length. The elements of the equivalence classes induced by the independency are called traces, and are studied in trace theory.
## Examples
Consider the alphabet $\Sigma=\{a,b,c\}$. A possible dependency relation is
$D = \{a,b\}\times\{a,b\} \cup \{a,c\}\times\{a,c\} = \{a,b\}^2 \cup \{a,c\}^2 = \{(a,a),(a,b),(b,a),(a,c),(c,a),(b,b),(c,c)\}$
The corresponding independency is
$I_D=\{(b,c),(c,b)\}$
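The same example, computed mechanically (a sketch):

sigma = {'a', 'b', 'c'}
D = ({(x, y) for x in 'ab' for y in 'ab'} |
     {(x, y) for x in 'ac' for y in 'ac'})
I = {(x, y) for x in sigma for y in sigma} - D
print(sorted(I))  # [('b', 'c'), ('c', 'b')]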
Therefore, the letters b,c commute, or are independent of one-another. | 2018-01-16 11:35:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989686012268066, "perplexity": 1407.5571353394923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00214.warc.gz"} |