| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
http://milosophical.me/tags/nikola.html
|
# Upgrading Nikola: some pitfalls and how I climbed out of them
After some hacking of my dotfiles and python settings, I lost my nikola virtual environment (I think it broke after a brew update or something. The hacking's only partly recorded in the issues on GitHub).
But that's no biggie, just make a new one and re-install, right? Well, not quite. The re-install gives you the latest Nikola (great!) and that means I have to review and update my conf.py (okay...) and figure out runtime errors like this:
[src:?][[email protected]:~/hax/net/blog/milosophical.me]
[07:27](nikola)\$ nikola version
Traceback (most recent call last):
File "/Users/mjl/lib/nikola/bin/nikola", line 11, in <module>
sys.exit(main())
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/__main__.py", line 171, in main
_ = DN.run(oargs)
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/__main__.py", line 339, in run
self.nikola.init_plugins()
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/nikola.py", line 1077, in init_plugins
self._activate_plugins_of_category("SignalHandler")
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/nikola.py", line 1233, in _activate_plugins_of_category
plugin_info.plugin_object.set_site(self)
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/plugins/misc/taxonomies_classifier.py", line 328, in set_site
self._register_path_handlers(taxonomy)
File "/Users/mjl/lib/nikola/lib/python3.6/site-packages/nikola/plugins/misc/taxonomies_classifier.py", line 316, in _register_path_handlers
doc = taxonomy.path_handler_docstrings[name]
KeyError: 'page_index_folder_index'
(well, pooh).
I decided a while back that I wasn't going to meta-blog (otherwise most of my posts would be about blogging!), but I think in this case, Rule 4 will come to the rescue. Anyway at least you know this story has a happy ending, or else I wouldn't be able to add this new_post!
|
2018-07-15 23:22:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26483461260795593, "perplexity": 5175.8203466087525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589022.38/warc/CC-MAIN-20180715222830-20180716002830-00033.warc.gz"}
|
https://api-project-1022638073839.appspot.com/questions/how-do-you-determine-whether-each-function-represents-exponential-growth-or-deca-1
|
# How do you determine whether each function represents exponential growth or decay y=10(3.5)^x?
May 18, 2016
In $y = 10 {\left(3.5\right)}^{x}$, as $a > 0$ and $b > 1$, we have exponential growth.
#### Explanation:
In a function $y = a \cdot {b}^{x}$,
if $a > 0$ and $b$ lies between $0$ and $1$ i.e. $0 < b < 1$,
it is exponential decay.
and if $a > 0$ and $b$ is greater than $1$ i.e. $b > 1$,
it is exponential growth.
Here in $y = 10 {\left(3.5\right)}^{x}$, as $a > 0$ and $b > 1$, we have exponential growth.
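As a quick cross-check of the rule above (an illustration added here, not part of the original answer; the function name and sample values are made up), a few lines of Python:

```python
def classify_exponential(a, b):
    """Classify y = a * b**x using the rule above (assumes a > 0 and b > 0)."""
    if b > 1:
        return "exponential growth"
    if 0 < b < 1:
        return "exponential decay"
    return "neither (constant when b = 1)"

print(classify_exponential(10, 3.5))   # exponential growth, as in the answer
print(classify_exponential(10, 0.5))   # exponential decay
```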
|
2020-05-24 23:38:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9146603941917419, "perplexity": 307.7986392072908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347385193.5/warc/CC-MAIN-20200524210325-20200525000325-00365.warc.gz"}
|
http://www.elbosquecountry.com.ar/maharashtra-vidhan-yrejc/cca2fa-simplifying-complex-numbers-square-roots
|
Simplifying square roots and complex numbers. To add or subtract radicals, simply add or subtract the coefficients (the numbers in front of the radical sign) and keep the original number under the radical sign; radicals can only be combined this way when the values under the radical sign are equal. Expressions containing square roots can frequently be simplified by identifying the largest perfect square that divides evenly into the radicand (the number or expression under the radical sign). Factoring breaks a large number into smaller factors, for instance turning 9 into 3 x 3; once these factors are found, the square root can be rewritten in simpler form, sometimes even as an ordinary integer. (A useful shortcut: 9 is a factor of a number if the sum of its digits is divisible by 9, e.g. 9 divides 198 since 1 + 9 + 8 = 18; recognizing that 36 is the largest perfect-square factor of 108 lets you write √108 = 6√3.) This perfect-square method is suitable for small numbers, say less than 1000; for bigger numbers the prime-factorization method may be better. When using the order of operations, treat the radical sign as a grouping symbol and simplify any expression under it before performing other operations. The goal of simplifying a square root is to rewrite it in a form that is easy to understand and to use in math problems.
The square root of a number x is denoted √x or x^(1/2); for instance, √25 = 5. Remember that when a number is multiplied by itself we say it is squared; for example, 225 is the square of 15. Every nonnegative real number x has a unique nonnegative square root, known as the principal square root. Square roots of numbers that are not perfect squares are irrational. Note, though, that every complex number (and hence every positive real number) has two square roots: even though √9 = 3, there are in fact two numbers that we can square to get 9, so any time you talk about "the" square root you need to be careful.
Square roots of negative numbers can be discussed within the framework of complex numbers. A complex number is a number that can be written in the form a + bi, where a and b are real numbers and i = √(-1); the set of real numbers is a subset of the set of complex numbers C, and the conjugate of a + bi is a - bi. In the complex number system the square root of any negative number is an imaginary number: when faced with square roots of negative numbers, first convert them to complex numbers. Since all square roots of negative numbers can be represented by multiples of i, this is the form used for all such roots. For example, both (2i)^2 = -4 and (-2i)^2 = -4, so both 2i and -2i are square roots of -4.
To multiply complex numbers, distribute just as with polynomials; to divide them, multiply both the numerator and denominator by the complex conjugate of the denominator to eliminate the complex number from the denominator. Add or subtract by combining like terms (the real parts with real parts and the imaginary parts with imaginary parts). The powers of i are cyclic, repeating every fourth one. Example 1: to simplify (1+i)^8 in the calculator, type (1+i)^8. When we rationalize a denominator, we write an equivalent fraction with a rational number in the denominator, obtained by multiplying the numerator and denominator by a suitable radical.
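To make the two techniques described above concrete (pulling out the largest perfect-square factor, and rewriting negative radicands with i), here is a short illustrative Python/SymPy sketch; the helper simplify_sqrt is not from the page, just an assumed example.

```python
from math import isqrt
import sympy as sp

def simplify_sqrt(n):
    """Pull the largest perfect-square factor out of sqrt(n) for an integer n >= 0."""
    outside, inside = 1, n
    for f in range(isqrt(n), 1, -1):
        if n % (f * f) == 0:
            outside, inside = f, n // (f * f)
            break
    return outside, inside                 # sqrt(n) = outside * sqrt(inside)

print(simplify_sqrt(72))                   # (6, 2): sqrt(72) = 6*sqrt(2)
print(sp.sqrt(-4))                         # 2*I: the square roots of -4 are +/- 2i
print(sp.expand((1 + sp.I)**8))            # 16, the (1+i)^8 example mentioned above
print(sp.I**5)                             # I: powers of i repeat every fourth one
```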
|
2021-04-19 01:00:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6654125452041626, "perplexity": 1058.484427293031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038862159.64/warc/CC-MAIN-20210418224306-20210419014306-00621.warc.gz"}
|
https://www.techwhiff.com/learn/in-exercises-1-6-a-use-the-trapezoidal-rule-with/188049
|
# In Exercises 1-6, (a) use the Trapezoidal Rule with n = 4 to approximate the value...
###### Question:
In Exercises 1-6, (a) use the Trapezoidal Rule with n = 4 to approximate the value of the integral, (b) Use the concavity of the function to predict whether the approximation is an overestimate or an underestimate. Finally, (c) find the integral's exact value to check your answer.
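Since the integral in the exercise is cut off above, here is a generic, hedged Python sketch of the Trapezoidal Rule with n = 4; the sample integrand x^2 on [0, 2] is only an assumption chosen to illustrate the concavity point in part (b).

```python
def trapezoid(f, a, b, n=4):
    """Composite Trapezoidal Rule with n subintervals on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Assumed example: integrate x**2 on [0, 2] with n = 4 (exact value 8/3).
approx = trapezoid(lambda x: x**2, 0.0, 2.0, n=4)
print(approx)   # 2.75, an overestimate because x**2 is concave up
```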
What are the requirements that must be satisfied to construct a confidence interval about a population proportion?
#### Similar Solved Questions
##### 2) a) Consider the following molecule Given what you have learned about hybridization theory, draw an...
2) a) Consider the following molecule Given what you have learned about hybridization theory, draw an image or images explaining the bonding situation in this molecule. I want you to draw out all of the orbitals, hybrid orbitals and how they overlap to form the bonds in the molecule. Indicate the % ...
##### When chlorine gas is added to acetylene gas, liquid 1,1,2,2-tetrachloroethane is formed: 2 Cl2(g) + C2H2(g) ...
When chlorine gas is added to acetylene gas, liquid 1,1,2,2-tetrachloroethane is formed: 2 Cl2(g) + C2H2(g) → C2H2Cl4(l). If 8.0 L of chlorine is reacted at STP, exactly how many liters of acetylene at STP would be needed to allow complete reaction?...
##### The tenant of Crazee Dealers paid his rent for March and April 20.8 on the 28th at R7 500 per month. How will ...
The tenant of Crazee Dealers paid his rent for March and April 20.8 on the 28th at R7 500 per month. How will the financial year-end closing transfer of the above monthly rental affect the accounting equation? Account credit ... Account debit ... Income recei...
##### 3. A soil boring was drilled in a multi-layered deposit, as shown in the boring log...
3. A soil boring was drilled in a multi-layered deposit, as shown in the boring log below. An SPT was also performed at intervals of 1 m. A donut hammer and a liner sampler were used. Care was taken to connect the rod segments firmly and to follow standard procedure. a. Find the corrected N60 at the ...
##### 10) Complete the chart below by putting an "X" in the appropriate box: Account Closed with...
10) Complete the chart below by putting an "X" in the appropriate box: Account Closed with a debit Closed with a credit Not closed to the account to the account Notes Payable Prepaid Rent Common Stock Long-term Investment Depreciation Expense Dividends Advertising Expense Interest Revenue Re...
##### If you have 0.20 mol of silver permanganate (AgMnO4), which has a solubility of about 16.7...
If you have 0.20 mol of silver permanganate (AgMnO4), which has a solubility of about 16.7 g/L, what is the minimum amount of water you’ll need to get it all into solution? At what temperature? Report your answer to the nearest whole mL....
##### 2. The growth in the number of employees of a company can be described by the equation N(t) = 275(0.55)^(0.45 t), t in years ...
2. The growth in the number of employees of a company can be described by the equation N(t) = 275(0.55)^(0.45 t), t in years. a. How many employees wi...
##### 4. A particle of mass m = 2 kg moves under the potential energy function U(x, y, z) = (k1 x ...
4. A particle of mass m = 2 kg moves under the potential energy function U(x, y, z) = (k1 x + 2 k2 y^2 + 3 k3 z^3), where k = 1 N. a. Suppose the particle has speed v0 = 3 m/s when it passes through the origin. What will its speed be if and when it passes through the point (1, 1, 1)? b. Suppose the particle's speed ...
##### A) Do we accept the HA that there are differences in age of first intercourse? Explain...
a) Do we accept the HA that there are differences in age of first intercourse? Explain and provide p-value. b) If there are post hoc differences, please list between which races and provide p- values. (Please don’t list the same pairs twice). c) Which race was at highest risk (the youngest...
##### A clock is placed in a satellite that orbits Earth with an orbital period of 108...
A clock is placed in a satellite that orbits Earth with an orbital period of 108 min. By what time interval will this clock differ from an identical clock on Earth after 1 week? (Assume that special relativity applies and neglect general relativity.) Please, I need this to be the correct answer, help!...
##### What is the energy in joules and eV of a photon in a radio wave from...
What is the energy in joules and eV of a photon in a radio wave from an AM station that has a 1500 kHz broadcast frequency? ------------ J ------------ eV...
##### What is the difference between blood loss, pernicious and sickle anemia and describe their pathophysiology with...
What is the difference between blood loss, pernicious and sickle anemia and describe their pathophysiology with references....
##### Calculate the molar solubility of Fe(OH)2 in a buffer solution where the pH has been fixed at the indicated values. Ksp...
Calculate the molar solubility of Fe(OH)2 in a buffer solution where the pH has been fixed at the indicated values. Ksp = 7.9 × 10^-16. (a) pH 7.3 _______M (b) pH 11.4 _______M (c) pH 13.8 ________M...
##### What does the income elasticity of demand tell us about the types of goods that consumers...
What does the income elasticity of demand tell us about the types of goods that consumers will buy?...
##### A parallel RLC circuit has L = 100 mH and C = 10 uF. When R = 200 kΩ ...
A parallel RLC circuit has L = 100 mH and C = 10 uF. When R = 200 kΩ, the value of ω0 does not change, the value of Q is 2000.000, and B = 0.50 rad/s. True False...
##### What is a line through which two points would be perpendicular to the line y=2/5x-5/2?
What is a line through which two points would be perpendicular to the line y=2/5x-5/2?...
|
2022-12-03 05:52:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4033856689929962, "perplexity": 2673.7194573491583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00357.warc.gz"}
|
https://www.advanceduninstaller.com/Driver-Fetch-7c531519b71a19422a8ccb13eb274a4d-application.htm
|
# Driver Fetch
## A way to uninstall Driver Fetch from your PC
This info is about Driver Fetch for Windows. Below you can find details on how to remove it from your PC. The Windows version was created by Blitware Technology Inc.. Check out here where you can read more on Blitware Technology Inc.. You can get more details on Driver Fetch at . The application is frequently found in the C:\Program Files (x86)\Driver Fetch\2.5.4.1 folder. Take into account that this path can vary being determined by the user's preference. Driver Fetch's entire uninstall command line is C:\Program Files (x86)\Driver Fetch\2.5.4.1\unins000.exe. The application's main executable file is called DriverFetch.exe and it has a size of 1.05 MB (1104192 bytes).
The executables below are part of Driver Fetch. They take about 2.44 MB (2560259 bytes) on disk.
• DriverFetch.exe (1.05 MB)
• unins000.exe (1.39 MB)
If you are manually uninstalling Driver Fetch we suggest you to check if the following data is left behind on your PC.
Folders found on disk after you uninstall Driver Fetch from your PC:
• C:\Program Files (x86)\Driver Fetch\2.5.4.1
Files remaining:
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_ctypes.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_hashlib.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_imaging.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_imagingft.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_socket.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\_ssl.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\aggdraw.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\blt_cpuid.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\blt_kill_process.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\blt_restore.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\blt_scheduler.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\DriverFetch.exe
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\htmlayout.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\lib.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\Microsoft.VC90.CRT\Microsoft.VC90.CRT.manifest
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\Microsoft.VC90.CRT\msvcp90.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\Microsoft.VC90.CRT\msvcr90.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\pyexpat.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\python26.dll
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\select.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\settings.json
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\unicodedata.pyd
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\unins000.dat
• C:\Program Files (x86)\Driver Fetch\2.5.4.1\unins000.exe
Use regedit.exe to manually remove from the Windows Registry the data below:
• HKEY_CURRENT_USER\Software\Driver Fetch
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\{735BFEEC-D330-496A-85B2-DF1B56BF2BB0}_is1
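If you prefer to script the manual check, the sketch below (not part of the original guide) uses Python's standard library to look for the leftover folder listed above; the folder path comes from the article, everything else is illustrative, and the registry keys would still need to be checked with regedit.exe or the winreg module.

```python
from pathlib import Path

# Folder the article lists as a possible leftover after uninstalling Driver Fetch.
LEFTOVER_DIR = Path(r"C:\Program Files (x86)\Driver Fetch\2.5.4.1")

if LEFTOVER_DIR.exists():
    print(f"Leftover folder still present: {LEFTOVER_DIR}")
    for item in sorted(LEFTOVER_DIR.rglob("*")):
        print("  ", item)   # e.g. DriverFetch.exe, unins000.exe, *.pyd files
else:
    print("No leftover Driver Fetch folder found.")
```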
## How to remove Driver Fetch with Advanced Uninstaller PRO
Driver Fetch is a program marketed by Blitware Technology Inc. Some people want to erase this application. This can be easier said than done, because doing it by hand takes some skill in removing Windows programs manually. The simplest way to erase Driver Fetch is to use Advanced Uninstaller PRO. Here are some detailed instructions on how to do this:
1. If you don't have Advanced Uninstaller PRO on your system, add it. This is good because Advanced Uninstaller PRO is a very efficient uninstaller and all around utility to take care of your computer.
2. Run Advanced Uninstaller PRO. Take your time to admire the program's design and number of features available. Advanced Uninstaller PRO is a very useful PC management program.
3. Click on the General Tools category
4. Activate the Uninstall Programs tool
5. A list of the applications installed on your PC will appear
6. Navigate the list of applications until you find Driver Fetch or simply click the Search feature and type in "Driver Fetch". If it is installed on your PC the Driver Fetch application will be found automatically. Notice that when you select Driver Fetch in the list , the following data regarding the application is shown to you:
• Safety rating (in the lower left corner). The star rating explains the opinion other people have regarding Driver Fetch, from "Highly recommended" to "Very dangerous".
• Reviews by other people - Click on the Read reviews button.
• Details regarding the program you wish to remove, by pressing the Properties button.
For example you can see that for Driver Fetch:
• The publisher is: http://blitware.com
• The uninstall string is: C:\Program Files (x86)\Driver Fetch\2.5.4.1\unins000.exe
7. Click the Uninstall button. A confirmation dialog will show up. Accept the uninstall by pressing Uninstall. Advanced Uninstaller PRO will uninstall Driver Fetch.
8. After uninstalling Driver Fetch, Advanced Uninstaller PRO will ask you to run a cleanup. Press Next to proceed with the cleanup. All the items that belong to Driver Fetch and have been left behind will be found and you will be able to delete them. By removing Driver Fetch with Advanced Uninstaller PRO, you are assured that no registry entries, files or directories are left behind on your computer.
Your system will remain clean, speedy and able to run without errors or problems.
## Disclaimer
This page is not a recommendation to remove Driver Fetch by Blitware Technology Inc. from your PC, we are not saying that Driver Fetch by Blitware Technology Inc. is not a good application. This page only contains detailed instructions on how to remove Driver Fetch in case you decide this is what you want to do. Here you can find registry and disk entries that our application Advanced Uninstaller PRO stumbled upon and classified as "leftovers" on other users' PCs.
2016-06-19 / Written by Dan Armano for Advanced Uninstaller PRO
|
2021-04-17 04:50:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471076726913452, "perplexity": 14401.459985190984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00275.warc.gz"}
|
https://imogeometry.blogspot.com/2017/10/2002-jbmo-shortlist-13.html
|
## Wednesday, 11 October 2017
### 2002 JBMO Shortlist 13
Let ${A_1, A_2, \ldots, A_{2002}}$ be arbitrary points in the plane. Prove that for every circle of radius ${1}$ and for every rectangle inscribed in this circle, there exist ${3}$ vertices ${M, N, P}$ of the rectangle such that ${MA_1 + MA_2 + \cdots + MA_{2002} + NA_1 + NA_2 + \cdots + NA_{2002} + PA_1 + PA_2 + \cdots + PA_{2002} \ge 6006}$.
posted in aops here
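One standard solution sketch (added here for convenience; it may differ from the AoPS thread): label the rectangle $MNPQ$, so its diagonals $MP$ and $NQ$ are diameters of the circle and have length $2$. For each point $A_i$ the triangle inequality gives $MA_i + PA_i \ge MP = 2$ and $NA_i + QA_i \ge NQ = 2$. Summing over $i = 1, \dots, 2002$, the four vertex sums together are at least $2 \cdot 2 \cdot 2002 = 8008$; discarding the smallest of the four leaves three vertices whose sums total at least $\tfrac{3}{4} \cdot 8008 = 6006$.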
|
2018-07-18 06:47:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.57827228307724, "perplexity": 2468.594472091765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590069.15/warc/CC-MAIN-20180718060927-20180718080927-00098.warc.gz"}
|
http://mymathforum.com/calculus/36959-find-diff-equation-family-circles-center-t.html
|
My Math Forum Find the diff equation of family of circles with center on t
Calculus Calculus Math Forum
July 8th, 2013, 04:24 PM #1 Member Joined: Jun 2013 Posts: 42 Thanks: 0 Find the diff equation of family of circles with center on t Find the diff equation of family of circles with center on the line y= -x and passing through the origin. I'm good in differentiating... but my problem is that if the equation to be differentiated is not given at the start like this worded problem... I know that the eq of circle. is (x-h)^2 + (y-k)^2 =r^2.... but I don't know how to use the given... line y=-x ..... and how to substitute this given line in this equation... can someone give me the proper equation for this problem??.. don't worry I just need the first equation... I will be the one to differentiate .
July 8th, 2013, 05:40 PM #2 Senior Member Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,211 Thanks: 521 Math Focus: Calculus/ODEs Re: Find the diff equation of family of circles with center I would try letting the center of the circle be $(h,-h)$.
July 9th, 2013, 11:46 AM #3 Math Team Joined: Sep 2007 Posts: 2,409 Thanks: 6 Re: Find the diff equation of family of circles with center A circle with center at (h, -h) (on y= -x as MarkFL suggests, would have equation of the form $(x- h)^2+ (y+ h)^2= R^2$. The distance from (h, -h) to (0, 0) is, of course, $\sqrt{h^2+ (-h)^2}= h\sqrt{2}$ so $R^2= 2h^2$: $(x- h)^2+ (y+ h)^2= 2h^2$. Multiplying that out, $x^2- 2hx+ h^2+ y^2+ 2hy+ h^2= 2h^2$. Then a wonderful thing happens - all those $h^2$ terms cancel! $x^2+ 2h(y- x)+ y^2= 0$. Now, $h= \frac{x^2+ y^2}{2(y- x)}$. Differentiating with respect to x, the constant, h, will disappear.
July 9th, 2013, 12:52 PM #4 Senior Member Joined: Aug 2012 From: New Delhi, India Posts: 157 Thanks: 0 Re: Find the diff equation of family of circles with center Let the circle be centered at $(h,-h)$ with radius $R$: $(x-h)^2 + (y+h)^2 = R^2$. For the circle passing through the origin, put $(0,0)$ in the equation: $2h^2 = R^2$, so $(x-h)^2 + (y+h)^2 = 2h^2$, i.e. $x^2 + h^2 - 2hx + y^2 + h^2 + 2hy = 2h^2$, which gives $x^2 + y^2 - 2hx + 2hy = 0$ and $h = \frac{x^2+y^2}{2(x-y)}$. @HallsOfIvy: You made a mistake, it should be as above.
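For anyone who wants to double-check the elimination of the parameter, here is a small SymPy sketch (my own illustration, not from the thread):

```python
import sympy as sp

x, h = sp.symbols('x h')
y = sp.Function('y')(x)

# Circle centred at (h, -h) and passing through the origin, so R^2 = 2*h^2.
family = sp.Eq(x**2 + y**2 - 2*h*x + 2*h*y, 0)

# Differentiate with respect to x (y is treated as y(x)) ...
dfamily = sp.Eq(sp.diff(family.lhs, x), 0)

# ... then eliminate the parameter h using the original equation.
h_val = sp.solve(family, h)[0]
ode = sp.simplify(dfamily.subs(h, h_val))
print(ode)   # should be equivalent to (x^2 - 2xy - y^2) + (x^2 + 2xy - y^2) y' = 0
```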
|
2019-09-23 12:59:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 10, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6590121388435364, "perplexity": 1924.0623383988493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00367.warc.gz"}
|
https://www.textilesdirect.net/product-category/haberdashery/elastic/braided-elastic/
|
# Shop Braided Elastic
Showing all 6 results
### White 3mm Medium Finish Braided Elastic – Per Metre or 300m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
### White 6mm Medium Finish Braided Elastic – Per Metre or 150m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
### White 9mm Medium Finish Braided Elastic – Per Metre or 100m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
### Black 3mm Medium Finish Braided Elastic – Per Metre or 300m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
### Black 6mm Medium Finish Braided Elastic – Per Metre or 150m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
### Black 9mm Medium Finish Braided Elastic – Per Metre or 100m Roll
Braided elastics are the narrow, flexible type used where the elastic is not usually under stress.
|
2020-10-31 22:57:49
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8311654925346375, "perplexity": 13480.93228611179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00144.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/315/2/bl/f/
|
# Properties
| Label | Level | Weight | Character orbit | Analytic conductor | Analytic rank | Dimension | CM | Inner twists |
|---|---|---|---|---|---|---|---|---|
| 315.2.bl.f | 315 | 2 | 315.bl | 2.515 | 0 | 2 | no | 2 |
# Related objects
## Newspace parameters
• Level: $$N = 315 = 3^{2} \cdot 5 \cdot 7$$
• Weight: $$k = 2$$
• Character orbit: $$[\chi]$$ = 315.bl (of order $$6$$, degree $$2$$, minimal)
## Newform invariants
• Self dual: no
• Analytic conductor: $$2.51528766367$$
• Analytic rank: $$0$$
• Dimension: $$2$$
• Coefficient field: $$\Q(\sqrt{-3})$$
• Coefficient ring: $$\Z[a_1, a_2, a_3]$$
• Coefficient ring index: $$1$$
• Twist minimal: yes
• Sato-Tate group: $\mathrm{SU}(2)[C_{6}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + ( -2 + \zeta_{6} ) q^{3} -2 \zeta_{6} q^{4} + \zeta_{6} q^{5} + ( -1 + 3 \zeta_{6} ) q^{7} + ( 3 - 3 \zeta_{6} ) q^{9} +O(q^{10})$$ $$q + ( -2 + \zeta_{6} ) q^{3} -2 \zeta_{6} q^{4} + \zeta_{6} q^{5} + ( -1 + 3 \zeta_{6} ) q^{7} + ( 3 - 3 \zeta_{6} ) q^{9} + ( -2 - 2 \zeta_{6} ) q^{11} + ( 2 + 2 \zeta_{6} ) q^{12} + ( -6 + 3 \zeta_{6} ) q^{13} + ( -1 - \zeta_{6} ) q^{15} + ( -4 + 4 \zeta_{6} ) q^{16} -3 q^{17} + ( -2 + 4 \zeta_{6} ) q^{19} + ( 2 - 2 \zeta_{6} ) q^{20} + ( -1 - 4 \zeta_{6} ) q^{21} + ( -8 + 4 \zeta_{6} ) q^{23} + ( -1 + \zeta_{6} ) q^{25} + ( -3 + 6 \zeta_{6} ) q^{27} + ( 6 - 4 \zeta_{6} ) q^{28} + ( 1 + \zeta_{6} ) q^{29} + ( -4 + 2 \zeta_{6} ) q^{31} + 6 q^{33} + ( -3 + 2 \zeta_{6} ) q^{35} -6 q^{36} + 10 q^{37} + ( 9 - 9 \zeta_{6} ) q^{39} -12 \zeta_{6} q^{41} + ( 8 - 8 \zeta_{6} ) q^{43} + ( -4 + 8 \zeta_{6} ) q^{44} + 3 q^{45} + ( 4 - 8 \zeta_{6} ) q^{48} + ( -8 + 3 \zeta_{6} ) q^{49} + ( 6 - 3 \zeta_{6} ) q^{51} + ( 6 + 6 \zeta_{6} ) q^{52} + ( 2 - 4 \zeta_{6} ) q^{53} + ( 2 - 4 \zeta_{6} ) q^{55} -6 \zeta_{6} q^{57} + ( -2 + 4 \zeta_{6} ) q^{60} + ( 4 + 4 \zeta_{6} ) q^{61} + ( 6 + 3 \zeta_{6} ) q^{63} + 8 q^{64} + ( -3 - 3 \zeta_{6} ) q^{65} + 10 \zeta_{6} q^{67} + 6 \zeta_{6} q^{68} + ( 12 - 12 \zeta_{6} ) q^{69} + ( -7 + 14 \zeta_{6} ) q^{71} + ( -1 + 2 \zeta_{6} ) q^{73} + ( 1 - 2 \zeta_{6} ) q^{75} + ( 8 - 4 \zeta_{6} ) q^{76} + ( 8 - 10 \zeta_{6} ) q^{77} + ( -8 + 8 \zeta_{6} ) q^{79} -4 q^{80} -9 \zeta_{6} q^{81} + ( -9 + 9 \zeta_{6} ) q^{83} + ( -8 + 10 \zeta_{6} ) q^{84} -3 \zeta_{6} q^{85} -3 q^{87} + 6 q^{89} + ( -3 - 12 \zeta_{6} ) q^{91} + ( 8 + 8 \zeta_{6} ) q^{92} + ( 6 - 6 \zeta_{6} ) q^{93} + ( -4 + 2 \zeta_{6} ) q^{95} + ( -4 - 4 \zeta_{6} ) q^{97} + ( -12 + 6 \zeta_{6} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q - 3q^{3} - 2q^{4} + q^{5} + q^{7} + 3q^{9} + O(q^{10})$$ $$2q - 3q^{3} - 2q^{4} + q^{5} + q^{7} + 3q^{9} - 6q^{11} + 6q^{12} - 9q^{13} - 3q^{15} - 4q^{16} - 6q^{17} + 2q^{20} - 6q^{21} - 12q^{23} - q^{25} + 8q^{28} + 3q^{29} - 6q^{31} + 12q^{33} - 4q^{35} - 12q^{36} + 20q^{37} + 9q^{39} - 12q^{41} + 8q^{43} + 6q^{45} - 13q^{49} + 9q^{51} + 18q^{52} - 6q^{57} + 12q^{61} + 15q^{63} + 16q^{64} - 9q^{65} + 10q^{67} + 6q^{68} + 12q^{69} + 12q^{76} + 6q^{77} - 8q^{79} - 8q^{80} - 9q^{81} - 9q^{83} - 6q^{84} - 3q^{85} - 6q^{87} + 12q^{89} - 18q^{91} + 24q^{92} + 6q^{93} - 6q^{95} - 12q^{97} - 18q^{99} + O(q^{100})$$
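As a quick sanity check on the displayed coefficients (an illustration added here, not part of the LMFDB page), one can verify with SymPy that the traces of a few coefficients over ζ₆ and its conjugate reproduce the integral trace form, and that a₁₃ is a root of the Hecke kernel polynomial T₁₃² + 9T₁₃ + 27 listed further down.

```python
import sympy as sp

# zeta_6 = (1 + i*sqrt(3))/2, a primitive 6th root of unity
z = sp.Rational(1, 2) + sp.sqrt(3) * sp.I / 2

a3, a5, a7, a13 = -2 + z, z, -1 + 3*z, -6 + 3*z

# Traces over the two embeddings should match the trace form: -3q^3 + q^5 + q^7 ... - 9q^13 ...
for name, a in [("a3", a3), ("a5", a5), ("a7", a7), ("a13", a13)]:
    print(name, "trace =", sp.simplify(a + sp.conjugate(a)))

# a13 and its conjugate are roots of T^2 + 9T + 27 (the T_13 polynomial in the Hecke kernels):
print(sp.expand(a13**2 + 9*a13 + 27))   # 0
```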
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/315\mathbb{Z}\right)^\times$$.
| $$n$$ | $$127$$ | $$136$$ | $$281$$ |
|---|---|---|---|
| $$\chi(n)$$ | $$1$$ | $$-1$$ | $$\zeta_{6}$$ |
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 41.1 | 0.5 − 0.866025i | 0 | −1.50000 − 0.866025i | −1.00000 + 1.73205i | 0.500000 − 0.866025i | 0 | 0.500000 − 2.59808i | 0 | 1.50000 + 2.59808i | 0 |
| 146.1 | 0.5 + 0.866025i | 0 | −1.50000 + 0.866025i | −1.00000 − 1.73205i | 0.500000 + 0.866025i | 0 | 0.500000 + 2.59808i | 0 | 1.50000 − 2.59808i | 0 |
## Inner twists
| Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial |
| 63.o | even | 6 | 1 | inner |
## Twists
By twisting character orbit
| Char | Parity | Ord | Mult | Type | Twist | Min | Dim |
|---|---|---|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial | 315.2.bl.f | | 2 |
| 3.b | odd | 2 | 1 | | 945.2.bl.b | | 2 |
| 7.b | odd | 2 | 1 | | 315.2.bl.g | yes | 2 |
| 9.c | even | 3 | 1 | | 945.2.bl.d | | 2 |
| 9.d | odd | 6 | 1 | | 315.2.bl.g | yes | 2 |
| 21.c | even | 2 | 1 | | 945.2.bl.d | | 2 |
| 63.l | odd | 6 | 1 | | 945.2.bl.b | | 2 |
| 63.o | even | 6 | 1 | inner | 315.2.bl.f | | 2 |
By twisted newform orbit
| Twist | Min | Dim | Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|---|---|---|
| 315.2.bl.f | | 2 | 1.a | even | 1 | 1 | trivial |
| 315.2.bl.f | | 2 | 63.o | even | 6 | 1 | inner |
| 315.2.bl.g | yes | 2 | 7.b | odd | 2 | 1 | |
| 315.2.bl.g | yes | 2 | 9.d | odd | 6 | 1 | |
| 945.2.bl.b | | 2 | 3.b | odd | 2 | 1 | |
| 945.2.bl.b | | 2 | 63.l | odd | 6 | 1 | |
| 945.2.bl.d | | 2 | 9.c | even | 3 | 1 | |
| 945.2.bl.d | | 2 | 21.c | even | 2 | 1 | |
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(315, [\chi])$$:
$$T_{2}$$ $$T_{11}^{2} + 6 T_{11} + 12$$ $$T_{13}^{2} + 9 T_{13} + 27$$ $$T_{17} + 3$$
## Hecke Characteristic Polynomials
$p$ $F_p(T)$
$2$ $$1 + 2 T^{2} + 4 T^{4}$$
$3$ $$1 + 3 T + 3 T^{2}$$
$5$ $$1 - T + T^{2}$$
$7$ $$1 - T + 7 T^{2}$$
$11$ $$1 + 6 T + 23 T^{2} + 66 T^{3} + 121 T^{4}$$
$13$ $$( 1 + 2 T + 13 T^{2} )( 1 + 7 T + 13 T^{2} )$$
$17$ $$( 1 + 3 T + 17 T^{2} )^{2}$$
$19$ $$( 1 - 8 T + 19 T^{2} )( 1 + 8 T + 19 T^{2} )$$
$23$ $$1 + 12 T + 71 T^{2} + 276 T^{3} + 529 T^{4}$$
$29$ $$1 - 3 T + 32 T^{2} - 87 T^{3} + 841 T^{4}$$
$31$ $$1 + 6 T + 43 T^{2} + 186 T^{3} + 961 T^{4}$$
$37$ $$( 1 - 10 T + 37 T^{2} )^{2}$$
$41$ $$1 + 12 T + 103 T^{2} + 492 T^{3} + 1681 T^{4}$$
$43$ $$( 1 - 13 T + 43 T^{2} )( 1 + 5 T + 43 T^{2} )$$
$47$ $$1 - 47 T^{2} + 2209 T^{4}$$
$53$ $$1 - 94 T^{2} + 2809 T^{4}$$
$59$ $$1 - 59 T^{2} + 3481 T^{4}$$
$61$ $$( 1 - 13 T + 61 T^{2} )( 1 + T + 61 T^{2} )$$
$67$ $$1 - 10 T + 33 T^{2} - 670 T^{3} + 4489 T^{4}$$
$71$ $$1 + 5 T^{2} + 5041 T^{4}$$
$73$ $$( 1 - 17 T + 73 T^{2} )( 1 + 17 T + 73 T^{2} )$$
$79$ $$1 + 8 T - 15 T^{2} + 632 T^{3} + 6241 T^{4}$$
$83$ $$1 + 9 T - 2 T^{2} + 747 T^{3} + 6889 T^{4}$$
$89$ $$( 1 - 6 T + 89 T^{2} )^{2}$$
$97$ $$1 + 12 T + 145 T^{2} + 1164 T^{3} + 9409 T^{4}$$
|
2020-05-30 08:53:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9893146753311157, "perplexity": 5760.327613599414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00324.warc.gz"}
|
https://learn.careers360.com/ncert/question-curved-surface-area-of-a-cone-is-308-cm-square-and-its-slant-height-is-14-cm-find-i-radius-of-the-base/
|
# Curved surface area of a cone is 308 cm square and its slant height is 14 cm. Find (i) radius of the base
Q : 3 Curved surface area of a cone is $\small 308\hspace{1mm}cm^2$ and its slant height is 14 cm. Find (i) the radius of the base.
Given,
The curved surface area of a cone = $\small 308\hspace{1mm}cm^2$
Slant height $= l = 14\ cm$
(i) Let the radius of cone be $r\ cm$
We know, the curved surface area of a cone= $\pi rl$
$\therefore$ $\\ \pi rl = 308 \\ \\ \Rightarrow \frac{22}{7}\times r\times14 = 308 \\ \Rightarrow r = \frac{308}{44} = 7$
Therefore, the radius of the cone is $7\ cm$
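A one-line numeric check of the result (illustrative only), using the same π ≈ 22/7 as the worked solution:

```python
from fractions import Fraction

csa, slant = 308, 14
pi_approx = Fraction(22, 7)              # the approximation used in the solution above
r = Fraction(csa) / (pi_approx * slant)
print(r)   # 7, so the radius of the base is 7 cm
```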
|
2020-01-24 10:21:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329726099967957, "perplexity": 446.4151389478592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250619323.41/warc/CC-MAIN-20200124100832-20200124125832-00065.warc.gz"}
|
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=17-23
|
17-23 Gianni Arioli, Hans Koch
Spectral stability for the wave equation with periodic forcing (508K, pdf) Mar 3, 17
Abstract. We consider the spectral stability problem for Floquet-type systems such as the wave equation $v_{\tau\tau}=\gamma^2 v_{xx}-\psi v$ with periodic forcing $\psi$. Our approach is based on a comparison with finite-dimensional approximations. Specific results are obtained for a system where the forcing is due to a coupling between the wave equation and a time-periodic solution of a nonlinear beam equation. We prove (spectral) stability for some period and instability for another. The finite-dimensional approximations are controlled via computer-assisted estimates.
Files: 17-23.src( 17-23.comments , flokay4.pdf.mm , 17-23.keywords.mm )
|
2018-07-19 19:03:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305741786956787, "perplexity": 2152.377385148352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00335.warc.gz"}
|
http://www.researchgate.net/publication/259799438_Completely_Reducible_SL%282%29-Homomorphisms
|
# Completely Reducible SL(2)-Homomorphisms
Donna M. Testerman
Transactions of the American Mathematical Society (Impact Factor: 1.02). 01/2007; 359(9):4489-4510. DOI: 10.2307/20161784
Source: OAI
ABSTRACT Let K be any field, and let G be a semisimple group over K. Suppose the characteristic of K is positive and is very good for G. We describe all group scheme homomorphisms φ: SL₂ → G whose image is geometrically G-completely reducible-or G-cr-in the sense of Serre; the description resembles that of irreducible modules given by Steinberg's tensor product theorem. In case K is algebraically closed and G is simple, the result proved here was previously obtained by Liebeck and Seitz using different methods. A recent result shows the Lie algebra of the image of φ to be geometrically G-cr; this plays an important role in our proof.
##### Article: On the Smoothness of Centralizers in Reductive Groups
ABSTRACT: Let G be a connected reductive algebraic group over an algebraically closed field k. In a recent paper, Bate, Martin, Röhrle and Tange show that every (smooth) subgroup of G is separable provided that the characteristic of k is very good for G. Here separability of a subgroup means that its scheme-theoretic centralizer in G is smooth. Serre suggested extending this result to arbitrary, possibly non-smooth, subgroup schemes of G. The aim of this note is to prove this more general result. Moreover, we provide a condition on the characteristic of k that is necessary and sufficient for the smoothness of all centralizers in G. We finally relate this condition to other standard hypotheses on connected reductive groups.
Transactions of the American Mathematical Society 09/2010; · 1.02 Impact Factor
##### Article: Nilpotent centralizers and Springer isomorphisms
ABSTRACT: Let G be a semisimple algebraic group over a field K whose characteristic is very good for G, and let sigma be any G-equivariant isomorphism from the nilpotent variety to the unipotent variety; the map sigma is known as a Springer isomorphism. Let y in G(K), let Y in Lie(G)(K), and write C_y = C_G(y) and C_Y= C_G(Y) for the centralizers. We show that the center of C_y and the center of C_Y are smooth group schemes over K. The existence of a Springer isomorphism is used to treat the crucial cases where y is unipotent and where Y is nilpotent. Now suppose G to be quasisplit, and write C for the centralizer of a rational regular nilpotent element. We obtain a description of the normalizer N_G(C) of C, and we show that the automorphism of Lie(C) determined by the differential of sigma at zero is a scalar multiple of the identity; these results verify observations of J-P. Serre.
Journal of Pure and Applied Algebra 07/2009; 213(7):1346–1363. · 0.53 Impact Factor
##### Article: Complete Reducibility and Commuting Subgroups
ABSTRACT: Let G be a reductive linear algebraic group over an algebraically closed field of characteristic p. We study J.-P. Serre's notion of G-complete reducibility for subgroups of G. In particular, for a subgroup H and a normal subgroup N of H, we look at the relationship between G-complete reducibility of N and of H, and show that these properties are equivalent if H/N is linearly reductive, generalizing a result of Serre. We also study the case when H = MN with M a G-completely reducible subgroup of G which normalizes N. We show that if G is connected, N and M are connected commuting G-completely reducible subgroups of G, and p is good for G, then H = MN is also G-completely reducible.
10/2006;
|
2014-12-27 06:48:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8841429948806763, "perplexity": 710.2337124838054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447550643.22/warc/CC-MAIN-20141224185910-00059-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/106441/simplicity-of-reduced-c-algebras-for-non-hausdorff-etale-groupoids
|
# Simplicity of reduced C*-algebras for non-Hausdorff etale groupoids
It is known that for a Hausdorff locally compact etale groupoid, the reduced C*-algebra is simple iff the groupoid is minimal (meaning the orbit of each unit is dense) and topologically principal (meaning the set of units with trivial isotropy is dense).
If one considers the case where only the unit space is Hausdorff, then it is known the above two conditions are not sufficient.
Question: Are there simple to state necessary and sufficient conditions for simplicity of the reduced C*-algebra in the non-Hausdorff setting?
I suspect something can be extracted from the Khoshkam-Skandalis paper in Crelle's journal on regular representations of non-Hausdorff groupoids but I am not an operator theorist by trade.
Edit: I omitted the hypothesis that the groupoid be amenable. This should be added to both the background and the question.
|
2016-07-01 13:40:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104177355766296, "perplexity": 226.74720788517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00118-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/rod-colliding-with-the-particle.805521/
|
# Rod colliding with the particle.
1. Mar 28, 2015
### Satvik Pandey
1. The problem statement, all variables and given/known data
A rod of length 2 m ($l$) and mass 16 kg ($M$) is projected vertically upwards from the ground, such that the top of the rod reaches a height $h$. At the moment that it reaches its apex, a ball of mass 3 kg ($m$) hits the top of the rod with a velocity of 10 m/s ($v$) and binds to the very end of the rod. The rod now starts to spin. It perfectly completes one full rotation, and then hits the ground. Find $h$.
2. Relevant equations
3. The attempt at a solution
As no external torque acts on the system during the collision, I think I can conserve the angular momentum of the system about point $O$.
The initial angular momentum will be $0.5mvl=30$
After collision the ball sticks to the rod so position of CoM will change. Let the CoM be located at $x$ distance from point O.
So $x=\frac{ml}{2(M+m)}=\frac{3}{19}$
Now the final angular momentum can be found by adding the angular momentum of the CoM about the point O and the angular momentum of the system about its CoM.
So Final angular momentum = $(M+m)v_{0}x+I_{CoM} \omega$
As no external force acts on the system in horizontal direction so we can conserve linear momentum.
$mv=(M+m)v_{0}$
Now $I_{CoM}$ = I(rod about CoM) + I(ball about CoM)
=$\frac { M{ l }^{ 2 } }{ 12 } +M{ x }^{ 2 }+m(1-x)^{ 2 }$
On putting in the value of $x$, I got $I_{CoM}=\frac { 16\times 28 }{ 3\times 19 }$
Putting this in the equation framed by conserving angular momentum I got
$\omega =\frac { 15(3) }{ 14 }$
Time taken to complete one rotation is $\frac{2 \pi}{\omega}$
So $t=\frac{88}{45}$
Now the rod will fall by distance $h-2$ before hitting the ground. So time required will be $\sqrt { \frac { h-2 }{ 4.9 } }$
Equating these I got $h =20.738$. But apparently this is not correct. I am unable to find my mistake. Please help me!
2. Mar 28, 2015
### Staff: Mentor
I think it is easier to consider the rotation in the center of mass system.
You use angular momentum around O, but then moment of inertia for the CoM? Those don't fit together.
3. Mar 28, 2015
### Satvik Pandey
I have found the angular momentum about O. Let O be a point in a plane which coincides with the center of the rod initially. After the collision the rod rotates and translates together. Now, the angular momentum of a body about a point is found by treating the body as a point mass located at the CoM, finding the angular momentum of the CoM relative to that point, and then adding the angular momentum of the body about the CoM. And the angular momentum of a body about the CoM is $I_{com} \omega_{com}$. So I have to consider the moment of inertia about the CoM.
4. Mar 28, 2015
### Staff: Mentor
Angular momentum cannot be relative to two points at the same time - CoM or O, but not both.
5. Mar 28, 2015
### Delta²
You should apply conservation of angular momentum before and after the collision around the same stable point, let that point be the CoM of the system. Don't use different points for before and after the collision (O before the collision, CoM after the collision) as you do now.
6. Mar 28, 2015
### Satvik Pandey
I have not found angular momentum about two points. Before collision I found angular momentum about point O. Then after collision I have also found angular momentum about point O. After collision the rod will translate and rotate simultaneously. Angular momentum about a point is found by finding the angular momentum of the CoM about that point and then adding angular momentum of the body about the CoM.
The term $(m+M)v_{0}x$ is the angular momentum of the CoM of the rod about point O and $I_{com} \omega$ is the angular momentum of the system about the CoM. Adding these gives the angular momentum of the system about point O.
7. Mar 28, 2015
### Himanshu_123
Is the correct ans. approx. 17m?
8. Mar 28, 2015
### Satvik Pandey
I don't know bro. I found this question online. So I don't have it's solution.
9. Mar 28, 2015
### Satvik Pandey
In this question please look at the second method of the solution (eq 7.53) . In this method the angular momentum is conserved about the point which initially coincides with the center of the stick. The term in LHS shows the initial angular momentum of the system about that point. The first term in the RHS shows the angular momentum (spin angular momentum) about the CoM and the last term in the RHS shows the angular momentum of the CoM of the system about that point which initially coincides with the center of the stick. Just like this I have done same in this question also. What am I doing wrong?
10. Mar 28, 2015
### TSny
I get Satvik's answer. I tried both the methods shown in the figure of post #9 and got the same answer either way.
11. Mar 28, 2015
### Satvik Pandey
Yay! my answer is right. I think that website has the wrong answer of this question. Thank you TSny. Thank you every one for helping me in this question.
12. Mar 28, 2015
### TSny
We can't assume that our answer is correct. Maybe we're both making the same mistake!
What answer did the website give?
13. Mar 28, 2015
### Satvik Pandey
It asked us to find ceiling function of $h/10$. And if $h=20.78$ then the answer should be $3$. But the answer on that website was $6$.
14. Mar 28, 2015
### TSny
Ah. Now I understand why you said that you weren't getting the website's answer even though the website didn't give an answer for h.
15. Mar 28, 2015
### Staff: Mentor
Okay, I calculated it, I get the same number for angular velocity and height (20.743m).
6 is certainly wrong.
16. Mar 28, 2015
### ehild
My h is also 20.7 m.
17. Mar 28, 2015
### TSny
OK, Satvik. I think you can now take it to the bank.
18. Mar 29, 2015
### Satvik Pandey
Thank you every one for helping me in this question.
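For readers who want to re-check the algebra from post #1, here is a small numeric sketch (not part of the original thread); the values of M, m, l and v come from the problem statement, and g = 9.8 m/s^2 is an assumption:

from math import pi

M, m, l, v, g = 16.0, 3.0, 2.0, 10.0, 9.8
x = m*l/(2*(M + m))                          # CoM offset from the rod's centre after the ball sticks
L0 = 0.5*m*v*l                               # angular momentum about O just before impact
v0 = m*v/(M + m)                             # CoM speed just after the perfectly inelastic impact
I_com = M*l**2/12 + M*x**2 + m*(l/2 - x)**2  # moment of inertia about the combined CoM
omega = (L0 - (M + m)*v0*x)/I_com            # conservation of angular momentum about O
t = 2*pi/omega                               # time for one full rotation
h = 2 + 0.5*g*t**2                           # the rod's top falls h - 2 metres in that time
print(round(omega, 3), round(t, 3), round(h, 2))  # about 3.214 rad/s, 1.955 s, h around 20.7 m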
|
2017-10-22 00:02:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6593860387802124, "perplexity": 599.1426691806615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.75/warc/CC-MAIN-20171021224608-20171022004608-00527.warc.gz"}
|
https://tex.stackexchange.com/questions/367761/line-spacing-after-chapter-and-sections
|
# Line spacing after chapter and sections [closed]
I want to increase the line spacing to 1.5 (I exaggerated in the code below to 5). However, the space between the sections and the text also increases, and I don't want that to change. How can I prevent this?
\documentclass[12pt,a4paper,oneside,onecolumn]{report}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage[left=4.00cm, right=2.00cm, top=2.50cm, bottom=2.50cm]{geometry}
\usepackage{tocloft}
\usepackage{titlesec}
\usepackage{setspace}
\titleformat{\chapter}{\fontsize{12}{0pt}\selectfont\bfseries}{\thechapter.}{0.3em}{}
\titleformat{\section}{\fontsize{12}{0pt}\selectfont\bfseries}{\thesection.}{0.3em}{}
\titleformat{\subsection}{\fontsize{12}{0pt}\selectfont\bfseries}{\thesubsection.}{0.3em}{}
\titleformat{\subsubsection}{\fontsize{12}{0pt}\selectfont\bfseries}{\thesubsubsection.}{0.3em}{}
\titlespacing*{\chapter}{0pt}{50pt}{30pt}
\titlespacing*{\section}{0pt}{12pt}{6pt}
\titlespacing*{\subsection}{0pt}{6pt}{6pt}
\titlespacing*{\subsubsection}{0pt}{6pt}{6pt}
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt}
\setstretch{5} % I exaggerated this!
\begin{document}
\chapter{CHAPTER NAME}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Etiam lobortisfacilisis sem. Nullam nec mi et neque pharetra sollicitudin. Praesent imperdietmi nec ante. Donec ullamcorper, felis non sodales.
\section{SECTION NAME}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Etiam lobortisfacilisis sem. Nullam nec mi et neque pharetra sollicitudin. Praesent imperdietmi nec ante. Donec ullamcorper, felis non sodales.
\subsection{SUBSECTION NAME}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Etiam lobortisfacilisis sem. Nullam nec mi et neque pharetra sollicitudin. Praesent imperdietmi nec ante. Donec ullamcorper, felis non sodales.
\subsubsection{SUBSUBSECTION NMAE}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Etiam lobortisfacilisis sem. Nullam nec mi et neque pharetra sollicitudin. Praesent imperdietmi nec ante. Donec ullamcorper, felis non sodales.
\end{document}
## closed as unclear what you're asking by CarLaTeX, Bobyandbob, egreg, Stefan Pinnow, TeXnician Dec 9 '17 at 16:47
• Not a good solution, but you could calculate the necessary spacing for \titlespacing based on the \setstretch-value. – Skillmon May 2 '17 at 11:30
• @Skillmon thank you but that's not what I want. – Jonh Barry May 2 '17 at 11:38
• why isn't that what you want, isn't that exactly what you want? Any solution is going to be effectively that, the only difference would be whether the calculations are done by the macros. – David Carlisle May 2 '17 at 17:12
|
2019-09-19 10:33:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6346371173858643, "perplexity": 8751.202014820516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573476.67/warc/CC-MAIN-20190919101533-20190919123533-00273.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=768481
|
# Electromagnetic Wave Equation
by rppearso
Tags: electromagnetic, equation, wave
Is it possible to solve these partial differential equations directly, relating to Antenna Theory; $$∇^2 E - μ_0 ε_0 \frac{∂^2E}{∂t^2} = -μ_0 \frac{∂J}{∂t}.$$ AND $$∇^2 B - μ_0 ε_0 \frac{∂^2B}{∂t^2} = -μ_0 ∇ x J.$$ I don't like the idea of having to make up fields that don't exist in order to make the math work. The x is a cross product not a variable or multiplication.
Here, let me help with that:$$\nabla^2 \vec B - \mu_0 \epsilon_0 \frac{\partial^2\vec B}{\partial t^2} = -\mu_0 \nabla \times \vec J.$$... better? (Hit "quote" to see how I did that.) There may be some geometries where the equations can be solved directly, I've not heard of any for antennas. It follows that you have to use a trick. Welcome to real maths.
|
2014-09-17 05:37:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6190751791000366, "perplexity": 907.5458517870359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657121288.75/warc/CC-MAIN-20140914011201-00129-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://paperity.org/p/85863802/simplified-tev-leptophilic-dark-matter-in-light-of-dampe-data
|
# Simplified TeV leptophilic dark matter in light of DAMPE data
Journal of High Energy Physics, Feb 2018
Abstract Using a simplified framework, we attempt to explain the recent DAMPE cosmic $e^+ + e^-$ flux excess by leptophilic Dirac fermion dark matter (LDM). The scalar ($\Phi_0$) and vector ($\Phi_1$) mediator fields connecting LDM and Standard Model particles are discussed. We find that the couplings P ⊗ S, P ⊗ P, V ⊗ A and V ⊗ V can produce the right bump in the $e^+ + e^-$ flux for a DM mass around 1.5 TeV with a natural thermal annihilation cross-section $\langle\sigma v\rangle \sim 3\times 10^{-26}\,\mathrm{cm}^3/\mathrm{s}$ today. Among them, the V ⊗ V coupling is tightly constrained by PandaX-II data (although LDM-nucleus scattering appears at one-loop level) and the surviving samples appear in the resonant region, $m_{\Phi_1}\simeq 2m_{\chi}$. We also study the related collider signatures, such as dilepton production $pp \to \Phi_1 \to \ell^+\ell^-$, and the muon $g-2$ anomaly. Finally, we present a possible $U(1)_X$ realization for such leptophilic dark matter.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP02%282018%29107.pdf
Guang Hua Duan, Lei Feng, Fei Wang, Lei Wu, Jin Min Yang, Rui Zheng. Simplified TeV leptophilic dark matter in light of DAMPE data, Journal of High Energy Physics, 2018, 107, DOI: 10.1007/JHEP02(2018)107
|
2019-01-17 08:32:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8437439799308777, "perplexity": 9137.912165543088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658901.41/warc/CC-MAIN-20190117082302-20190117104302-00473.warc.gz"}
|
https://docs.lucedaphotonics.com/reference/device_sim/lumerical/ref/ipkiss3.all.device_sim.lumerical_macros.eme_profile_xy.html
|
# eme_profile_xy
ipkiss3.all.device_sim.lumerical_macros.eme_profile_xy(alignment_port)
Macro to create an EME Z-normal field profile monitor that covers the full simulation window, at a certain height.
Parameters:
alignment_port : str
    Name of the port used to place the field monitor (use center z position).
Returns:
macro : i3.device_sim.Macro
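A minimal usage sketch under the usual IPKISS import convention; the port name "out" is a hypothetical placeholder for a port on the component being simulated:

import ipkiss3.all as i3

# Create the Z-normal field profile macro at the height of the named port.
profile_macro = i3.device_sim.lumerical_macros.eme_profile_xy(alignment_port="out")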
|
2021-09-22 16:48:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21808481216430664, "perplexity": 13896.914834231167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00349.warc.gz"}
|
http://math.stackexchange.com/questions/155854/evaluating-int-sqrt5-4x-x2dx
|
# Evaluating $\int \sqrt{5 + 4x - x^2}dx$
$$\int \sqrt{5 + 4x - x^2}dx$$
I am pretty certain that what I need to do for this problem is complete the square and turn it into a trig substitution, but I have no idea how to complete the square with a $-x^2$, or really with this problem at all; I just can't make it work.
I tried to see if I could make the problem be the same in any way by just pulling out a negative but that didn't seem to work.
I got the problem up to
$$\int \sqrt{ -1(x-2)^2 - 1}dx$$
But I do not think that does me any good. What I think I need to do is have a difference of squares with a square in it or something, I just have to get rid of the 4x term somehow.
Don't forget the differentials. – Pedro Tamaroff Jun 9 '12 at 1:11
Should be $\int \sqrt{ 9-(x-2)^2}dx$ – GEdgar Jun 9 '12 at 1:16
Eugene: I think you mean $3\sin\theta=x-2$. – Cameron Buie Jun 9 '12 at 1:18
## 3 Answers
Firstly, it should be
$$\int \sqrt{5 + 4 + (-4) + 4x - x^2} dx = \int \sqrt{5 + 4 - (x^2 - 4x + 4)} dx = \int \sqrt{9 - (x - 2)^2}dx$$
Next a hint. Let $3\sin \theta = x-2$.
When doing these problems, is it alright to leave the answer in terms of theta or whatever variable I use instead of x? I don't see why it matters if it is indefinite. – user138246 Jun 9 '12 at 1:30
Because it doesn't make much sense by the fundamental theorem of calculus that when you differentiate the left side you get a function in terms of $x$ and on the right you get a function in terms of $\theta$. – Eugene Jun 9 '12 at 1:33
Our integral can be written as, $$\begin{split} \int\sqrt{5+4x-x^{2}}dx&=\int\sqrt{9-4+4x-x^{2}}dx\\ &=\int\sqrt{3^{2}-(x-2)^{2}}dx.\ \end{split}$$ Now by trigonometric substitution, take $$x-2=3\sin \theta. (\because \text{If we have} \sqrt{a^{2}-x^{2}} \text{ then we have to substitute} x=a\sin\theta.)$$ Thus, $$\begin{split} \int\sqrt{3^{2}-(x-2)^{2}}dx&=\int\sqrt{3^{2}-3^{2}\sin^{2}\theta} 3\cos \theta d\theta\\ &=\int 3\sqrt{1-\sin^{2}\theta} 3\cos \theta d\theta\\ &=9\int\cos ^{2}\theta d\theta\\ &=9\int\frac{1+\cos 2\theta}{2}d\theta\\ &=\frac{9}{2}\int 1 d\theta+\frac{9}{2}\int \cos 2\theta d\theta\\ &=\frac{9}{2}\left[\theta+\frac{\sin 2\theta}{2}\right]+C=\frac{9}{2}\left[\theta +\frac{2\sin \theta\cos \theta}{2}\right]+C\\ &=\frac{9}{2}\left[\sin^{-1}\left(\frac{x-2}{3}\right)+\frac{1}{9}(x-2) \sqrt{9-(x-2)^{2}}\right]+C \end{split}$$ Thus, $$\int \sqrt{5+4x-x^{2}} dx=\frac{9}{2}\sin^{-1}\left(\frac{x-2}{3}\right)+\frac{(x-2) \sqrt{5+4x-x^{2}}}{2}+C$$
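As a quick sanity check of the antiderivative above (a sketch added here, not part of the original answer), differentiating it with sympy should recover the integrand:

import sympy as sp

x = sp.symbols('x')
F = sp.Rational(9, 2)*sp.asin((x - 2)/3) + (x - 2)*sp.sqrt(5 + 4*x - x**2)/2
# The difference should simplify to zero wherever 5 + 4x - x^2 > 0.
print(sp.simplify(sp.diff(F, x) - sp.sqrt(5 + 4*x - x**2)))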
I also have trouble completing the square if the coefficient of $x^2$ is negative. So I avoid doing it.
Let's look at our particular example $5+4x-x^2$. We have $$5+4x-x^2=-\left(x^2-4x-5\right).$$ Inside the parentheses, not only is the coefficient of $x^2$ positive, but $x^2$ is in front, where it likes to be. We are now in familiar territory, and can comfortably note that $$x^2-4x-5=(x-2)^2 -9.$$ Finally, take the negative of this. We get $9-(x-2)^2$. The rest has been well done by others: let $x-2=3\sin\theta$, or, more slowly, let $x-2=u$ and then let $u=3\sin\theta$.
|
2014-10-26 06:10:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.897286593914032, "perplexity": 208.5111660937424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119658026.51/warc/CC-MAIN-20141024030058-00168-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://questioncove.com/updates/558ad86de4b07571411879f5
|
OpenStudy (anonymous):
a student gets paid to deliver newspapers after school. she earns $25 a day, plus an extra $0.75 for each newspaper she delivers and $5 for each new customer she signs up for delivery. if d = days, n = newspapers, and c = customers, what function can she use to calculate her earnings?
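One natural way to write the requested function (an added illustration, reading the rates directly off the problem statement): $E(d, n, c) = 25d + 0.75n + 5c$.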
|
2017-11-23 01:34:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19835242629051208, "perplexity": 13193.288609685118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806715.73/warc/CC-MAIN-20171123012207-20171123032207-00421.warc.gz"}
|
https://cran.case.edu/web/packages/regtools/vignettes/regtools.html
|
# regtools
## Novel tools for linear, nonlinear and nonparametric regression.
These tools are associated with my book, From Linear Models to Machine Learning: Statistical Regression and Classification, N. Matloff, CRC, 2017 (recipient of the Technometrics Eric Ziegel Award for Best Book Reviewed in 2017).
The tools are useful in general, independently of the book.
NOTE: See also our qeML package – Quick and Easy Machine Learning.
## FEATURES:
• Innovative graphical tools for assessing fit in linear and nonlinear parametric models, via nonparametric methods. Model evaluation, examination of quadratic effects, investigation of nonhomogeneity of variance.
• Tools for multiclass classification, parametric and nonparametric, for any number of classes. One vs. All and All vs. All paradigms. Novel adjustment for artificially balanced (or undesirably imbalanced) data.
• Nonparametric regression for general dimensions in predictor and response variables, using k-Nearest Neighbors (k-NN). Local-linear option to deal with edge aliasing. Allows for user-specified smoothing method. Allows for accelerated exploration of multiple values of k at once. Tool to aid in choosing k.
• Extension to nonlinear parametric regression of Eicker-White technique to handle heteroscedasticity.
• Utilities for conversion of time series data to rectangular form, enabling lagged prediction by lm() or other regression model.
• Linear regression, PCA and log-linear model estimation in missing-data setting, via the Available Cases method. (For Prediction contexts, see our toweranNA package.)
• Utilities for conversion between factor and dummy variable forms, useful since among various regression packages, some use factors while some others use dummies. (The lars package is an example of the latter case.)
• Misc. tools, e.g. to reverse the effects of an earlier call to scale().
• Nicer implementation of ridge regression, with more meaningful scaling and better plotting.
• Interesting datasets.
## EXAMPLE: PARAMETRIC MODEL FIT ASSESSMENT
The fit assessment techniques in regtools gauge the fit of parametric models by comparing to nonparametric ones. Since the latter are free of model bias, they are very useful in assessing the parametric models.
Let’s take a look at the included dataset prgeng, some Census data for California engineers and programmers in the year 2000. The response variable in this example is wage income, and the predictors are age, gender, number of weeks worked, and dummy variables for MS and PhD degrees. You can read the details of the data by typing
> ?prgeng
One of the package’s graphical functions for model fit assessment plots the parametric (e.g. lm()) values against the nonparametric fit via k-NN. Let’s try this on the Census data.
The package includes three versions of the dataset: The original; a version with categorical variables in dummy form; and a version with categorical variables in R factor form. Since the k-NN routines require dummies, we’ll use that first version, peDumms.
We need to generate the parametric and nonparametric fits, then call parvsnonparplot():
data(peDumms)
pe1 <- peDumms[c('age','educ.14','educ.16','sex.1','wageinc','wkswrkd')]
lmout <- lm(wageinc ~ .,data=pe1)
xd <- preprocessx(pe1[,-5],10) # prep for k-NN, k <= 10
knnout <- knnest(pe1$wageinc,xd,10)
parvsnonparplot(lmout,knnout)

We see above how the k-NN code is used. We first call preprocessx() to determine the nearest neighbors of each data point. Here k is 10, so we can later compute various k-NN fits for k anywhere from 1 to 10. The actual fit is done by knnest(). Then parvsnonparplot() plots the linear model fit against the nonparametric one. Again, since the latter is model-free, it serves as a good assessment of the fit of the linear model. There is quite a bit suggested in this picture:
• There appear to be a number of people with 0 wage income. Depending on the goals of our analysis, we might consider removing them.
Let’s now check the classical assumption of homoscedasticity, meaning that the conditional variance of Y given X is constant. The function nonparvarplot() plots the estimated conditional variance against the estimated conditional mean, both computed nonparametrically:
Though we ran the plot thinking of the homoscedasticity assumption, this is much more remarkable, confirming that there are interesting subpopulations within this data. These may correspond to different occupations, something to be investigated.
The package includes various other graphical diagnostic functions.
By the way, violation of the homoscedasticity assumption won’t invalidate the estimates in our linear model. They still will be statistically consistent. But the standard errors we compute, and thus the statistical inference we perform, will be affected. This is correctible using the Eicker-White procedure, which for linear models is available in the car and sandwich packagers. Our package here also extends this to nonlinear parametric models, in our function nlshc() (the validity of this extension is shown in the book).
## EXAMPLE: OVA VS. AVA IN MULTICLASS PROBLEMS
A very popular prediction method in 2-class problems is to use logistic (logit) regression. In analyzing click-through patterns of Web users, for instance, we have 2 classes, Click and Nonclick. We might fit a logistic model for Click, given user Web history, demographics and so on. Note that logit actually models probabilities, e.g. the probability of Click given the predictor variables.
But the situation is much less simple in multiclass settings. Suppose our application is recognition of hand-written digits (a famous machine learning example). The predictor variables are pixel patterns in images. There are two schools of thought on this:
• One vs. All (OVA): We would run 10 logistic regression models, one for predicting ‘0’ vs. non-‘0’, one for ‘1’ vs. non-‘1’, and so on. For a particular new image to be classified, we would thus obtain 10 estimated conditional probabilities. We would then guess the digit for this image to be the digit with the highest estimated conditional probability.
• All vs. All (AVA): Here we would run C(10,2) = 45 logit analyses, one for each pair of digits. There would be one for ‘0’ vs. ‘1’, one for ‘0’ vs. ‘2’, etc., all the way up through ‘8’ vs. ‘9’. In each case there is a “winner” for our new image to be predicted, and in the end we predict the new image to be whichever digit has the most winners.
Many in the machine learning literature recommend AVA over OVA, on the grounds that there might be linear separability (in the statistical sense) in pairs but not otherwise. My book counters by noting that such a situation could be remedied under OVA by adding quadratic terms to the logit models.
At any rate, the regtools package gives you a choice, OVA or AVA, for both parametric and nonparametric methods. For example, avalogtrn() and avalogpred() do training and prediction operations for logit with AVA.
Let’s look at an example, again using the Census data from above. We’ll predict occupation from age, sex, education (MS, PhD, other) wage income and weeks worked.
data(peFactors)
pef <- peFactors
pef1 <- pef[,c('age','educ','sex','wageinc','wkswrkd','occ')]
# "Y" must be in last column, class ID 0,1,2,...; convert from factor
pef1$occ <- as.numeric(pef1$occ)
pef1$occ <- pef1$occ - 1
pef2 <- pef1
# create the education, gender dummy variables
pef2$ms <- as.integer(pef2$educ == 14)
pef2$phd <- as.integer(pef2$educ == 16)
pef2$educ <- NULL pef2$sex <- as.integer(pef2$sex == 1) pef2 <- pef2[,c(1,2,3,4,6,7,5)] ovaout <- ovalogtrn(6,pef2) # estimated coefficients, one set ofr each of the 6 classes ovaout # prints 0 1 2 (Intercept) -9.411834e-01 -6.381329e-01 -2.579483e-01 xage 9.090437e-03 -3.302790e-03 -2.205695e-02 xsex -5.187912e-01 -1.122531e-02 -9.802006e-03 xwageinc -6.741141e-06 -4.609168e-06 5.132813e-06 xwkswrkd 5.058947e-03 -2.247113e-03 2.623924e-04 xms -5.201286e-01 -4.272846e-01 5.280520e-01 xphd -3.302821e-01 -8.035287e-01 3.531951e-01 3 4 5 (Intercept) -3.370758e+00 -3.322356e+00 -4.456788e+00 xage -2.193359e-03 -1.206640e-02 3.323948e-02 xsex -7.856923e-01 5.173516e-01 1.175657e+00 xwageinc -4.076872e-06 2.033175e-06 1.831774e-06 xwkswrkd 1.311084e-02 5.517912e-04 2.794453e-03 xms -1.797544e-01 9.947253e-02 2.705293e-01 xphd -3.883463e-01 4.967115e-01 4.633907e-01 # predict the occupation of a woman, age 35, no MS/PhD, inc 60000, 52 # weeks worked ovalogpred(ovaout,matrix(c(35,0,60000,52,0,0),nrow=1)) # outputs class 2, Census occupation code 102 [1] 2 With the optional argument probs=TRUE, the call to ovalogpred() will also return the conditional probabilities of the classes, given the predictor values, in the R attribute ‘probs’. Here is the AVA version: avaout <- avalogtrn(6,pef2) avaout # prints 1,2 1,3 1,4 1,5 (Intercept) -1.914000e-01 -4.457460e-01 2.086223e+00 2.182711e+00 xijage 8.551176e-03 2.199740e-02 1.017490e-02 1.772913e-02 xijsex -3.643608e-01 -3.758687e-01 3.804932e-01 -8.982992e-01 xijwageinc -1.207755e-06 -9.679473e-06 -6.967489e-07 -4.273828e-06 xijwkswrkd 4.517229e-03 4.395890e-03 -9.535784e-03 -1.543710e-03 xijms -9.460392e-02 -7.509925e-01 -2.702961e-01 -5.466462e-01 xijphd 3.983077e-01 -5.389224e-01 7.503942e-02 -7.424787e-01 1,6 2,3 2,4 2,5 (Intercept) 3.115845e+00 -2.834012e-01 2.276943e+00 2.280739e+00 xijage -2.139193e-02 1.466992e-02 1.950032e-03 1.084527e-02 xijsex -1.458056e+00 3.720012e-03 7.569766e-01 -5.130827e-01 xijwageinc -5.424842e-06 -9.709168e-06 -1.838009e-07 -4.908563e-06 xijwkswrkd -2.526987e-03 9.884673e-04 -1.382032e-02 -3.290367e-03 xijms -6.399600e-01 -6.710261e-01 -1.448368e-01 -4.818512e-01 xijphd -6.404008e-01 -9.576587e-01 -2.988396e-01 -1.174245e+00 2,6 3,4 3,5 3,6 (Intercept) 3.172786e+00 2.619465e+00 2.516647e+00 3.486811e+00 xijage -2.908482e-02 -1.312368e-02 -3.051624e-03 -4.236516e-02 xijsex -1.052226e+00 7.455830e-01 -5.051875e-01 -1.010688e+00 xijwageinc -5.336828e-06 1.157401e-05 1.131685e-06 1.329288e-06 xijwkswrkd -3.792371e-03 -1.804920e-02 5.606399e-04 -3.217069e-03 xijms -5.987265e-01 4.873494e-01 2.227347e-01 5.247488e-02 xijphd -1.140915e+00 6.522510e-01 -2.470988e-01 -1.971213e-01 4,5 4,6 5,6 (Intercept) -9.998252e-02 6.822355e-01 9.537969e-01 xijage 1.055143e-02 -2.273444e-02 -3.906653e-02 xijsex -1.248663e+00 -1.702186e+00 -4.195561e-01 xijwageinc -4.986472e-06 -7.237963e-06 6.807733e-07 xijwkswrkd 1.070949e-02 8.097722e-03 -5.808361e-03 xijms -1.911361e-01 -3.957808e-01 -1.919405e-01 xijphd -8.398231e-01 -8.940497e-01 -2.745368e-02 # predict the occupation of a woman, age 35, no MS/PhD, inc 60000, 52 # weeks worked avalogpred(6,ovaout,matrix(c(35,0,60000,52,0,0),nrow=1)) # outputs class 2, Census occupation code 102 ## EXAMPLE: ADJUSTMENT OF CLASS PROBABILITIES IN CLASSIFICATION PROBLEMS The LetterRecognition dataset in the mlbench package lists various geometric measurements of capital English letters, thus another image recognition problem. 
One problem is that the frequencies of the letters in the dataset are not similar to those in actual English texts. The correct frequencies are given in the ltrfreqs dataset included here in the regtools package. In order to adjust the analysis accordingly, the ovalogtrn() function has an optional truepriors argument. For the letters example, we could set this argument to ltrfreqs. (The term priors here does not refer to a subjective Bayesian analysis. It is merely a standard term for the class probabilities.)

## MULTICLASS CLASSIFICATION WITH k-NN

In addition to use in linear regression graphical diagnostics, k-NN can be very effective as a nonparametric regression/machine learning tool. I would recommend it in cases in which the number of predictors is moderate and there are nonmonotonic relations. (See also our polyreg package.)

Let's continue the above example on predicting occupation, using k-NN. The three components of k-NN analysis in regtools are:

1. preprocessx(): This finds the sets of nearest neighbors in the training set, for all values of k up to a user-specified maximum. This facilitates the user's trying various values of k.
2. knnest(): This fits the regression model.
3. knnpred(): This does prediction on the user's desired set of points of new cases.

Since k-NN involves finding distances between points, our data must be numeric, not factors. This means that in pef2, we'll need to replace the occ column by a matrix of dummy variables. Utilities in the regtools package make this convenient:

occDumms <- factorToDummies(as.factor(pef2$occ),'occ',omitLast=FALSE)
pef3 <- cbind(pef2[,-7],occDumms)
Note that in cases in which “Y” is multivariate, knnest() requires it in multivariate form. Here “Y” is 6-variate, so we’ve set the last 6 columns of pef3 to the corresponding dummies.
Many popular regression packages, e.g. lars for the LASSO, require data in numeric form, so the regtools’ conversion utilities are quite handy.
Now fit the regression model:
kout <- knnest(pef3[, -(1:6)],xd,10)
One of the components of kout is the matrix of fitted values:
> head(kout$regest)
occ.0 occ.1 occ.2 occ.3 occ.4 occ.5
[1,] 0.2 0.4 0.2 0 0.0 0.2
[2,] 0.2 0.5 0.2 0 0.0 0.1
[3,] 0.5 0.1 0.3 0 0.1 0.0
[4,] 0.3 0.4 0.1 0 0.0 0.2
[5,] 1.0 0.0 0.0 0 0.0 0.0
[6,] 0.2 0.4 0.2 0 0.0 0.2
So for example the conditional probability of Occupation 4 for the third observation is 0.1.
Now let’s do the same prediction as above:
> predict(kout,matrix(c(35,0,60000,52,0,0),nrow=1),TRUE)
occ.0 occ.1 occ.2 occ.3 occ.4 occ.5
0.1 0.4 0.5 0.0 0.0 0.0
These are conditional probabilities. The most likely one is Occupation 2.
The TRUE argument was to specify that we need to scale the new cases in the same way the original data were scaled.
By default, our k-NN routines find the mean Y in the neighborhood. Another option is to do local linear smoothing. Among other things, this may remedy aliasing at the edges of the data. This should be done with a value of k much larger than the number of predictor variables.
## EXAMPLE: RECTANGULARIZATION OF TIME SERIES
This allows use of ordinary tools like lm() for prediction in time series data. Since the goal here is prediction rather than inference, an informal model can be quite effective, as well as convenient.
The basic idea is that x[i] is predicted by x[i-lg], x[i-lg+1], x[i-lg+2], ..., x[i-1], where lg is the lag.
xy <- TStoX(Nile,5)
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 1120 1160 963 1210 1160 1160
# [2,] 1160 963 1210 1160 1160 813
# [3,] 963 1210 1160 1160 813 1230
# [4,] 1210 1160 1160 813 1230 1370
# [5,] 1160 1160 813 1230 1370 1140
# [6,] 1160 813 1230 1370 1140 995
# [1] 1120 1160 963 1210 1160 1160 813 1230 1370 1140 995 935 1110 994 1020
# [16] 960 1180 799 958 1140 1100 1210 1150 1250 1260 1220 1030 1100 774 840
# [31] 874 694 940 833 701 916
Try lm():
lmout <- lm(xy[,6] ~ xy[,1:5])
lmout
...
Coefficients:
(Intercept) xy[, 1:5]1 xy[, 1:5]2 xy[, 1:5]3 xy[, 1:5]4 xy[, 1:5]5
307.84354 0.08833 -0.02009 0.08385 0.13171 0.37160
Predict the 101st observation:
cfs <- coef(lmout)
cfs %*% c(1,Nile[96:100])
# [,1]
# [1,] 784.4925
|
2022-05-18 12:56:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5228011012077332, "perplexity": 2832.219808138124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00678.warc.gz"}
|
https://socratic.org/questions/how-do-you-tell-whether-the-sequence-3-5-5-8-10-5-13-is-arithmetic
|
# How do you tell whether the sequence 3, 5.5, 8, 10.5, 13 is arithmetic?
Feb 15, 2018
There is a common difference of $2.5$
#### Explanation:
If there is a common difference, the sequence is arithmetic.
$5.5 - 3 = 2.5$
$8 - 5.5 = 2.5$
$10.5 - 8 = 2.5$
and so on.
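For completeness (not part of the original answer), the constant difference also gives the explicit $n$th term: $a_n = a_1 + (n-1)d = 3 + 2.5(n-1)$.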
Hope that makes sense!
|
2019-11-17 01:50:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45442765951156616, "perplexity": 1397.5962500959226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00445.warc.gz"}
|
https://bitbucket.org/ncoghlan/cpython_sandbox/src/ae7fef62b462/Doc/library/py_compile.rst
|
# :mod:py_compile --- Compile Python source files
Source code: :source:Lib/py_compile.py
The :mod:py_compile module provides a function to generate a byte-code file from a source file, and another function used when the module source file is invoked as a script.
Though not often needed, this function can be useful when installing modules for shared use, especially if some of the users may not have permission to write the byte-code cache files in the directory containing the source code.
When this module is run as a script, the :func:main is used to compile all the files named on the command line. The exit status is nonzero if one of the files could not be compiled.
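A minimal illustration of both entry points (the file name mymodule.py is a hypothetical example):

import py_compile

# Programmatic use: write the byte-code cache file for a single source file.
py_compile.compile('mymodule.py')

# Script use, which calls main() on the listed files:
#     python -m py_compile mymodule.py othermodule.py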
|
2015-11-28 09:02:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36620551347732544, "perplexity": 1846.5166252540923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451744.67/warc/CC-MAIN-20151124205411-00023-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://crypto.stackexchange.com/questions/2338/big-o-notation-encryption-algorithms?answertab=oldest
|
# Big-O Notation: Encryption Algorithms
I am currently completing a dissertation concerning the encryption of data through a variety of cryptographic algorithms.
I have spent much time reading journals and papers but as yet have been unable to find any record of their performance complexity.
Would anyone have an idea of the Big-O complexity of the following algorithms?
• RSA
• DES
• Triple DES (Which I would expect to be of the same order as DES)
• AES
• Blowfish
Thank you in advance; if you could provide a link to a reputable and citable source if would be very much appreciated.
Cross-post on SO: stackoverflow.com/questions/10094814/… – CodesInChaos Apr 10 '12 at 23:18
Most of these algorithms (i.e. the block ciphers DES, Triple DES, AES, Blowfish) are normally only working on a fixed block size, and take approximately the same time independently of input, thus they are $O(1)$.
If you put them into a mode of operation to encrypt longer messages, you usually get an $O(m)$ complexity, where $m$ is the message size, as you have $O(m)$ blocks of data to encrypt.
(One could design modes of operations with different complexity, but they have to touch at least each input bit once to be reversible, thus $O(m)$ is a minimum. Also, with $O(m)$ block cipher calls you can do enough to make it secure, so there is no point of making it slower.)
Two more notes to specific ciphers:
• Yes, Triple-DES usually needs thrice the computing power as DES, but this then gets $O(1)$ or $O(m)$, too.
• Blowfish is known for its quite slow key schedule (which takes as long as encrypting about 4 KB of data), but this is still $O(1)$.
Thus, $O$-notation is not really an interesting thing to look at in block ciphers.
It gets a bit more interesting when we look at algorithms with a varying input size. For the asymmetric algorithm RSA, we have the public (and private) key modulus $n$, and its size $k = [\log_2 n]$ in bits can be considered a security parameter. (The private exponent $d$ is of similar size, while the public exponent $e$ is usually some small number like $3$ or $65537 = 2^{16}+1$.) The message size is then limited by $O(k)$, too.
Encryption and decryption are both modular exponentiations of plaintext or ciphertext modulo $n$, with the respective exponents. With the square-and-multiply algorithm, encryption needs $O(1)$, decryption $O(k)$ multiplications and a similar number of modular reductions, each of $k$-bit or $2k$-bit numbers ... which means about $O(k^2)$ or $O(k^3)$ elementary operations (with a quite small factor, as you use the word size build into your processor).
Decryption can be sped up by storing the factors of $n$, but this still gives only a constant factor, I think (i.e. it reduces the $k$ in the formulas).
RSA also uses one of various padding schemes, but this should be in O(k) and thus not contribute to the complexity.
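For concreteness, here is a small sketch of the square-and-multiply idea referred to above (right-to-left binary exponentiation); this is illustrative only, not a constant-time or production implementation:

def modexp(base, exponent, modulus):
    # Uses O(k) squarings/multiplications for a k-bit exponent,
    # each on numbers of roughly k bits.
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:
            result = (result * base) % modulus
        base = (base * base) % modulus
        exponent >>= 1
    return result

print(modexp(65537, 12345, 99991) == pow(65537, 12345, 99991))  # True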
|
2015-07-06 11:40:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631749749183655, "perplexity": 899.3376777448511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00092-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1207436/cohomology-ring-of-classifying-space
|
# Cohomology ring of classifying space
I am looking for $H^*(BZ/2p , Z/2p)$ where $p$ is an odd prime. We can calculate the cohomology groups by using the Gysin exact sequence and the universal coefficient theorem, but I am unable to calculate the ring structure. Looking for help. Thank you.
• Have you tried using a spectral sequence? – JHF Mar 26 '15 at 23:20
• @JHF: You mean using the Serre fibration $K(Z,1) \rightarrow K(Z,1) \rightarrow K(Z/2p ,1)$, where the first map is multiplication by 2p. – Math Mar 26 '15 at 23:26
• @JHF: It is easy to see that $H^{2i}(BZ/2p , Z/2p) = Z/2p\lbrace t^i \rbrace$ where $t$ belongs to $H^2(BZ/2p,Z/2p)$. By the Serre spectral sequence we can guarantee that there is a class $s$ belonging to $H^1(BZ/2p , Z/2p)$ with $s^2 = 0$, because otherwise it would survive to the $\infty$-page, and that contradicts the cohomology of $K(Z,1)$. Is it right? – Math Mar 26 '15 at 23:44
• You may be right, but how did you differentiate between the cases $s^2 = 0$ and $s^2 = t$ (or some other multiple of $t$)? – JHF Mar 27 '15 at 1:00
• @JHF: You are right. Can you settle it for me? Can you tell me the explicit ring structure? – Math Mar 27 '15 at 2:40
|
2019-06-24 09:21:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426839709281921, "perplexity": 682.3680865777173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00516.warc.gz"}
|
http://en.wikipedia.org/wiki/Sprague%E2%80%93Grundy_theorem
|
# Sprague–Grundy theorem
In combinatorial game theory, the Sprague–Grundy theorem states that every impartial game under the normal play convention is equivalent to a nimber. The Grundy value or nim-value of an impartial game is then defined as the unique nimber that the game is equivalent to. In the case of a game whose positions (or summands of positions) are indexed by the natural numbers (for example the possible heap sizes in nim-like games), the sequence of nimbers for successive heap sizes is called the nim-sequence of the game.
The theorem was discovered independently by R. P. Sprague (1935) and P. M. Grundy (1939).
## Definitions
For the purposes of the Sprague–Grundy theorem, a game is a two-player game of perfect information satisfying the ending condition (all games come to an end: there are no infinite lines of play) and the normal play condition (a player who cannot move loses).
An impartial game is one such as nim, in which each player has exactly the same available moves as the other player in any position. Note that games such as tic-tac-toe, checkers, and chess are not impartial games. In the case of checkers and chess, for example, players can only move their own pieces, not their opponent's pieces. And in tic-tac-toe, one player puts down X's, while the other puts down O's. Impartial games fall into two outcome classes: either the next player wins (an N-position) or the previous player wins (a P-position).
An impartial game can be identified with the set of positions that can be reached in one move (these are called the options of the game). Thus the game with options A, B, or C is the set {A, B, C}.
The normal play convention is where the last player to move wins. Alternatively, the player who first does not have any valid move loses. The opposite, the misère convention, is where the player who makes the last move (that is, the last player to have a valid move) loses.
A nimber is a special game denoted *n for some ordinal n. We define *0 = {} (the empty set), then *1 = {*0}, *2 = {*0, *1}, and *(n+1) = *n ∪ {*n}. When n is an integer, the nimber *n = {*0, *1, ..., *(n−1)}. This corresponds to a heap of n counters in the game of nim, hence the name.
Two games G and H can be added to make a new game G+H in which a player can choose either to move in G or in H. In set notation, G+H means {G+h for h in H} ∪ {g+H for g in G}, and thus game addition is commutative and associative.
Two games G and G' are equivalent if for every game H, the game G+H is in the same outcome class as G'+H. We write G ≈ G'.
A game can refer to two things. It can define a set of possible positions and their moves through its rules, for example, chess, or nim. It can also refer to a certain position, for example, the game *5. Generally, the meaning to be taken is clear from the context.
## Lemma
For impartial games, G ≈ G' if and only if G+G' is a P-position.
First, we note that ≈ is an equivalence relation since equality of outcome classes is an equivalence relation.
We now show that for every game G, and P-position game A, A+G ≈ G. By the definition of ≈, we need to show that G+H is in the same outcome-class as A+G+H for all games H. If G+H is P-position, then the previous player has a winning strategy in A+G+H: to every move in G+H he responds according to his winning strategy in G+H, and to every move in A he responds with his winning strategy there. If G+H is N-position, then the next player in A+G+H makes a winning move in G+H, and then reverts to responding to his opponent in the manner described above.
Also, G+G is P-position for any game G. For every move made in one copy of G, the previous player can respond with the same move in the other copy, which means he always makes the last move.
Now, we can prove the lemma.
If G ≈ G', then G+G' is of the same outcome-class as G+G, which is P-position.
On the other hand, if G+G' is P-position, then since G+G is also P-position, G ≈ G+(G+G') ≈ (G+G)+G' ≈ G', thus G ≈ G'.
## Proof
We prove the theorem by structural induction on the set representing the game.
Consider a game $G = \{G_1, G_2, \ldots, G_k\}$. By the induction hypothesis, all of the options are equivalent to nimbers, say $G_i \approx *n_i$. We will show that $G \approx *m$, where $m$ is the mex of the numbers $n_1, n_2, \ldots, n_k$, that is the smallest non-negative integer not equal to some $n_i$.
Let $G'=\{*n_1, *n_2, \ldots, *n_k\}$. The first thing we need to note is that $G \approx G'$. Consider $G+G'$. If the first player makes a move in $G$, then the second player can move to the equivalent $*n_i$ in $G'$, and conversely if the first player makes a move in $G'$. After this the game is a P-position (by the lemma), since it is the sum of some option of $G$ and a nim pile equivalent to that option. Therefore, $G+G'$ is a P-position, and by another application of our lemma, $G \approx G'$.
So now, by our lemma, we need to show that $G+*m$ is a P-position. We do so by giving an explicit strategy for the second player in the equivalent $G'+*m$.
Suppose that the first player moves in the component $*m$ to the option $*m'$ where $m' < m$. But since $m$ was the minimal excluded number, the second player can move in $G'$ to $*m'$.
Suppose instead that the first player moves in the component $G'$ to the option $*n_i$. If $n_i < m$ then the second player moves in $*m$ to $*n_i$. If $n_i > m$ then the second player, moves in $*n_i$ to $*m$. It is not possible that $n_i = m$ because $m$ was defined to be different from all the $n_i$.
Therefore, $G'+*m$ is a P-position, and hence so is $G+*m$. By our lemma, $G \approx *m$ as desired.
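As an illustration of the mex construction used in the proof, the following sketch (Python; the subtraction game allowing moves of 1, 2 or 3 counters is simply an example chosen here) computes the Grundy values of successive heap sizes:
def mex(values):
    """Smallest non-negative integer not contained in `values`."""
    n = 0
    while n in values:
        n += 1
    return n

def grundy_values(max_heap, moves=(1, 2, 3)):
    """Grundy values of a subtraction game: from a heap of size h,
    a move removes m counters for some m in `moves` with m <= h."""
    g = [0] * (max_heap + 1)
    for heap in range(1, max_heap + 1):
        options = {g[heap - m] for m in moves if m <= heap}
        g[heap] = mex(options)
    return g

print(grundy_values(10))  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]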
## Development
The Sprague–Grundy theorem has been developed into the field of combinatorial game theory, notably by E. R. Berlekamp, John Horton Conway and others. The field is presented in the books Winning Ways for your Mathematical Plays and On Numbers and Games.
|
2014-03-13 08:17:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 38, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7074912786483765, "perplexity": 697.159237312616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678663927/warc/CC-MAIN-20140313024423-00084-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/proof-using-logarithm.354055/
|
# Homework Help: Proof using logarithm
1. Nov 12, 2009
### songoku
1. The problem statement, all variables and given/known data
Positive integers a and b, where a < b, satisfy the equation :
a^b = b^a
By first taking logarithms, show that there is only one value of a and b that satisfies the equation and find the value!
2. Relevant equations
logarithm
3. The attempt at a solution
I know the solution just by guessing, a = 2 and b = 4 but I don't know how to do it...
$$a^b=b^a$$
$$b*log~a=a*log~b$$
Then I stuck....
Thanks
2. Nov 12, 2009
### rock.freak667
try taking logs to base a or b instead of base 10
3. Nov 12, 2009
### songoku
Hi rock.freak667
I think it will be the same.
$$a^b=b^a$$
$$b*\log_{a}~a=a*\log_{a}b$$
$$b=a*\log_{a}b$$
Then stuck again...:(
Thanks
4. Nov 17, 2009
### turin
Think of only one variable at a time (e.g. b in the last equation of your previous post).
5. Nov 18, 2009
### icystrike
Since a and b are positive integer,
we can conclude a,b>0 without generalisation.
First by taking log base a,
$$b=a log_{a}b$$
$$b log_{b}a=a$$
substituting to initial equation,
$$a^{2}-alog_{a}b=0$$
$$a=0$$ or $$a=log_{a}b$$
Therefore, $$a=log_{a}b$$
is the only solution for a since logarithm curve is constantly decreasing.
$$b=a^{a}$$
since exponent curve is ... leaving this part to you (=
Last edited: Nov 18, 2009
6. Nov 18, 2009
### songoku
Hi turin and icystrike
sorry I don't understand what you mean. From my last equation :
$$b=a*\log_{a}b$$
Then, think only b as variable. how to continue? what about a?
I don't understand this part. To which initial equation do you substitute?
even though I can reach this part, I still don't know how to continue. I think logarithm curve is constantly increasing, not decreasing. From y = log x, the value of y will increase if x increases.
exponent curve is also constantly increasing, but from b = a^a, how to deduce that a = 2?
Thanks a lot
Last edited: Nov 18, 2009
7. Nov 19, 2009
### Staff: Mentor
I'm with Songoku on this; I'm not following what you are doing. I understand how you got both equations above, but your description is that you are taking the log base a of both sides of the original equation. In your second equation you're taking the log base b of both sides of the original equation.
Now this doesn't make any sense to me. Where did it come from? This is equivalent to a^2 - b = 0 from the first of your two equations above, where you have b = a log_a b.
Assuming that the base is larger than 1, any log curve is constantly increasing. IOW, if x1 < x2, then log_a(x1) < log_a(x2).
8. Nov 19, 2009
### turin
Whoops, I thought I knew how to do the problem, until you pointed out this assumption.
9. Nov 19, 2009
### Staff: Mentor
So unless icystrike can explain what he's done, we're back at square 1 on this problem.
10. Nov 20, 2009
### icystrike
Sorry guys i've made a mistake. we are back to square one.
Now i'm able to solve it.
But it contradicts the statement that a>b.
$$((lg a)/(lg b))=((lg b)/(lg a))$$
$$(lg a)^{2}-(lg b)^{2}=0$$
$$(lg a-lg b)(lg a+lg b)=0$$
Thus the only possibility is $$a=(1/b)$$
since a is an integer,
b must divide 1,
whereby b=1
suggest a=1 .
and resulting a=b.
Last edited: Nov 20, 2009
11. Nov 20, 2009
### Staff: Mentor
How do you get the equation above? We know that a^b = b^a, so b log a = a log b
==> b/a = (log b)/(log a)
How do you get from this equation to (log a)/(log b) = (log b)/(log a)?
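As a quick numerical aside, a brute-force search over small positive integers (here only up to 100) finds a = 2, b = 4 as the only pair with a < b satisfying a^b = b^a; a two-line Python check:
solutions = [(a, b) for a in range(1, 100) for b in range(a + 1, 100) if a ** b == b ** a]
print(solutions)  # [(2, 4)]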
|
2018-06-22 17:17:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7048091292381287, "perplexity": 1175.394691806973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864740.48/warc/CC-MAIN-20180622162604-20180622182604-00006.warc.gz"}
|
https://discourse.mc-stan.org/t/lognormal-regression-and-moment-matching/26557
|
# Lognormal regression and moment matching
Summary: I can tickle brms until it gives me a lognormal regression. However, the result is so weird looking I wonder if it's not just wrong or needless. Does this look weird to you too?
I have some data that I want to fit with a lognormal distribution – the data are positive, continuous numbers that have several extreme values. I want to model the mean of my response.
As with many other distributions, the two parameters of the lognormal are each a function of both the mean \mu and standard deviation \sigma. I’ll call these two parameters a and b:
a = \ln\left(\frac{\mu^2}{\sqrt{\sigma^2 + \mu^2}}\right)
b = \sqrt{\ln\left(\frac{\sigma^2}{\mu^2} + 1\right)}
a is the median of the lognormal distribution. It's also the mean of the normal distribution you'd get if you took the log of these lognormal values. b is the standard deviation of those values.
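As a quick numerical check of these two formulas (a sketch in Python with numpy assumed, rather than R): draw from a LogNormal with parameters a and b and compare the sample mean and standard deviation against the targets.
import numpy as np

def lognormal_params(mean, sd):
    """Moment-matched lognormal parameters: a = meanlog, b = sdlog."""
    a = np.log(mean ** 2 / np.sqrt(sd ** 2 + mean ** 2))
    b = np.sqrt(np.log(sd ** 2 / mean ** 2 + 1))
    return a, b

rng = np.random.default_rng(1)
a, b = lognormal_params(mean=40.0, sd=11.0)
draws = rng.lognormal(mean=a, sigma=b, size=1_000_000)
print(draws.mean(), draws.std())  # close to 40 and 11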
## faking data
I want to simulate data from a so-called Power Law curve, which is a classic in ecology for modelling e.g. how metabolism scales with mass
\begin{align} \text{metabolism} &\sim \text{LogNormal}\left(\ln\left(\frac{\mu^2}{\sqrt{\sigma^2 + \mu^2}}\right),\sqrt{\ln\left(\frac{\sigma^2}{\mu^2} + 1\right)}\right) \\ \mu &= M \times x ^z \\ \end{align}
With known values M = 17, z = .3 and \sigma = 11
suppressPackageStartupMessages(library(tidyverse))
suppressPackageStartupMessages(library(tidybayes))
suppressPackageStartupMessages(library(brms))
calculate_a_lnorm <- function(mean, sd){
log(mean^2/sqrt(sd^2 + mean^2))
}
calculate_b_lnorm <- function(mean, sd){
sqrt(log(sd^2 / mean^2 + 1))
}
known_M <- 17
known_z <- .3
known_s <- 11
fake_metabolism_data <- tibble(x = 1:500,
mean_response = known_M*x^(known_z),
obs_response = rlnorm(500,
meanlog = calculate_a_lnorm(mean = mean_response, sd = known_s),
sdlog = calculate_b_lnorm(mean = mean_response, sd = known_s))
)
fake_metabolism_data |>
ggplot(aes(x = x, y = obs_response)) + geom_point()
Created on 2022-02-24 by the reprex package (v2.0.0)
## recovering parameters
We can tickle brms until it fits this equation and beautifully recovers the parameters. However the result is so wild-looking that it makes me doubt my wisdom, good taste, and sanity:
formula_metabolic_matching <- bf(obs_response ~ 2*log(M*x^z) - .5*log(s^2 + (M*x^z)^2),
nl = TRUE) +
nlf(sigma ~ sqrt(log(s^2/(M*x^z)^2 + 1))) +
lf(M ~ 1) +
lf(z ~ 1) +
lf(s ~ 1)
get_prior(formula = formula_metabolic_matching,
data = fake_metabolism_data)
metabolism_priors <- c(prior(normal(10,2), class = "b", nlpar = "M"),
prior(beta(3, 4), class = "b", nlpar = "z", lb = 0, ub = 1),
prior(exponential(2), class = "b", nlpar = "s", lb = 0))
matching_model <- brm(formula_metabolic_matching, prior = metabolism_priors,
data = fake_metabolism_data, backend = "cmdstan", refresh = 0 , silent = 2, file = here::here("lognormal_demo.rds"))
big_dataset <- fake_metabolism_data |>
  add_predicted_draws(matching_model)  # posterior predictive draws via tidybayes
big_dataset |>
ggplot(aes(x = x, y = .prediction)) + stat_lineribbon() +
geom_point(aes(x = x, y = obs_response), data = fake_metabolism_data, pch = 21, fill = "orange") +
scale_fill_brewer(palette = "Greens", direction = 1) +
theme_dark()
Does this make sense to do? Do most people content themselves with modelling the median, not the mean, of a lognormal, and just relax about it? Is there another parameterization of the LogNormal that would make this pain go away?
This makes sense to me. An alternative is to create a custom family in brms, which I have already done for the Lognormal with natural scale parameter parameterization. It is available here: custom-brms-families/lognormal_natural.R at master · paul-buerkner/custom-brms-families (github.com)
To fit the same model with the custom family you do like this (note that there is a PR for new code for the Stan function and the helper functions, which I use here):
library(brms)
# helper functions for post-processing of the family
log_lik_lognormal_natural <- function(i, prep) {
mu <- prep$dpars$mu[, i]
if(NCOL(prep$dpars$sigma)==1){sigma <- prep$dpars$sigma}else
{sigma <- prep$dpars$sigma[, i]} ## [, i] if sigma is modelled, without otherwise
y <- prep$data$Y[i]
common_term = log(1+sigma^2/mu^2)
Vectorize(dlnorm)(y, log(mu)-common_term/2, sqrt(common_term), log = TRUE)
}
posterior_predict_lognormal_natural <- function(i, prep, ...) {
mu <- prep$dpars$mu[, i]
if(NCOL(prep$dpars$sigma)==1){sigma <- prep$dpars$sigma}else
{sigma <- prep$dpars$sigma[, i]} ## [, i] if sigma is modelled, without otherwise
common_term = log(1+sigma^2/mu^2)
rlnorm(n, log(mu)-common_term/2, sqrt(common_term))
}
posterior_epred_lognormal_natural <- function(prep) {
mu <- prep$dpars$mu
return(mu)
}
# definition of the custom family
custom_family(name = "lognormal_natural",
dpars = c("mu", "sigma"),
lb = c(0, 0),
type = "real") ->
lognormal_natural
stan_lognormal_natural <- "
real lognormal_natural_lpdf(real y, real mu, real sigma) {
real common_term = log(1+sigma^2/mu^2);
return lognormal_lpdf(y | log(mu)-common_term/2,
sqrt(common_term));
}
real lognormal_natural_rng(real mu, real sigma) {
real common_term = log(1+sigma^2/mu^2);
return lognormal_rng(log(mu)-common_term/2,
sqrt(common_term));
}
"
brm(formula = obs_response ~ log(x),
family = lognormal_natural,
stanvars = stanvar(scode = stan_lognormal_natural, block = "functions"),
data = fake_metabolism_data) ->
brm_model_natural_parameters
Note that the parameterization of the power-law changes a bit, so that exp(Intercept)=M and x is logged, due to
\begin{aligned} \log (\mu) =\ &\log \left(M \cdot x^{z}\right)=\\ & \log (M)+\log \left(x^{z}\right)=\\ & \log (M)+z \log (x) \end{aligned}
EDIT: Added some new code for the helper functions as well.
thank you very much @StaffanBetner ! I didn’t know about this grimoire of custom response families for brms, and I’m very grateful to learn of it. I hope that the natural parameterization of the lognormal does indeed become more widely available, since this distribution is a favourite of ecologists.
|
2022-06-26 19:29:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393860459327698, "perplexity": 9094.066617795992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00786.warc.gz"}
|
https://allwaysspain.com/articles/derive-least-squares-estimator-222587
|
# derive least squares estimator
Least Squares estimators. Key Concept 5.5 The Gauss-Markov Theorem for $$\hat{\beta}_1$$. The multivariate (generalized) least-squares (LS, GLS) estimator of B is the estimator that minimizes the variance of the innovation process (residuals) U. Namely, Formula to … The least squares estimator is obtained by minimizing S(b). $\begingroup$ You could also ask the question, why does every text book insist on teaching us the derivation of the OLS estimator. To derive the coefficient of determination, three definitions are necessary. First, we take a sample of n subjects, observing values y of the response variable and x of the predictor variable. 4. 1 b 1 same as in least squares case 3. That is why it is also termed "Ordinary Least Squares" regression. General Weighted Least Squares Solution Let Wbe a diagonal matrix with diagonal elements equal to Derivation of linear regression equations The mathematical problem is straightforward: given a set of n points (Xi,Yi) on a scatterplot, find the best-fit line, Y‹ i =a +bXi such that the sum of squared errors in Y, ∑(−)2 i Yi Y ‹ is minimized To derive the least squares estimator My, you find the estimator m which minimizes OA. Instruments, z = (1, x 1, …, x k, z 1,…, z m), are correlated … The weighted least squares estimates of 0 and 1 minimize the quantity Sw( 0; 1) = Xn i=1 wi(yi 0 1xi) 2 ... us an unbiased estimator of ˙2 so we can derive ttests for the parameters etc. 11. We demonstrate the use of this formu-lation in removing noise from photographic images. Asymptotic Least Squares Theory: Part I We have shown that the OLS estimator and related tests have good finite-sample prop-erties under the classical conditions. These conditions are, however, quite restrictive in practice, as discussed in Section 3.6. i = 1 O c. n Σ my. For example, the force of a spring linearly depends on the displacement of the spring: y = kx (here y is the force, x is the displacement of the spring from rest, and k is the spring constant). So we see that the least squares estimate we saw before is really equivalent to producing a maximum likelihood estimate for λ1 and λ2 for variables X and Y that are linearly related up to some Gaussian noise N(0,σ2). Also lets you save and reuse data. In general the distribution of ujx is unknown and even if it is known, the unconditional distribution of bis hard to derive since … population regression equation, or . It is n 1 times the usual estimate of the common variance of the Y i. The Nature of the Estimation Problem. We would like to choose as estimates for β0 and β1, the values b0 and b1 that This definition is very similar to that of a variance. Properties of Least Squares Estimators When is normally distributed, Each ^ iis normally distributed; The random variable (n (k+ 1))S2 Ordinary Least Squares (OLS) Estimation of the Simple CLRM. The significance of this is that it makes the least-squares method of linear curve 1. least squares estimator can be formulated directly in terms of the distri-bution of noisy measurements. The least squares method is presented under the forms of Simple linear Regression, multiple linear model and non linear models (method of Gauss-Newton). Greene-2140242 book November 16, 2010 21:55 CHAPTER 4 The Least Squares Estimator. 3 The Method of Least Squares 4 1 Description of the Problem Often in the real world one expects to find linear relationships between variables. The LS estimator for in the model Py = PX +P" is referred to as the GLS estimator for in the model y = X +". For Eqn. 
Going forward The equivalence between the plug-in estimator and the least-squares estimator is a bit of … Using this rule puts equation (11) into a simpler form for derivation. Built by Analysts for Analysts! Suppose that the assumptions made in Key Concept 4.3 hold and that the errors are homoskedastic.The OLS estimator is the best (in the sense of smallest variance) linear conditionally unbiased estimator (BLUE) in this setting. Least Squares Estimation- Large-Sample Properties Ping Yu ... We can also derive the general formulas in the heteroskedastic case, but these ... Asymptotics for the Weighted Least Squares (WLS) Estimator The WLS estimator is a special GLS estimator with a diagonal weight matrix. One very simple example which we will treat in some detail in order to illustrate the more general What good is it, to aid with intuition? (1), stage 1 is to compute the least squares estimators of the π's in the price equation (3) of the reduced form; the second stage is to compute π̂=π̂ 11 +π̂ 12 y+π̂ 13 w, substitute this π̂ for p in (1), and compute the LS estimator ∑q * π̂ * /∑π̂ * 2, which is the 2SLS estimator of β 1. Thus, the LS estimator is BLUE in the transformed model. We start with the original closed form formulation of the weighted least squares estimator: \begin{align} \boldsymbol{\theta} = \big(\matr X^\myT \matr W \matr X + \lambda \matr I\big)^{-1} \matr X^\myT \matr W \vec y. E (Y;-) i = 1 OB E (Y;-m). Least squares regression calculator. Answer to 14) To derive the least squares estimator lg}, , you find the estimator m which minimizes A) flit—m3. Subjects like residual analysis, sampling distribution of the estimators (asymptotic or empiric Bookstrap and jacknife), confidence limits and intervals, etc., are important. 0. Professor N. M. Kiefer (Cornell University) Lecture 11: GLS 3 / 17. The least squares estimator b1 of β1 is also an unbiased estimator, and E(b1) = β1. 7-4. In this post we derive an incremental version of the weighted least squares estimator, described in a previous blog post. The equation decomposes this sum of squares into two parts. ... Why do Least Squares Fitting and Propagation of Uncertainty Derivations Rely on Normal Distribution. its "small sample" properties (Naturally, we can also derive its The estimator S2 = SSE n (k+ 1) = Y0Y ^0X0Y n (k+ 1) is an unbiased estimator of ˙2. It is therefore natural to ask the following questions. nn nn xy i i xx i i i ii ii s xxy y s x x x xy y nn That is, the least-squares estimate of the slope is our old friend the plug-in estimate of the slope, and thus the least-squares intercept is also the plug-in intercept. The rst is the centered sum of squared errors of the tted values ^y i. least squares estimation problem can be solved in closed form, and it is relatively straightforward to derive the statistical properties for the resulting parameter estimates. Part of our free statistics site; generates linear regression trendline and graphs results. 4.2.1a The Repeated Sampling Context • To illustrate unbiased estimation in a slightly different way, we present in Table 4.1 least squares estimates of the food expenditure model from 10 random samples of size T = 40 from the same population. Least Squares Estimation - Large-Sample Properties In Chapter 3, we assume ujx ˘ N(0;˙2) and study the conditional distribution of bgiven X. 
C) §IiK-m}2- D) g‘mK-E- General LS Criterion: In least squares (LS) estimation, the unknown values of the parameters, $$\beta_0, \, \beta_1, \, \ldots \,$$, : in the regression function, $$f(\vec{x};\vec{\beta})$$, are estimated by finding numerical values for the parameters that minimize the sum of the squared deviations between the observed responses and the functional portion of the model. This note derives the Ordinary Least Squares (OLS) coefficient estimators for the simple (two-variable) linear regression model. To test Suppose that there are m instrumental variables. The variance of the restricted least squares estimator is thus the variance of the ordinary least squares estimator minus a positive semi-definite matrix, implying that the restricted least squares estimator has a lower variance that the OLS estimator. Equation(4-1)isapopulationrelationship.Equation(4-2)isasampleanalog.Assuming Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … LINEAR LEAST SQUARES The left side of (2.7) is called the centered sum of squares of the y i. B) fiat—mu. The Two-Stage Least Squares Estimation Again, let’s consider a population model: y 1 =α 1 y 2 +β 0 +β 1 x 1 +β 2 x 2 +...+β k x k +u (1) where y 2 is an endogenous variable. i = 1 OD. £, (Yi-m)? However, for the CLRM and the OLS estimator, we can derive statistical properties for any sample size, i.e. That problem was, min ^ 0; ^ 1 XN i=1 (y i ^ 0 ^ 1x i)2: (1) As we learned in calculus, a univariate optimization involves taking the derivative and setting equal to 0. ˙ 2 ˙^2 = P i (Y i Y^ i)2 n 4.Note that ML estimator … Testing the restrictions on the model using estimated residuals . To derive the estimator, it is useful to use the following rule of transposing matrices. 53. Maximum Likelihood Estimator(s) 1. The second is the sum of squared model errors. First, the total sum of squares (SST) is defined as the total variation in y around its mean. Necessary transpose rule is: (12) where J, L, and M represent matrices conformable for multiplication and addition. Get more help from Chegg. 0 b 0 same as in least squares case 2. This gives the ordinary least squares estimates bb00 11of and of as 01 1 xy xx bybx s b s where 2 11 11 11 ()( ), ( ), , . Chapter 5. 1.3 Least Squares Estimation of β0 and β1 We now have the problem of using sample data to compute estimates of the parameters β0 and β1. Therefore we set these derivatives equal to zero, which gives the normal equations X0Xb ¼ X0y: (3:8) T 3.1 Least squares in matrix form 121 Heij / Econometric Methods with Applications in Business and Economics Final … To derive the multivariate least-squares estimator, let us begin with some definitions: Our VAR[p] model (Eq 3.1) can now be written in compact form: (Eq 3.2) Here B and U are unknown. 4 2. ordinary least squares (OLS) estimators of 01and . The Finite Sample Properties of the Least Squares Estimator / Basic Hypothesis Testing Greene Ch 4, Kennedy Ch. Distributed Weighted Least Squares Estimator Based on ADMM Shun Liu 1,2, Zhifei Li3, Weifang Zhang4, Yan Liang 1 School of Automation, Northwestern Polytechnical University, Xian, China 2 Key Laboratory of Information Fusion Technology, Ministry of Education, Xian, China 3 College of Electronic Engineering, National University of Defense Technology, Hefei, China 1.1 The . 
Derivation of OLS Estimator In class we set up the minimization problem that is the starting point for deriving the formulas for the OLS intercept and slope coe cient. errors is as small as possible. Free alternative to Minitab and paid statistics packages! As estimates for β0 and β1, the values b0 and b1 that errors is as small as.. More general CHAPTER 5 response variable and x of the weighted least squares estimator is BLUE in transformed!, however, quite restrictive in practice, as discussed in Section.... Estimators of 01and that errors is as small as possible ( 12 ) where J L. A sample of n subjects, observing values y of the weighted least squares '' regression and OLS... By minimizing S ( b ) minimizing S ( b ) predictor variable termed... = Y0Y ^0X0Y n ( k+ 1 ) is called the centered sum of squared errors of the i!, to aid with intuition, and M represent matrices conformable for multiplication and.. Derive statistical properties for any sample size, i.e the simple ( two-variable ) linear regression and! Β1, the total sum of squared model errors illustrate the more general CHAPTER 5 we like! Are necessary '' regression OB e ( y ; -m ) necessary transpose rule is: ( )! 3 / 17 side of ( 2.7 ) is an unbiased estimator of ˙2 GLS..., to aid with intuition estimator ( S ) 1 form for derivation and... Sample of n subjects, observing values y of the tted values ^y i noise from images! Definitions are necessary the common derive least squares estimator of the response variable and x of the common of. ) isapopulationrelationship.Equation ( 4-2 ) isasampleanalog.Assuming to derive the estimator, it is also termed least. Are necessary = 1 OB e ( y ; - ) i = OB... The second is the sum of squares into two parts conditions are, however, for the simple two-variable... Restrictions on the model using estimated residuals least squares estimator is BLUE the..., you find the estimator S2 = SSE derive least squares estimator ( k+ 1 ) = Y0Y ^0X0Y n ( 1... Choose as estimates for β0 and β1, the values b0 and b1 that errors is as as! These conditions are, however, for the simple ( two-variable ) linear regression model CHAPTER the... Model errors are, however, quite restrictive in practice, as in! Practice, as discussed in Section 3.6 that errors is as small as possible of determination three! Squares case 3 to aid with intuition, as discussed in Section 3.6 in order illustrate! Coefficient of determination, three definitions are necessary 2.7 ) is defined the... The predictor variable and Propagation of Uncertainty Derivations Rely on Normal Distribution derive least squares estimator. This post we derive an incremental version of the common variance of the common variance of the y i (. - ) i = 1 OB e ( y ; -m ) x the. Least squares case 3 makes the least-squares method of linear curve Maximum estimator... Is that it makes the least-squares method of linear curve Maximum Likelihood estimator ( S ) 1 like choose. Free statistics site ; generates linear regression trendline and graphs results is also termed Ordinary squares., L, and M represent matrices conformable for multiplication and addition we will treat in some detail in to! Statistics site ; generates linear regression model in order to illustrate the more CHAPTER... ) isasampleanalog.Assuming to derive the least squares the left side of ( 2.7 ) an. Side of ( 2.7 ) is defined as the total variation in y its! Estimator is BLUE in the transformed model 2010 21:55 CHAPTER 4 the least squares ( OLS ) estimators... 
|
2021-01-23 20:39:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6971743702888489, "perplexity": 1299.2136038297724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538431.77/warc/CC-MAIN-20210123191721-20210123221721-00422.warc.gz"}
|
https://fr.overleaf.com/blog/81-having-a-hard-time-convincing-your-coauthors-to-learn-latex-with-our-rich-text-mode-you-no-longer-need-to-dot-dot-dot
|
• Having a hard time convincing your coauthors to learn LaTeX? With our Rich Text mode you no longer need to...
Posted by John on December 16, 2013

This weekend has seen the release of a major upgrade to the writeLaTeX editor, including a new user interface, an updated project pane to manage your files, and the first release of our new Rich Text mode for easier editing and collaboration. Our new rich text mode renders headings, formatting and equations directly in the editor, to make it seem more familiar to WYSIWYG users.

This isn't simply of benefit to an individual author - collaboration has now suddenly become much easier, as Jacob Scott sums up nicely in this paragraph from his recent blog post: "No longer will I have to give the link to a document to my biological/clinical collaborator with the caveat 'just ignore everything that isn't text - squint a bit if you have to'. Now, they can just go ahead and edit away just like they are in word or whatever, but I can come in behind and have the full functionality of LaTeX."

So if you're having a hard time convincing your coauthors to use LaTeX, you no longer need to! This is fully integrated with our existing service which automatically compiles your document in the background, so you can see how the final typeset document will look whilst you're writing (as seen on the right in the screenshot above).

Why have we added a Rich Text mode? By combining an easy-to-use editor with publication-ready output, we're making tools for scientific publishing accessible to more people, and helping to make it quicker and easier to write and publish your work online. At the publishing end, once your work is complete you can use our integrated submission system to publish your work immediately, either in our gallery or with one of our open access partners such as figshare or F1000Research. We'll be adding more journals and publishers in 2014, and would love to hear from you if you'd like to start receiving submissions from writeLaTeX.

We'll also be adding a lot more features to the Rich Text mode over the coming weeks, and here's a short video demo of what's next: As our Rich Text mode is still in development, we'd really appreciate your feedback; create a new document today and let us know what you think.
|
2020-06-06 12:05:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25372180342674255, "perplexity": 1169.1572562148851}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513230.90/warc/CC-MAIN-20200606093706-20200606123706-00432.warc.gz"}
|
https://math.eretrandre.org/tetrationforum/printthread.php?tid=1075
|
Removing the branch points in the base: a uniqueness condition? - fivexthethird - 03/19/2016

In many cases, when dealing with the math behind tetration, a recurring feature is the logarithm of the fixed point multiplier $\log(\kappa)$, which I will call from here on $\lambda$. Since the fixed point multiplier is determined by the base, $\lambda$ is really just the base in disguise: $\lambda = \log(-\text{W}(-\log(b)))$.

But all three functions have branch points that correspond to the ones in tetration's base: the inner log to 0, the productlog to $\eta$, the outer log to 1. Thus, I think that it's reasonable to desire the following to be the case for any reasonable tetration:

Let x > 0 and tet(x,b) be our tetration solution. Then $\text{tet}(x,\exp(\exp(\lambda-\exp(\lambda))))$ analytically continues to a function without branch points in $\lambda$.

So in other words, the branch we're on should entirely depend on what branches of those three functions we pick.
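A small numerical sketch of the relation above (Python with scipy assumed; the function and parameter names are mine, only the Lambert W and outer-log branches are parameterized here, and the inner log of the base is taken as principal):
import numpy as np
from scipy.special import lambertw

def lam(b, w_branch=0, log_branch=0):
    """lambda = log(-W(-log(b))), with explicit branch choices for W and the outer log."""
    kappa = -lambertw(-np.log(b), k=w_branch)       # fixed point multiplier
    return np.log(kappa) + 2j * np.pi * log_branch  # choose a branch of the outer log

# for base b = sqrt(2) the attracting fixed point is 2, so lambda = log(log(2))
print(lam(np.sqrt(2)))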
|
2021-05-13 19:13:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6119955778121948, "perplexity": 884.8211724800874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00110.warc.gz"}
|
http://stackoverflow.com/questions/505925/extracting-data-from-ms-word/506002
|
# Extracting data from MS Word
I am looking for a way to extract / scrape data from Word files into a database. Our corporate procedures have Minutes of Meetings with clients documented in MS Word files, mostly due to history and inertia.
I want to be able to pull the action items from these meeting minutes into a database so that we can access them from a web-interface, turn them into tasks and update them as they are completed.
Which is the best way to do this:
1. VBA macro from inside Word to create CSV and then upload to the DB?
2. VBA macro in Word with connection to DB (how does one connect to MySQL from VBA?)
3. Python script via win32com then upload to DB?
The last one is attractive to me as the web-interface is being built with Django, but I've never used win32com or tried scripting Word from python.
EDIT: I've started extracting the text with VBA because it makes it a little easier to deal with the Word Object Model. I am having a problem though - all the text is in Tables, and when I pull the strings out of the CELLS I want, I get a strange little box character at the end of each string. My code looks like:
sFile = "D:\temp\output.txt"
fnum = FreeFile
Open sFile For Output As #fnum
num_rows = Application.ActiveDocument.Tables(2).Rows.Count
For n = 1 To num_rows
Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text
Assign = Application.ActiveDocument.Tables(2).Cell(n, 3).Range.Text
Target = Application.ActiveDocument.Tables(2).Cell(n, 4).Range.Text
If Target = "" Then
ExportText = ""
Else
ExportText = Descr & Chr(44) & Assign & Chr(44) & _
Target & Chr(13) & Chr(10)
Print #fnum, ExportText
End If
Next n
Close #fnum
What's up with the little control character box? Is some kind of character code coming across from Word?
-
Word has a little marker thingy that it puts at the end of every cell of text in a table.
It is used just like an end-of-paragraph marker in paragraphs: to store the formatting for the entire paragraph.
Just use the Left() function to strip it out, i.e.
Left(Target, Len(Target) - 1)
num_rows = Application.ActiveDocument.Tables(2).Rows.Count
For n = 1 To num_rows
Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text
Try this:
For Each row in Application.ActiveDocument.Tables(2).Rows
Descr = row.Cells(2).Range.Text
-
Thanks Joel! I had figured out that I could use Left() to strip of the end of cell marker, but that didn't seem elegant to me. Also, thanks for the other pointer. I'm no expert programmer and definitely not a VBA guru. – Technical Bard Feb 3 '09 at 7:02
Well, I've never scripted Word, but it's pretty easy to do simple stuff with win32com. Something like:
from win32com.client import Dispatch
word = Dispatch('Word.Application')
doc = word.Documents.Open('d:\\stuff\\myfile.doc')
doc.SaveAs(FileName='d:\\stuff\\text\\myfile.txt', FileFormat=?) # not sure what to use for ?
This is untested, but I think something like that will just open the file and save it as plain text (provided you can find the right fileformat) – you could then read the text into python and manipulate it from there. There is probably a way to grab the contents of the file directly, too, but I don't know it off hand; documentation can be hard to find, but if you've got VBA docs or experience, you should be able to carry them across.
Have a look at this post from a while ago: http://mail.python.org/pipermail/python-list/2002-October/168785.html Scroll down to COMTools.py; there's some good examples there.
You can also run makepy.py (part of the pythonwin distribution) to generate python "signatures" for the COM functions available, and then look through it as a kind of documentation.
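If you'd rather grab the table cells directly instead of saving to text first, something along these lines should work (an untested sketch; the path is made up, and the two characters stripped from the end of each cell are the end-of-cell marker, which I believe is Chr(13) followed by Chr(7)):
from win32com.client import Dispatch

word = Dispatch('Word.Application')
doc = word.Documents.Open('d:\\stuff\\minutes.doc')   # made-up path
try:
    table = doc.Tables(2)
    for n in range(1, table.Rows.Count + 1):
        # Range.Text of a table cell ends with the end-of-cell marker "\r\x07"
        descr = table.Cell(n, 2).Range.Text[:-2]
        assign = table.Cell(n, 3).Range.Text[:-2]
        target = table.Cell(n, 4).Range.Text[:-2]
        if target:
            print(','.join((descr, assign, target)))
finally:
    doc.Close(False)
    word.Quit()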
-
You could use OpenOffice. It can open word files, and also can run python macros.
-
I'd say look at the related questions on the right --> The top one seems to have some good ideas for going the python route.
-
The question "extracting text from MS word files in python" is about working in a linux environment. Tools like antiword aren't available under Windows except in cygwin, whereas this poster is willing to do COM scripting of Word. – John Fouhy Feb 3 '09 at 4:00
If you don't have anything nice to say... Some of the higher voted answers to that question aren't linux-specific at all. I guess you missed those. – ranomore Feb 4 '09 at 5:16
How about saving the file as XML, then using Python or something else to pull the data out of Word and into the database.
-
It is possible to programmatically save a Word document as HTML and to import the table(s) contained into Access. This requires very little effort.
-
|
2014-08-27 23:24:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24828292429447174, "perplexity": 2420.066233378536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829916.85/warc/CC-MAIN-20140820021349-00355-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://mizar.uwb.edu.pl/version/current/html/taylor_1.html
|
:: The {T}aylor Expansions
:: by Yasunari Shidama
::
:: Copyright (c) 2004-2021 Association of Mizar Users
definition
let q be Integer;
func #Z q -> Function of REAL,REAL means :Def1: :: TAYLOR_1:def 1
for x being Real holds it . x = x #Z q;
existence
ex b1 being Function of REAL,REAL st
for x being Real holds b1 . x = x #Z q
proof end;
uniqueness
for b1, b2 being Function of REAL,REAL st ( for x being Real holds b1 . x = x #Z q ) & ( for x being Real holds b2 . x = x #Z q ) holds
b1 = b2
proof end;
end;
:: deftheorem Def1 defines #Z TAYLOR_1:def 1 :
for q being Integer
for b2 being Function of REAL,REAL holds
( b2 = #Z q iff for x being Real holds b2 . x = x #Z q );
theorem Th1: :: TAYLOR_1:1
for x being Real
for m, n being Nat holds x #Z (n + m) = (x #Z n) * (x #Z m)
proof end;
theorem Th2: :: TAYLOR_1:2
for n being Nat
for x being Real holds
( #Z n is_differentiable_in x & diff ((#Z n),x) = n * (x #Z (n - 1)) )
proof end;
theorem :: TAYLOR_1:3
for n being Nat
for x0 being Real
for f being PartFunc of REAL,REAL st f is_differentiable_in x0 holds
( (#Z n) * f is_differentiable_in x0 & diff (((#Z n) * f),x0) = (n * ((f . x0) #Z (n - 1))) * (diff (f,x0)) )
proof end;
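:: Informal gloss: in conventional notation, Th2 and Th3 above are the integer
:: power rule and its chain-rule form,
::   d/dx [x^n] = n * x^(n-1)   and   d/dx [(f(x))^n] = n * (f(x))^(n-1) * f'(x).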
Lm1: for n being Nat
for x being Real holds (exp_R x) #R n = exp_R (n * x)
proof end;
theorem Th4: :: TAYLOR_1:4
for x being Real holds exp_R (- x) = 1 / (exp_R x)
proof end;
Lm2: for i being Integer
for x being Real holds (exp_R x) #R i = exp_R (i * x)
proof end;
theorem Th5: :: TAYLOR_1:5
for i being Integer
for x being Real holds (exp_R x) #R (1 / i) = exp_R (x / i)
proof end;
theorem Th6: :: TAYLOR_1:6
for x being Real
for m, n being Integer holds (exp_R x) #R (m / n) = exp_R ((m / n) * x)
proof end;
Lm3: for x being Real
for q being Rational holds (exp_R x) #R q = exp_R (q * x)
proof end;
theorem Th7: :: TAYLOR_1:7
for x being Real
for q being Rational holds (exp_R x) #Q q = exp_R (q * x)
proof end;
theorem Th8: :: TAYLOR_1:8
for p, x being Real holds (exp_R x) #R p = exp_R (p * x)
proof end;
theorem Th9: :: TAYLOR_1:9
for x being Real holds
( (exp_R 1) #R x = exp_R x & (exp_R 1) to_power x = exp_R x & number_e to_power x = exp_R x & number_e #R x = exp_R x )
proof end;
theorem :: TAYLOR_1:10
for x being Real holds
( (exp_R . 1) #R x = exp_R . x & (exp_R . 1) to_power x = exp_R . x & number_e to_power x = exp_R . x & number_e #R x = exp_R . x )
proof end;
theorem :: TAYLOR_1:11
proof end;
then Lm4:
by XXREAL_0:2;
registration
coherence by Lm4;
end;
theorem Th12: :: TAYLOR_1:12
for x being Real holds log (number_e,(exp_R x)) = x
proof end;
theorem Th13: :: TAYLOR_1:13
for x being Real holds log (number_e,(exp_R . x)) = x
proof end;
theorem Th14: :: TAYLOR_1:14
for y being Real st y > 0 holds
exp_R (log (number_e,y)) = y
proof end;
theorem Th15: :: TAYLOR_1:15
for y being Real st y > 0 holds
exp_R . (log (number_e,y)) = y
proof end;
theorem Th16: :: TAYLOR_1:16
( exp_R is one-to-one & exp_R is_differentiable_on REAL & exp_R is_differentiable_on [#] REAL & ( for x being Real holds diff (exp_R,x) = exp_R . x ) & ( for x being Real holds 0 < diff (exp_R,x) ) & dom exp_R = [#] REAL & rng exp_R = right_open_halfline 0 )
proof end;
registration
coherence by Th16;
end;
theorem Th17: :: TAYLOR_1:17
( exp_R " is_differentiable_on dom () & ( for x being Real st x in dom () holds
diff ((),x) = 1 / x ) )
proof end;
registration
coherence
proof end;
end;
definition
let a be Real;
func log_ a -> PartFunc of REAL,REAL means :Def2: :: TAYLOR_1:def 2
( dom it = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds it . d = log (a,d) ) );
existence
ex b1 being PartFunc of REAL,REAL st
( dom b1 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b1 . d = log (a,d) ) )
proof end;
uniqueness
for b1, b2 being PartFunc of REAL,REAL st dom b1 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b1 . d = log (a,d) ) & dom b2 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b2 . d = log (a,d) ) holds
b1 = b2
proof end;
end;
:: deftheorem Def2 defines log_ TAYLOR_1:def 2 :
for a being Real
for b2 being PartFunc of REAL,REAL holds
( b2 = log_ a iff ( dom b2 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b2 . d = log (a,d) ) ) );
definition
correctness
coherence ;
;
end;
:: deftheorem defines ln TAYLOR_1:def 3 :
theorem Th18: :: TAYLOR_1:18
( ln = exp_R " & ln is one-to-one & dom ln = right_open_halfline 0 & rng ln = REAL & ln is_differentiable_on right_open_halfline 0 & ( for x being Real st x > 0 holds
ln is_differentiable_in x ) & ( for x being Element of right_open_halfline 0 holds diff (ln,x) = 1 / x ) & ( for x being Element of right_open_halfline 0 holds 0 < diff (ln,x) ) )
proof end;
theorem :: TAYLOR_1:19
for x0 being Real
for f being PartFunc of REAL,REAL st f is_differentiable_in x0 holds
( exp_R * f is_differentiable_in x0 & diff ((exp_R * f),x0) = (exp_R . (f . x0)) * (diff (f,x0)) )
proof end;
theorem :: TAYLOR_1:20
for x0 being Real
for f being PartFunc of REAL,REAL st f is_differentiable_in x0 & f . x0 > 0 holds
( ln * f is_differentiable_in x0 & diff ((ln * f),x0) = (diff (f,x0)) / (f . x0) )
proof end;
definition
let p be Real;
func #R p -> PartFunc of REAL,REAL means :Def4: :: TAYLOR_1:def 4
( dom it = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds it . d = d #R p ) );
existence
ex b1 being PartFunc of REAL,REAL st
( dom b1 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b1 . d = d #R p ) )
proof end;
uniqueness
for b1, b2 being PartFunc of REAL,REAL st dom b1 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b1 . d = d #R p ) & dom b2 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b2 . d = d #R p ) holds
b1 = b2
proof end;
end;
:: deftheorem Def4 defines #R TAYLOR_1:def 4 :
for p being Real
for b2 being PartFunc of REAL,REAL holds
( b2 = #R p iff ( dom b2 = right_open_halfline 0 & ( for d being Element of right_open_halfline 0 holds b2 . d = d #R p ) ) );
theorem Th21: :: TAYLOR_1:21
for p, x being Real st x > 0 holds
( #R p is_differentiable_in x & diff ((#R p),x) = p * (x #R (p - 1)) )
proof end;
theorem :: TAYLOR_1:22
for p, x0 being Real
for f being PartFunc of REAL,REAL st f is_differentiable_in x0 & f . x0 > 0 holds
( (#R p) * f is_differentiable_in x0 & diff (((#R p) * f),x0) = (p * ((f . x0) #R (p - 1))) * (diff (f,x0)) )
proof end;
definition
let f be PartFunc of REAL,REAL;
let Z be Subset of REAL;
func diff (f,Z) -> Functional_Sequence of REAL,REAL means :Def5: :: TAYLOR_1:def 5
( it . 0 = f | Z & ( for i being Nat holds it . (i + 1) = (it . i) `| Z ) );
existence
ex b1 being Functional_Sequence of REAL,REAL st
( b1 . 0 = f | Z & ( for i being Nat holds b1 . (i + 1) = (b1 . i) `| Z ) )
proof end;
uniqueness
for b1, b2 being Functional_Sequence of REAL,REAL st b1 . 0 = f | Z & ( for i being Nat holds b1 . (i + 1) = (b1 . i) `| Z ) & b2 . 0 = f | Z & ( for i being Nat holds b2 . (i + 1) = (b2 . i) `| Z ) holds
b1 = b2
proof end;
end;
:: deftheorem Def5 defines diff TAYLOR_1:def 5 :
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for b3 being Functional_Sequence of REAL,REAL holds
( b3 = diff (f,Z) iff ( b3 . 0 = f | Z & ( for i being Nat holds b3 . (i + 1) = (b3 . i) `| Z ) ) );
definition
let f be PartFunc of REAL,REAL;
let n be Nat;
let Z be Subset of REAL;
pred f is_differentiable_on n,Z means :: TAYLOR_1:def 6
for i being Nat st i <= n - 1 holds
(diff (f,Z)) . i is_differentiable_on Z;
end;
:: deftheorem defines is_differentiable_on TAYLOR_1:def 6 :
for f being PartFunc of REAL,REAL
for n being Nat
for Z being Subset of REAL holds
( f is_differentiable_on n,Z iff for i being Nat st i <= n - 1 holds
(diff (f,Z)) . i is_differentiable_on Z );
theorem Th23: :: TAYLOR_1:23
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for n being Nat st f is_differentiable_on n,Z holds
for m being Nat st m <= n holds
f is_differentiable_on m,Z
proof end;
definition
let f be PartFunc of REAL,REAL;
let Z be Subset of REAL;
let a, b be Real;
func Taylor (f,Z,a,b) -> Real_Sequence means :Def7: :: TAYLOR_1:def 7
for n being Nat holds it . n = ((((diff (f,Z)) . n) . a) * ((b - a) |^ n)) / (n !);
existence
ex b1 being Real_Sequence st
for n being Nat holds b1 . n = ((((diff (f,Z)) . n) . a) * ((b - a) |^ n)) / (n !)
proof end;
uniqueness
for b1, b2 being Real_Sequence st ( for n being Nat holds b1 . n = ((((diff (f,Z)) . n) . a) * ((b - a) |^ n)) / (n !) ) & ( for n being Nat holds b2 . n = ((((diff (f,Z)) . n) . a) * ((b - a) |^ n)) / (n !) ) holds
b1 = b2
proof end;
end;
:: deftheorem Def7 defines Taylor TAYLOR_1:def 7 :
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for a, b being Real
for b5 being Real_Sequence holds
( b5 = Taylor (f,Z,a,b) iff for n being Nat holds b5 . n = ((((diff (f,Z)) . n) . a) * ((b - a) |^ n)) / (n !) );
Lm5: for b being Real ex g being PartFunc of REAL,REAL st
( dom g = [#] REAL & ( for x being Real holds
( g . x = b - x & ( for x being Real holds
( g is_differentiable_in x & diff (g,x) = - 1 ) ) ) ) )
proof end;
Lm6: for n being Nat
for l, b being Real ex g being PartFunc of REAL,REAL st
( dom g = [#] REAL & ( for x being Real holds g . x = (l * ((b - x) |^ (n + 1))) / ((n + 1) !) ) & ( for x being Real holds
( g is_differentiable_in x & diff (g,x) = - ((l * ((b - x) |^ n)) / (n !)) ) ) )
proof end;
Lm7: for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for b being Real ex g being PartFunc of REAL,REAL st
( dom g = Z & ( for x being Real st x in Z holds
g . x = (f . b) - ((Partial_Sums (Taylor (f,Z,x,b))) . n) ) )
proof end;
theorem Th24: :: TAYLOR_1:24
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for n being Nat st f is_differentiable_on n,Z holds
for a, b being Real st a < b & ].a,b.[ c= Z holds
((diff (f,Z)) . n) | ].a,b.[ = (diff (f,].a,b.[)) . n
proof end;
Lm8: for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f holds
for n being Nat st f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
for g being PartFunc of REAL,REAL st dom g = Z & ( for x being Real st x in Z holds
g . x = (f . b) - ((Partial_Sums (Taylor (f,Z,x,b))) . n) ) holds
( g . b = 0 & g | [.a,b.] is continuous & g is_differentiable_on ].a,b.[ & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = - (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((b - x) |^ n)) / (n !)) ) )
proof end;
Lm9: for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
ex g being PartFunc of REAL,REAL st
( dom g = Z & ( for x being Real st x in Z holds
g . x = (f . b) - ((Partial_Sums (Taylor (f,Z,x,b))) . n) ) & g . b = 0 & g | [.a,b.] is continuous & g is_differentiable_on ].a,b.[ & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = - (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((b - x) |^ n)) / (n !)) ) )
proof end;
theorem Th25: :: TAYLOR_1:25
for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
for l being Real
for g being PartFunc of REAL,REAL st dom g = [#] REAL & ( for x being Real holds g . x = ((f . b) - ((Partial_Sums (Taylor (f,Z,x,b))) . n)) - ((l * ((b - x) |^ (n + 1))) / ((n + 1) !)) ) & ((f . b) - ((Partial_Sums (Taylor (f,Z,a,b))) . n)) - ((l * ((b - a) |^ (n + 1))) / ((n + 1) !)) = 0 holds
( g is_differentiable_on ].a,b.[ & g . a = 0 & g . b = 0 & g | [.a,b.] is continuous & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = (- (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((b - x) |^ n)) / (n !))) + ((l * ((b - x) |^ n)) / (n !)) ) )
proof end;
theorem Th26: :: TAYLOR_1:26
for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for b, l being Real ex g being Function of REAL,REAL st
for x being Real holds g . x = ((f . b) - ((Partial_Sums (Taylor (f,Z,x,b))) . n)) - ((l * ((b - x) |^ (n + 1))) / ((n + 1) !))
proof end;
theorem Th27: :: TAYLOR_1:27
for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
ex c being Real st
( c in ].a,b.[ & f . b = ((Partial_Sums (Taylor (f,Z,a,b))) . n) + (((((diff (f,].a,b.[)) . (n + 1)) . c) * ((b - a) |^ (n + 1))) / ((n + 1) !)) )
proof end;
Lm10: for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f holds
for n being Nat st f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
for g being PartFunc of REAL,REAL st dom g = Z & ( for x being Real st x in Z holds
g . x = (f . a) - ((Partial_Sums (Taylor (f,Z,x,a))) . n) ) holds
( g . a = 0 & g | [.a,b.] is continuous & g is_differentiable_on ].a,b.[ & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = - (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((a - x) |^ n)) / (n !)) ) )
proof end;
Lm11: for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
ex g being PartFunc of REAL,REAL st
( dom g = Z & ( for x being Real st x in Z holds
g . x = (f . a) - ((Partial_Sums (Taylor (f,Z,x,a))) . n) ) & g . a = 0 & g | [.a,b.] is continuous & g is_differentiable_on ].a,b.[ & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = - (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((a - x) |^ n)) / (n !)) ) )
proof end;
theorem Th28: :: TAYLOR_1:28
for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
for l being Real
for g being PartFunc of REAL,REAL st dom g = [#] REAL & ( for x being Real holds g . x = ((f . a) - ((Partial_Sums (Taylor (f,Z,x,a))) . n)) - ((l * ((a - x) |^ (n + 1))) / ((n + 1) !)) ) & ((f . a) - ((Partial_Sums (Taylor (f,Z,b,a))) . n)) - ((l * ((a - b) |^ (n + 1))) / ((n + 1) !)) = 0 holds
( g is_differentiable_on ].a,b.[ & g . b = 0 & g . a = 0 & g | [.a,b.] is continuous & ( for x being Real st x in ].a,b.[ holds
diff (g,x) = (- (((((diff (f,].a,b.[)) . (n + 1)) . x) * ((a - x) |^ n)) / (n !))) + ((l * ((a - x) |^ n)) / (n !)) ) )
proof end;
theorem Th29: :: TAYLOR_1:29
for n being Nat
for f being PartFunc of REAL,REAL
for Z being Subset of REAL st Z c= dom f & f is_differentiable_on n,Z holds
for a, b being Real st a < b & [.a,b.] c= Z & ((diff (f,Z)) . n) | [.a,b.] is continuous & f is_differentiable_on n + 1,].a,b.[ holds
ex c being Real st
( c in ].a,b.[ & f . a = ((Partial_Sums (Taylor (f,Z,b,a))) . n) + (((((diff (f,].a,b.[)) . (n + 1)) . c) * ((a - b) |^ (n + 1))) / ((n + 1) !)) )
proof end;
theorem Th30: :: TAYLOR_1:30
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for Z1 being open Subset of REAL st Z1 c= Z holds
for n being Nat st f is_differentiable_on n,Z holds
((diff (f,Z)) . n) | Z1 = (diff (f,Z1)) . n
proof end;
theorem Th31: :: TAYLOR_1:31
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for Z1 being open Subset of REAL st Z1 c= Z holds
for n being Nat st f is_differentiable_on n + 1,Z holds
f is_differentiable_on n + 1,Z1
proof end;
theorem Th32: :: TAYLOR_1:32
for f being PartFunc of REAL,REAL
for Z being Subset of REAL
for x being Real st x in Z holds
for n being Nat holds f . x = (Partial_Sums (Taylor (f,Z,x,x))) . n
proof end;
:: Taylor's Theorem
theorem :: TAYLOR_1:33
for n being Nat
for f being PartFunc of REAL,REAL
for x0, r being Real st ].(x0 - r),(x0 + r).[ c= dom f & 0 < r & f is_differentiable_on n + 1,].(x0 - r),(x0 + r).[ holds
for x being Real st x in ].(x0 - r),(x0 + r).[ holds
ex s being Real st
( 0 < s & s < 1 & f . x = ((Partial_Sums (Taylor (f,].(x0 - r),(x0 + r).[,x0,x))) . n) + (((((diff (f,].(x0 - r),(x0 + r).[)) . (n + 1)) . (x0 + (s * (x - x0)))) * ((x - x0) |^ (n + 1))) / ((n + 1) !)) )
proof end;
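In conventional notation (an informal restatement added for readability, not part of the Mizar source), TAYLOR_1:33 is Taylor's theorem with the Lagrange form of the remainder: if f is (n+1)-times differentiable on ].(x0 - r),(x0 + r).[ and x lies in that interval, then there is an s with 0 < s < 1 such that
$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^k \;+\; \frac{f^{(n+1)}\bigl(x_0 + s\,(x - x_0)\bigr)}{(n+1)!}\,(x - x_0)^{n+1},$$
where the partial sum corresponds to (Partial_Sums (Taylor (f,].(x0 - r),(x0 + r).[,x0,x))) . n above.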
|
2022-05-20 16:50:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2918056547641754, "perplexity": 4188.718481716436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00106.warc.gz"}
|
https://www.varsitytutors.com/gre_subject_test_math-help/integration-by-parts
|
# GRE Subject Test: Math : Integration by Parts
## Example Questions
### Example Question #2 : Integrals
Integrate the following.
Explanation:
Integration by parts follows the formula: $\int u\,dv = uv - \int v\,du$
So, our substitutions will be and
which means and
Plugging our substitutions into the formula gives us:
Since , we have:
, or
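The specific integrands in these worked examples were lost in extraction; as a generic illustration of the same technique (not the original problem), take $\int x e^{x}\,dx$ with $u = x$ and $dv = e^{x}\,dx$, so that $du = dx$ and $v = e^{x}$:
$$\int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = x e^{x} - e^{x} + C.$$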
### Example Question #3 : Integrals
Evaluate the following integral.
Explanation:
Integration by parts follows the formula: $\int u\,dv = uv - \int v\,du$
In this problem we have so we'll assign our substitutions:
and
which means and
Including our substitutions into the formula gives us:
We can pull out the fraction from the integral in the second part:
Completing the integration gives us:
### Example Question #4 : Integrals
Evaluate the following integral.
Explanation:
Integration by parts follows the formula: $\int u\,dv = uv - \int v\,du$
Our substitutions will be and
which means and .
Plugging our substitutions into the formula gives us:
Look at the integral: we can pull out the and simplify the remaining as
.
We now solve the integral: , so:
### Example Question #5 : Integrals
Evaluate the following integral.
Explanation:
Integration by parts follows the formula: $\int u\,dv = uv - \int v\,du$.
Our substitutions are and
which means and .
Plugging in our substitutions into the formula gives us
We can pull outside of the integral.
Since , we have
|
2017-02-26 19:40:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017266035079956, "perplexity": 4761.526689534436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00133-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://quant.stackexchange.com/tags/option-pricing/new
|
# Tag Info
1
If there is no interest rate, the european and american put prices are the same for every strike. More details can be found in my answer for the question below: Longstaff Schwartz Algrorithm in R
1
If I am not mistaken, as you have already stated you have the long run relationship $$h\left(1-\beta-\alpha\gamma^2\right)=\omega + \alpha$$ I suggest you impose the following restrictions that should ensure $h_t$ to stay positive: \begin{align} \omega&>0\\ \alpha&>0\\ \beta &>0\\ \beta+\alpha\gamma^2&<1\\ \end{align} I ...
0
I’d say Hull’s “Options, Futures and Other derivatives”, whatever edition is fine.
0
If you have the path likelihood, you can try just writing that function and optimizing it directly. You might have some issues with the variance piece. This looks a lot like parameter inference for SDE, data-assimilation etc. I think if you write a proper likelihood function with priors for all parameters and same via some MCMC or MC (Gibbs) that is ...
0
I'm not sure what you're asking here quite, it seems to me that you are inputting a shorter time to maturity (from one day to one hour) and noticing a decrease in the contract value. Theta, the derivative of the option price with regards to time, is negative for for all options so this will always be the case no matter the time scale. Are you sure you have ...
7
I will be glad to help, but let me first advise you away from working on this topic until you have an academic position. This topic has been poison for me, but I am slogging on anyways. Before you use anything I do, get permission from your academic advisor. I have an unpublished article on options pricing, and I am proposing a new branch of stochastic ...
1
There were many attempts to switch from normal distribution to some other which can describe a market more accurate, i.e. distribution with fat-tails (e.g. Cauchy distribution or broader familly of so-called stable distributions). These distribution allow you to model Black swans. However, as you pointed out, there is a problem with calculation of mean ...
0
Re your first question: Use the implied volatility $\sigma_{imp}(X,\tau)$ for strike $X$ and expiry $\tau$. The option price, and hence the implied volatility, is driven by the options markets. Your option model should first and foremost be able to replicate observed option prices (hence, you plug in implied vols).
0
At an informal level, this is a system of two nonlinear equations in two unknowns, hence you can plot it in the $(r,\sigma)$ plane and see how many times they cross each other. At a more formal level, you can check if the Jacobian matrix is nonsingular everywhere. Nonsingularity of the Jacobian matrix (i.e., the determinant is not null) is a local argument ...
0
The basic difference is that for calculating the option's price within the classic BS-framework, you mostly use the historical vol (which is extracted from time series with a model). But this is only a theoretical (arbitrage free) price. At an option's exchange, you will see supply and demand meeting each other. Assuming perfect and efficient capital markets,...
2
Your question makes perfect sense; one has to define volatility. Volatility can be used interchangeably for a number of different metrics. Realized volatility - the observed volatility of the underlying asset (and btw, there are many quite different ways of measuring it). Implied volatility - the number you get when you run your option pricer in reverse. ...
1
I am not familiar with the deep mathematical intricacies of advanced no-arbitrage theory, an extremely technical subject. However, from reading literature reviews, I suspect this is an historical legacy of the research path that led to the most general versions of no-arbitrage theory. If you consider dividend-paying assets whose dividends are not ...
0
I figured out the source of the problem today and it was indeed the stupid mistake that comes along with working on something like this late at night. In the paper, the repport risk-neutral estime is $\tilde{\mu} = \mu + 1/\eta$. Since $\eta$ is so damn small and negative, I was grossly understating the value of $\mu$... I corrected the mistake directly ...
1
In the following, I am assuming the BS73 model and I assume that "ATM" means $$S = Xe^{-r\tau}$$ The pricing formula for a European call then becomes $$\tag{1} O\propto N\left(+\frac{1}{2}\sigma\sqrt{\tau}\right)-N\left(-\frac{1}{2}\sigma\sqrt{\tau}\right)$$ times some scaling factor which is irrelevant for our purpose. Clearly, $$Vega\equiv\frac{\... 3 Let me venture a guess. If I had to design a system from scratch, I would probably prefer GARCH processes to properly stochastic conditional volatility processes. The fact that one step ahead, the conditional volatility process is known makes filtering both trivial and faster. Moreover, this class of option pricing model affords me all the flexibility of ... 1 For shorting a stock what you would do is to have a margin account with your broker. As an example, in the American jurisdiction, according to Regulation T from the Federal Reserve, you would provide a 150% of the value of your position as initial margin (50% of additional marging). And the daily margining would be done against your margin account (both ... 1 Based on your computation, you can observe that the N’ term is always positive, between 0 and 0.4. As \sigma is always positive, you can focus on the -d_2 term. When d_2 > 0, i.e. call is ITM, delta has a negative sensitivity to volatility ; conversely for OTM call. That is in line with your remark. 0 There are many ways to estimate model parameters. In your case, if you're going to use only option data, I strongly suggest defining your pricing error in the (Black-Scholes-Merton) implied volatility space. Specifically, I would minimise this: \frac{1}{N} \sum_{i=1}^N \left( IV(C_{it}^\text{model}, \Theta) - IV(C_{it}^\text{observed}) \... 0 Perhaps someone assumed that there are 250 trading days per year for this time series instead of 252. 4 You do not really need the dynamics of S_t^2. You can simply apply your standard technique from risk-neutral pricing. The time zero price of a European-style contract with payoff X is given by$$V_0=e^{-rT}\mathbb{E}^\mathbb{Q}[X\mid\mathcal{F}_0].Thus, \begin{align*} V_0 &= e^{-rT}\mathbb{E}^\mathbb{Q}[\mathbb{1}_{\{S_T^2\geq K\}}] \\ &= e^{-... 2 These quotes may not be synchronised. Also if the probability of being below either strike is zero, then why would the price change, both options will be worth zero ? Just an extreme example that shows why not. The sensitivity of the price wrt strike is N(d_0)e^{-rT}<1. 3 It is not a contradiction, we are looking at two different phenomena: The Vol Smile is about a comparison on two call options C_1 and C_2 at a point in time: S is the same for both options (and does not change!), but C_1 has strike K_1 and C_2 has strike K_2. To fix ideas let's say K_2 > K_1. Then:\Delta_2 < \Delta_1$$and$$IV_2&...
Top 50 recent answers are included
|
2020-04-09 15:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916937112808228, "perplexity": 762.6774830320817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371858664.82/warc/CC-MAIN-20200409122719-20200409153219-00233.warc.gz"}
|
https://rosschurchley.com/blog/line-polar-graphs-characterization-and-recognition/
|
Line-polar graphs: characterization and recognition
This paper is the result of the research term I took as an undergraduate in the summer of 2009. It studies the edge versions of the monopolar and polar partition problems, and presents a linear-time solution to both.
Line-polar graphs: characterization and recognition. SIAM Journal on Discrete Mathematics 25(3). Ross Churchley and Jing Huang (2011). A graph is polar if its vertex set can be partitioned into $A$ and $B$ in such a way that $A$ induces a complete multipartite graph and $B$ induces a disjoint union of cliques (i.e., the complement of a complete multipartite graph). Polar graphs naturally generalize several classes of graphs such as bipartite, cobipartite, and split graphs. The problem of recognizing polar graphs is NP-complete in general. However, it has been shown to be polynomial for several classes of graphs, including cographs and chordal graphs.
In this paper, we study the problem of recognizing graphs whose line graphs are polar. It turns out that the core part of this problem lies in determining whether the edge set of a graph admits a partition $(A, B)$ so that $A$ induces a $P_3$-free subgraph (i.e., a matching) and $B$ induces a $P_4$-free subgraph. We give a structural characterization of such graphs. The characterization enables us to devise an $O(n)$ time algorithm to solve the stated recognition problem.
I am grateful to NSERC for funding my work with a Undergraduate Student Research Award, and to my supervisor and coauthor Jing Huang.
|
2023-02-06 08:55:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.630486011505127, "perplexity": 267.5400231350295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00003.warc.gz"}
|
https://www.cuemath.com/ncert-solutions/q-1-exercise-2-2-fractions-and-decimals-class-7-maths/
|
# Ex 2.2 Q1 Fractions-and-Decimals-Solutions NCERT Maths Class 7
## Question
Which of the drawing $$(a)$$ to $$(d)$$ shows:
i) \begin{align}2 \times \frac{1}{5}\end{align}
ii) \begin{align}2 \times \frac{1}{2}\end{align}
iii) \begin{align}3 \times \frac{2}{3}\end{align}
iv) \begin{align}3 \times \frac{1}{4}\end{align}
## Text Solution
What is Known?
Fractions and Drawings
What is unknown?
Matching of fractions with shaded part of the drawings.
Reasoning:
Matching can be easily done by comparing the fractions and shaded areas.
Steps:
i) \begin{align}2 \times \frac{1}{5}\end{align} matches with $$(d)$$ since each of the two circles is divided into five parts and one part of each circle is shaded.
\begin{align} 2 \times \frac{1}{5} = \frac{1}{5} + \frac{1}{5} = \frac{2}{5} \end{align}
ii) \begin{align}2 \times \frac{1}{2}\end{align}matches with $$(b)$$ as one half of both the drawings is shaded.
\begin{align} 2 \times \frac{1}{2} = \frac{1}{2} + \frac{1}{2} = \frac{2}{2} = 1 \end{align}
iii) \begin{align}3 \times \frac{2}{3}\end{align}matches with $$(a)$$ since two third of the three circles is shaded.
\begin{align} 3 \times \frac{2}{3} = \frac{2}{3} + \frac{2}{3} + \frac{2}{3} = \frac{6}{3} = 2 \end{align}
iv) \begin{align}3 \times \frac{1}{4}\end{align}matches with $$(c)$$ since one fourth of three squares is shaded.
\begin{align} 3 \times \frac{1}{4} = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} = \frac{3}{4} \end{align}
|
2020-08-04 14:47:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 24, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 2929.5227500829233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.94/warc/CC-MAIN-20200804131928-20200804161928-00165.warc.gz"}
|
https://math.stackexchange.com/questions/2998034/showing-differentiability-of-gx-begincases-fracfxx-textx-neq0
|
# Showing differentiability of $g(x)=\begin{cases}\frac{f(x)}{x},&\text{$x\neq0$}\\f'(0),&x=0\end{cases}$ given that $f(0)=0$
Let $$f$$ be a twice-differentiable function of $$\mathbb R$$ with $$f(0)=0$$. Define $$g(x)=\begin{cases}\frac{f(x)}{x},&x\neq0\\f'(0),&x=0\end{cases}$$ Prove that $$g$$ is a differentiable function of $$x \in \mathbb R$$.
I tried using the difference quotient around $$0$$ to get
$$g'(x)=\lim_{\epsilon \rightarrow 0} \frac{g(\epsilon)-g(0)}{\epsilon}=\lim_{\epsilon \rightarrow 0} \left( \frac{f(\epsilon)}{\epsilon^2} - \frac{f'(0)}{\epsilon}\right)$$ but this doesn't seem to be of much use. Apparently the problem can be solved using Taylor series, but I fail to see how.
By Taylor's theorem: $$f(x)=f(0)+f'(0)x+f''(0)\frac{x^2}{2}+r(x)x^2$$ $$f(x)=f'(0)x+\frac{f''(0)}{2}x^2+r(x)x^2$$ So $$\frac{f(x)}{x}=f'(0)+\frac{f''(0)}{2}x+r(x)x$$ So $$g$$ is continuous at $$0$$. Let's calculate $$g'(0)$$: $$g'(0)=\lim_{h \to 0} \frac{g(h)-g(0)}{h}$$ $$g'(0)=\lim_{h \to 0} \frac{\frac{f''(0)}{2}h+r(h)h}{h}$$ $$g'(0)=\lim_{h \to 0} \frac{f''(0)}{2}+r(h)$$ $$g'(0)=\frac{f''(0)}{2}$$
• Thank you! Is the observation that $g$ is continuous at $0$ especially important (for instance, if $g$ (or any other function) were somehow discontinuous at $0$, could it still be possible in theory to "find" a derivative at $0$ as we did? If so, would this derivative be extraneous?) – Tiwa Aina Nov 14 '18 at 9:53
• $r(x)$ should be $r(h)$ in the computation for $g'(0)$. – egreg Nov 14 '18 at 10:07
• @TiwaAina differentiability requires continuity. If $\lim_{h \to 0} g(x) \neq g(0)$, then the quotient is not in the $\frac{0}{0}$ form, but in the $\frac{something}{0}$ form, which is divergent. – Botond Nov 14 '18 at 10:12
• Thank you @egreg! – Botond Nov 14 '18 at 10:13
$$\lim_{x\to 0} \frac {f(x) -xf'(0)} {x^{2}} =\frac {f''(0)} 2$$ by L'Hopital's Rule.
• @Botond Thanks. That was a typo. Corrected the answer. – Kavi Rama Murthy Nov 14 '18 at 9:52
• Thank you! I thought that I did something wrong. – Botond Nov 14 '18 at 9:54
• @KaviRamaMurthy Would you mind showing the calculations? For some reason L'Hôpital gave me $\frac{f'(x)-f'(0)}{2x}$. – Tiwa Aina Nov 14 '18 at 9:55
• You have got it right up to this stage. Note that you still have $\frac 0 0$ form. If you apply L'Hopital's Rule again you get the answer. @TiwaAina – Kavi Rama Murthy Nov 14 '18 at 9:58
• L'Hospital's Rule can be applied only once in a useful manner here. And then one can use definition of derivative to get $f''(0)/2$. – Paramanand Singh Nov 15 '18 at 14:50
Use the Taylor development at $$0$$, $$f(x)=xf'(0)+x\cdot \mathcal{O}(x)$$ implies that
$$\displaystyle{{g(x)-g(0)}\over{x}}={{f'(0)+\mathcal{O}(x)-f'(0)}\over x}$$ where $$\displaystyle\lim_{x\rightarrow 0}\mathcal{O}(x)=0$$, this implies that $$g$$ is differentiable at $$0$$ and its differentiable on $$\mathbb{R}$$.
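A quick symbolic sanity check of the conclusion $g'(0) = f''(0)/2$ on a sample function (an illustrative sketch, not part of the original answers; it assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) - 1               # sample twice-differentiable f with f(0) = 0
g0 = sp.diff(f, x).subs(x, 0)   # g(0) = f'(0) = 1

# difference quotient of g at 0, where g(x) = f(x)/x for x != 0
g_prime_0 = sp.limit((f / x - g0) / x, x, 0)

print(g_prime_0)                          # 1/2
print(sp.diff(f, x, 2).subs(x, 0) / 2)    # f''(0)/2 = 1/2, as claimed
```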
|
2019-10-15 05:57:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9570770263671875, "perplexity": 279.9694970162745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00354.warc.gz"}
|
https://proofwiki.org/wiki/Coprimality_Criterion
|
# Coprimality Criterion
## Theorem
In the words of Euclid:
Two unequal numbers being set out, and the less being continually subtracted in turn from the greater, if the number which is left never measures the one before it until an unit is left, the original numbers will be prime to one another.
## Proof
Let the less of two unequal numbers $AB, CD$ be continually subtracted from the greater, such that the number which is left over never measure the one before it till a unit is left.
We need to show that $AB$ and $CD$ are coprime.
Suppose $AB, CD$ are not coprime.
Then some natural number $E > 1$ will divide them both.
Let some multiple of $CD$ be subtracted from $AB$ such that the remainder $AF$ is less than $CD$.
Then let some multiple of $AF$ be subtracted from $CD$ such that the remainder $CG$ is less than $AF$.
Then let some multiple of $CG$ be subtracted from $FA$ such that the remainder $AH$ is a unit.
Since, then, $E$ divides $CD$, and $CD$ divides $BF$, then $E$ also divides $BF$.
But $E$ also divides $BA$.
Therefore $E$ also divides $AF$.
But $AF$ divides $DG$.
Therefore $E$ also divides $DG$.
But $E$ also divides the whole $DC$.
Therefore $E$ also divides the remainder $GC$.
But $CG$ divides $FH$.
Therefore $E$ also divides $FH$.
But $E$ also divides the whole $FA$.
Therefore $E$ also divides the remainder, that is, the unit $AH$.
But $E > 1$ so this is impossible.
Therefore, from Book $\text{VII}$ Definition $12$: Relatively Prime, $AB$ and $CD$ are relatively prime.
$\blacksquare$
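As a concrete illustration (not part of Euclid's text), take the unequal numbers $13$ and $8$ and continually subtract the less from the greater in turn:
$$13 - 8 = 5, \qquad 8 - 5 = 3, \qquad 5 - 3 = 2, \qquad 3 - 2 = 1.$$
No remainder measures the one before it until a unit is left, so $13$ and $8$ are prime to one another.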
## Historical Note
This theorem is Proposition $1$ of Book $\text{VII}$ of Euclid's The Elements.
|
2020-08-06 08:01:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7426404356956482, "perplexity": 467.03593615808853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736883.40/warc/CC-MAIN-20200806061804-20200806091804-00593.warc.gz"}
|
https://codeforces.com/problemset/problem/1106/A
|
A. Lunar New Year and Cross Counting
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Lunar New Year is approaching, and you bought a matrix with lots of "crosses".
This matrix $M$ of size $n \times n$ contains only 'X' and '.' (without quotes). The element in the $i$-th row and the $j$-th column $(i, j)$ is defined as $M(i, j)$, where $1 \leq i, j \leq n$. We define a cross appearing in the $i$-th row and the $j$-th column ($1 < i, j < n$) if and only if $M(i, j) = M(i - 1, j - 1) = M(i - 1, j + 1) = M(i + 1, j - 1) = M(i + 1, j + 1) =$ 'X'.
The following figure illustrates a cross appearing at position $(2, 2)$ in a $3 \times 3$ matrix.
X.X
.X.
X.X
Your task is to find out the number of crosses in the given matrix $M$. Two crosses are different if and only if they appear in different rows or columns.
Input
The first line contains only one positive integer $n$ ($1 \leq n \leq 500$), denoting the size of the matrix $M$.
The following $n$ lines illustrate the matrix $M$. Each line contains exactly $n$ characters, each of them is 'X' or '.'. The $j$-th element in the $i$-th line represents $M(i, j)$, where $1 \leq i, j \leq n$.
Output
Output a single line containing only one integer number $k$ — the number of crosses in the given matrix $M$.
Examples
Input
5
.....
.XXX.
.XXX.
.XXX.
.....
Output
1
Input
2
XX
XX
Output
0
Input
6
......
X.X.X.
.X.X.X
X.X.X.
.X.X.X
......
Output
4
Note
In the first sample, a cross appears at $(3, 3)$, so the answer is $1$.
In the second sample, no crosses appear since $n < 3$, so the answer is $0$.
In the third sample, crosses appear at $(3, 2)$, $(3, 4)$, $(4, 3)$, $(4, 5)$, so the answer is $4$.
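A straightforward solution sketch in Python (illustrative, not an official editorial solution): scan every interior cell and check its four diagonal neighbours.

```python
import sys

def count_crosses(grid):
    n = len(grid)
    count = 0
    # 1 < i, j < n in the 1-based statement means 1 <= i, j <= n - 2 here (0-based)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (grid[i][j] == 'X' and
                    grid[i - 1][j - 1] == 'X' and grid[i - 1][j + 1] == 'X' and
                    grid[i + 1][j - 1] == 'X' and grid[i + 1][j + 1] == 'X'):
                count += 1
    return count

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    grid = data[1:1 + n]
    print(count_crosses(grid))

if __name__ == '__main__':
    main()
```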
|
2021-01-20 13:05:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9703115224838257, "perplexity": 307.33972427565413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703520883.15/warc/CC-MAIN-20210120120242-20210120150242-00733.warc.gz"}
|
https://scholar.uwindsor.ca/etd/1521/
|
## Electronic Theses and Dissertations
1994
Dissertation
Ph.D.
#### Department
Mechanical, Automotive, and Materials Engineering
#### Supervisor
Watt, Daniel F.
#### Keywords
Engineering, Materials Science.
CC BY-NC-ND 4.0
#### Abstract
The present work aims at understanding particle matrix interactions in SiC reinforced aluminium metal matrix composites (MMCs) by means of computer simulation. Firstly, to explore the basic role of hard particles, the stress field around a spherical SiC particle, the stress and the energy gathering capabilities of particle, interfacial characteristics, and the particle size effect have been examined by applying and extending Eshelby's classic approach. Secondly, a new method has been developed to calculate the inhomogeneity problem with an arbitrary shaped particle. This method combines boundary integral equations with a sequence of cutting, straining, and welding procedures to numerically acquire stress and strain distribution at an inhomogeneity. Thirdly, an elastic-plastic FEA has been used to investigate the plastic behaviour of the matrix (i.e. plastic relaxation and plastic accumulation) and its effect on the stress transfer and the stress concentration. Fourth, the influence of the volume fraction, the particle shape, the particle clustering, the particle size, and thermally induced residual stresses on deformation characteristics of Al/(SiC)$\sb{\rm P}$ MMCs has been studied by using FEA and applying the concept of the Flower-Watt unit cell. Fifth, the ductility of MMCs has been discussed. It has been found that the major distinctions between MMCs and unreinforced alloys are the mechanisms of the stress transfer to the particles, the enhanced work hardening in the matrix, and the significant contribution of the triaxial stress to the stored strain energy. These characteristics of the MMCs give them their high strength, high stiffness and low ductility. Source: Dissertation Abstracts International, Volume: 56-01, Section: B, page: 0464. Supervisor: Daniel F. Watt. Thesis (Ph.D.)--University of Windsor (Canada), 1994.
|
2018-03-22 06:12:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35674452781677246, "perplexity": 3944.3536701140793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00190.warc.gz"}
|
https://zbmath.org/?q=an%3A1260.68164
|
# zbMATH — the first resource for mathematics
A tighter upper bound for random MAX $$2$$-SAT. (English) Zbl 1260.68164
Summary: Given a conjunctive normal form $$F$$ with $$n$$ variables and $$m=cn$$ $$2$$-clauses, it is interesting to study the maximum number $$\max F$$ of clauses satisfied by all the assignments of the variables (MAX $$2$$-SAT). When $$c$$ is sufficiently large, the upper bound of $$f(n,cn)=\mathbb{E}(\max F)$$ of random MAX $$2$$-SAT had been derived by the first-moment argument. In this paper, we provide a tighter upper bound $$(3/4)cn+g(c)cn$$ also using the first-moment argument but correcting the error items for $$f(n,cn)$$, and $$g(c)=(3/4)\cos((1/3)\times\arccos((4\ln 2)/c-1))-3/8$$ when considering the $${\varepsilon}^{3}$$ error item. Furthermore, we extrapolate the region of the validity of the first-moment method is $$c>2.4094$$ by analyzing the higher order error items.
##### MSC:
68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.)
68Q25 Analysis of algorithms and problem complexity
MAX-2-SAT
|
2021-02-25 06:17:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.772408664226532, "perplexity": 5227.283455913447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00014.warc.gz"}
|
https://crypto.stackexchange.com/questions/34689/how-to-construct-a-collision-resistant-hash-function-that-is-not-a-one-way-funct
|
# How to construct a collision resistant hash function that is not a one-way function?
How to construct a CRHF (collision resistant hash function) that is not a OWF (one-way function)? Not sure but I think it probably needs another CRHF?
It is going to be pretty hard to achieve collision resistance without one-wayness. Indeed, negation of one-wayness means that for a given output, you can find a corresponding input. So a collision is easily obtained by simply choosing a random input m, hashing it into output x, then finding a preimage m' for the obtained output x. The only way for such a method not to work is to have an output space at least as large as the input space, so that, with high probability, the input m' is identical to m. However, since hash functions have a much larger input space than output space (e.g. SHA-256 output space has size $2^{256}$, but its input space has size $2^{18446744073709551616}-1$, which is substantially greater), chances are that the m' you get from leveraging the non-one-wayness will be distinct from the m you started with, and that yields a collision.
• "negation of one-wayness means that for" the output for randomly chosen input, you have non-negligible probability of finding "a corresponding input. " Thus, the "only way for such a method not to work is to have" [[size of input space] divided by [size of output space]] be non-negligible. (Although, when that quotient is negligible, even second-preimage resistance will imply onewayness.) – user991 Apr 19 '16 at 23:55
Thomas Pornin already explained why such a thing is not usually possible, but I would like to quote a graphic from Rogaway and Shrimpton's "Cryptographic Hash-Function Basics: Definitions, Implications, and Separations for Preimage Resistance, Second-Preimage Resistance, and Collision Resistance" (pdf):
The dotted arrow from Collision resistance to Preimage resistance means that the implication depends on the message and hash sizes. You will find the exact notion in Theorem 7 of the paper.
By "one way function" do you mean preimage resistant, or do you mean that the function doesn't ever reveal the input?
Assume H(x) is a collision-resistant function. Let L(x) = the last 256 bits of x. Then, let
G(x) = H(x) || L(x)
That is, G(x) is the concatenation of a collision-resistant hash of x, and the last 256 bits of x.
Now, over all possible inputs, this is almost always preimage resistant. However, for a special subset of inputs (those where all but the last 256 bits are some fixed known value), preimages are trivial to find. And for any input x, G(x) leaks the low 256 bits of the input.
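A small sketch of the construction just described (illustrative only; the names G, H, L follow the answer above, and SHA-256 stands in for the collision-resistant H):

```python
import hashlib

def G(x: bytes) -> bytes:
    """G(x) = H(x) || L(x): SHA-256 of x followed by the last 32 bytes (256 bits) of x."""
    return hashlib.sha256(x).digest() + x[-32:]

# A collision for G forces a collision for SHA-256 (both halves must match),
# so G inherits collision resistance, yet it is clearly not one-way on short
# inputs: for len(x) <= 32 bytes the whole input can be read off the output.
msg = b'attack at dawn'
out = G(msg)
print(out[32:])   # b'attack at dawn' -- the input leaks verbatim
```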
A Cryptographic hash function as described in the literature has 3 criteria:
1. Preimage resistance: Given $H,y$, it is "hard" to find an $x$ with $H(x)=y$
2. Second Preimage resistance: Given $H,x,$ it is hard to find $x'\neq x$ with $H(x')=H(x)$
3. Collision resistance. It is hard to find 2 $x,y$ with $H(x)=H(y)$
The very definition used (the first one, and the weakest!) has a notion of one-wayness.
|
2019-06-25 16:20:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5481700897216797, "perplexity": 1165.7219408553772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999853.94/warc/CC-MAIN-20190625152739-20190625174739-00537.warc.gz"}
|
https://www.appropedia.org/Redwood_Discovery_Museum_brachistochrone
|
The Redwood Discovery Museum in Eureka requested a Brachistochrone exhibit from our team of engineering students at Cal Poly Humboldt. Our team successfully designed, tested, and built an exhibit that demonstrates the Brachistochrone question in a fun and educational way.
## Background
The Redwood Discovery Museum, located in Eureka, CA, is geared towards presenting scientific knowledge in a fun and engaging way to children of all age groups. The client required a functional demonstration of the Brachistochrone question and contracted Team Schmotron of Cal Poly Humboldt's Engineering: Introduction to Design Fall 2017 class to design the exhibit.
## Problem Statement and Criteria
The objective of this project is to design and build a functional Brachistochrone exhibit and present the concept in a fun and educational method; while maintaining a sturdy, child-safe structure that is both aesthetically pleasing and engaging.
## Description of final project
Photos and Descriptions of the Brachistochrone Exhibit.
## Testing Results
A Prototype of the Dual-Track Design
The results of the design prototypes led to the creation of a fully functional final design of the Brachistochrone exhibit. The final design is an educational tool capable of demonstrating the concept of the Brachistochrone path of quickest descent in an engaging, informative, and fun manner.
The initial track prototype, shown to the right, proved to be successful for use with balls and cars and was incorporated into the final design. The heights of the walls were slightly too tall in the prototype and were scaled down in the final design to allow balls to roll down the center gap without touching the walls.
The end gate prototype proved to be successful in accurately detecting the winning vehicle. A foamboard prototype was created to test the circuitry for the end gate. Components were then placed into the foamboard prototype, two tracks worth of circuitry was installed for testing. On the final design the circuitry is scaled up to four tracks and is able to reliably detect the winning vehicle, whether it be a car or a ball.
## How to build
Construction of Track
Visuals of Tracks
Figure 5-1 Example of Possible Curve Layout
For the construction of the tracks one will need to cut all three sheets of plywood into 6 half inch 4'x4' sheets of plywood by cutting each of the 4'x8' sheets of plywood in half. One will then need to use a computer equipped with designing software such as AutoCAD or Autodesk inventor. On these programs one needs to replicate the dimensions of each 4'x4' sheet of plywood including the width. On these programs one will then sketch the desired pair of slopes similar to those of Figure 5-1 making sure that only one pair of slopes of a track is that of a cycloid. You should have a design similar to Figure 5-8. Once you have these drawn in a design program they will need to be exported as a DWG or a DXF file so that the shapes of the tracks can be cut out by a CNC machine. Once all these shapes are cut out and you have 4 pairs of identical quadrilaterals, you will want to join each of these pairs to another one with a half inch gap in-between them as shown in Figure 5-8. These four tracks will then be given walls that start after the 1.5 inch space center at the centerline. All four built tracks then will be constrained to a total width of 15". Within these bounds the tracks will be spaced evenly and then attached to the base which is another 4'x4' sheet of plywood. Wood glue and screws will need to be used as necessary. The end product of the tracks on the base will look like that of shown in Figure 5-8.
Construction of Start
Level Pull Start Mechanism
The starting mechanism can be constructed with a door hinge and some plywood. A hinge links the rotating door/projectile housing to the body of the Brachistochrone. The lever should be at least a foot in length in order to give enough torque for an easy open. Since each path starts at a different angle, the projectile housing must be measured for each individual track. If the measurements are calculated and the pieces of wood cut to dimensions, then they can be put together easily with a drill and screws.
Construction of Finish
Figure 5-12 Arduino Finish Gate Base Plate
The finish line placement mechanism is constructed from wood, with a sturdy metal housing used for the gate portion. Begin by measuring the width of the metal box to be used as the gate housing. Cut a piece of ½" plywood that spans the width of the metal box to be used as a base and extends 6" out of the back and 2" out of the front. Draw a line across this board directly in the center of the 6" metal housing, which is 9" from the back and 5" from the front. Drill 4 holes on the center line, starting at 2.4" from the side, then space each hole 3.5" from the previous hole; see Figure 5-12.
Figure 5-13 Finish Gate Front View
Construct a box to hold the Arduino, wiring, and LEDs, which will be mounted above the base board. The box is to be positioned so that the infrared receiver and infrared transmitter are located within 4 inches of each other. Drill out 4 holes to mount the infrared transmitters, located above the holes for the infrared receivers. Drill out 4 additional holes on the front face plate of the box for the indicator LEDs. Cut eight 14" long, ½" wide strips of ½" plywood to be used as track, and eight 14" long, ¾" wide strips of ½" plywood to be used as track wall; see Figure 5-13. Glue the track strips so that there is a ½" gap in between, with the hole for the infrared receiver positioned in the center of the ½" gap. Glue the track wall strips on the outside of each track strip to form the dual-use track.
Wire up all the electrical components for the circuit based on the provided schematic seen in Figure 5-14. I highly recommend creating a test circuit first using a solderless breadboard before creating the final circuit. Begin the final circuit by soldering 330Ω resistors to the anode of each indicator LED (the anode is the short leg on the indicator LED), then solder ground wire to the other side of the resistor, and color-coded wire to each cathode. The ground wire will lead to the main ground on the Arduino and each cathode wire will lead to PIN10, 11, 12, 13 on the Arduino. Next solder 220Ω resistors to the cathode of each IR LED (the cathode is the long leg on the IR LED), with ground wire soldered to the other side of the resistors. Solder color-coded wire to the other leg of each IR LED and run them to PIN6, 7, 8, 9 on the Arduino. Solder four 330kΩ resistors to the positive lead wire from the 5V port on the Arduino, then solder the other side of each 330kΩ resistor to both the wire connected to the cathode of each IR phototransistor and to each wire that connects to PIN2, 3, 4, 5 on the Arduino (the cathode is the long leg on the IR phototransistors). Make sure to implement a quick disconnect on the wires that travel between the lower base plate (for the IR phototransistors) and the upper box that houses the Arduino, in order to be able to separate the two halves for maintenance; also leave extra slack on these wires to make for easy connection and disconnection. Label each side of the quick disconnects to ensure proper reconnection. Power the Arduino with an AC-to-DC adapter rated between 7.5–12 V DC and 600–1000 mA.
Figure 5-14 Wiring Schematic for Ending Mechanism
Programming the Arduino:
In order to interface with and program the Arduino, the Arduino IDE must be downloaded onto a computer. The Arduino IDE can be found at https://www.arduino.cc/en/Main/Software and is available for Windows, Mac and Linux. Download and install the Arduino IDE. After the Arduino IDE is installed, connect the Arduino UNO microcontroller to the computer via a USB cord. Once the Arduino is connected, open the Arduino IDE program. At the top of the program, open the drop-down menu titled "Tools", select "Board", and then select "Arduino/Genuino Uno". Next, from the same "Tools" drop-down menu, select "Port" and then select the port that the Arduino is connected to. If there are multiple ports listed, make a note of the ports, disconnect the Arduino and see which port disappeared, then reconnect the Arduino and select that port. The Arduino is now ready to be programmed. Copy and paste the Arduino code, located after this paragraph, into the Arduino IDE program. Select the check mark at the top of the screen to verify the code. The sketch will compile and should not result in any errors; if any errors are detected, make sure that you copied the code exactly, including all of the code located between the bolded "Arduino Code Starts Here" and "Arduino Code Ends Here" statements. Next, click on the right-facing arrow icon located next to the verify button, which will transfer the code to the Arduino microcontroller. The Arduino should now be programmed. Disconnect the Arduino from the computer, verify that all wires are connected to the correct pins on the Arduino as instructed by the schematic, then connect a power source to the Arduino; the circuit should now function. The code for the Arduino is displayed below; copy and paste all of the code after the bolded "Arduino Code Starts Here" statement and before the bolded "Arduino Code Ends Here" statement.
Arduino Code Starts Here
/*
 * Finish Line Detector
 * Lights up LED 1, 2, 3, or 4 depending on which sensor is tripped first
 * Both LEDs light up in the case of a tie
 */

// Indicator LED pins
const int ledPin1 = 13; const int ledPin2 = 12; const int ledPin3 = 11; const int ledPin4 = 10;
// IR emitter LED pins
const int irledPin9 = 9; const int irledPin8 = 8; const int irledPin7 = 7; const int irledPin6 = 6;
// IR phototransistor (sensor) pins
const int sensorPin1 = 2; const int sensorPin2 = 3; const int sensorPin3 = 4; const int sensorPin4 = 5;

// Change this number to increase or decrease the time the LED stays lit
const int TIMEOUT = 4000; // milliseconds - time LED stays lit

// Setup runs once, at start; input and output pins are set
void setup() {
  pinMode(sensorPin1, INPUT);
  pinMode(sensorPin2, INPUT);
  pinMode(sensorPin3, INPUT);
  pinMode(sensorPin4, INPUT);
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
  pinMode(ledPin3, OUTPUT);
  pinMode(ledPin4, OUTPUT);
  pinMode(irledPin9, OUTPUT);
  pinMode(irledPin8, OUTPUT);
  pinMode(irledPin7, OUTPUT);
  pinMode(irledPin6, OUTPUT);
}

// Called repeatedly
void loop() {
  // Turn on the IR emitter LEDs
  digitalWrite(irledPin9, HIGH);
  digitalWrite(irledPin8, HIGH);
  digitalWrite(irledPin7, HIGH);
  digitalWrite(irledPin6, HIGH);

  // Get the sensor status (these reads were missing in the extracted listing;
  // assumed wiring: a sensor pin reads HIGH when its IR beam is interrupted)
  int status1 = digitalRead(sensorPin1);
  int status2 = digitalRead(sensorPin2);
  int status3 = digitalRead(sensorPin3);
  int status4 = digitalRead(sensorPin4);

  // Set the output LED to match the sensor
  digitalWrite(ledPin1, status1);
  digitalWrite(ledPin2, status2);
  digitalWrite(ledPin3, status3);
  digitalWrite(ledPin4, status4);

  if (status1 == HIGH || status2 == HIGH || status3 == HIGH || status4 == HIGH) {
    // A sensor was tripped, show the results until timeout
    delay(TIMEOUT); // Wait for timeout
  }
}
Arduino Code Ends Here
## Maintenance
The Arduino chip components of the finish line are rated to last many years of constant use; if, however, they do require replacement, the price will vary depending on the component. The Arduino UNO microcontroller can be found for $12, and the LEDs and phototransistors tend to be $0.25 each. Repainting the design will depend on how much damage the exterior takes from the kids playing with it. If the damage is minor, painting over it will only take a few minutes and cost around $5; repainting the whole design will take longer. The hinge attached to the starting mechanism may have to be replaced depending on the amount of use it gets: kids could pull the handle too hard, and over time it may become loose and need replacing. This is a simple replacement that will only take about 5 minutes with a drill and cost around $5.00. Cleaning may need to be done depending on how much debris gets on the tracks and makes it difficult for the vehicles to go down the track; a quick sweep or dusting will suffice and only takes about 5 minutes. Hot Wheels cars will need to be replaced as they are lost, stolen, or broken; these can easily be ordered online for nearly a dollar per car. Golf balls vary in price depending on what kind you buy, but the cheapest can be found online for about $0.50 each.
### Schedule
This is when to maintain what.
Daily
Clean/dust as needed.
Weekly
Clean/dust as needed.
Replace Lost/Stolen cars and balls.
Monthly
Clean/dust as needed.
Replace Lost/Stolen cars and balls.
Yearly
Replace hinges on start mechanism.
Replace burned out LED.
Touch up paint as needed.
Every 2 years
Replace burned out LED.
Touch up paint as needed.
## Troubleshooting
Troubleshooting involved testing multiple types of vehicles on the ramps, as well as the starting mechanisms, ending mechanisms, and xylophone ending sounds.
## Discussion and next steps
We hope that children will be fascinated by the Brachistochrone and that it will encourage them to learn about physics and mathematics. We also hope that its educational properties can be used by educators in the surrounding area.
## Suggestions for future changes
The Arduino-controlled finish line placement mechanism can be upgraded to monitor and display the descent time of the vehicles. The indicator LEDs would need to be replaced with digital displays, and a closed circuit connected to the start mechanism would need to be added to the Arduino. When the start mechanism is raised it will break the circuit. The Arduino would be programmed to begin a timer when it detects the broken circuit; the timer would stop when the Arduino senses a vehicle cross the infrared beam. The time is then shown on the digital display. The Arduino code will need to be updated for this feature.
## References
1. American Academy of Pediatrics. (2012). Pediatric First Aid for Caregivers and Teachers (PedFACTS). Jones & Bartlett Publishers. Sep 21. Burlington, MA -Pg 72 Figure 1
2. Arkema. (2006). "Acrylic Sheet Forming Manual" Plexiglas by Arkema. <http://www.plexiglas.com/export/sites/plexiglas/.content/medias/downloads/sheet-docs/plexiglas-forming-manual.pdf> (Sept. 27, 2017).
3. Brookfield, Gary. (2010). "Yet Another Elementary Solution of the Brachistochrone Problem." Mathematics Magazine, 83(1), 59-63. doi:10.4169/002557010x480017.
4. Building Speed. (2017). "Safety vs. Speed: 2017 Changes to the Chassis." Building Speed, 4 July 2017, buildingspeed.org/blog/2017/02/17/safety-vs-speed-2017-changes-to-the-chassis/.
5. C, Robby. "Hot Wheels Racing League." Hot Wheels Racing League, 1 Jan. 1970, www.racehotwheels.com/.
6. Chen, R. (2011). Liquid Crystal Displays: Fundamental Physics and Technology (Wiley Series in Display Technology). Hoboken: Wiley. 126-132.
7. Countryside. (2016). "How to Choose A Leg Table, Pedestal Table, or Trestle Table - By Countryside." Countryside Amish Furniture, <www.countrysideamishfurniture.com/blog/entry/how-to-choose-a-leg-table-pedestal-table-or-trestle-table> (Sept. 23, 2017).
8. Dabe. Brachistochrone curve. (Aug 8, 2016) Thingiverse. MakerBot Industries, LLC https://www.thingiverse.com/thing:1708492
9. Davenport, Tonia. (2009) Plexi Class: Cutting-edge Projects in Plastic. First ed., North Light Books, Cincinnati, Pg 14.
10. Elliott, Sara. (2009). "5 Long-Lasting Building Materials." 1. Iron and Steel - 5 Long-Lasting Building Materials | HowStuffWorks, HowStuffWorks, 30 Mar. 2009, home.howstuffworks.com/home-improvement/construction/materials/5-long-lasting-building-materials5.htm.
11. Forest Service. (1974). Wood Handbook: Wood as an Engineering Material. U.S. Department of Agriculture, Forest Products Laboratory, Forest Service, 1974.
12. Gabriela R. Sanchis (Elizabethtown College). Convergence in August of 2007. Historical Activities for Calculus - Module 1: Curve Drawing Then and Now. Mathematical association of America
13. Haws, L., & Kiser, T. (1995). "Exploring the Brachistochrone Problem." The American Mathematical Monthly, 102(4), 328-336. doi:10.2307/2974953.
14. Hill Q. Lauren. (2015). "Understanding the Attention Spans of Elementary Aged Students." Studydog. <http://web.archive.org/web/20190914144152/http://www.laurenqhill.com:80/understanding-the-of-attention-spans-of-elementary-aged-students/> (Jan. 8, 2015).
15. Johnson, Nils P. (2004). "The Brachistochrone Problem." The College Mathematics Journal, 35(3), 192-197. doi:10.2307/4146894.
16. Kenneth E. Moyer, and B. von Haller Gilmer, "The Concept of Attention Spans in Children," The Elementary School Journal 54, no. 8 (Apr., 1954): 464-466.
17. Maxsun. (2017). "Table Bases: A How-to Guide to Find Your Ideal Table Base for Your Restaurant!" Maxsun Group, <www.maxsungroup.com/table-bases-guide-find-ideal-table-base-restaurant/> (Sept. 23, 2017).
18. Pack. (2009). "Pack 240 Pinewood Derby Track." Pack 240 - Pinewood Derby Track Starting Gate, www.oocities.org/heartland/plains/8340/start.htm.
|
2022-08-19 17:16:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1780456155538559, "perplexity": 3952.2645787964657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00544.warc.gz"}
|
https://crypto.stackexchange.com/questions/34631/given-a-public-key-encryption-and-signature-scheme-define-a-new-primitive
|
# Given a public-key encryption and signature scheme, define a new primitive
I'm new to cryptography and am currently working on the following question:
Say you are given a public-key encryption scheme $\Pi_e=(\mathcal{K}_e,\mathcal{E},\mathcal{D})$ and a signature scheme $\Pi_s=(\mathcal{K}_s,\mathrm{sign},\mathrm{verify})$, and you are asked to define a new primitive $\overline{\Pi}=(\overline{\mathcal{K}},\overline{\mathcal{E}},\overline{\mathcal{D}})$ that achieves both privacy and authenticity. How would you do it? Give at least one construction and a convincing informal argument of security. Discuss any operational/implementation issues that might arise if you want to support “large” plaintexts, and how you would address them in your construction.
As a second part, what if you want to expand the API for $\overline{\Pi}$ to accommodate associated data? How would you modify your construction(s)? Keep in mind that associated data is “context” information that does not require privacy protection but does require authenticity protection.
I'm working on a solution to the first part using CTR mode, because it seems to achieve the requirements well, especially with supporting large plaintexts. Does this seem like a reasonable approach?
Also, I'm having an issue coming up with an answer to part two. Can CTR mode support associated data? If not, is there a better way to go about this?
• Using CTR isn't the answer they're looking for (for one, it isn't a public-key encryption/signature scheme, which is what they're asking for). Instead, how would you use $\mathcal{K}_e$ and $\mathcal{K}_s$ within your new primitive? – poncho Apr 18 '16 at 1:45
• I couldn't do your assigned work but simply want to say that I have written a code that employs RSA to do encryption on plaintexts directly and with integrity check as well as signature of the sender. See Ex. 3S of Appendix in s13.zetaboards.com/Crypto/topic/7234475/1/ – Mok-Kong Shen Apr 18 '16 at 11:51
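To make the hint in the first comment concrete, below is a minimal illustrative sketch of one possible composition (sign-then-encrypt). The functions `kgen_e`, `enc`, `dec`, `kgen_s`, `sign`, and `verify` are hypothetical stand-ins for the given $\Pi_e$ and $\Pi_s$, not a real library API; this is not a vetted construction, and it glosses over the hybrid (KEM/DEM) step one would want for large plaintexts and the binding of associated data.

```python
# Sketch only: kgen_e/enc/dec and kgen_s/sign/verify are hypothetical placeholders
# for the algorithms of the given schemes Pi_e and Pi_s; they are not a real library.

def keygen_bar(kgen_e, kgen_s):
    """Key generation for the combined primitive: one key pair from each scheme."""
    pk_e, sk_e = kgen_e()   # receiver's encryption key pair
    vk_s, sk_s = kgen_s()   # sender's signing key pair
    return (pk_e, sk_e), (vk_s, sk_s)

def enc_bar(enc, sign, pk_e, sk_s, message):
    """Sign-then-encrypt: the signature supplies authenticity, the encryption privacy."""
    sigma = sign(sk_s, message)
    return enc(pk_e, (message, sigma))   # the signature travels inside the ciphertext

def dec_bar(dec, verify, sk_e, vk_s, ciphertext):
    """Decrypt, then verify; reject the plaintext if the signature does not check out."""
    message, sigma = dec(sk_e, ciphertext)
    if not verify(vk_s, message, sigma):
        raise ValueError("authentication failed")
    return message
```

For large plaintexts, the usual approach is hybrid: encrypt a fresh symmetric key under $\mathcal{E}$ and the payload under a symmetric authenticated cipher; associated data can be covered by including it in what is signed (or in the AEAD's associated-data input) while sending it in the clear.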
|
2019-10-18 23:14:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2137979418039322, "perplexity": 681.122536649973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986685915.43/warc/CC-MAIN-20191018231153-20191019014653-00361.warc.gz"}
|
https://grindskills.com/use-of-nested-cross-validation/
|
# Use of nested cross-validation
Scikit Learn’s page on Model Selection mentions the use of nested cross-validation:
>>> clf = GridSearchCV(estimator=svc, param_grid=dict(gamma=gammas),
... n_jobs=-1)
>>> cross_validation.cross_val_score(clf, X_digits, y_digits)
Two cross-validation loops are performed in parallel: one by the GridSearchCV estimator to set gamma and the other one by cross_val_score to measure the prediction performance of the estimator. The resulting scores are unbiased estimates of the prediction score on new data.
From what I understand, clf.fit will use cross-validation natively to determine the best gamma. In that case, why would we need to use nested cv as given above? The note mentions that nested cv produces “unbiased estimates” of the prediction score. Isn’t that also the case with clf.fit?
Also, I was unable to get the clf best estimates from the cross_validation.cross_val_score(clf, X_digits, y_digits) procedure. Could you please advise how that can be done?
Nested cross-validation is used to avoid optimistically biased estimates of performance that result from using the same cross-validation to set the values of the hyper-parameters of the model (e.g. the regularisation parameter, $C$, and kernel parameters of an SVM) and performance estimation. I wrote a paper on this topic after being rather alarmed by the magnitude of the bias introduced by a seemingly benign short cut often used in the evaluation of kernel machines. I investigated this topic in order to discover why my results were worse than other research groups using similar methods on the same datasets, the reason turned out to be that I was using nested cross-validation and hence didn’t benefit from the optimistic bias.
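For reference, a self-contained version of the snippet using the current scikit-learn API (the old `cross_validation` module has since been replaced by `model_selection`) might look as follows; the gamma grid and the `cv=5` folds are arbitrary example choices. The outer `cross_val_score` gives the unbiased performance estimate, the per-fold tuned estimators (and hence the selected gammas) can be recovered with `cross_validate(..., return_estimator=True)`, and a final `clf.fit(X, y)` on all the data gives the model you would actually deploy.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, cross_val_score, cross_validate
from sklearn.svm import SVC

X_digits, y_digits = load_digits(return_X_y=True)

# Inner loop: GridSearchCV tunes gamma by cross-validation on each training split.
param_grid = {"gamma": [1e-4, 1e-3, 1e-2, 1e-1]}
clf = GridSearchCV(SVC(kernel="rbf"), param_grid, n_jobs=-1)

# Outer loop: estimates the performance of the whole tuning-plus-fitting procedure.
outer_scores = cross_val_score(clf, X_digits, y_digits, cv=5)
print("nested CV accuracy: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))

# To inspect the gamma chosen in each outer fold:
res = cross_validate(clf, X_digits, y_digits, cv=5, return_estimator=True)
print([est.best_params_ for est in res["estimator"]])

# The model to deploy is refit on all the data (its own inner CV picks gamma):
clf.fit(X_digits, y_digits)
print("selected on full data:", clf.best_params_)
```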
|
2022-12-05 09:10:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38038554787635803, "perplexity": 1083.6600198560282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00813.warc.gz"}
|
http://mathoverflow.net/revisions/113937/list
|
Given a scheme $X$ with structure sheaf $\mathcal{O}_X$, we can associate to each $\mathcal{O}_X$-module $\mathcal{F}$ its global sections $\Gamma(\mathcal{F})$, which gets the structure of a $\Gamma(\mathcal{O}_X)$-module.
Suppose $\mathcal{F}$ is a vector bundle on $X$. Is then $\Gamma(\mathcal{F})$ a projective $\Gamma(\mathcal{O}_X)$-module of finite rank?
Here are two examples, where it works:
• If $X$ is an affine scheme, then a quasi-coherent sheaf is a vector bundle iff its global sections are a projective $\Gamma(\mathcal{O}_X)$-module [of finite rank, as Fred Rohrer pointed out].
• If $X$ is a projective scheme over some field $K$ and $\mathcal{F}$ an arbitrary coherent sheaf on $X$, then $\Gamma(\mathcal{F})$ is a free module of finite rank over the ring of global sections $\Gamma(\mathcal{O}_X) \cong K$. Under some restrictions, we can here also replace the field $K$ by a more general ring.
I would guess that it works in general if the natural map $X \to \operatorname{Spec} \Gamma(\mathcal{O}_X)$ is locally free of finite rank [edit: and surjective] or something like this. Probably this fails in general, but I do not yet have a (reasonable) counter-example. I am mainly interested here in the case of a quasi-projective scheme over a (not-necessarily algebraically closed) field of characteristic zero, so I would not only be interested in counter-examples but also in positive answers to my question for a reasonable subclass of schemes.
2013-05-22 01:39:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9494314193725586, "perplexity": 144.1718520722258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701063060/warc/CC-MAIN-20130516104423-00015-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.sarthaks.com/1279279/assertion-during-zygotene-chromosmes-bivalent-stage-reason-bivalent-number-chromosomes
|
# Assertion: During zygotene, chromosomes show bivalent stage. Reason: Bivalent is half the number of chromosomes
Assertion: During zygotene, chromosomes show bivalent stage.
Reason: Bivalent is half the number of chromosomes.
A. If both the Assertion & Reason are true & the Reason is a correct explanation of the Assertion.
B. If both the Assertion & Reason are true but the Reason is not a correct explanation of the Assertion.
C. If the Assertion is true but the Reason is false.
D. If both the Assertion & Reason are false.
During zygotene, because of the pairing of the homologous chromosomes, the nucleus appears to contain half the number of chromosomes. Each unit is a bivalent composed of two homologous chromosomes.
|
2022-10-02 12:57:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5076037645339966, "perplexity": 8639.878070304372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00115.warc.gz"}
|
https://www.degruyter.com/view/j/ijb.2017.13.issue-1/ijb-2016-0045/ijb-2016-0045.xml?rskey=C9yJNC&result=1
|
# The International Journal of Biostatistics
Ed. by Chambaz, Antoine / Hubbard, Alan E. / van der Laan, Mark J.
Volume 13, Issue 1
# Characterizing Highly Benefited Patients in Randomized Clinical Trials
Vivek Charu (Department of Biostatistics, Johns Hopkins University, 615 N Wolfe St, Baltimore, MD 21205, USA) / Paul B. Rosenberg / Lon S. Schneider / Lea T. Drye / Lisa Rein / Constantine G. Lyketsos / Constantine E. Frangakis (corresponding author, Department of Biostatistics, Johns Hopkins University, 615 N Wolfe St, Baltimore, MD 21205, USA)
Published Online: 2017-05-20 | DOI: https://doi.org/10.1515/ijb-2016-0045
## Abstract
Physicians and patients may choose a certain treatment only if it is predicted to have a large effect for the profile of that patient. We consider randomized controlled trials in which the clinical goal is to identify as many patients as possible that can highly benefit from the treatment. This is challenging with large numbers of covariate profiles, first, because the theoretical, exact method is not feasible, and, second, because usual model-based methods typically give incorrect results. Better, more recent methods use a two-stage approach, where a first stage estimates a working model to produce a scalar predictor of the treatment effect for each covariate profile; and a second stage estimates empirically a high-benefit group based on the first-stage predictor. The problem with these methods is that each of the two stages is usually agnostic about the role of the other one in addressing the clinical goal. We propose a method that characterizes highly benefited patients by linking model estimation directly to the particular clinical goal. It is shown that the new method has the following two key properties in comparison with existing approaches: first, the meaning of the solution with regard to the clinical goal is the same, and second, the value of the solution is the best that can be achieved when using the working model as a predictor, even if that model is incorrect. In the Citalopram for Agitation in Alzheimer’s Disease (CitAD) randomized controlled trial, the new method identifies substantially larger groups of highly benefited patients, many of whom are missed by the standard method.
## 1 Introduction
Patients often differ in their response to treatment, and characterizing this variation is crucial for the development of evidence-based, personalized treatment plans. In practice, treatments may be costly or may pose harm to patients (e.g. through adverse side effects or drug toxicity) and clinicians must balance treatment recommendations with each patient’s probability of response. Thus, there is considerable interest in the development and refinement of statistical methods capable of identifying patients with high versus low average treatment effect. For example, a recent randomized controlled trial in psychiatry evaluated the efficacy of citalopram for reducing agitation in patients with probable Alzheimer’s disease [1]. Although the estimated average treatment effect in the trial was positive, an adverse cardiac event occurred in a small proportion of people, and the treatment was associated with slight cognitive worsening. Additionally, only 40% of participants assigned to citalopram had a moderate or marked response compared to 26% of those assigned to placebo, and thus it would clearly be desirable to identify strong predictors of response. In this setting, the preferred clinical goal is to target the treatment to patients who are predicted to experience a large clinical benefit. In addition to providing practical recommendations regarding who should be targeted for treatment, identifying patients whose response to citalopram is large could help clarify the biological mechanisms for citalopram’s action in this population.
Several approaches have been employed to estimate heterogeneity in treatment effects in the setting of randomized controlled trials. One general approach is to posit outcome regression models in which the effect of treatment assignment on response can differ depending on baseline covariates. A major limitation of this approach is that the posited outcome regression model may be misspecified. Zhang et al. [2] (see also Zhao et al. [3], Rubin and van der Laan [4]) adapt this regression framework and develop a robust method for identifying an optimal treatment regime, which, when followed, maximizes the empirical treatment effect in the study population. However, this optimal treatment regime does not necessarily identify highly benefited patients; indeed, it assigns treatment to a patient even when their expected treatment effect is small, as long as it is positive. In addition, one cannot directly adapt Zhang et al.’s [2] method to identify highly respondent subgroups of patients, for the following reason. That method maximizes the empirical treatment effect in the entire study population. If instead the goal is to maximize the treatment effect over particular subsets of patients, there will almost always be some small subsets that appear to achieve a treatment effect higher than a particular threshold chosen. Therefore, parameter estimation in this setting is ill-defined because it reduces to selecting the subgroup with the highest estimated treatment effect, regardless of the size of this subgroup. This issue illustrates that balance needs to be addressed between the magnitude of the treatment effect in a particular subgroup and the number of patients in that subgroup.
Cai et al. [5] proposed an alternative method for estimating heterogeneity in the treatment effect. In a two-stage approach, the first stage posits a working regression model (fitted by maximum likelihood, for example), and estimates each subject’s model-based expected response under each treatment arm, and hence the model-based subject’s effect is estimated as the difference between the two estimates. In a second stage, the approach uses the model-based effect estimate as a scalar index score for grouping patients. Then, a local likelihood approach is used to obtain non-parametric estimates of the treatment effect within each strata of the index score. This approach produces consistent estimates of the treatment effect within strata defined by the estimated regression model. However, because the working model in the first stage of the procedure may be misspecified, maximum likelihood or ordinary least squares estimators of model parameters may not be the best approach (even in large samples) to characterize the largest subgroup possible whose empirical treatment effect is greater than some pre-specified threshold.
In this paper we propose a method that characterizes large subgroups who experience a large treatment effect. Section 2 formulates the goal and further reviews the existing approaches. Section 3 develops the new approach. The essence of this approach is that it connects the estimation of parameters from the working model directly to the clinical goal – to identify large subgroups that experience a large empirical treatment effect. We show theoretically, and also by application to the CitAD trial throughout, that the proposed approach characterizes different highly benefited groups that can be much larger than those characterized by the existing approach. Section 4 concludes with remarks.
## 2.1 Problem and limitations of existing methods
For the general framework, consider a study of a random sample of $n$ individuals from a population and for each of whom we can measure a vector of covariates ${X}_{i}$, which we assume have finite although possibly many levels. Each individual can be assigned a standard treatment $t=0$, in which case we would measure a potential outcome ${Y}_{i}\left(t=0\right)$, or a new treatment $t=1$, in which case we would measure a potential outcome ${Y}_{i}\left(t=1\right)$ [6]. Actual assignment ${\text{Treat}}_{i}\left(=0,1\right)$ is assigned at random, that is, ${\text{Treat}}_{i}$ is independent of $\left({Y}_{i}\left(0\right),{Y}_{i}\left(1\right),{X}_{i}\right)$, and then the outcome ${Y}_{i}:={Y}_{i}\left({\text{Treat}}_{i}\right)$ corresponding to the actual assignment is observed. Based on the information of the study, the overall population average potential outcome $E\left\{{Y}_{i}\left(t\right)\right\}$ can be estimated without further assumptions by the sample analogue $E\left({Y}_{i}\mid {\text{Treat}}_{i}=t\right)$ of the average observed outcomes among those assigned ${\text{Treat}}_{i}=t$.
Even if the new treatment is the best (on average, or for a particular patient, Zhang et al. [2]), its effect may be small and its administration associated with burden or adverse effects. Then, for subsequent clinical practice, physicians may wish to only give the new treatment to patients for whom the above study suggests the effect is large enough. To do this, for example, in the psychiatric trial we discuss in Section 2.2, the physicians wanted to characterize a subgroup of patients based on covariates, for whom the treatment effect is, on average, greater than a chosen clinically important value, say ${\text{eff}}_{\text{min}}$. Taking here the absolute difference as the causal effect of interest, the physicians’ goal is as follows:
$\begin{aligned}& \text{find a group of patients, highly benefited, that maximizes the proportion } \operatorname{pr}\{X_i \in \text{highly benefited}\},\\ & \text{subject to having large average effect, } E\{Y_i(1) - Y_i(0) \mid X_i \in \text{highly benefited}\} \ge \text{eff}_{\min}.\end{aligned}$ (1)
If it is possible to estimate well the conditional $\text{effect}(X_i) := E\{Y_i(1) - Y_i(0) \mid X_i\}$ for all $X_i$ without further assumptions, then the goal eq. (1) is easily addressable. To see this, consider, for any indicator function $in(X_i)$, the quantity $\text{effect}\{in(X_i)=1\} := E\{Y_i(1) - Y_i(0) \mid in(X_i)=1\}$. We prove the following result in the Appendix.
Result 1. Among all indicator functions $in(X_i)$ such that $\text{effect}\{in(X_i)=1\} \ge \text{eff}_{\min}$, the indicator that maximizes the size $\operatorname{pr}\{in(X_i)=1\}$ is of the form
$in_0(X_i) := 1 \text{ if and only if } \text{effect}(X_i) \ge k,$
where $k$ is a constant determined by $\text{effect}\{in_0(X_i)=1\} = \text{eff}_{\min}$, provided that such a $k$ exists.
In other words, the largest group $\text{highly benefited}$ satisfying eq. (1) is $\{x : in_0(x)=1\}$, and is obtained if we start including in the group patients from the larger down to the smaller values of the conditional $\text{effect}(X_i)$, and stop when including the covariate with the next smallest value of $\text{effect}(X_i)$ in $\text{highly benefited}$ would first produce an average effect $E\{Y_i(1) - Y_i(0) \mid X_i \in \text{highly benefited}\}$ smaller than $\text{eff}_{\min}$.
More realistically, when the levels of $X_i$ are many, the conditional effects are not estimable without further assumptions, and the above direct approach is not feasible. An existing approach [5] mirrors the theoretical approach using a working model (see Figure 1, first two columns). Specifically, the existing approach in a first stage fits a parametric working model (which may not be correct), $\operatorname{pr}(Y_i(t) \mid X_i, \beta)$ $\bigl(= \operatorname{pr}(Y_i \mid X_i, \text{Treat}_i = t, \beta)$, by random assignment$\bigr)$, by the MLE $\hat{\beta}^{\text{mle}}$ or a solution to another standard estimating equation. Based on this fit, the approach obtains an initial, model-based estimate of the effect $E(Y_i \mid X_i, \text{Treat}_i = 1) - E(Y_i \mid X_i, \text{Treat}_i = 0)$ using
$\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}}) := E(Y_i \mid X_i, \text{Treat}_i = 1, \hat{\beta}^{\text{mle}}) - E(Y_i \mid X_i, \text{Treat}_i = 0, \hat{\beta}^{\text{mle}}).$ (2)
This approach can attempt to approximate goal eq. (1) by mimicking the theoretical solution given above, as follows: first, sort the covariates by the values of the estimated effects $\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}})$; then, start creating the set $\text{highly benefited}(\hat{\beta}^{\text{mle}})$ by cumulating $X_i$ from larger to smaller values of $\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}})$; and close the set $\text{highly benefited}(\hat{\beta}^{\text{mle}})$ when the empirical (non-parametric) estimated effect (difference in sample averages of treated minus control) in that set would stop being $\ge \text{eff}_{\min}$. This gives
$\text{highly benefited}(\hat{\beta}^{\text{mle}}) = \text{the largest-fraction set, over all values } e, \text{ of } \{X_i : \text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}}) \ge e\}$ (3)
such that the empirical treatment effect in the set is at least $\text{eff}_{\min}$. By largest-fraction set we mean a set that has the largest probability based on the empirical distribution of $X_i$ in the study.
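As a reading aid (not code from the paper), the second-stage construction in eq. (3) amounts to a simple greedy scan over patients ordered by their model-based predicted effects. In the sketch below, `pred`, `y`, `treat`, and `eff_min` are illustrative names for the vector of first-stage predictions, the observed outcomes, the treatment indicators, and the chosen threshold; handling of tied covariate profiles and of whole covariate cells is ignored for simplicity.

```python
import numpy as np

def highly_benefited(pred, y, treat, eff_min):
    """Greedy second stage of the two-stage approach (illustrative sketch only).
    Patients are added from the largest to the smallest predicted effect; the set
    is closed just before the empirical treated-minus-control effect would fall
    below eff_min."""
    order = np.argsort(-np.asarray(pred))      # largest predicted effect first
    selected = np.zeros(len(pred), dtype=bool)
    for i in order:
        trial = selected.copy()
        trial[i] = True
        t, c = trial & (treat == 1), trial & (treat == 0)
        if t.sum() > 0 and c.sum() > 0:        # effect estimable on the candidate set
            emp_effect = y[t].mean() - y[c].mean()
            if emp_effect < eff_min:
                break                          # adding this patient would violate the target
        selected = trial
    return selected                            # boolean mask of the highly benefited group
```

The proposed method of Section 3 then searches over the working-model parameter $\beta$ so that the set returned by this second stage is as large as possible, instead of fixing $\beta$ at the MLE.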
Figure 1:
Schematic representation of the theoretical solution, the existing approach, and the proposed approach, for a given ${\text{eff}}_{\text{min}}$.
A useful property of this approach, resulting from the empirical estimation at the second stage, is that the effect among the estimated highly benefited set in eq. (3) is approximately the desired clinical effect $\text{eff}_{\min}$, even if the working model is incorrect. Specifically, [5] show that, allowing for the working model to be incorrect, the estimator $\hat{\beta}^{\text{mle}}$ will converge to a value, say $\bar{\beta}^{\text{mle}}$, and the set $\text{highly benefited}(\hat{\beta}^{\text{mle}})$ will converge to
$\text{highly benefited}(\bar{\beta}^{\text{mle}}) = \text{the largest-probability set, over } e, \text{ of } \{X_i : \text{effect}^{\text{model}}(X_i, \bar{\beta}^{\text{mle}}) \ge e\}$
such that the effect within the set is at least $\text{eff}_{\min}$. Therefore, the empirical $\widehat{\text{effect}}\{\text{highly benefited}(\hat{\beta}^{\text{mle}})\}$, defined as the difference between the empirical averages of the highly benefited set assigned $\text{Treat}=1$ versus those assigned $\text{Treat}=0$, converges to at least the nominal effect $\text{eff}_{\min}$. The above assumes that $\text{effect}^{\text{model}}(X_i, \bar{\beta}^{\text{mle}})$ is not constant in $X_i$; if it is, then the convergence may not hold, for example, because the sets may be empty.
For a trial with small to moderate sample size, the set of patients $\text{highly benefited}(\hat{\beta}^{\text{mle}})$ may have a true effect that is smaller than the limit. For this reason, we can use a modified set $\text{highly benefited}^{\text{calib}}(\hat{\beta}^{\text{mle}})$ that uses a resampling method to calibrate its effect to the nominal $\text{eff}_{\min}$ (Appendix B).
A problem with the above approach, however, is that it still uses the estimate (e.g., MLE) of the working model as if the model were correct. In Section 3, we show that, by using a different estimation of the same working model, a different highly benefited group can be identified, which can be much larger than the one identified by the existing approach. First, however, we illustrate the existing approach using data from the Citalopram for Agitation in Alzheimer Disease Study (CitAD) [1].
## 2.2 A motivating example
CitAD was a randomized placebo-controlled trial designed to evaluate the efficacy of citalopram in reducing agitation in patients with probable Alzheimer’s disease [1]. The estimated average treatment effect was a 13.6% (se=7.1%) reduction in the probability of agitation symptoms in the citalopram versus the placebo group, as measured by the modified Alzheimer Disease Cooperative Study-Clinical Global Impression of Change Score (hereafter, mADCS-CGIC, Schneider et al. [7], Drye et al. [8]).
As agitation in Alzheimer’s disease (AD) is a heterogeneous clinical syndrome that encompasses many underlying pathologies, a secondary aim of the study was to characterize which patients were more likely to respond to citalopram, potentially elucidating which dysfunctional pathways might respond to citalopram. Characterizing heterogeneity in citalopram’s effect is also important because its use is associated with an adverse cardiac complication (long QT syndrome) and cognitive worsening, and a preferred clinical goal would be to target highly respondent patients for treatment [9]. We hypothesized that agitation in AD might involve disturbances in affective and/or executive control which might further reflect different disturbances in underlying brain circuits. One hypothesized type of agitation reflects affective disturbance, manifested by mood lability, irritability, anxiety, dysphoria, and/or other affective/mood symptoms. Another hypothesized type reflects agitation from loss of inhibitory control resulting in disinhibition, disorganization, apathy, or other clinical manifestations of loss of executive control. Given the substantial evidence for the involvement of serotonergic deficits in affective dysregulation in mood disorders, we hypothesized that participants with a primarily affective type of agitation would respond better to citalopram treatment. To this end, one of the authors (CGL) derived two categorical scales, the affective dysregulation scale (ADS, ranging from 0 to 7) and the executive dyscontrol scale (EDS, ranging from 0 to 6), where higher values indicate more dysfunction. These scales were derived by examining the CitAD dataset for items that appeared to be a priori associated with affective or executive dysregulation (see Appendix A for detailed derivation). Table 1 is a cross-tabulation of the number of patients in each arm of the study with different combinations of ADS and EDS scores at baseline.
Table 1
Patients falling in each ADS and EDS categories; values in red are patients assigned to the placebo group, values in blue are patients assigned to the treatment group. Values are shown for the 167 patients for whom outcome data were available.
Our goal here is to assess if there exist patient profiles, based on the ADS and EDS covariates, that experience a high citalopram versus placebo effect $\text{eff}_{\min}$, examining this question for $\text{eff}_{\min} = 30\%$, $35\%$, and $40\%$ (by comparison, the overall average was estimated at 13.6%). Table 1 shows that each cell is populated by a relatively small number (if any) of patients, so direct implementation of the theoretical approach described in Section 2.1 is not feasible.
To address the goal, consider first the approach of positing a working model, also described in Section 2.1. In particular, consider the logistic regression working models for the binary outcome ${Y}_{i}$, with value 1 signifying a reduction in agitation symptoms:
$\operatorname{logit} E(Y_i \mid \text{ADS}_i, \text{EDS}_i, \text{Treat}_i = 1, \beta) = \beta_{10} + \beta_{11}\,\text{ADS}_i + \beta_{12}\,\text{EDS}_i + \beta_{13}\,\text{ADS}_i \times \text{EDS}_i$
$\operatorname{logit} E(Y_i \mid \text{ADS}_i, \text{EDS}_i, \text{Treat}_i = 0, \beta) = \beta_{00} + \beta_{01}\,\text{ADS}_i + \beta_{02}\,\text{EDS}_i + \beta_{03}\,\text{ADS}_i \times \text{EDS}_i.$
In this first approach, the parameters $\beta$ were estimated by the MLE $\hat{\beta}^{\text{mle}}$, and $\text{effect}^{\text{model}}(X_i, \beta)$ in eq. (2) was estimated by $\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}})$. The latter takes 41 unique values, each corresponding to a non-empty cell in Table 1 (provided no two elements of $\hat{\beta}^{\text{mle}}$ are the same). Next, patients were ranked by their values of $\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}})$, and for each of the three values $\text{eff}_{\min} = 30\%$, $35\%$, and $40\%$, we first identified the uncalibrated set, say $\text{highly benefited}(\hat{\beta}^{\text{mle}}; \text{eff}_{\min})$, of the highly benefited patients based on the description in Section 2.1.
We evaluated the properties of these sets by conducting a simulation as described in Appendix B. First, we found that the true effects experienced by the uncalibrated sets were approximately 5% lower than their corresponding three nominal values. Then, for each nominal value, we searched for the value that the empirical effect should have in order that the simulated true effects be equal to the nominal. These resulting values were $35\%$, $40\%$, and $45\%$, respectively, and the corresponding sets, which we call $\text{highly benefited}^{\text{calib}}\{\hat{\beta}^{\text{mle}}; \text{eff}^{\text{emp}}(\text{eff}_{\min})\}$ in Appendix B, are shown in the top three panels of Figure 2.
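For concreteness (this is not the authors' code), the first stage of the standard approach for this working model could be carried out as in the sketch below, where `ads`, `eds`, `y`, and `treat` are assumed arrays holding the baseline scales, the binary outcome, and the assignment indicator; because the two arms have separate parameters, maximum-likelihood fitting reduces to one logistic regression per arm.

```python
import numpy as np
import statsmodels.api as sm

def fit_first_stage(ads, eds, y, treat):
    """MLE fit of the working model (one logistic fit per arm, illustrative sketch)
    and the resulting model-based effect prediction for each patient."""
    design = sm.add_constant(np.column_stack([ads, eds, ads * eds]))
    fit1 = sm.Logit(y[treat == 1], design[treat == 1]).fit(disp=0)
    fit0 = sm.Logit(y[treat == 0], design[treat == 0]).fit(disp=0)
    return fit1.predict(design) - fit0.predict(design)   # effect^model(X_i, beta_mle)
```

These predictions are then passed to the greedy second stage sketched after eq. (3) to form the uncalibrated highly benefited set.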
Figure 2:
ADS-EDS profile of patients (black contours) that have large treatment effect (30% in left panels, 35% in middle panels, and 40% in right panels), as found by the standard two-stage method (top panels) and by the new proposed method (bottom panels). Both methods are calibrated as described in Appendix B. The percents given in boxed rectangles are determined over 500 simulation samples of the process in Appendix B; and the intensity of the blue color of a particular ADS-EDS cell represents the proportion of times, over the same 500 samples, that the cell is included in the highly benefited group. The number provided in each cell displays the number of patients in the dataset in each category.
For example, the set $\text{highly benefited}^{\text{calib}}\{\hat{\beta}^{\text{mle}}; \text{eff}^{\text{emp}}(\text{eff}_{\min}=30\%)\}$ of patients who experience an average effect of 30% consists of the patients with $\text{EDS} \le 3$ and $\text{ADS} \ge 4$, or with $\text{EDS} \ge 4$ and $\text{ADS} \le 2$. This group is estimated to form 34% of the study population.
## 3 Proposed approach
The proposed approach is motivated by re-examining the parallelism that a better estimation approach should try to draw to the theoretical solution. In the theoretical solution (left column of Figure 1), the largest set $\text{highly benefited}$ is achieved by cumulatively including covariates based on the order of the true conditional effects $\text{effect}(X_i)$. The model-based approach of Section 2.1 tries to parallel this by, first, estimating the conditional effects based on the MLE of a model, $\hat{\beta}^{\text{mle}}$, and then cumulating these ordered effects, $\text{effect}^{\text{model}}(X_i, \hat{\beta}^{\text{mle}})$, as in eq. (3).
While the above set of patients does experience the desired effect $\text{eff}_{\min}$ in large samples, this is not, of course, the largest such set if the working model is incorrect. In fact, it is not even the largest achievable set when using the same working model. This is because, if the model is incorrect, the member of the model $(\hat{\beta}^{\text{mle}})$ that maximizes the (incorrect) likelihood does not necessarily have the invariance property with respect to the truth, and so it is not necessarily the same as the member of the model that achieves the largest set.
The proposed approach is to find the largest such set that can be achieved. To do this, the model should be left free at the first stage, so that one can consider all values of the parameter $\beta$ that can predict $\text{effect}(X_i)$ by $\text{effect}^{\text{model}}(X_i, \beta)$. Then,
1. for each value $\beta$ of the parameter, find
$\text{highly benefited}(\beta) = \text{the largest-fraction set, over } e, \text{ of } \{X_i : \text{effect}^{\text{model}}(X_i, \beta) \ge e\}$ (4)
such that the empirical effect within the set is at least $\text{eff}_{\min}$; then
2. find
$\text{highly benefited}(\hat{\beta}^{\text{best}}) = \text{the largest-fraction set, over } \beta, \text{ of } \text{highly benefited}(\beta),$ (5)
where $\text{highly benefited}(\beta)$ is as obtained in eq. (4).
By construction in eq. (5), the proposed set $\text{highly benefited}(\hat{\beta}^{\text{best}})$ is the largest possible set of the type in eq. (4) that can be achieved by using the working model, and so it is also at least as large as the one obtained in eq. (3) by the standard approach. Also by construction, the set $\text{highly benefited}(\hat{\beta}^{\text{best}})$ will converge to
$\text{highly benefited}(\bar{\beta}^{\text{best}}) = \text{the largest-probability set, over } e \text{ and } \beta, \text{ of } \{X_i : \text{effect}^{\text{model}}(X_i, \beta) \ge e\},$
such that the effect within the set is at least $\text{eff}_{\min}$, where $\bar{\beta}^{\text{best}}$ is the maximizer of the right-hand side of the last expression. Thus we have:
$\operatorname{pr}\{X_i \in \text{highly benefited}(\bar{\beta}^{\text{best}})\} \ge \operatorname{pr}\{X_i \in \text{highly benefited}(\bar{\beta}^{\text{mle}})\}.$
Moreover, with finitely many levels of $x$, the empirical effect, say $\widehat{\text{effect}}\{\text{highly benefited}(\bar{\beta}^{\text{best}})\}$, on the new highly benefited set converges, in large samples, to at least the nominal effect $\text{eff}_{\min}$, and the empirical proportion, say $\widehat{\operatorname{pr}}\{X_i \in \text{highly benefited}(\hat{\beta}^{\text{best}})\}$, converges to the probability $\operatorname{pr}\{X_i \in \text{highly benefited}(\bar{\beta}^{\text{best}})\}$. A formal proof of this result would be more involved, due in part to having to deal with the estimators of parameters within functions (such as empirical estimates of probabilities and effects), and also due to the appearance of non-smooth indicator functions in both the probability statement and the effect function. Nonetheless, this heuristic argument seems to suggest that, under some regularity conditions and in sufficiently large samples, the new method will correctly produce a larger set of highly benefited patients than the standard method.
In small to moderate samples, and as with empirical maximization of other objective functions (e.g., sums of squares), the above convergence happens, by construction, from values of the effect that can be larger than the nominal one. For this reason, it is better to consider a modified set $\text{highly benefited}^{\text{calib}}(\hat{\beta}^{\text{best}})$ that uses the resampling approach to calibrate to the nominal minimal effect (see Appendix B).
We evaluated the properties of this new method by a simulation analogous to that for the standard method of Section 2, as described in detail in Appendix B. We found that the true effects experienced by the uncalibrated sets of the new method were approximately 10% lower than their corresponding three nominal values. Then, for each nominal value, we searched for the value that the empirical effect should have in order that the simulated true effects be equal to the nominal. These three values were approximately $40\%$, $45\%$, and $50\%$, respectively, and the resulting sets, which we call $\text{highly benefited}^{\text{calib}}\{\hat{\beta}^{\text{best}}; \text{eff}^{\text{emp}}(\text{eff}_{\min})\}$ in Appendix B, are shown in the bottom three panels of Figure 2.
For example, the set $\text{highly benefited}^{\text{calib}}\{\hat{\beta}^{\text{best}}; \text{eff}^{\text{emp}}(\text{eff}_{\min}=30\%)\}$ of patients that experiences an average effect of 30% consists of the patients with $\text{EDS} \le 4$ and $\text{ADS} \ge 4$, together with the following (EDS, ADS) cells: (3,3), (4,3), (5,4), (6,4), as shown within the black contour of the bottom left panel of Figure 2. This group is estimated to form 56% of the study population. Therefore, even after adjusting for overfitting, the new method is estimated to characterize substantially larger groups of patients with high benefit.
## 4 Discussion
We have illustrated a new method of characterizing groups of patients with high benefit. We believe the new method can have important clinical implications regarding which patients are targeted for treatment, as well as important methodological implications for characterizing such groups in observational studies.
The example of CitAD illustrates the potential of these methods. The ADS and EDS covariates are indeed predictive of effect regardless of whether standard methods or the new methods presented above are used, but the proportion of participants is much higher with the new method. For example, using a 30% effect size as the minimum difference of clinical significance, 34% of participants fall into ADS/EDS categories with clinically significant effects using standard methods compared to 56% with the new method. Thus, using ADS/EDS categories a clinician could identify 20% more patients with AD and agitation who would be predicted to have a clinically significant response to citalopram, an undoubtedly clinically meaningful difference. Given the potential toxicity of medications (for example, QTc prolongation observed with citalopram treatment in CitAD, [9]), identifying patients most likely to respond to drug represents a substantial improvement in maximizing benefit over risk. It is particularly impressive that ADS/EDS categories are so useful for predicting response because these subscales were derived from first principles, i.e. examining instruments at the item level and deriving the instruments pre hoc, independently of results, not as the result of cluster analytic techniques. This suggests the potential utility of applying these methods to other trials to improve clinicians’ ability to predict response to drug treatment.
A number of areas regarding the proposed method warrant further exploration. First, it is possible that the largest subgroup that, on average, has an effect larger than a constant may include finer subgroups with a negative effect. This is difficult to know, however, because a method that would search for this would also be subject to the difficulty of fitting effects given the high-dimensional $X$. Perhaps an expert’s opinion on whether the finer parts of the subgroup make sense would be useful. Second, making the clinical objective the same as the statistical objective function to maximize, while scientifically desirable, is prone to overfitting. Here, we addressed this in part by calibration through simulation. Additional work is needed to develop accessible inference methods for confidence intervals, and for finding if and how a semiparametric efficient estimator can be achieved for the set $\text{highly benefited}(\bar{\beta}^{best})$, for example using the theory of van der Laan and Rubin [10] and van der Laan and Rose [11]. Further, one can build additional parsimony into the estimation by regularizing the objective function through adding a condition that, for example, the magnitude of the coefficients be restricted. Thus, the contribution of the proposed method is not in competition with regularization, but is, instead, to emphasize the change of the core objective function - from a statistical one (e.g., least squares or likelihood) to a clinically meaningful one such as the proportion of highly benefited patients. Working with this objective function analytically is not as straightforward because its complexity suggests it may not be convex. In practice we searched for maxima using simulated annealing.
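As a rough sketch only (not the authors' implementation), the simulated-annealing search over the coefficients might be organized as below. Here `objective(beta)` is a hypothetical callback supplied by the analyst, for instance returning the estimated proportion of highly benefited patients, or minus infinity when the empirical effect of the candidate subgroup falls below $\text{eff}_{\min}$, and `beta0` is assumed to be a NumPy array.

import numpy as np

def anneal(objective, beta0, iters=5000, t0=1.0, seed=1):
    # objective(beta) is a user-supplied callback (see the note above)
    rng = np.random.default_rng(seed)
    beta = beta0.copy()
    f = objective(beta)
    best, f_best = beta.copy(), f
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-6              # simple linear cooling schedule
        cand = beta + rng.normal(scale=0.1, size=beta.shape)
        f_cand = objective(cand)
        if f_cand > f or rng.random() < np.exp((f_cand - f) / t):
            beta, f = cand, f_cand                     # Metropolis-style acceptance
        if f > f_best:
            best, f_best = beta.copy(), f
    return best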
Usefully, the new method can be applied to also characterize highly benefited groups in observational studies. Specifically, if treatment assignment is ignorable [6] and the propensity score [12] is reliably estimable, then, in principle, methods similar to those presented here can be applied to the population of potential outcomes after adjusting through the propensity score. This would provide an alternative way of fitting, for example, a structural mean model [13, 14], where the coefficients are chosen to maximize the group of patients who benefit beyond a minimum effect desired by physicians and patients.
## References
1. Porsteinsson A, Drye L, Pollock B, Devanand D, Frangakis C, Ismail Z, et al. Effect of citalopram on agitation in Alzheimer disease: the CitAD randomized clinical trial. J Am Med Assoc 2014;311:682–91.
2. Zhang B, Tsiatis A, Laber E, Davidian M. A robust method for estimating optimal treatment regimes. Biometrics 2012;68:1010–8.
3. Zhao Y, Zeng D, Rush A, Kosorok M. Estimating individualized treatment rules using outcome weighted learning. J Am Stat Assoc 2012;107:1106–18.
4. Rubin D, van der Laan MJ. Statistical issues and limitations in personalized medicine research with clinical trials. Int J Biostat 2012;8(1):Article 18. DOI: 10.1515/1557-4679.1423.
5. Cai T, Tian L, Wong P, Wei L. Analysis of randomized comparative clinical trial data for personalized treatment selections. Biostatistics 2011;12:270–82.
6. Rubin D. Bayesian inference for causal effects: the role of randomization. Ann Stat 1978;6:34–58.
7. Schneider L, Olin J, Doody R, Clark C, Morris J, Reisberg B, et al. Validity and reliability of the Alzheimer’s disease cooperative study-clinical global impression of change. The Alzheimer’s disease cooperative study. Alzheimer Dis Assoc Disord 1997;11(Suppl 2):S22–S32.
8. Drye L, Ismail Z, Porsteinsson A, Weintraub D, Marana C, Pelton D, et al. Citalopram for agitation in Alzheimer’s disease: design and methods. Alzheimers Dement 2012;8:121–30.
9. Drye L, Spragg D, Devanand D, Frangakis C, Marano C, Meinert C, et al. Changes in QTc interval in the citalopram for agitation in Alzheimer’s disease (CitAD) randomized trial. PLoS One 2014;9:e98426.
10. van der Laan MJ, Rubin DB. Targeted maximum likelihood learning. Int J Biostat 2006;2:Article 11. DOI: 10.2202/1557-4679.1043.
11. van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data. New York: Springer, 2011.
12. Rosenbaum P, Rubin D. The central role of the propensity score in observational studies for causal effects. Biometrika 1983;70:41–55.
13. Robins JM. The analysis of randomized and non-randomized AIDS treatment trials using a new approach to causal inference in longitudinal studies. In: Sechrest L, Freeman H, Mulley A, editors. Health Service Research Methodology: A Focus on AIDS. Washington, DC; 1989:113–159.
14. Vansteelandt S, Joffe M. Structural nested models and g-estimation: the partially realized promise. Stat Sci 2014;20:707–31.
15. Alexopoulos G, Abrams R, Young R, Shamoian C. Cornell scale for depression in dementia. Biol Psychiatry 1988;23:271–84.
16. Levin H, High W, Goethe K, Sisson R, Overall J, Rhoades H, et al. The neurobehavioural rating scale: assessment of the behavioural sequelae of head injury by the clinician. J Neurol Neurosurg Psychiatry 1987;50:183–93.
17. Cummings J, Mega M, Gray K, Rosenberg-Thompson S, Carusi D, Gornbein J. The neuropsychiatric inventory: comprehensive assessment of psychopathology in dementia. Neurology 1994;44:2308–14.
18. Cohen-Mansfield J. Conceptualization of agitation: results based on the Cohen-Mansfield agitation inventory and the agitation behavior mapping instrument (with discussion). Int Psychogeriatrics 1996;8:309–15.
## Appendix A: Characterization of the largest highly benefited subgroup
We prove the result for the case where $X_i$ has finite though possibly many levels. Consider the indicator $in_0(X_i)$ and the constant $k$ defined in Result 1; and consider any other indicator $in(X_i)$ whose subgroup size is strictly larger than that of $in_0$, i.e., suppose $P := \text{pr}\{in(X_i)=1\} > P_0 := \text{pr}\{in_0(X_i)=1\}$. Then it is useful to consider the quantity
$q(x) := \left\{\frac{in_0(x)}{P_0} - \frac{in(x)}{P}\right\}\left\{\text{effect}(x) - k\right\}p(x),$
where $\text{effect}(x)$ is as defined in Section 2.1 and $p(x) = \text{pr}(X_i = x)$. Specifically, $q(x)$ is non-negative because if $in_0(x)=1$, both of the first two terms are non-negative; and if $in_0(x)=0$, both of the first two terms are non-positive. Moreover, $q(x)$ is strictly positive with positive probability because, when $\text{effect}(x) > k$ (and $in_0(x)=1$), then the first two terms are strictly positive regardless of $in(x)$. Now, if $q(x)$ is summed over $x$, we get
$0 < \sum_{x} q(x) = E_0 - k - E + k, \quad \text{so} \quad E < E_0$
where $E_0$ and $E$ are the effects $\text{effect}\{in_0(X_i)=1\}$ and $\text{effect}\{in(X_i)=1\}$, respectively, within the subgroups defined by the indicators. Thus, if $E \ge E_0$ we must have $P \le P_0$. By assumption, $E_0 = \text{eff}_{\min}$, and thus the maximum size is attained at $P_0$ by $in_0$.
## Appendix B: Evaluation and calibration of highly benefited sets through simulation
For both the standard and the proposed methods for characterizing a highly benefited subgroup, we evaluated the properties of the estimated sets based on $X_i$ – derived through fitting data $D^{obs}$ from a trial – when they are applied to the target population from which the data are sampled. In order to do this, for example, for the proposed method and for a nominal minimum effect $\text{eff}^{nom}$ equal to, say, $30\%$, we did the following.
1. Treat $D^{obs}$ as the target source population, and obtain a bootstrap data sample $D^{rep}$ with replacement.
2. For $D^{rep}$, derive $\text{highly benefited}(\hat{\beta}^{best}; \text{eff}^{emp}=30\%; D^{rep})$ in order to reach a minimum empirical effect $\text{eff}^{emp}=30\%$ on the data $D^{rep}$, as described in Section 3 (here, the explicit notation for the empirically achieved minimum effect and for the data $D^{rep}$ is important).
3. Apply $\text{highly benefited}(\hat{\beta}^{best}; \text{eff}^{emp}; D^{rep})$ back to the target source population $D^{obs}$, and find the true effect on these patients, which, based on the notation of Section 2.1, is $\widehat{\text{effect}}\{\text{highly benefited}(\hat{\beta}^{best}; \text{eff}^{emp}; D^{rep})\}$.
4. Repeat steps (1)–(3) and find the true average effect $E\left[\widehat{\text{effect}}\{\text{highly benefited}(\hat{\beta}^{best}; \text{eff}^{emp}; D^{rep})\} \mid D^{obs}\right]$, averaged over the simulated data sets $D^{rep}$ given $D^{obs}$.
5. If the true effect as verified in step 4 is different from the nominal $30\%$, then search, using a bisection method, for the value that the empirical effect in step 2 should be required to take, so that the true effect in step 4 is equal to the nominal. Call that empirical effect $\text{eff}^{emp}(\text{eff}^{nom})$ (this function can be different between the proposed method and the standard method).
6. For the data $D^{obs}$, define the calibrated highly benefited group for the nominal $\text{eff}^{nom}=30\%$ effect as $\text{highly benefited}^{calib}(\hat{\beta}^{best}; \text{eff}^{nom}; D^{obs}) := \text{highly benefited}\{\hat{\beta}^{best}; \text{eff}^{emp}(\text{eff}^{nom}); D^{obs}\}$.
We used the same approach to evaluate, and to produce the calibration for, the standard method as well.
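As a rough illustration only (not the authors' code), the calibration loop of steps (1)–(6) might be organized as follows. Here `fit_subgroup(D, eff_emp)` and `true_effect(rule, D)` are hypothetical placeholders for the subgroup-fitting and effect-evaluation routines of Section 3, the data are assumed to sit in a NumPy array, and a simple grid search stands in for the bisection search of step 5.

import numpy as np

def calibrate(D_obs, fit_subgroup, true_effect, eff_nom=0.30, n_boot=200):
    # Sketch of the Appendix B calibration; fit_subgroup and true_effect are placeholders.
    rng = np.random.default_rng(0)
    n = len(D_obs)
    def avg_true_effect(eff_emp):
        effects = []
        for _ in range(n_boot):
            D_rep = D_obs[rng.integers(0, n, n)]        # step 1: bootstrap sample
            rule = fit_subgroup(D_rep, eff_emp)         # step 2: fit on D_rep
            effects.append(true_effect(rule, D_obs))    # step 3: apply back to D_obs
        return float(np.mean(effects))                  # step 4: average over replicates
    grid = np.linspace(eff_nom, eff_nom + 0.30, 31)     # step 5: grid search instead of bisection
    eff_emp = min(grid, key=lambda e: abs(avg_true_effect(e) - eff_nom))
    return fit_subgroup(D_obs, eff_emp)                 # step 6: calibrated set on D_obs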
## Appendix C: Derivation of the affective and executive scales
Items were derived from medical/psychiatric history and from neuropsychiatric instruments including the Cornell Scale for Depression in Dementia (CSDD, Alexopoulos et al. [15]), Neurobehavioral Rating Scale (NBRS, Levin et al. [16]), Neuropsychiatric Inventory (NPI, Cummings et al. [17]), and Cohen-Mansfield Agitation Inventory (CMAI, Cohen-Mansfield [18]). The ADS consisted of 7 items: (1) family history of mood disorder; (2) personal history of mood disorder; (3) Depression, defined as CSDD score $\ge$ 6 or NBRS depression item $\ge$ 3 or NPI Depression score $\ge$ 4; (4) Mood lability, defined as NBRS mood lability item $\ge$ 3; (5) Anxiety, defined as NBRS anxiety $\ge$ 3 or NPI Anxiety $\ge$ 4; (6) Irritability, defined as NPI Irritability $\ge$ 4; and (7) Somatic, defined as NBRS somatic symptoms item $\ge$ 3. Each ADS item was scored as 0 or 1 and summed for a total range of 0 to 7. The EDS consisted of 6 items: (1) Inattention, defined as NBRS inattention item $\ge$ 3; (2) Aberrant Motor Behavior, defined as NPI Aberrant Motor Behavior $\ge$ 4 or CMAI aberrant motor behavior item $\ge$ 4; (3) Disinhibition, defined as NPI Disinhibition $\ge$ 4 or CMAI disinhibition $\ge$ 4; (4) Apathy, defined as NPI Apathy $\ge$ 4 or NBRS apathy item $\ge 3$; (5) Poor planning, defined as NBRS poor planning item $\ge$ 3; (6) Disorganization, defined as NBRS disorganization item $\ge 3$. Each EDS item was scored as 0 or 1 and summed for a total range of 0 to 6.
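For illustration, the ADS total could be computed along these lines; the field names are made up for the sketch and are not the trial's actual variable names (the EDS total is analogous).

def ads_score(p):
    # p: dict of raw instrument scores for one participant (hypothetical field names)
    items = [
        p["family_history_mood_disorder"],                                              # item 1
        p["personal_history_mood_disorder"],                                            # item 2
        p["csdd_total"] >= 6 or p["nbrs_depression"] >= 3 or p["npi_depression"] >= 4,  # item 3
        p["nbrs_mood_lability"] >= 3,                                                   # item 4
        p["nbrs_anxiety"] >= 3 or p["npi_anxiety"] >= 4,                                # item 5
        p["npi_irritability"] >= 4,                                                     # item 6
        p["nbrs_somatic"] >= 3,                                                         # item 7
    ]
    return sum(bool(i) for i in items)   # each item scored 0/1, total range 0-7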
Table 2
Items comprising the affective (ADS) and dysexecutive (EDS) indicators at baseline.
Published Online: 2017-05-20
The authors thank the Johns Hopkins CitAD group, NIH grant R01 AI102710-01A1, a collaboration between the Johns Hopkins Department of Biostatistics and MedImmune, and Mark van der Laan, Marco Carone, and anonymous referees for helpful discussions. Any opinions expressed in the paper are solely the authors’.
Citation Information: The International Journal of Biostatistics, Volume 13, Issue 1, 20160045, ISSN (Online) 1557-4679,
© 2017 Walter de Gruyter GmbH, Berlin/Boston.
https://www.physicsforums.com/members/vikkisut88.153464/recent-content
Recent content by vikkisut88
1. Convergence of alternating series
okay, but i still don't understand how i'm meant to show the result, sorry. This question has got me completely flummoxed.
2. Convergence of alternating series
sorry i don't really understand that - how did you work out that s was 5/6? And did you just choose random values for a0, a1 and an? I have rechecked my homework question and that is exactly what it said!
3. Convergence of alternating series
1. Homework Statement Let s be the sum of the alternating series $\sum_{n=1}^{\infty}(-1)^{n+1}a_n$ with n-th partial sum $s_n$. Show that $|s - s_n| \leq a_{n+1}$ 2. Homework Equations I know about Cauchy sequences, the Ratio test, the Root test 3. The Attempt at a Solution I really have...
4. Finding mean and variance
Right okay, so now for the variance do I use the same formula but use 1.6 (I presume it was just a typo and it should have been 0.7 *2) instead of 0.53? Do I then use the Central Limit Theorem for part b?
5. Finding mean and variance
Okay I have since realised that for part a) I think i was doing it wrong so now for the mean I have: ((0.7*2) + (0.2*1) + (0.1*0))/3 = 0.53 But for the Variance I have: ((0.7 - 0.53)2+ (0.2 - 0.53)2 + (0.1 - 0.53)2)/3 = 0.1076 which makes far more sense!! Now I'm thinking of using...
6. Finding mean and variance
1. Homework Statement Suppose that, on average, 70% of graduating students want 2 guest tickets for a graduation ceremony, 20% want 1 guest ticket and the remaining10% don't want any guest tickets. (a) Let X be the number of tickets required by a randomly chosen student. Find the mean and...
7. Proving that f is bounded on R
but I can't just assume it is that specific function surely? plus i have to prove it's bounded, not unbounded?!?
8. Normally distributed probability problem
oh and n = total so in this case 12
9. Normally distributed probability problem
nope you just have one 12C3 - it's how you use the binomial theorem: P(X=r) = nCr * p^r * q^(n-r) where q = 1-p :)
10. Normally distributed probability problem
The answer to part a) is correct, however, I don't really understand what calculation you've done for part b). Personally I would just use the Choose function i.e. 12C3 * 0.0301^3 * 0.9699^9
11. Proving that f is bounded on R
1. Homework Statement Suppose that f: R -> R is continuous on R and that $\lim_{x \to +\infty} f(x) = 0$ and $\lim_{x \to -\infty} f(x) = 0$. Prove that f is bounded on R 2. Homework Equations I have got the proof of when f is continuous on [a,b] then f is bounded on [a,b] but I'm unsure as to...
12. Proving that g1,g2,g3 are linearly independent
1. Homework Statement Let V = {differentiable f: R -> R}, a vector space over R. Take g1, g2, g3 in V where $g_1(x) = e^{x}$, $g_2(x) = e^{2x}$ and $g_3(x) = e^{3x}$. Show that g1, g2 and g3 are distinct. 2. Homework Equations If g1-g3 are linearly independent, it means that for any constant, k in F...
13. Proving lim (as n -> infinity) 2^n/n! = 0
i see where you're going but where does that final 2/3 come from?
14. Proving lim (as n -> infinity) 2^n/n! = 0
1. Homework Statement Prove that $\lim_{n \to \infty} 2^{n}/n! = 0$ 2. Homework Equations This implies that $2^{n}/n!$ is a null sequence and so therefore this must hold: $(\forall E > 0)(\exists N \in \mathbb{N}^{+})(\forall n \in \mathbb{N}^{+})[(n > N) \Rightarrow (|a_{n}| < E)]$ 3. The Attempt at a...
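For what it's worth, the standard comparison that produces the 2/3 factor asked about in the earlier post goes roughly like this: for $n \ge 3$,
$\frac{2^n}{n!} = \frac{2}{1}\cdot\frac{2}{2}\cdot\frac{2}{3}\cdots\frac{2}{n} \le 2\left(\frac{2}{3}\right)^{n-2},$
since every factor $2/k$ with $k \ge 3$ is at most $2/3$. Because $2(2/3)^{n-2} \to 0$, given $E > 0$ one can pick $N$ with $2(2/3)^{N-2} < E$, and then $|2^n/n!| < E$ for all $n > N$.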
https://mattermodeling.stackexchange.com/questions/3819/how-can-i-find-the-area-of-an-overlayer-structure
How can I find the area of an overlayer structure?
What sources would you recommend (or if you could instead explain it to me that would be great). I have never studied crystallography but must do a module on it and in some of the questions we were given to practice the following is asked:
Give the area of a $(\sqrt{3} \times \sqrt{3})R30^{\circ}$ surface unit mesh on the surface of an (0001) hcp crystal with lattice parameters a = 4.2 Å and c = 5.5 Å?
• Is this homework?
– Camps
Nov 28, 2020 at 18:10
• This is a common task for overlayer structures, I have changed your title to make it more generally useful. This does look like homework, but this isn't something that is straightforward to look up. Nov 28, 2020 at 18:27
• It is not homework, but examples we were given and not contextualise in, and learning this during a pandemic where I haven't met the lecturer once, made this the only platform where I could try ask for help. Can someone indicate a book for me to learn this stuff? Nov 29, 2020 at 13:59
The 0001 facet area will be only dependent on the $$a$$ lattice constant. To solve for the area of that surface, you will just need to find the area of a rhombus with a $$60^{\circ}$$ angle.
To read the intended cell, start from the unit cell of the 0001 surface. Then multiply the surface vectors by the two values given, $$\sqrt{3}$$ and $$\sqrt{3}$$. Then you rotate the surface vectors around the z axis by the value given, $$30^{\circ}$$.
$$A = S^2\sin(A^{\circ})$$
$$A = (\sqrt{3}*4.2)^2\sin(60^{\circ})$$
$$A \approx 45.83\ \text{Å}^2$$
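As a quick sanity check, the same number can be reproduced with a few lines of Python (values taken from the question):

import math

a = 4.2                                      # hcp lattice parameter in angstrom
s = math.sqrt(3) * a                         # overlayer mesh vector length
area = s ** 2 * math.sin(math.radians(60))   # rhombus with a 60 degree angle
print(round(area, 2))                        # 45.83 (angstrom^2)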
http://www.mathalino.com/glossary/d
Title Last update
DE July 31, 2016 - 12:55pm
DE September 5, 2016 - 12:26pm
DE September 19, 2016 - 2:55pm
DE September 19, 2016 - 2:55pm
DE September 19, 2016 - 9:50pm
DE exact equations: (3 + y + 2y^2 sin^2 x) dx + (x + 2xy - y sin 2x) dy = 0 July 23, 2016 - 11:27pm
DE Order one: (xy^2 + x - 2y + 3) dx + x^2 ydy = 2(x + y) dy July 24, 2016 - 12:32am
DE: 2xy dx + (y^2 + x^2) dy = 0 July 24, 2016 - 12:53am
DE: 2xy dx + (y^2 - x^2) dy = 0 July 24, 2016 - 12:55am
DE: x dx + [ sin^2 (y/x) ](y dx - x dy) = 0 July 17, 2016 - 11:05am
Decompose the given proper fraction into partial fractions March 26, 2016 - 2:22pm
Decompose the given term into partial fraction and compute for the value of A.$\dfrac{7x + 17}{(x + 3)^2}$ March 26, 2016 - 2:22pm
Definite Integral May 13, 2012 - 8:30pm
definite integral hirap po talaga hindi ko masagot March 26, 2016 - 2:21pm
Deflection of Cantilever Beams | Area-Moment Method May 18, 2012 - 8:42am
Deflections Determined by Three-Moment Equation April 8, 2016 - 5:13pm
Deflections in Simply Supported Beams | Area-Moment Method May 18, 2012 - 8:42am
Density of compressed oxygen gas in the cylinder March 26, 2016 - 2:23pm
Depth of water in conical tank in upright and inverted positions January 7, 2013 - 1:01pm
Derivation / Proof of Ptolemy's Theorem for Cyclic Quadrilateral May 18, 2012 - 8:42am
Derivation of Basic Identities May 16, 2012 - 1:36pm
Derivation of Cosine Law May 16, 2012 - 1:40pm
Derivation of Formula for Area of Cyclic Quadrilateral October 24, 2015 - 8:48am
Derivation of Formula for Lateral Area of Frustum of a Right Circular Cone May 16, 2012 - 2:02pm
Derivation of Formula for Radius of Circumcircle May 16, 2012 - 1:29pm
Derivation of Formula for Radius of Incircle May 16, 2012 - 1:31pm
Derivation of Formula for Sum of Years Digit Method (SYD) May 16, 2012 - 1:10pm
Derivation of Formula for the Future Amount of Ordinary Annuity March 26, 2016 - 2:21pm
Derivation of Formula for Total Surface Area of the Sphere by Integration November 12, 2016 - 8:27am
Derivation of formula for volume of a frustum of pyramid/cone May 16, 2012 - 2:10pm
Derivation of Formula for Volume of the Sphere by Integration May 16, 2012 - 2:05pm
Derivation of Formulas December 18, 2012 - 7:35pm
Derivation of Heron's / Hero's Formula for Area of Triangle May 16, 2012 - 1:32pm
Derivation of Pythagorean Identities May 16, 2012 - 1:42pm
|
2017-01-23 04:23:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4459008276462555, "perplexity": 1615.2538730143035}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00198-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://loelschlaeger.de/RprobitB/reference/pprint.html
|
This function prints abbreviated matrices and vectors.
## Usage
pprint(x, rowdots = 4, coldots = 4, digits = 4, name = NULL, desc = TRUE)
## Arguments
x
A (numeric or character) matrix or a vector.
rowdots
The row number which is replaced by dots.
coldots
The column number which is replaced by dots.
digits
If x is numeric, sets the number of decimal places.
name
Either NULL or a label for x. Only printed if desc = TRUE.
desc
Set to TRUE to print the name and the dimension of x.
## Value
Invisibly returns x.
## References
This function is a modified version of the pprint() function from the ramify R package.
## Examples
RprobitB:::pprint(x = 1, name = "single integer")
#> single integer : 1
RprobitB:::pprint(x = LETTERS[1:26], name = "letters")
#> letters : character vector of length 26
#>
#> A B C ... Z
RprobitB:::pprint(x = matrix(rnorm(100), ncol = 1),
name = "single column matrix")
#> single column matrix : 100 x 1 matrix of doubles
#>
#> [,1]
#> [1,] 1.1144
#> [2,] -0.0042
#> [3,] -1.0775
#> ... ...
#> [100,] -0.4254
RprobitB:::pprint(x = matrix(1:100, nrow = 1), name = "single row matrix")
#> single row matrix : 1 x 100 matrix of integers
#>
#> [,1] [,2] [,3] ... [,100]
#> [1,] 1 2 3 ... 100
RprobitB:::pprint(x = matrix(LETTERS[1:24], ncol = 6), name = "big matrix")
#> big matrix : 4 x 6 matrix of characters
#>
#> [,1] [,2] [,3] ... [,6]
#> [1,] A E I ... U
#> [2,] B F J ... V
#> [3,] C G K ... W
#> [4,] D H L ... X
https://codereview.stackexchange.com/questions/145663/function-that-returns-another-function-for-a-programming-task
# Function that returns another function, for a programming task
This is a task I was given for an online course. This is my solution to the problem.
Task: Write a function called general_poly, that meets the specifications below.
For example, general_poly([1, 2, 3, 4])(10) should evaluate to 1234 because
1*10^3 + 2*10^2 + 3*10^1 + 4*10^0
So in the example the function only takes one argument with general_poly([1, 2, 3, 4]) and it returns a function that you can apply to a value, in this case x = 10 with general_poly([1, 2, 3, 4])(10).
def general_poly (L):
""" L, a list of numbers (n0, n1, n2, ... nk)
Returns a function, which when applied to a value x, returns the value
n0 * x^k + n1 * x^(k-1) + ... nk * x^0 """
My Code:
def general_poly (L):
""" L, a list of numbers (n0, n1, n2, ... nk)
Returns a function, which when applied to a value x, returns the value
n0 * x^k + n1 * x^(k-1) + ... nk * x^0 """
def function_generator(L, x):
k = len(L) - 1
sum = 0
for number in L:
sum += number * x ** k
k -= 1
return sum
def function(x, l=L):
return function_generator(l, x)
return function
• Why do you use L[i] instead of number? – Arnial Oct 30 '16 at 12:24
• wow! Facepalm haha i was stressed when i wrote that, thank you very much lol! – ChrisIkeokwu Oct 30 '16 at 12:26
1. The name function_generator could be improved. What this function does is to evaluate the polynomial at the given value x. So a name like evaluate would be better.
2. The body of function_generator can be simplified. First, it's simpler to iterate over L in reversed order, because then you can start with k = 0 rather than k = len(L) - 1:
def evaluate(L, x):
k = 0
sum = 0
for number in reversed(L):
sum += number * x ** k
k += 1
return sum
Now that k goes upwards, you can use enumerate to generate the values for k:
def evaluate(L, x):
sum = 0
for k, a in enumerate(reversed(L)):
sum += a * x ** k
return sum
And now that the body of the loop is a single expression, you can use the built-in sum function to do the addition:
def evaluate(L, x):
return sum(a * x ** k for k, a in enumerate(reversed(L)))
Another approach to polynomial evaluation is to generate the power series in $x$ separately:
def power_series(x):
"Generate power series 1, x, x**2, ..."
y = 1
while True:
yield y
y *= x
and then combine it with the coefficients using zip:
def evaluate(L, x):
"Evaluate the polynomial with coefficients in L at the point x."
return sum(a * y for a, y in zip(reversed(L), power_series(x)))
This is more efficient for large polynomials because each step in power_series is a multiplication by x, which is cheaper to compute than the exponentiation x ** k.
Finally, you could take advantage of this transformation: $$a_k x^k + a_{k-1} x^{k-1} + \dots + a_1x + a_0 = (\cdots ((0 + a_k)x + a_{k-1})x + \cdots + a_1)x + a_0$$ and write:
def evaluate(L, x):
"Evaluate the polynomial with coefficients in L at the point x."
r = 0
for a in L:
r = r * x + a
return r
(This is Horner's method.)
3. There's no need for function, because it's fine for the body of function_generator to refer to nonlocal variables like L. So you can write:
def general_poly(L):
"Return function evaluating the polynomial with coefficients in L."
def evaluate(x):
r = 0
for a in L:
r = r * x + a
return r
return evaluate
4. An alternative approach is to use functools.partial:
from functools import partial
def evaluate(L, x):
"Evaluate the polynomial with coefficients in L at the point x."
r = 0
for a in L:
r = r * x + a
return r
def general_poly(L):
"Return function evaluating the polynomial with coefficients in L."
return partial(evaluate, L)
This would be useful if you sometimes wanted to call evaluate directly (instead of via general_poly).
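As a quick, informal check (not part of the original review), the closed-over version from point 3 reproduces the task's example:

poly = general_poly([1, 2, 3, 4])
assert poly(10) == 1234              # 1*10**3 + 2*10**2 + 3*10**1 + 4*10**0
assert poly(2) == 26                 # 8 + 8 + 6 + 4
assert general_poly([5])(100) == 5   # constant polynomial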
https://gigaom.com/2009/02/18/gm-viability-plan-plays-up-the-chevy-volt/
# GM Viability Plan Plays Up the Chevy Volt
Whatever its track record with alternative-fuel vehicles, General Motors (s GM) is not giving short shrift to the 2010 Chevrolet Volt — at least not when it comes to making a case for financial viability. In the plan submitted to feds today, the struggling automaker pledged to develop a fleet dominated by alternative fuel vehicles, right here in the US of A. As we’ve seen before, GM foists much of the weight of this promise onto the Volt, the company’s sexiest example of innovation.
To be sure, the plan is not all about the Volt. GM has made a splash at recent auto shows with hybrid pick-up trucks, although production numbers are expected to be low, batteries large and expensive, and profit margins slim, at best, as the Washington Post reports. It also has massive cuts in the works. But while the company says it wants to bring dozens of hybrid and plug-in vehicles to market over the next five years — many of them trucks — GM time and again uses the Volt to bolster its credentials.
What are the company’s leading examples of strengthening its roots in U.S. soil? The Michigan assembly plant planned for the Volt battery pack, and a nearby lithium-ion battery development program. What’s the one vehicle it names in the stated plan to introduce 14 new hybrid models by 2012, and 26 by 2014? Oh yes, the Volt (two other extended-range electric vehicles based on the Volt are also said to be in the works).
The plan also shows a GM that’s very different from the one that gave outgoing vice chairman Bob Lutz a soapbox for his often less-than-scientific thoughts on climate change:
It [the plan] also results in a business that will contribute materially to the national interest by developing and commercializing advanced technologies and vehicles that will reduce petroleum dependency and greenhouse gas emissions, and drive national technological and manufacturing competitiveness.
But some things haven’t changed. As we noted in November, when GM was begging for a bailout, financial salvation is a tall order for a car expected to cost over $1 billion to develop but not turn a profit until well after the second-generation model rolls off the assembly line. At the time, we wondered whether the Volt would remain a tiny niche product in the company’s lineup, or if GM would remake a meaningful portion of its cars in the green car image of the Volt. The company says it’s now aiming to have alternative-fuel models account for 66 percent of total sales by 2012, up from the 55 percent goal outlined in the draft submitted to Congress in December. Despite its high hopes for the Volt and the next generation of vehicles GM says it can build with related technology, the little car is no match for GM’s debt load and financial woes. For those, the company says it needs another $16.6 billion in government aid. Otherwise, GM claims it could run out of gas as early as next month — long before the Volt ever makes it to market.
https://mirror2image.wordpress.com/tag/robust-statistics/
# Mirror Image
## Robust estimators III: Into the deep
The Cauchy estimator has some nice properties (Gonzales et al., “Statistically-Efficient Filtering in Impulsive Environments: Weighted Myriad Filter”, 2002):
By tuning $\gamma$ in
$\psi(x) = \frac{x}{\gamma^2+ x^2}$
it can approximate either least squares (big $\gamma$), or the mode – the maximum of the histogram – of the sample set (small $\gamma$). For small $\gamma$ the estimator behaves the same way as a power law distribution estimator with small $\alpha$.
Another property is that for several measurements with different scales $\gamma_i$, the estimator of their sum will simply be
$\psi(x) = \frac{x}{(\sum \gamma_i)^2+ x^2}$
which is convenient for estimation of random walks
I heard convulsion in the sky,
And flight of angel hosts on high,
And monsters moving in the deep
Those verses from The Prophet by A.Pushkin could be seen as metaphor of profound mathematical insight, encompassing bifurcations, higher dimensional algebra and murky depths of statistics.
I now intend to dive deeper into statistics – toward “data depth”. Data depth is a generalization of the median concept to multidimensional data. Recall that the median can be seen either as an order statistic – the value dividing the higher half of the measurements from the lower – or, geometrically, as the minimizer of the $L_1$ norm. The second approach leads to the geometric median, which I already talked about.
The first approach to generalizing the median is to try to apply order statistics to multidimensional vectors. The idea is to make some kind of partial order for n-dimensional points – the “depth” of points – and to choose, as the analog of the median, the point of maximum depth.
Basically all data depth concepts define “depth” as some characterization of how deep points reside inside the point cloud.
Historically first and easiest to understand was convex hull approach – make convex hull of data set, assign points in the hull depth 1, remove it, get convex hull of points remained inside, assign new hull depth 2, remove etc.; repeat until there is no point inside last convex hull.
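A rough 2-D sketch of that peeling procedure (my own illustration, using scipy's ConvexHull and sidestepping degenerate leftovers by dumping them into the deepest level):

import numpy as np
from scipy.spatial import ConvexHull

def hull_peeling_depth(points):
    pts = np.asarray(points, dtype=float)
    depth = np.zeros(len(pts), dtype=int)
    remaining = np.arange(len(pts))
    level = 1
    while len(remaining) >= 3:
        hull = ConvexHull(pts[remaining])              # outermost hull of what is left
        hull_idx = remaining[hull.vertices]
        depth[hull_idx] = level                        # assign the current depth
        remaining = np.setdiff1d(remaining, hull_idx)  # peel the hull off
        level += 1
    depth[remaining] = level                           # leftovers sit deepest
    return depth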
Later Tukey introduced the similar “halfspace depth” concept – for each point X, find the minimum number of points which could be cut from the dataset by a plane through the point X. That number counts as the depth (see the nice overview of those and other geometrical definitions of depth at Greg Aloupis' page).
In 2002 Mizera introduced “global depth”, which is less geometric and more statistical. It starts with the assumption of some loss function (“criterial function” in Mizera's definition) $F(x_i, \theta)$ of the measurement set $x_i$. This function could be (but need not be) a cumulative probability distribution. Now for two parameters $\theta_1$ and $\theta_2$, $\theta_1$ is a better fit with respect to $A \subset N$ if for all $i \in A$, $F(x_i, \theta_1) > F(x_i, \theta_2)$. $\hat{\theta}$ is weakly optimal with respect to $A$ if there is no better-fitting parameter with respect to $A$. Finally, the global depth of $\theta$ is the minimum possible size of $A$ such that $\theta$ is not weakly optimal with respect to $N \setminus A$ – the remainder of the measurements. In other words, global depth is the minimum number of measurements which should be removed for $\theta$ to stop being weakly optimal. Global depth is not easy to calculate or visualize, so Mizera introduced a simpler concept – tangent depth.
Tangent depth is defined as $\min_{\|u\|=1}\left|\{ i: u^T \nabla_{\theta} F(x_i) \geq 0 \}\right|$. What does it mean? Tangent depth is the minimum, over directions $u$, of the number of “bad” points – points for which the loss function is non-decreasing in that direction.
Those definitions of “data depth” allow for another type of estimator, based not on likelihood but on order statistics – maximum depth estimators. The advantage of those estimators is robustness (breakdown point ~25%–33%) and the disadvantage is low precision (high bias). So I wouldn't use them for precise estimation, but for a sanity check or an initial approximation. In some cases they could be computationally cheaper than M-estimators. As a useful side effect they also give some insight into the structure of the dataset (it seems that originally maximum depth estimators were seen as a data visualization tool). Depth could be a good criterion for outlier rejection.
Disclaimer: while I had a very positive experience with the Cauchy estimator, data depth is a new thing for me. I have yet to see how useful it could be for computer vision related problems.
2, May, 2011 Posted by | computer vision, sci | , , , | Comments Off on Robust estimators III: Into the deep
## Robust estimators II
In this post I was complaining that I didn't know what the breakdown point for redescending M-estimators is. Now I found out that an upper bound for the breakdown point of redescending M-estimators was given by Mueller in 1995, for linear regression (that is the statisticians' word for simple estimation of a p-dimensional hyperplane):
$\frac{1}{N}\left(\frac{N - \mathcal{N}(x) + 1}{2}\right)$
$N$ is the number of measurements, and $\mathcal{N}(x)$ is a little tricky: it is the maximum number of measurement vectors X lying in the same p-dimensional hyperplane. If the number of measurements N >> p, that means the breakdown point is near 50% – you can have half of the measurement results completely out of the blue and the estimator will still work.
That only works if the error is present only in the results of the measurements, which is a reasonable condition – in most cases we can move the random error from the x part to the y part.
Now which M-estimators attain this upper bound?
The condition is “slow variation”(Mizera and Mueller 1999)
$\lim_{t\to \infty} \frac{\rho(t x)}{\rho(t)} = 1$
Mentioned in previous post Cauchy estimator is satisfy that condition:
$\rho(x) = \ln(1 +(\frac{x}{\gamma})^2)$ and its derivative (up to a constant factor) $\psi(x) = \frac{x}{\gamma^2+ x^2}$
In practice we always work with $\psi$, not $\rho$ so Cauchy estimator is easy to calculate.
Rule of thumb: if you don't know which robust estimator to use, use Cauchy: it's fast (which is important in real time apps), it's easy to understand, it's differentiable, and it is as robust as possible (that is, for a redescending M-estimator).
19, April, 2011 Posted by | computer vision, sci | , , , , | Comments Off on Robust estimators II
## Robust estimators – understand or die… err… be bored trying
This is a continuation of my attempt to understand the internal mechanics of robust statistics. First I want to say that robust statistics “just works”. It's not necessary to have a deep understanding of it to use it, and even to use it creatively. However, without that deeper understanding I feel kind of blind. I can modify or invent robust estimators empirically, but I can not see clearly the reasons why to use this and not that modification.
Now about robust estimators. They can be divided into two groups: maximum likelihood estimators (M-estimators), which in the case of robust statistics usually, but not always, are redescending estimators (a notable non-redescending estimator is the $L_1$ norm), and all the rest of the estimators.
This second “all the rest” group includes the subset of L-estimators (think of the median, which is also an M-estimator with $L_1$ norm. Yea, it's kind of messy), S-estimators (which use a global scale estimation for all the measurements), and R-estimators, which like L-estimators use order statistics, but use it for weights. There may be some others too, but I don't know much about this second group.
It's easy to understand what M-estimators do: just find the value of the parameter which gives maximum probability to the given set of measurements.
$argmax_{\theta} \prod_{i=1}^{n}p ( x_i\mid\theta)$
or
$argmin_{\theta}\sum_{i=1}^{n} -ln(p(x_i|\theta))$
which give us traditional M-estimator form
$argmin_{\theta}\sum_{i=1}^n \rho(x_i, \theta)$
or
$\sum_{i=1}^n \psi(x_i, \theta) = 0$, $\psi = \frac{\partial \rho}{\partial \theta}$
Practically we usually work not with the measurements per se, but with some distribution of a cost function $F(x,\theta)$ of the measurements, $\rho(x, \theta) = p(F(x, \theta))$, so it becomes
$\sum_{i=1}^n \psi(x_i, \theta)\frac{\partial F(x_i,\theta)}{\partial \theta} = 0$
it’s the same as the previous equation just $\psi$ defined in such a way as to separate statistical part from cost function part.
Now if we make a set of weights $w_i = \frac{\psi_i}{F_i}$ it become
$\sum_{i=1}^n w_i(x_i, \theta) F(x_i, \theta) \frac{\partial F(x_i,\theta)}{\partial \theta} = 0$
We see that it could be considered as “nonlinear least squares”, which could be solved with iteratively reweighted least squares
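As a small illustration (my own sketch, with an assumed tuning constant gamma), iteratively reweighted least squares with the Cauchy weight $w_i = 1/(\gamma^2 + r_i^2)$ could look like this for a linear model:

import numpy as np

def cauchy_irls(A, y, gamma=1.0, iters=50):
    # start from the ordinary least squares solution
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - y                        # residuals
        w = 1.0 / (gamma ** 2 + r ** 2)      # Cauchy weights psi(r)/r
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x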
Now for second group of estimators we have probability of joint distribution
$argmax_{\theta} \prod_{i=1}^{n}p ( x_i\mid x_{j, j\neq i}, \theta)$
All the global factors – sort order, global scale etc. are incorporated into measurements dependence.
It seems the difference between this formulation of second group of estimators and M-estimator is that conditional independence assumption about measurements is dropped.
Another interesting thing is that if some of measurements are not dependent on others, this formulation can get us bayesian network
So an M-estimator and the probability distribution through which it is defined are essentially the same. Least squares, for example, is produced by the normal (Gaussian) distribution. Just take the negative sum of logarithms of the Gaussian and you get the least squares estimator.
If we are talking about normal (pun intended), non-robust estimator, their defining feature is finite variance of distribution.
We have central limit theorem which saying that for any distribution mean value of samples will have approximately normal(or Gaussian) distribution.
From this follows the property of asymptotic normality – for an estimator with finite variance, its distribution around the true value of the parameter $\theta$ approximates a normal distribution.
We are discussing robust estimators, which are stable to error and have “thick-tailed” distribution, so we can not assume finite variance of distribution.
Nevertheless to have “true” result we want some form of probabilistic convergence of measurements to true value. As it happens such class of distribution with infinite variance exists. It’s called alpha-stable distributions.
Alpha stable distribution are those distributions for which linear combination of random variables have the same distribution, up to scale factor. From this follow analog of central limit theorem for stable distribution.
The most well known alpha-stable distribution is Cauchy distribution, which correspond to widely used redescending estimator
$\psi(x) = \frac {x} {\varepsilon + x^2}$
Cauchy distribution can be generalized in several way, including recent GCD – generalized Cauchy distribution(Carrillo et al), with density function
$p\Gamma(p/2)/2\Gamma(1/p)^2(\sigma^p + x^p)^{-2/p}$
and estimator
$\psi(x)=\frac{p|x|^{p-1}sgn(x)}{\sigma^p + x^p}$
Carrillo also introduce Cauchy distribution-based “norm” (it’s not a real norm obviously) which he called “Lorentzian norm”
$||u||_{LL_p} = \sum ln(1 + \frac{|u_i|^p}{\sigma^p})$
${LL_2}$ is correspond classical Cauchy distribution
He successfully applied Lorentzian norm ${LL_2}$ based basis pursuit to compressed sensing problem, which support idea that compressed sensing and robust statistics are dual each other.
15, April, 2011 Posted by | sci | , , , , , | Comments Off on Robust estimators – understand or die… err… be bored trying
## Is Robust Statistics have formal mathematical foundation?
As I have already written, I have trouble understanding what robust estimators actually estimate from a probabilistic or other formal point of view. I mean estimators which are not maximum likelihood estimators. There is a formal definition which doesn't explain a lot to me. It looks like the estimator estimates some quantity, and we know how good we are at estimating it, but how do we know what we are actually estimating? Or does this question even make sense? But that is actually a minor bummer. The problem with understanding outliers is a lot worse for me. The breakdown point is a fundamental concept in robust statistics. And the breakdown point is defined through the relative number of outliers in the sample set. The problem is, it seems there is no formal definition of an outlier in statistics or probability theory. We can talk about mixture models and tail distributions, but those concepts are not quite consistent with the breakdown point. The breakdown point looks like it belongs to the area of optimization/topology, not statistics. Could it be that outliers can be defined consistently only if we have some additional structural information/constraints beside the statistical ones (distribution)? That inability to reconcile statistics and optimization is a problem which is causing a cognitive headache for me.
11, April, 2011 Posted by | sci | , , , , | Comments Off on Is Robust Statistics have formal mathematical foundation?
## Minimum sum of distance vs L1 and geometric median
All this post is just a more detailed explanation of the end of the previous post.
Assume we want to estimate a state $x \in R^n$ from $m \gg n$ noisy linear measurements $y \in R^m$, $y = Ax + z$, $z$ – noise with outliers, as in the paper by Sharon, Wright and Ma, Minimum Sum of Distances Estimator: Robustness and Stability.
Sharon at al show that minimum $L_1$ norm estimator, that is
$arg \min_{x} \sum_{i=1}^{m} \| a_i^T x - y_i \|_{1}$
is a robust estimator with stable breakdown point, not depending on the noise level. What Sharon did was to use as cost function the sum of absolute values of all components of errors vector. However there are exists another approach.
In the one-dimensional case the minimum $L_1$ norm estimator is the median. But there exists a generalization of the median to $R^n$ – the geometric median. In our case it will be
$arg \min_{x} \sum_{i=1}^{m} \| A x - y_i \|_{2}$
That is not least squares – it minimizes the sum of $L_2$ norms, not the sum of squares of $L_2$ norms.
Now why is this a stable and robust estimator? If we look at the Jacobian
$\sum_{i=1}^{m} A^T \frac{A x - y_i}{ \| A x - y_i \|_{2}}$
we see it's asymptotically constant, and its norm doesn't depend on the norm of the outliers. While it's not a formal proof, it's quite intuitive, and can probably be formalized along the lines of the Sharon paper.
While first approach with $L_1$ norm can be solved with linear programming, for example simplex method and interior point method, the second approach with $L_2$ norm can be solved with second order cone programming and …surprise, interior point method again.
For interior point method, in both cases original cost function is replaced with
$\sum f$
And the value of $f$ is defined by constraints. For $L_1$
$f_{i} \ge a_ix-y_i$, $f_{i} \ge -a_ix+y_i$
Sometimes it’s formulated by splitting absolute value is into the sum of positive and negative parts
$f_{+_{i}} \ge a_ix-y_i$, $f_{-_{i}} \ge -a_ix+y_i$, $f_{+_{i}} \ge 0$, $f_{-_{i}} \ge 0$
And for $L_2$ it’s a simple
$f_i \ge \| A x - y_i \|_{2}$
Formulations are very similar, and stability/performance are similar too (there was a paper about it, just had to dig it out)
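For the plain geometric median of a point cloud (the special case where the model matrix is the identity), the classical Weiszfeld iteration is a cheap alternative to a cone solver. A rough sketch of my own, with a small eps guarding the division when an iterate lands on a data point:

import numpy as np

def geometric_median(Y, iters=100, eps=1e-9):
    x = Y.mean(axis=0)                        # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(Y - x, axis=1)     # distances to each sample
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        x = (w[:, None] * Y).sum(axis=0) / w.sum()
    return x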
10, April, 2011
## L1, robust statistics and compressed sensing
Anyone who did 3D reconstruction and camera pose estimation knows that outliers are one of the main, if not the main, problems there. There are several ways to deal with outliers; RANSAC and trimming are probably the most common. Both of them have a major drawback though – they are based on the initial error estimation. But, for example, in pose estimation, a situation where the wrong values have an initial error an order of magnitude less than the correct values is quite common. RANSAC and trimming would make the situation worse in that case. What really works there are robust estimators, which are, in many cases, just the statisticians' name for iteratively reweighted least squares.
Now why and how robust estimators work is really interesting. Basically a robust estimator is a maximum likelihood estimator for a non-normal distribution, that is, a distribution with a “thick” tail. One of the simplest robust estimators is L1, which corresponds to the Laplace distribution. The Laplace distribution descends more slowly than the normal distribution, so it's obviously more robust. And now the really interesting things start. The L1 estimator is the fundamental concept of compressed sensing. And compressed sensing is all about finding a “sparse” solution, that is, a solution which is mostly zeros, but with a few components that are quite big. And what are outliers? They are exactly the “sparse” big components of the error vector. If we have a linear system with noise in the right part, and the noise is dominated by a small number of really big outliers then, as Terence Tao pointed out, we can multiply both parts of the system by the appropriate matrix and get a sparse system of equations for the outliers. That would be a classical compressed sensing problem, for which L1 minimization works perfectly. Recently it was proven, using a compressed sensing inspired technique, that the L1 estimator for a system with outliers really behaves similarly to the L1 minimizer for sparse solutions – it has a stable breakdown point (Sharon, Wright, Ma, Minimum Sum of Distances Estimator: Robustness and Stability).
That makes me think about things I really don't understand – what is the connection between other, redescending estimators and the L1 estimator? In practical applications redescending estimators often work better than L1. But redescending estimators are practically not much different from trimming. Does it mean that they are just convenient shortcuts, and in the general case L1 is more robust? (One drawback of a redescending estimator is that it can have multiple local minima.) Which assumptions about outliers should we make to choose the most appropriate estimator? I would like to read some theory of redescending estimators, their breakdown point and especially their relation to L1, but so far I'm not sure even where to start…
(PPS In this post I talk more about Mueller work on redescending M-estimators which partially answer the question)
PS Another interesting (for me that is, for someone else it could be trivial) problem is dimensionality. For 1-dimensional variable L1 and distance is the same. For vector they are not. So for vector-valued variables estimator “minimum sum of distance” estimator is not the same as L1 estimator. Would be L1 more robust than “minimum sum of distance” for vectors? Compressed sensing logic say that it should, but L1 estimator is anisotropic, it depend on coordinate system. That is for L1 to be effective the outliers should be aligned with coordinate system. Here there is the difference between overall dimensionality of the problem -number of samples and “micro” dimensionality – dimensionality of each sample. I’ll try to sort it out later.
2, April, 2011
|
2020-01-25 17:56:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 81, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.692096471786499, "perplexity": 1060.5592085906271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00542.warc.gz"}
|
https://kokecacao.me/page/Course/F20/21-127/Lecture_013.md
|
# Lecture 013
## The Well-Ordering Property (WOP)
Corollary: if $S=\{n \in \mathbb{Z}|n \geq M\}$ for some fixed $M\in \mathbb{Z}$, then every non-empty subset of $S$ has a least element.
So now, if we want to prove that $\{n\in \mathbb{Z}^+ | (\exists x, y \in \mathbb{Z})(23x - 72y = n)\}$ has a least element, by WOP it is enough to show that the set is non-empty.
Consequence of WOP: there exists no infinitely descending chain of natural numbers.
### Proof by the Method of Infinite Descent
Essentially contradiction + WOP.
WTS: $(\forall n \in \mathbb{N})P(n)$
Show: $(\forall n \in \mathbb{N})(\lnot P(n) \implies (\exists k \in \mathbb{N})(k < n \land \lnot P(k)))$
Conclude: $\lnot P(n)$ never holds.
Claim: $\sqrt{2}$ is irrational.
Proof: AFSOC $\sqrt{2} \in \mathbb{Q}$.
• Then $\sqrt{2} = \frac{a_1}{b_1}$ for some integers $a_1, b_1$. Since $a_1^2 = 2b_1^2$, $a_1^2$ is even, hence $a_1$ is even. So $a_1 = 2a_2$ for some integer $a_2$. Then we can write $\sqrt{2} = \frac{a_1}{b_1}=\frac{2a_2}{b_1}$. Then we have $2b_1^2 = 4a_2^2$, which proves that $b_1$ must be even, say $b_1 = 2b_2$. So $\sqrt{2} = \frac{a_1}{b_1}=\frac{2a_2}{2b_2} = \frac{a_2}{b_2} = \ldots$, an infinite descent, contradiction with WOP.

### More Practice with Induction

## Relations

### Binary Relations

Let S and T be 2 sets.

• If $R \subseteq S \times T$, then R is called a binary relation between S and T.
• If $(a, b) \in R$, we say "a is in relation to b", denoted aRb.
• S is called the domain, and T is called the co-domain.
• If S=T, R is a relation on S.

Example: S=students, T=teachers. Define $R \subseteq S \times T$ by $(s, t) \in R \equiv$ s has taken a class with professor t.

Example: "less than" = $\{(0, 1), (0, 2), ..., (1, 2), (1, 3)...\}$
• (0, 1) is in the set "less than" $\equiv$ $0<1$.

Example: "subset or equal": let $S=T=\mathcal{P}(\mathbb{N})$. For $A\in S$ and $B \in T$, $A\subseteq B \equiv (\forall m \in \mathbb{N})(m\in A \implies m\in B)$.

#### Definitions

Let R be a binary relation on a set S. Then R is called: (a small code sketch after the chart below checks these properties on a finite example)

• reflexive (R): iff $(\forall x \in S)((x,x) \in R)$ (any element is related to itself; pairing an element with itself always holds)
• irreflexive (IR): iff $(\forall x \in S)((x,x) \notin R)$ (no element is related to itself; note it is not the opposite of reflexive; pairing an element with itself never holds)
• symmetric (S): iff $(\forall x,y \in S)((x, y) \in R \implies (y, x) \in R)$ (both directions always hold together)
• antisymmetric (AS): iff $(\forall x,y \in S)((x, y) \in R \land (y, x) \in R \implies x = y)$ (if both directions hold, the elements are equal)
• transitive (TR): iff $(\forall x, y, z \in S)((x, y)\in R \land (y, z)\in R \implies (x, z) \in R)$ (the relation carries over along chains)
• Total (To): $(\forall x, y \in S)(x \neq y \implies ((x, y) \in R \lor (y, x) \in R))$ (any two distinct elements are always related)
#### Chart
| Set | Relation | Ref | Irrefl | Symm | Antisymm | Trans |
| --- | --- | --- | --- | --- | --- | --- |
| $\mathbb{R}$ | $<$ | N | Y | N | Y | Y |
| $\mathbb{Z}$ | divisible | Y | N | N | N | Y |
| $\mathcal{P}(S)$ | $\subseteq$ | Y | N | N | Y | Y |
| $S$ | $=$ | Y | N | Y | Y | Y |
| $\mathbb{R}$ | $\leq$ | Y | N | N | Y | Y |
| $\mathcal{P}(S)$ | $\subsetneq$ | N | Y | N | Y | Y |
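As referenced in the definitions above, here is a small sketch that checks these properties for a relation represented as a set of ordered pairs over a finite set; the example relation and all names are illustrative and not part of the lecture notes.

```python
from itertools import product

def relation_properties(S, R):
    """Check the properties defined above for a relation R on a finite set S."""
    reflexive     = all((x, x) in R for x in S)
    irreflexive   = all((x, x) not in R for x in S)
    symmetric     = all((y, x) in R for (x, y) in R)
    antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
    transitive    = all((x, w) in R
                        for (x, y) in R for (z, w) in R if y == z)
    total         = all((x, y) in R or (y, x) in R
                        for x, y in product(S, S) if x != y)
    return dict(reflexive=reflexive, irreflexive=irreflexive,
                symmetric=symmetric, antisymmetric=antisymmetric,
                transitive=transitive, total=total)

S = {1, 2, 3}
less_than = {(x, y) for x, y in product(S, S) if x < y}
print(relation_properties(S, less_than))
# matches the "<" row of the chart: irreflexive, antisymmetric, transitive (and total)
```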
|
2023-03-29 22:26:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 39, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000097751617432, "perplexity": 3463.8394704534503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00430.warc.gz"}
|
http://everything.explained.today/Equation/
|
# Equation Explained
In mathematics, an equation is a formula that expresses the equality of two expressions, by connecting them with the equals sign =.[1] [2] The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.[3]
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.[4] [5]
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. Assuming this does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation
$Ax^2+Bx+C-y=0$
has left-hand side $Ax^2+Bx+C-y$, which has four terms, and right-hand side $0$, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. More generally, an equation remains in balance if the same operation is performed on its both sides.
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.
Differential equations are equations that involve one or more functions and their derivatives. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.[6]
## Introduction
### Analogous illustration
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
### Parameters and unknowns
See also: Expression (mathematics). Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
$x^2+y^2=R^2.$
When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written $ax^2 + bx + c = 0$.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
$$\begin{align} 3x+5y&=2\\ 5x+8y&=3 \end{align}$$
has the unique solution x = −1, y = 1.
### Identities
See main article: Identity (mathematics) and List of trigonometric identities.
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
$x^2-y^2=(x+y)(x-y)$
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
$\sin^2(\theta)+\cos^2(\theta)=1$
and
$\sin(2\theta)=2\sin(\theta)\cos(\theta)$
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
$3\sin(\theta)\cos(\theta)=1,$
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
$\frac{3}{2}\sin(2\theta)=1,$
yielding the following solution for θ:
$\theta=\frac{1}{2}\arcsin\left(\frac{2}{3}\right)\approx 20.9^\circ.$
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
## Properties
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:
• Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
• Multiplying or dividing both sides of an equation by a non-zero quantity.
• Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
• For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation $x=1$ has the solution $x=1$. Raising both sides to the exponent of 2 (which means applying the function $f(s)=s^2$ to both sides of the equation) changes the equation to $x^2=1$, which not only has the previous solution but also introduces the extraneous solution $x=-1$.
Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
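As an illustration of the extraneous-solution caveat above, here is a quick check, assuming SymPy is available; the snippet is illustrative and not part of the original article.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve(sp.Eq(x, 1), x))       # [1]
# Applying f(s) = s**2 to both sides introduces an extraneous solution:
print(sp.solve(sp.Eq(x**2, 1), x))    # [-1, 1]
```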
## Algebra
### Polynomial equations
See main article: Polynomial equation.
In general, an algebraic equation or polynomial equation is an equation of the form $P=0$ or $P=Q$, where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example, $x^5-3x+1=0$ is a univariate algebraic (polynomial) equation with integer coefficients, and $y^4+\frac{xy}{2}=\frac{x^3}{3}-xy^2+y^2-\frac{1}{7}$ is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
### Systems of linear equations
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,
$$\begin{alignat}{7} 3x&&+&&2y&&-&&z&&=&&1&\\ 2x&&-&&2y&&+&&4z&&=&&-2&\\ -x&&+&&\tfrac{1}{2}y&&-&&z&&=&&0& \end{alignat}$$
is a system of three equations in the three variables . A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
$$\begin{alignat}{2} x&=&1\\ y&=&-2\\ z&=&-2 \end{alignat}$$
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
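As a quick numerical illustration of the three-equation system above, the following sketch (assuming NumPy; not part of the original article) recovers the stated solution.

```python
import numpy as np

A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
print(np.linalg.solve(A, b))   # [ 1. -2. -2.]
```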
## Geometry
### Analytic geometry
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form $ax+by+cz+d=0$, where $a,b,c$ and $d$ are real numbers and $x,y,z$ are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values $a,b,c$ are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in $\mathbb{R}^2$ or as the solution set of two linear equations with values in $\mathbb{R}^3$.
A conic section is the intersection of a cone with equation $x^2+y^2=z^2$ and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
### Cartesian equations
A Cartesian coordinate system is a coordinate system that specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, that are marked using the same unit of length.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation $x^2 + y^2 = 4$.
### Parametric equations
See main article: Parametric equation. A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter.[7] [8] For example,
$$\begin{align} x&=\cos t\\ y&=\sin t \end{align}$$
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
## Number theory
### Diophantine equations
See main article: Diophantine equation. A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is $ax + by = c$, where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
### Algebraic and transcendental numbers
See main article: Algebraic number and Transcendental number. An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
### Algebraic geometry
See main article: Algebraic geometry. Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
## Differential equations
See main article: Differential equation. A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including physics, engineering, economics, and biology.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
### Ordinary differential equations
See main article: Ordinary differential equation. An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
### Partial differential equations
See main article: Partial differential equation. A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
## Types of equations
Equations can be classified according to the types of operations and quantities involved. Important types include:
• A differential equation is a functional equation involving derivatives of the unknown functions, for example $f'(x)=x^2$. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
• An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative, however this is not the case when the integral is taken over an open surface
• An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
• A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as $f'(x)=f(x-2)$
• A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some whole integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation
• A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
## Notes and References
1. "Equation - Math Open Reference." www.mathopenref.com. Retrieved 2020-09-01.
2. "Equations and Formulas." www.mathsisfun.com. Retrieved 2020-09-01.
3. Marcus, Solomon; Watt, Stephen M. "What is an Equation?". Retrieved 2019-02-27.
4. Lachaud, Gilles. "Équation, mathématique." Encyclopædia Universalis (in French). http://www.universalis.fr/encyclopedie/NT01240/EQUATION_mathematique.htm
5. "A statement of equality between two expressions. Equations are of two types, identities and conditional equations (or usually simply 'equations')." "Equation", in Robert C. James (ed.), Van Nostrand, 1968, 3rd ed.; 1st ed. 1948.
6. Recorde, Robert, The Whetstone of Witte … (London, England: Kyngstone, 1557), the third page of the chapter "The rule of equation, commonly called Algebers Rule."
7. Thomas, George B., and Finney, Ross L., Calculus and Analytic Geometry, Addison Wesley Publishing Co., fifth edition, 1979, p. 91.
8. Weisstein, Eric W. "Parametric Equations." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ParametricEquations.html
|
2023-03-22 09:09:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467509150505066, "perplexity": 321.0583270816052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00531.warc.gz"}
|
https://www.vedantu.com/question-answer/3digit-numbers-can-be-formed-from-the-dig-class-11-maths-cbse-5eeb37824a013a0b9f56f082
|
# How many 3-digit numbers can be formed from the digits 1, 2, 3, 4 and 5 assuming that: 1) Repetition is not allowed. 2) Repetition is allowed.
Hint: Fundamental principle of counting: if a task can be done in “m” ways and another task can be done in “n” ways, then both tasks together can be done in mn ways.
Complete step by step solution:
Case [1]: Repetition not allowed:
The number of ways in which the ones place can be filled = 5 ways.
The number of ways in which the tens place can be filled = 4 ways (because repetition is not allowed. So, the choice of one place cannot be used).
The number of ways in which hundreds place can be filled = 3 ways.
Hence according to the fundamental principle of counting the number of ways in which the three places can be filled to form a three-digit number = $5\times 4\times 3=60$
Hence the number of 3-digit numbers formed using the digits 1, 2, 3, 4 and 5 = 60, i.e. ${}^{5}P_{3}=\dfrac{5!}{2!}$ ways
Case [2]: Repetition is allowed:
The number of ways in which the ones place can be filled = 5 ways.
The number of ways in which the tens place can be filled = 5 ways (because repetition is allowed. So, the choice of one's place can be used).
The number of ways in which hundreds place can be filled = 5 ways.
Hence according to the fundamental principle of counting the number of ways in which the three places can be filled to form a three-digit number = $5\times 5\times 5=125$
Hence the number of 3-digit numbers formed using the digits 1, 2, 3, 4 and 5 = 125
Note: The number of 3-digit numbers formed using the digits 1, 2, 3, 4 and 5 when repetition is not allowed is equivalent to the number of 3 Letter permutations of 5 distinct letter = ${}^{5}{{P}_{3}}=\dfrac{5!}{\left( 5-3 \right)!}=\dfrac{5!}{2!}=\dfrac{120}{2}=60$ which is the same result as above.
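A brute-force check of both counts, assuming Python's standard library; the snippet is illustrative and not part of the original solution.

```python
from itertools import permutations, product

digits = [1, 2, 3, 4, 5]
without_repetition = list(permutations(digits, 3))    # ordered triples, no repeats
with_repetition = list(product(digits, repeat=3))     # ordered triples, repeats allowed
print(len(without_repetition), len(with_repetition))  # 60 125
```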
|
2023-03-23 13:51:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5396898984909058, "perplexity": 171.922761347009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00782.warc.gz"}
|
https://arxiv-download.xixiaoyao.cn/list/nlin/new
|
# Nonlinear Sciences
## New submissions
[ total of 9 entries: 1-9 ]
### New submissions for Thu, 25 Nov 21
[1]
Title: Origin of Jumping Oscillons in an Excitable Reaction-Diffusion System
Comments: 5 pages, 3 figures, supplementary material (3 pages, 5 figures)
Subjects: Pattern Formation and Solitons (nlin.PS)
Oscillons, i.e., immobile spatially localized but temporally oscillating structures, are the subject of intense study since their discovery in Faraday wave experiments. However, oscillons can also disappear and reappear at a shifted spatial location, becoming jumping oscillons (JOs). We explain here the origin of this behavior in a three-variable reaction-diffusion system via numerical continuation and bifurcation theory, and show that JOs are created via a modulational instability of excitable traveling pulses (TPs). We also reveal the presence of bound states of JOs and TPs and patches of such states (including jumping periodic patterns) and determine their stability. This rich multiplicity of spatiotemporal states lends itself to information and storage handling.
[2]
Title: Soliton solutions for nonlocal Hirota equation with non-zero boundary conditions using Riemann-Hilbert method and PINN algorithm
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph)
In this paper, we systematically investigate the nonlocal Hirota equation with nonzero boundary conditions via Riemann-Hilbert method and multi-layer physics-informed neural networks algorithm. Starting from the Lax pair of nonzero nonlocal Hirota equation, we first give out the Jost function and scattering matrix and their symmetry and asymptotic behavior. Then, the Riemann-Hilbert problem with nonzero boundary conditions are constructed and the precise formulae of $N$-soliton solution is written by determinants. Whereafter, the multi-layer physics-informed neural networks algorithm is applied to research the data-driven soliton solutions of the nonzero nonlocal Hirota equation by using the training data obtained from the Riemann-Hilbert method. Most strikingly, the integrable nonlocal equation is firstly solved via multi-layer physics-informed neural networks algorithm. As we all know, the nonlocal equations contain the $\mathcal{PT}$ symmetry $\mathcal{P}:x\rightarrow -x,$ or $\mathcal{T}:t\rightarrow -t,$ which are different from local ones. Adding the nonlocal term into the NN, we can successfully solve the integrable nonlocal Hirota equation by multi-layer physics-informed neural networks algorithm. The numerical results indicate the algorithm can well recover the data-driven soliton solutions of the integrable nonlocal equation. Noteworthily, the inverse problems of the integrable nonlocal equation are discussed for the first time through applying the physics-informed neural networks algorithm to discover the parameters of the equation in terms of its soliton solution.
### Cross-lists for Thu, 25 Nov 21
[3] arXiv:2111.12141 (cross-list from quant-ph) [pdf, other]
Title: Harmonic oscillator kicked by spin measurements: a Floquet-like system without classical analogous
Subjects: Quantum Physics (quant-ph); Chaotic Dynamics (nlin.CD)
We present a kicked harmonic oscillator where the impulsive driving is provided by stroboscopic measurements on an ancillary degree of freedom and not by the canonical quantization of a time-dependent Hamiltonian. The ancila is dynamically entangled with the oscillator position, while the background Hamiltonian remains static. The dynamics of this system is determined in closed analytical form, allowing for the evaluation of a properly defined Loschmidt echo, ensemble averages, and phase-space portraits. As in the case of standard Floquet systems we observe regimes with crystalline and quasicrystalline structures in phase space, resonances, and evidences of chaotic behavior, however, not originating from any classically chaotic system.
[4] arXiv:2111.12446 (cross-list from hep-th) [pdf, other]
Title: Classical solutions of $λ$-deformed coset models
Subjects: High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph); Exactly Solvable and Integrable Systems (nlin.SI)
We obtain classical solutions of $\lambda$-deformed $\sigma$-models based on $SL(2,\mathbb{R})/U(1)$ and $SU(2)/U(1)$ coset manifolds. Using two different sets of coordinates, we derive two distinct classes of solutions. The first class is expressed in terms of hyperbolic and trigonometric functions, whereas the second one in terms of elliptic functions. We analyze their properties along with the boundary conditions and discuss string systems that they describe. It turns out that there is an apparent similarity between the solutions of the second class and the motion of a pendulum.
[5] arXiv:2111.12475 (cross-list from quant-ph) [pdf, other]
Title: Diagnosing quantum chaos in multipartite systems
Subjects: Quantum Physics (quant-ph); Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th); Chaotic Dynamics (nlin.CD)
Understanding the emergence of quantum chaos in multipartite systems is challenging in the presence of interactions. We show that the contribution of the subsystems to the global behavior can be revealed by probing the full counting statistics of the local, total, and interaction energies. As in the spectral form factor, signatures of quantum chaos in the time domain dictate a dip-ramp-plateau structure in the characteristic function, i.e., the Fourier transform of the eigenvalue distribution. With this approach, we explore the fate of chaos in interacting subsystems that are locally maximally chaotic. Global quantum chaos is suppressed at strong coupling, as illustrated with coupled copies of random-matrix Hamiltonians and of the Sachdev-Ye-Kitaev model. Our method is amenable to experimental implementation using single-qubit interferometry.
[6] arXiv:2111.12510 (cross-list from physics.flu-dyn) [pdf, other]
Title: Alcove formation in dissolving cliffs driven by density inversion instability
Subjects: Fluid Dynamics (physics.flu-dyn); Soft Condensed Matter (cond-mat.soft); Pattern Formation and Solitons (nlin.PS)
We demonstrate conditions that give rise to cave-like features commonly found in dissolving cliffsides with a minimal two-phase physical model. Alcoves that are wider at the top and tapered at the bottom, with sharp-edged ceilings and sloping floors, are shown to develop on vertical solid surfaces dissolving in aqueous solutions. As evident from descending plumes, sufficiently large indentations evolve into alcoves as a result of the faster dissolution of the ceiling due to a solutal Rayleigh-B\'enard density inversion instability. By contrast, defects of size below the boundary layer thickness set by the critical Rayleigh number smooth out, leading to stable planar interfaces. The ceiling recession rate and the alcove opening area evolution are shown to be given to first order by the critical Rayleigh number. By tracking passive tracers in the fluid phase, we show that the alcoves are shaped by the detachment of the boundary layer flow and the appearance of a pinned vortex at the leading edge of the indentations. The attached boundary layer past the developing alcove is then found to lead to rounding of the other sides and the gradual sloping of the floor.
[7] arXiv:2111.12521 (cross-list from math.OC) [pdf, other]
Title: Probabilistic Behavioral Distance and Tuning - Reducing and aggregating complex systems
Subjects: Optimization and Control (math.OC); Systems and Control (eess.SY); Adaptation and Self-Organizing Systems (nlin.AO)
Given a complex system with a given interface to the rest of the world, what does it mean for a the system to behave close to a simpler specification describing the behavior at the interface? We give several definitions for useful notions of distances between a complex system and a specification by combining a behavioral and probabilistic perspective. These distances can be used to tune a complex system to a specification. We show that our approach can successfully tune non-linear networked systems to behave like much smaller networks, allowing us to aggregate large sub-networks into one or two effective nodes. Finally, we discuss similarities and differences between our approach and $H_\infty$ model reduction.
### Replacements for Thu, 25 Nov 21
[8] arXiv:2111.06658 (replaced) [pdf, ps, other]
Title: Integrability, conservation laws and solitons of a many-body dynamical system associated with the half-wave maps equation
Comments: 31 pages, 4 figures, v2. Typo in Ref. [10] is corrected and a comment is added in Acknowledgements
Journal-ref: Physica D 430 (2022) 133080
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph)
[9] arXiv:2111.12043 (replaced) [pdf, other]
Title: Bridging scales in a multiscale pattern-forming system
Subjects: Biological Physics (physics.bio-ph); Pattern Formation and Solitons (nlin.PS)
|
2021-11-28 17:00:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5205506682395935, "perplexity": 1883.5916527032116}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00373.warc.gz"}
|
https://math.stackexchange.com/questions/2256558/proof-of-commutativity-for-complex-numbers-help
|
Proof of commutativity for complex numbers help
I just started Linear Algebra and my algebra is a little rusty. However, I was going over the proof of associativity for complex numbers and, while I was able to do it, I got stuck on the way they do it in the book.
Let $$\alpha=x_1+y_1i\\ \beta=x_2+y_2i\\ \lambda=x_3+y_3i\\$$
where $$x_1,x_2,x_3$$ and $$y_1,y_2,y_3$$ are real numbers. Then \begin{align} (\alpha\beta)\lambda &= ((x_1x_2−y_1y_2)+(x_1y_2+y_1x_2)i)(x_3+y_3i)\\&=((x_1x_2−y_1y_2)x_3−(x_1y_2+y_1x_2)y_3)+((x_1x_2−y_1y_2)x_3+(x_1y_2+y_1x_2)y_3)i. \end{align}
The part that I am struggling with here is: \begin{align} &=((x_1x_2−y_1y_2)x_3−(x_1y_2+y_1x_2)y_3)+((x_1x_2−y_1y_2)x_3+(x_1y_2+y_1x_2)y_3)i. \end{align}
Basically I don't understand how we get here from the previous line. Maybe it's because I have been staring at this for a while.
• With $x1$ you probably mean $x_1$ (written x_1) etc. Please correct this and also delete unnecessary repetitions to improve the readability of the post. – Claudius Apr 28 '17 at 17:35
$$(x+iy)(z+iw)=xz+i(wx+yz)-yw$$ Quite similarly $$(z+iw)(x+iy)=zx+i(zy+xw)-wy$$ Since each of the products is a product of real numbers and thus commutes, we have equality of real and imaginary parts in the two expressions.
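A quick symbolic sanity check of both commutativity and the associativity identity from the book, assuming SymPy; the snippet is illustrative and not part of the original answer.

```python
import sympy as sp

x1, y1, x2, y2, x3, y3 = sp.symbols('x1 y1 x2 y2 x3 y3', real=True)
alpha = x1 + sp.I*y1
beta  = x2 + sp.I*y2
lam   = x3 + sp.I*y3
print(sp.expand(alpha*beta - beta*alpha))              # 0  (commutativity)
print(sp.expand((alpha*beta)*lam - alpha*(beta*lam)))  # 0  (associativity)
```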
|
2021-04-13 11:37:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985419750213623, "perplexity": 230.91673255092869}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00251.warc.gz"}
|
https://www.card-player.cz/23eb889f
|
## stone screenings calculator
### Calculate W10 Screenings (Washed) | cubic yards / Tons
Washed #5 Gravel Stone Calculate W10 Screenings (Washed) Type in inches and feet of your project and calculate the estimated amount of Sand / Screenings in cubic yards, cubic feet and Tons, that your need for your project. The Density of W10 Screenings (Washed): 2,410 lb/yd³ or 1.21 t/yd³ or 0.8 yd³/t
### Calculate Paver Screenings | cubic yards / Tons
Calculate Paver Screenings. Type in inches and feet of your project and calculate the estimated amount of Sand / Screenings in cubic yards, cubic feet and Tons, that your need for your project. The Density of Paver Screenings: 2,410 lb/yd³ or 1.21 t/yd³ or 0.8 yd³/t. A:
### Material Calculator - Legends Landscape Supply Inc.
Calculator. Make your next landscape project an efficient and lucrative one. Lower the risk of having leftover piles of material after your next landscaping project with our material calculator. Using this helpful tool, you can order the perfect amount to get the job done just the way you want it. Whether you’re in need of a landscape stone calculator, limestone screening calculator or a road base material calculator, this handy tool can help you make your next project a success.
### Aggregate Volume Calculator | Sand, Stone, Gravel, Limestone ...
### Construction Aggregate Calculator - Vulcan Materials Company
Construction Aggregate Calculator. Enter the width, length, thickness, and product density and hit the “Calculate” button to calculate your estimate. If you do not know the product density, use the optional density estimator* or contact a local sales representative.
### Gravel Calculator: Free Online Tool - Braen Supply
Using a gravel calculator is an ideal way to accurately plan your inventory needs for larger-scale projects, and to make sure you purchase just the right amount of gravel for one-off jobs. Understanding how these handy online tools work is essential to harnessing their power to help you save money and maximize your budget.
### How Much Crushed Stone Do You Need? A Sure-Fire Formula
Use this formula to determine how much crushed stone you will need for your project: (L'xW'xH') / 27 = cubic yards of crushed stone needed. In the construction world, most materials are measured in cubic yards. Multiply the length (L), in feet, by the width (W), in feet, by the height (H), in feet, and divide by 27.
### Aggregates - Callanan
Materials calculator inputs: Length (feet), Width (feet), Depth (inches), and Material. Available materials: 1B Screenings - Washed, 1BD Screenings - Dry, #1A Crushed Stone, #1 Crushed Stone, #2 Crushed Stone, #1 & #2 Mix Crushed Stone, Crusher Run, ASTM #57 Stone, Type 2 SubBase, Stone-Fill Fine, Stone-Fill Light, Stone Fill Medium, Stone Fill Heavy, Gabion Stone, Embankment 203.06, Select Fill ...
### Stone - Behrens Landscaping
Limestone Screenings are grey in color and range in size from a small chip to sand like granules. This stone is commonly used as a finishing layer for pavers, retaining walls, and flagstone walkways.
### Crushed Stone vs. Quarry Process vs. Stone Dust
Stone dust, also known as stone screenings, is the finest of the types of crushed stone. Although it is made from the same type of stone as the other two types, it is crushed into a powder. When used by itself stone dust forms a hard surface that is water resistant. When used with a larger stone it acts as a binding agent.
### Construction Materials Calculator | Crushed Stone Calculator
Project and Material Resources. At Braen Stone we strive to make your experience as smooth as possible. As one of the largest suppliers of construction materials in NJ, NY, PA & CT we wanted to share some of the resources that our team finds most helpful.
### Aggregate Materials Calculator - Patuxent Companies
Manual Calculator. Multiply the length of the area by the width of the area = Square Feet. Multiply Square Feet by the Depth * = Cubic Feet. Divide Cubic Feet by 27 = Cubic Yards. Multiply Cubic Yards by 1.5 = Tons Needed.
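The manual-calculator steps above translate directly into a small sketch; it is illustrative only. The 1.5 tons-per-cubic-yard factor is the rule of thumb quoted above, and a material-specific density such as the 1.21 t/yd³ listed for screenings can be passed instead.

```python
def material_estimate(length_ft, width_ft, depth_in, tons_per_cubic_yard=1.5):
    """Estimate cubic yards and tons from length/width in feet and depth in inches."""
    square_feet = length_ft * width_ft
    cubic_feet = square_feet * (depth_in / 12.0)   # convert depth from inches to feet
    cubic_yards = cubic_feet / 27.0
    tons = cubic_yards * tons_per_cubic_yard
    return cubic_yards, tons

print(material_estimate(20, 10, 3))   # roughly (1.85 yd3, 2.78 tons)
```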
|
2021-06-21 12:28:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2485608607530594, "perplexity": 7715.249407264803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488273983.63/warc/CC-MAIN-20210621120456-20210621150456-00300.warc.gz"}
|
https://transportist.org/2007/10/15/
|
# Centers are edges
Centers are not nodes, in fact junctions are not nodes. In graphs (representations of transportation networks for modeling and analysis), nodes are aspatial representations of the intersection of links, which themselves are aspatial representations of the structure of the network. However real nodes, i.e. centers and junctions, take space. As such they provide a spatial separation between areas that adjoin them. They serve as edges to adjoining areas (e.g. neighborhoods).
As Alfred Korzybski once said, “the map is not the territory”. Similarly, the graph is not the place. Network elements separate as they connect.
|
2023-03-20 10:38:38
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8434914946556091, "perplexity": 2944.773955398031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00029.warc.gz"}
|
https://acbjournal.org/journal/view.html?uid=501&vmd=Full
|
pISSN 2093-3665 eISSN 2093-3673
## Original Article
Anat Cell Biol 2020; 53(2): 143-150
Published online June 30, 2020
https://doi.org/10.5115/acb.20.009
## The lumbar multifidus is characterised by larger type I muscle fibres compared to the erector spinae
Anouk Agten1,* , Sjoerd Stevens1,* , Jonas Verbrugghe1 , Bert O. Eijnde2 , Annick Timmermans1 , Frank Vandenabeele1
¹Department of Rehabilitation Sciences and Physiotherapy, Hasselt University, Rehabilitation Research Centre, Faculty of Rehabilitation Sciences, Diepenbeek, ²Department of Cardio and Internal Systems, Hasselt University, Biomedical Research Institute, Faculty of Medicine and Life Sciences, Diepenbeek, Belgium
Correspondence to:Anouk Agten
Department of Rehabilitation Sciences and Physiotherapy, Hasselt University, Rehabilitation Research Centre, Faculty of Rehabilitation Sciences, Agoralaan Building A, 3590 Diepenbeek, Belgium
E-mail: anouk.agten@uhasselt.be
*These two authors contributed equally to this work.
Received: January 15, 2020; Revised: January 30, 2020; Accepted: February 7, 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The metabolic capacity of a muscle is one of the determinants of muscle function. Muscle fiber type characteristics give an indication about this metabolic capacity. Therefore it might be expected that the lumbar multifidus (MF) as a local stabilizer contains higher proportions of slow type I fibers, compared to the erector spinae (ES) as a global mobilizer. The aim of this study is to determine the muscle fiber characteristics of the ES and MF to provide insight into their structural and metabolic characteristics, and thereby the functional capacity of both muscles. Muscle fiber type characteristics in the ES and MF were investigated with an immunofluorescence staining of the myosin heavy chain isoforms. In both the ES and MF, type I muscle fibers are predominantly present. The cross-sectional area (CSA) of type I muscle fibers is significantly larger in the lumbar MF compared to the ES. However, the mean muscle fiber type percentage for type I was not significantly different, which resulted in an insignificant difference in relative cross-sectional area (RCSA) for type I. No significant differences were found for all other muscle fiber types. This may indicate that the MF displays muscle fiber type characteristics that tend to be more appropriate to maintain stability of the spine. However, because we could not demonstrate significant differences in RCSA between ES and MF, we cannot firmly state that there are functional differences between the ES an MF based only on structural characteristics.
Keywords: Paraspinal muscles, Skeletal muscle fibers
The human lumbar muscular system plays an important role in both stabilizing and mobilizing the lumbar spine. Therefore, it is important to classify these muscles based on their function [1]. The latest functional classification includes local stabilizers, global stabilizers and global mobilizers [2]. Local stabilizers increase muscle stiffness to control segmental movement and have a crucial role in maintaining segmental stability of the lumbar spine. These muscles are continuously active, while the global stabilizers and the global mobilizers are not. Global stabilizers consequently produce movement while preserving stability, while global mobilizers generate great torque to produce large ranges of movement [2]. The lumbar multifidus (MF) is considered a local stabilizing muscle because of its close relation with the vertebral column and its short length [3]. This muscle consists of different fasciculi, which lie immediately next to the spinous processes over the full length of the spine [4-6]. The lumbar erector spinae (ES) consists of two muscles: longissimus thoracis and iliocostalis lumborum. Both muscles link the thoracic vertebrae to the pelvis and are considered global mobilizers based on their fascicle length [4, 7, 8]. This classification is based on mechanical properties and morphological features.
Muscle function is also related to the contractile and metabolic capacity of the muscle. Muscle fiber type distribution gives an indication of this metabolic and contractile profile and thereby functional capacity of a muscle. Human skeletal muscles consist of different fiber types, characterized by their specific myosin heavy chain (MHC) isoform, which determines contractile speed and metabolic capacity. In humans, three major fiber types can be identified, based on their MHC expression: type I, type IIa and type IIx [9, 10]. However, human muscle fibers can also co-express two different adjoining MHC isoforms. These muscle fibers were classified as hybrid fibers: type I/IIa and type IIax [11]. Muscle fibers have different contractile and metabolic characteristics. Type I fibers are characterized by a slow contracting speed, an oxidative metabolism and are fatigue-resistant. Type IIx fibers are fast-contracting fibers with a glycolytic metabolism and are susceptible to fatigue. Type IIa fibers are intermediate fibers that show characteristics of both type I and type IIx fibers. These muscle fibers have a fast-contracting speed and a combined oxidative and glycolytic metabolism [9-11]. The ability of a muscle to respond to different functional demands is due to its heterogeneous fiber type composition. The link between contractile/metabolic capacity of a muscle and the functional classification of the paraspinal muscles suggests that local stabilizers, like the MF, contain high proportions of slow, fatigue-resistant type I fibers to provide continuous activity needed to maintain stability of the spine [2]. Global stabilizers and mobilizers, like the ES, might contain higher proportions of fast-contracting fibers to counterbalance forces acting on the body by quick responses or to generate great torque to produce movement. However, previous studies were not able to find differences in muscle fiber type composition between ES and MF in healthy persons [12-14].
To our knowledge, the information about muscle fiber type characteristics of the ES and MF is scarce in healthy subjects. The primary aim of the present study was to determine the muscle fiber type composition and the cross-sectional area (CSA) of different fiber types in the ES and MF in order to gain insight into their structural and metabolic characteristics and to link them to the functional capacity of these muscles. The secondary aim is to determine the inter-rater reliability of the immunofluorescence analysis of MHC isoforms to measure the muscle fiber type characteristics.
### Subjects
Eighteen healthy Caucasian male and female subjects between 25 and 65 years old were recruited by means of local advertisement. All interested subjects were informed about all the aspects of the study and were included in the study after providing their informed consent. Subjects were included in the study if they had no chronic (>3 months) or acute low back pain (visual analogue scale >8/10 in the last 24 hours). Subjects who underwent rehabilitation or exercise therapy for an acute condition within the last three months were excluded. This cross sectional study was part of a larger study, and was approved by the Ethical Committee of Hasselt University, Jessa Hospital Hasselt (15.142/REVA15.14) and complies with the Declaration of Helsinki.
### Biopsy procedure
Muscle samples were taken from the right ES and MF at the level of spinous process of vertebra L4 according to the procedure of Agten et al. [15]. After local anaesthesia, a small incision of 2 mm was made through the skin at the puncture site, predetermined by ultrasound. A coaxial needle was inserted perpendicularly through the incision, piercing the thoracolumbar fascia. A biopsy needle was inserted through the coaxial needle to obtain a muscle sample from the ES and the MF. Muscle samples were removed from the biopsy needle, placed and oriented on a piece of cork. These samples were covered with optimum-cutting temperature compound (Tissue-Tek; Leica Microsystem Belgium, Diegem, Belgium) and immediately frozen in isopentane, precooled in liquid nitrogen. Frozen samples were stored at –80°C until further analysis. All biopsy samples were given a unique identification number.
### Immunohistochemistry
Serial transverse sections (10 µm) were cut with a microtome (CM1900 Cryostat; Leica Microsystem Belgium). To identify MHC isoforms, immunofluorescent staining was performed, based on the protocol of Bloemberg and Quadrilatero [16]. Sections were air dried for 20 minutes at room temperature. Autofluorescence was blocked using 10% of normal goat serum for one hour. The sections were incubated with primary antibodies specific to laminin (ab11575:1/500 Abcam), MHC I (BA-F8:1/50), MHC IIa (SC-71:1/500), and MHC IIx (6H1:1/50) (Developmental Studies Hybridoma Bank, Iowa City, IA, USA) for two hours at room temperature. After washing with 1× PBS, the sections were incubated with the appropriately conjugated secondary antibodies (Alexa Fluor 532:1/500, Alexa Fluor 350:1/500, Alexa Fluor 488:1/500, Alexa Fluor 555:1/500; Life Technologies Inc., Ulm, Germany) for one hour at room temperature. After primary and secondary incubation, sections were washed in 1× PBS, and coverslips were mounted using ProLong Gold antifade reagent (Life Technologies Inc., Carlsbad, CA, USA).
### Muscle fiber typing and morphometry
Stained sections were viewed with a fluorescent microscope (EL6000; Leica) and photographed at a 10× magnification. The images were analyzed with AxioVision from Zeiss (Oberkochen, Germany). Muscle fiber size (µm2) and fiber type (I, IIa, and IIx) were determined for each individual muscle fiber by measuring the CSA and counting the number of muscle fibers. These measurements were performed blinded by the first two authors to determine inter-rater reliability. Relative cross-sectional area (RCSA) was calculated based on the CSA and percentage of a muscle fiber type. RCSA is an important structural characteristic that defines the functional capacity of a skeletal muscle [17]. Fibers expressing only BA-F8 were classified as MHC I, fibers expressing only SC-71 were classified as MHC IIa, and fibers expressing strong intensities for 6H1 were classified as MHC IIx. When fibers were expressing intermediate/strong intensities for SC-71 and intermediate intensities for 6H1, they were classified as MHC IIax hybrid fibers. When fibers were expressing intermediate intensities for both BA-F8 and SC-71 they were classified as MHC I/IIa hybrid fibers. To analyze a representative sample of the entire muscle, on average, 155±35 muscle fibers were counted in the biopsy samples collected from the ES and 151±32 in the biopsy samples collected from the MF [18].
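Editor's note: the fiber-typing rules above amount to a simple decision procedure over the three antibody channels. The sketch below is a hypothetical Python illustration of that logic (the function name, the "none/intermediate/strong" intensity categories, and the example values are assumptions; the authors performed this classification visually in AxioVision).

```python
# Hypothetical sketch of the fibre-typing rules described above.
# Each fibre gets a staining intensity per channel:
# BA-F8 = MHC I, SC-71 = MHC IIa, 6H1 = MHC IIx.

def classify_fibre(ba_f8, sc_71, h6_1):
    """Return the fibre type implied by the staining intensities of one fibre."""
    if ba_f8 != "none" and sc_71 != "none":
        return "I/IIa hybrid"      # intermediate BA-F8 and SC-71
    if ba_f8 != "none":
        return "I"                 # expresses only MHC I
    if sc_71 != "none" and h6_1 == "intermediate":
        return "IIax hybrid"       # SC-71 plus intermediate 6H1
    if h6_1 == "strong":
        return "IIx"               # strong 6H1
    if sc_71 != "none":
        return "IIa"               # expresses only MHC IIa
    return "unclassified"

# Example: strong SC-71 with intermediate 6H1 staining
print(classify_fibre("none", "strong", "intermediate"))  # -> IIax hybrid
```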
### Data analysis
A sample size calculation was performed (80% power, α=0.05) based on preliminary data. Given the calculated estimates, n=18 was needed in each group to provide a power of 0.80. Inter-rater reliability for measurements of CSA and muscle fiber type composition was evaluated by an analysis of the first 10 samples. These samples were blindly evaluated by the first two authors in both the ES and MF muscle. Intraclass correlation coefficients (ICC) were analyzed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Co., Armonk, NY, USA). A two-way mixed model and absolute agreement was used for the first 10 subjects. ICC’s were estimated for muscle fiber type CSA and percentage in both the MF and ES muscle. From the SD and ICC, the standard error of measurement (SEM) was calculated using the formula $SEM = SD\sqrt{1-ICC}$. The ICC gives an indication of the inter-rater reliability with an inter-rater reliability being poor for ICC values of less than 0.40, fair for values between 0.40 and 0.59, good for values between 0.60 and 0.74, and excellent for values between 0.75 and 1.0 [19].
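Editor's note: for readers who want to reproduce the reliability summary, the sketch below implements the SEM formula and the ICC interpretation bands quoted above. The ICC values themselves come from the SPSS two-way mixed model, so the numbers in the example are placeholders only.

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

def icc_label(icc):
    """Interpretation bands cited above (Cicchetti, 1994)."""
    if icc < 0.40:
        return "poor"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"

# Placeholder example: SD of 1,500 um2 and an ICC of 0.983 for type I CSA
print(round(sem(1500, 0.983), 1), icc_label(0.983))  # -> 195.6 excellent
```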
Statistical analysis for the differences between the ES and MF muscles was performed with JMP Pro 14.1.0 software (SAS Institute Inc., Cary, NC, USA; 1989–2007). A repeated measures ANOVA was performed with fiber type and muscle as within-subject factors. Normality of the data was checked using normal quartile plots calculated from the conditional residuals. A square root transformation was used in case of a non-normal distribution whenever needed. Significance was set at the 5% point with a confidence interval of 95%. When a significant interaction was found, a post-hoc Tukey honestly significant difference test was used. A Kruskal-Wallis test was performed to analyze frequency distribution.
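Editor's note: the authors ran the analysis in JMP and SPSS; a rough open-source analogue might look like the Python sketch below. The file name, data-frame layout, and column names are assumptions, and statsmodels' AnovaRM additionally requires balanced per-cell data (hence the aggregation).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

# Assumed long-format data: one row per fibre, with columns
# subject, muscle ("ES"/"MF"), fibre_type ("I", "IIa", ...), and csa (um2).
df = pd.read_csv("fibre_data.csv")

# Repeated-measures ANOVA with muscle and fibre type as within-subject factors,
# averaging the per-fibre measurements within each subject x muscle x type cell.
rm = AnovaRM(df, depvar="csa", subject="subject",
             within=["muscle", "fibre_type"], aggregate_func="mean").fit()
print(rm.anova_table)

# Non-parametric comparison between muscles for type I fibres
# (analogous in spirit to the Kruskal-Wallis frequency-distribution analysis).
es = df[(df.muscle == "ES") & (df.fibre_type == "I")].csa
mf = df[(df.muscle == "MF") & (df.fibre_type == "I")].csa
print(stats.kruskal(es, mf))
```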
In total 18 healthy subjects were included in the study. Anthropometric characteristics are presented in Table 1. Hybrid fibers co-expressing MHC I and MHC IIa were not found within the muscle samples. Therefore, the analysis was done only for fiber type I, IIa, IIx, and the hybrid fiber type IIax (expressing both MHC IIa and MHC IIx) in both the lumbar ES and MF. Detailed results on repeated measurements ANOVA and inter-rater reliability are displayed in Appendix Tables 1–3.
Sociodemographic characteristics (n=18).
| Characteristics | Value |
| --- | --- |
| Age (yr) | 40.00±7.91 |
| Sex (male:female) | 9:9 |
| Weight (kg) | 78.22±13.08 |
| Length (m) | 1.76±0.08 |
| Body mass index (kg/m2) | 25.13±2.93 |
Values are presented as mean±SD or number only.
### Inter-rater reliability
For the measurements of CSA, the ICC’s ranged from 0.862 (type IIx) to 0.983 (type I) in the ES and from 0.911 (type IIax) to 0.996 (type I) in the MF. For the measurements of fiber type percentage, the ICC’s ranged from 0.909 (type IIax) to 0.976 (type I) in the ES and from 0.591 (type IIax) to 0.966 (type IIx) in the MF.
### Fiber type size (measured by cross-sectional area)
A representative example of an image from the lumbar ES and MF muscle within one subject was shown in Fig. 1. Mean CSA of type I muscle fibers was 7,439.31 µm2 for the MF, compared to 6,279.48 µm2 for the ES. The mean CSA of type I muscle fibers was 18% higher in the MF, compared to the lumbar ES (P=0.0053). The mean CSA for type IIa, IIax, and IIx were similar between the lumbar ES and MF muscle (Fig. 2).
Figure 1. Representative immunofluorescence image of the lumbar ES and MF muscle. Muscle cross-sections were incubated with primary antibodies against laminin (red), MHC I (blue), MHC IIa (green) and MHC IIx (red). ES, erector spinae; MF, multifidus; MHC, myosin heavy chain.
Figure 2. CSA of the different muscle fibre types. CSA of type I, type IIa, type IIa/x and type IIx in the lumbar ES (white) and MF (black) muscle. Values are presented as mean±SE. CSA, cross-sectional area; ES, erector spinae; MF, multifidus; SE, standard error. *P=0.0053 ES vs. MF.
### Fiber type percentage (measured by %)
The mean percentage of fibers present within a muscle cross-section was highest for type I fibers in both muscles, 57.74% and 59.07% for the ES and MF respectively. For the ES, the mean percentage of the other muscle fiber types were 23.32%, 8.65%, and 15.43% for type IIa, type IIax, and type IIx respectively. For the MF, the mean percentage of the other muscle fiber types were 22.45%, 9.21%, and 13.89% for type IIa, type IIax, and type IIx respectively. These mean muscle fiber type percentages were not significantly different between the lumbar ES and MF (Fig. 3).
Figure 3. Number of the different muscle fibres. Number of type I, type IIa, type IIa/x and type IIx in the lumbar ES (white) and MF (black) muscle. Values are presented as mean±SE. ES, erector spinae; MF, multifidus; SE, standard error.
### Relative fiber type area (measured by relative cross-sectional area)
Type I muscle fibers are predominantly present in both ES and MF muscle, respectively 63.54% and 68.8%. For the ES, the mean relative area for the other muscle fiber types were 21.60%, 7.23%, and 11.8% for type IIa, type IIax and type IIx respectively. For the MF, the mean percentage of the other muscle fiber types were 19.17%, 5.93%, and 9.14% for type IIa, type IIax, and type IIx respectively. These RCSA’s for all muscle fiber types were not significantly different between the lumbar ES and MF (Fig. 4).
Figure 4. Relative RCSA of the different muscle fibres. RCSA of type I, type IIa, type IIa/x and type IIx in the lumbar ES (white) and MF (black) muscle. Values are presented as mean±SE. ES, erector spinae; MF, multifidus; RCSA, relative cross-sectional area; SE, standard error.
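Editor's note: RCSA as defined in the Methods is a fibre type's mean CSA weighted by its numerical percentage and normalised over all fibre types. A minimal Python sketch of that calculation is given below; the type I values are the reported MF means, but the type II CSAs are rough placeholders for illustration only and will not exactly reproduce the reported RCSA figures.

```python
def rcsa(csa, pct):
    """Relative cross-sectional area per fibre type:
    RCSA_i = (CSA_i * %_i) / sum_j (CSA_j * %_j), expressed in percent."""
    total = sum(csa[t] * pct[t] for t in csa)
    return {t: 100 * csa[t] * pct[t] / total for t in csa}

# Multifidus example: type I CSA and percentage are the reported means;
# the type II CSA values are placeholder estimates, not reported numbers.
csa_mf = {"I": 7439.31, "IIa": 6000.0, "IIax": 4500.0, "IIx": 4600.0}
pct_mf = {"I": 59.07, "IIa": 22.45, "IIax": 9.21, "IIx": 13.89}
print({t: round(v, 1) for t, v in rcsa(csa_mf, pct_mf).items()})
```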
### Frequency distribution
Substantial differences in the frequency distribution of muscle fiber type I were observed (Fig. 5).
Figure 5. Muscle fibre size distribution (in percentage) for type I muscle fibres in the MF (white) and ES (black). Values are presented as mean±SE. ES, erector spinae; MF, multifidus; SE, standard error. *P<0.05 ES vs. MF.
Small type I muscle fibers from 3,000–3,999 µm2 and from 4,000–4,999 µm2 were more prevalent in the ES than in the MF (P=0.0291 and P=0.0458, respectively). A shift in muscle fiber size distribution was observed for type I muscle fibers, with a higher percentage of small fibers in the ES compared to the MF. In the ES, 34% of the muscle fibers were smaller than 5,000 µm2, compared to 21% in the MF.
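Editor's note: the frequency-distribution comparison above amounts to binning each fibre's CSA in 1,000-µm² steps and comparing the resulting percentages between muscles. A minimal sketch, with made-up CSA values, is shown below.

```python
from collections import Counter

def size_distribution(csas, bin_width=1000):
    """Percentage of fibres falling in each CSA bin (e.g. 3000-3999 um2)."""
    counts = Counter((int(c) // bin_width) * bin_width for c in csas)
    n = len(csas)
    return {low: 100 * k / n for low, k in sorted(counts.items())}

def pct_below(csas, threshold=5000):
    """Share of fibres smaller than the given CSA threshold."""
    return 100 * sum(c < threshold for c in csas) / len(csas)

# Made-up example values (um2); real input would be the per-fibre measurements.
example = [3200, 4100, 4800, 5200, 6900, 7400, 8100, 9000]
print(size_distribution(example))
print(pct_below(example))  # -> 37.5 (% of fibres below 5,000 um2)
```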
Based on muscle fiber type characteristics, the present study reveals that the MF and the ES can both be considered postural muscles that provide stability in the lumbar vertebral column, because of their predominance in type I muscle fibers. Based on our results, the MF seems to display muscle fiber type characteristics that tend to be more appropriate to maintain stability of the spine compared to the ES, due to the fact that the MF is characterized by significant larger type I muscle fibers. However, there were no differences in fiber type percentage between both muscles. This resulted in non-significant differences in the RCSA of type I between the ES and the MF. Because we did not demonstrate significant differences in RCSA between ES and MF, we cannot firmly state that there are functional differences between these two muscles based only on muscle fiber type characteristics.
To our knowledge, there are only two studies that have examined differences in muscle fiber type characteristics between ES and MF in healthy subjects using muscle biopsy samples [12, 14]. Thorstensson and Carlson [14] biopsied 16 healthy persons (age 20–30 years), Jørgensen et al. [12] biopsied 10 healthy males (age 21–29 years) both at the level of the L3 vertebra. In contrast to our results, both studies found no significant differences in muscle fiber type characteristics between both muscles, using ATPase staining. In order to investigate muscle fiber type composition, we used immunofluorescence analysis of MHC isoforms, which is a more sensitive and reliable method compared to the traditionally used technique in which myosin ATPase activity is determined. With ATPase staining it is difficult to identify hybrid fibers [16]. Our study showed a good inter-rater reliability for the measurement of CSA and fiber type percentages. Although ICC’s varied from fair to excellent, in much of the outcomes, the SEM was high compared to the mean values. This indicates that the between subject variability was high in both the ES and MF muscle, mainly for the percentage and CSA of type IIax, despite having a fair to excellent inter-rater reliability. This indicates muscle fiber type composition is highly variable between different subjects.
In our study, the muscle biopsies were taken at the level of L4, while the muscle samples in the study of Thorstensson and Carlson [14] and the study of Jørgensen et al. [12] were obtained at the level of L3. This could possibly explain the differences in our results. The MF is best developed in the lower lumbar region, where the volume of fiber bundles is greatest [6]. The differences in biopsy level could contribute to different muscle fiber characteristics.
Other studies investigated paraspinal muscle fiber characteristics in the lumbar ES and MF muscle in cadaveric specimens [12, 13, 20, 21]. However, these samples cannot be considered as healthy tissue, due to the fact that these samples have been collected post-mortem in which protein degradation caused by cellular breakdown and autolytic activity, and structural alterations of muscle tissue cannot be excluded [22]. Rantanen et al. [13] and Hesse et al. [20] found no significant differences in muscle fiber type characteristics between both muscles, while Jørgensen et al. [12] found significantly smaller IIx muscle fibers in the lumbar MF compared to the longissimus and iliocostalis. Moreover, they showed that the number of type I muscle fibers was significantly higher in the longissimus compared to the MF and the iliocostalis, while the number of type IIx muscle fibers was significantly lower in the longissimus [12]. In contrast to these studies, our study showed significantly larger type I muscle fibers in the MF compared to the ES. It could be possible that there are differences between deep and superficial paraspinal muscles, as indicated by Sirca and Kostevcs [21].
In both the ES and MF, type I muscle fibers are predominantly present. These results are in line with the findings of MacDonald et al. [23], who confirmed the postural role of both muscles, indicated by the presence of a large RCSA of type I muscle fibers. However, muscle fiber characteristics are not the only determinant of muscle function: other mechanisms could also play an important role. Mechanical properties, such as pennation angle, fascicle length and proportion of fleshy to tendinous fascicle parts have a major impact on muscle function [24, 25]. Functional diversification could also be influenced by neural control, in which the size of the motor unit, the amount of muscle spindles, and the control by the motor cortex are all possible contributors [26-29].
This is the first study comparing muscle fiber type characteristics between the ES and the MF in healthy subjects and using a multicolour immunofluorescent staining technique to visualise MHC, which is a much more reliable technique compared to the ATPase staining [16]. In contrast to previous studies [12, 14], we found significantly larger type I muscle fibers in the lumbar MF compared to the ES. We did not demonstrate clear differences in RCSA between the ES and MF, which suggests there are no or only small differences in muscle function based on muscle fiber type characteristics. Future studies should focus on observing paraspinal muscle fiber type characteristics in different low back pain populations to unravel possible structural alterations that can contribute to clinical symptoms.
### Supplemental Materials
Our gratitude goes to Dr. Geert Souverijns (Jessa Hospital, Department of Radiology, Belgium) who gave us the opportunity to work with a physician in his department. We also want to thank the department of Pneumology (KULeuven) for the help with the immunofluorescence staining.
### Author Contributions
Conceptualization: AA, SS, JV, BOE, AT, FV. Data acquisition: AA, SS, JV, FV. Data analysis or interpretation: AA, SS, FV. Drafting of the manuscript: AA, SS. Critical revision of the manuscript: AA, SS, JV, BOE, AT, FV. Approval of the final version of the manuscript: all authors.
### Conflicts of Interest
1. Gibbons SGT, Comerford MJ. Strength versus stability part I: concept and terms. Orthopaedic Division Review 2001;43:21-7.
2. Comerford MJ, Mottram SL. Movement and stability dysfunction--contemporary developments. Man Ther 2001;6:15-26.
3. Goel VK, Kong W, Han JS, Weinstein JN, Gilbertson LG. A combined finite element and optimization investigation of lumbar spine mechanics with and without muscles. Spine (Phila Pa 1976) 1993;18:1531-41.
4. Macintosh JE, Valencia F, Bogduk N, Munro RR. The morphology of the human lumbar multifidus. Clin Biomech (Bristol, Avon) 1986;1:196-204.
5. Hansen L, de Zee M, Rasmussen J, Andersen TB, Wong C, Simonsen EB. Anatomy and biomechanics of the back muscles in the lumbar spine with reference to biomechanical modeling. Spine (Phila Pa 1976) 2006;31:1888-99.
6. Rosatelli AL, Ravichandiran K, Agur AM. Three-dimensional study of the musculotendinous architecture of lumbar multifidus and its functional implications. Clin Anat 2008;21:539-46.
7. Bogduk N. A reappraisal of the anatomy of the human lumbar erector spinae. J Anat 1980;131(Pt 3):525-40.
8. Bustami FM. A new description of the lumbar erector spinae muscle in man. J Anat 1986;144:81-91.
9. Schiaffino S, Reggiani C. Molecular diversity of myofibrillar proteins: gene regulation and functional significance. Physiol Rev 1996;76:371-423.
10. Caiozzo VJ. Plasticity of skeletal muscle phenotype: mechanical consequences. Muscle Nerve 2002;26:740-68.
11. Scott W, Stevens J, Binder-Macleod SA. Human skeletal muscle fiber type classifications. Phys Ther 2001;81:1810-6.
12. Jørgensen K, Nicholaisen T, Kato M. Muscle fiber distribution, capillary density, and enzymatic activities in the lumbar paravertebral muscles of young men. Significance for isometric endurance. Spine (Phila Pa 1976) 1993;18:1439-50.
13. Rantanen J, Rissanen A, Kalimo H. Lumbar muscle fiber size and type distribution in normal subjects. Eur Spine J 1994;3:331-5.
14. Thorstensson A, Carlson H. Fibre types in human lumbar back muscles. Acta Physiol Scand 1987;131:195-202.
15. Agten A, Verbrugghe J, Stevens S, Boomgaert L, O Eijnde B, Timmermans A, Vandenabeele F. Feasibility, accuracy and safety of a percutaneous fine-needle biopsy technique to obtain qualitative muscle samples of the lumbar multifidus and erector spinae muscle in persons with low back pain. J Anat 2018;233:542-51.
16. Bloemberg D, Quadrilatero J. Rapid determination of myosin heavy chain expression in rat, mouse, and human skeletal muscle using multicolor immunofluorescence analysis. PLoS One 2012;7:e35273.
17. Mannion AF, Dumas GA, Cooper RG, Espinosa FJ, Faris MW, Stevenson JM. Muscle fibre size and type distribution in thoracic and lumbar regions of erector spinae in healthy subjects without low back pain: normal values and sex differences. J Anat 1997;190(Pt 4):505-13.
18. Ceglia L, Niramitmahapanya S, Price LL, Harris SS, Fielding RA, Dawson-Hughes B. An evaluation of the reliability of muscle fiber cross-sectional area and fiber number measurements in rat skeletal muscle. Biol Proced Online 2013;15:6.
19. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess 1994;6:284-90.
20. Hesse B, Fröber R, Fischer MS, Schilling N. Functional differentiation of the human lumbar perivertebral musculature revisited by means of muscle fibre type composition. Ann Anat 2013;195:570-80.
21. Sirca A, Kostevc V. The fibre type composition of thoracic and lumbar paravertebral muscles in man. J Anat 1985;141:131-7.
22. Ehrenfellner B, Zissler A, Steinbacher P, Monticelli FC, Pittner S. Are animal models predictive for human postmortem muscle protein degradation? Int J Legal Med 2017;131:1615-21.
23. MacDonald DA, Moseley GL, Hodges PW. The lumbar multifidus: does the evidence support clinical beliefs? Man Ther 2006;11:254-63.
24. Ward SR, Tomiya A, Regev GJ, Thacker BE, Benzl RC, Kim CW, Lieber RL. Passive mechanical properties of the lumbar multifidus muscle support its role as a stabilizer. J Biomech 2009;42:1384-9.
25. Stark H, Fröber R, Schilling N. Intramuscular architecture of the autochthonous back muscles in humans. J Anat 2013;222:214-22.
26. Loeb EP, Giszter SF, Saltiel P, Bizzi E, Mussa-Ivaldi FA. Output units of motor behavior: an experimental and modeling study. J Cogn Neurosci 2000;12:78-97.
27. Amonoo-Kuofi HS. The density of muscle spindles in the medial, intermediate and lateral columns of human intrinsic postvertebral muscles. J Anat 1983;136(Pt 3):509-19.
28. Botterman BR, Binder MD, Stuart DG. Functional anatomy of the association between motor units and muscle receptors. Am Zool 1978;18:135-52.
29. Tsao H, Danneels L, Hodges PW. Individual fascicles of the paraspinal muscles are activated by discrete cortical networks in humans. Clin Neurophysiol 2011;122:1580-7.
|
2020-07-08 13:38:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49611103534698486, "perplexity": 9188.575099443577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897027.14/warc/CC-MAIN-20200708124912-20200708154912-00457.warc.gz"}
|
http://www.math.toronto.edu/mein/teaching/MAT240/Week4a.html
|
Odd club problem.
A famous linear algebra puzzle: In a town with $n$ inhabitants, there are $N$ clubs. Each club has an odd number of members, and for any two distinct clubs there is an even number of common members. Prove that $n\ge N$.
Solution: Enumerate the inhabitants by $1,\ldots,n$, as well as the clubs by $1,\ldots,N$. Let $v_1,\ldots,v_N\in\mathbb{Z}_2^n$ be the `club vectors' where for each club one puts a $1$ in the $j$-th entry if $j$ is a member, and $0$ for non-members. We claim that $v_1,\ldots,v_N$ are linearly independent. Thus assume $$(\ast)\ \ \ a_1 v_1+a_2 v_2\ldots +a_N v_N=0,$$ with scalars $a_1,\ldots,a_N\in \mathbb{Z}_2$. For fixed $j$, the dot product $v_i\cdot v_j$ equals the number modulo 2 of common members of clubs $i$ and $j$. It is thus $1$ if $i=j$, and $0$ if $i\neq j$. Hence, taking the dot product of both sides of $(\ast)$ with $v_j$ we obtain $$a_j=0$$ as required. This shows $v_1,\ldots,v_N$ are linearly independent. Since $\dim(\mathbb{Z}_2^n)=n$, it follows that $N\le n$.
Note: In this solution, we use the dot product of vectors in $F^n$, defined similarly to the dot product in $\mathbb{R}^n$: If $u,v$ are vectors with components $$u=(x_1,\ldots,x_n),\ \ v=(y_1,\ldots,y_n)$$ then $$u\cdot v=x_1y_1+\ldots+x_n y_n.$$
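Editor's note: the argument can be checked computationally for any particular family of clubs by row-reducing the club vectors over $\mathbb{Z}_2$. The Python sketch below (not part of the original note) does exactly that for a small example satisfying the hypotheses.

```python
def rank_mod2(vectors):
    """Rank over Z_2 of a list of 0/1 vectors, by Gaussian elimination mod 2."""
    rows = [list(v) for v in vectors]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Three clubs in a town of 4 people: {1,2,3}, {1,2,4}, {1,3,4}.
# Each club has 3 members (odd); any two clubs share 2 members (even).
clubs = [(1, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1)]
print(rank_mod2(clubs) == len(clubs))  # True: the club vectors are independent
```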
|
2017-12-18 04:54:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527158141136169, "perplexity": 121.67383515406667}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948608836.84/warc/CC-MAIN-20171218044514-20171218070514-00763.warc.gz"}
|
http://dynamicsystems.asmedigitalcollection.asme.org/issue.aspx?journalid=117&issueid=935079
|
IN THIS ISSUE
### Research Papers
J. Dyn. Sys., Meas., Control. 2016;138(5):051001-051001-8. doi:10.1115/1.4032687.
Neural networks are powerful tools for black box system identification. However, their main drawback is the large number of parameters usually required to deal with complex systems. Classically, the model's parameters minimize an L2-norm-based criterion. However, when using strongly corrupted data, namely, outliers, the L2-norm-based estimation algorithms become ineffective. In order to deal with outliers and the model's complexity, the main contribution of this paper is to propose a robust system identification methodology providing neuromodels with a convenient balance between simplicity and accuracy. The estimation robustness is ensured by means of the Huberian function. Simplicity and accuracy are achieved by a dedicated neural network design based on a recurrent three-layer architecture and an efficient model order reduction procedure proposed in a previous work (Romero-Ugalde et al., 2013, “Neural Network Design and Model Reduction Approach for Black Box Nonlinear System Identification With Reduced Number of Parameters,” Neurocomputing, 101, pp. 170–180). Validation is done using real data, measured on a piezoelectric actuator, containing strong natural outliers in the output data due to its microdisplacements. Comparisons with other black box system identification methods, including a previous work (Corbier and Carmona, 2015, “Extension of the Tuning Constant in the Huber's Function for Robust Modeling of Piezoelectric Systems,” Int. J. Adapt. Control Signal Process., 29(8), pp. 1008–1023) where a pseudolinear model was used to identify the same piezoelectric system, show the relevance of the proposed robust estimation method, leading to balanced simplicity-accuracy neuromodels.
J. Dyn. Sys., Meas., Control. 2016;138(5):051002-051002-23. doi:10.1115/1.4032742.
Recent demands on improved system efficiency and reduced system emissions have driven improvements in hydraulic system architectures as well as system supervisory control strategies employed in mobile multi-actuator machinery. Valve-controlled (VC) architectures have been in use for several decades and have seen moderate improvements in terms of system efficiency. Further, throttle-less concepts such as displacement-controlled (DC) actuation have been recently proposed and successfully demonstrated efficiency improvements in numerous prototypes (wheel-loaders, excavators, and skid-steer loaders) of different sizes. The combination of electric or hydraulic hybrid systems for energy recovery (for a single actuator) with VC actuation for the rest of the actuators has also been recently deployed by original equipment manufacturers (OEMs) on some excavator models. The combination of DC actuation together with a series hydraulic hybrid actuator for the swing drive has been previously proposed and implemented as part of this work, on a mini-excavator. This combination of highly efficient DC actuation with hydraulic hybrid configuration allows drastic engine downsizing and efficiency improvements of more than 50% compared to modern-day VC-actuated systems. With a conservative, suboptimal supervisory control, it was previously demonstrated that over 50% energy savings with a 50% downsized engine over the standard load-sensing (LS) architecture for a 5-t excavator application. The problem of achieving maximum system efficiency through near-optimal supervisory control (or system power management) is a theoretically challenging problem, and has been tackled for the first time in this work for DC hydraulic hybrid machines, through a two-part publication. In Part I, the theoretical aspects of this problem are outlined, supported by simulations of the theoretically optimal supervisory control as well as an implementable, near-optimal rule-based supervisory control strategy that included a detailed system model of the DC hybrid hydraulic excavator. In Part II, the world's first prototype DC hydraulic hybrid excavator is detailed, together with machine implementation of the novel supervisory control strategy proposed in Part I. The main contributions of Part I are summarized below. Dynamic programming (DP) was employed to solve the optimal supervisory problem, and benchmark implementable strategies. Importantly, the patterns in optimal state trajectories and control histories obtained from DP were analyzed and identified for different working cycles, and a common pattern was found for engine speed and DC unit displacements across different working cycles. A rule-based strategy was employed to achieve near-optimal system efficiency, with the design of the strategy guided by optimal patterns. It was found that the strategy replicates optimal system behavior with the same rule for controlling engine speed for different cycles, but different rules for the primary unit (of the series-hybrid swing drive) for different cycles. Thus, in terms of practical implementation of a rule-based approach, the operator is to be provided with a family of controllers from which one can be chosen so as to have near-optimal system behavior under all kinds of cyclical operation.
J. Dyn. Sys., Meas., Control. 2016;138(5):051003-051003-12. doi:10.1115/1.4032743.
The problem of achieving maximum system efficiency through near-optimal supervisory control (or system power management) in mobile off-highway machines is a theoretically challenging problem. It has been tackled for the first time in this work for displacement-controlled (DC) hydraulic hybrid multi-actuator machines such as excavators, through a two-part publication. In Part I, the theoretical aspects of this problem were outlined, supported by simulations of the theoretically optimal supervisory control (relying on dynamic programming) as well as a novel, implementable rule-based supervisory control strategy (designed to replicate theoretically optimal results). In Part II of the publication, the world's first prototype hydraulic hybrid excavator using throttle-less DC actuation is described, together with machine implementation of the novel supervisory control strategy proposed in Part I. The design choice, or set of component sizes implemented on the prototype, was driven by an optimal sizing study that was previously done. Measurement results from implementation of two different supervisory control strategies are also presented and discussed—the first, a conservative, suboptimal strategy that commanded a constant engine speed and proved that drastic engine downsizing can be performed in excavator and similar applications. The second strategy implemented was the novel, near-optimal rule-based strategy (or the “minimum-speed” strategy) proposed in Part I that exploited all available system degrees-of-freedom, by commanding the minimum-required engine speeds (to meet DC actuator flow requirements) at every instant in time. While the actual engine was not downsized on the prototype excavator, both the single-point and minimum-speed strategies showed that for the aggressive, digging cycles that such machines are typically used for, the DC hydraulic hybrid architecture enables engine operation at or near 50% of maximum engine power without loss of productivity. As described in Part I, actually downsizing the engine by 50% with use of the near-optimal, minimum-speed strategy will enable significant gains in efficiency (in terms of grams of fuel consumed) over standard valve-controlled architectures (55%) as well as DC nonhybrid architectures (25%) in cyclical operation.
J. Dyn. Sys., Meas., Control. 2016;138(5):051004-051004-15. doi:10.1115/1.4032745.
A new set of linearized differential equations governing the relative motion of the inner-formation satellite system (IFSS) is derived with the effects of $J2$ as well as atmospheric drag. The IFSS consists of the “inner satellite” and the “outer satellite”; this special formation configuration endows it with some advantages for mapping the gravity field of the earth. For long-term IFSS in elliptical orbit, the high-fidelity set of linearized equations is more convenient than the nonlinear equations for designing formation control system or navigation algorithms. In addition, to avoid the collision between the inner satellite and the outer satellite, the minimum sliding mode error feedback control (MSMEFC) is adopted to perform real-time control on the outer satellite in the presence of uncertain perturbations from the system and space. The robustness and steady-state error of MSMEFC are also discussed to show its theoretical advantages over traditional sliding mode control (SMC). Finally, numerical simulations are performed to check the fidelity of the proposed equations. Moreover, the efficacy of the MSMEFC in controlling the IFSS with high precision is demonstrated.
J. Dyn. Sys., Meas., Control. 2016;138(5):051005-051005-20. doi:10.1115/1.4032461.
Time delay is a common phenomenon in robotic systems due to computational requirements and communication properties between or within high-level and low-level controllers as well as the physical constraints of the actuator and sensor. It is widely believed that delays are harmful for robotic systems in terms of stability and performance; however, we propose a different view that the time delay of the system may in some cases benefit system stability and performance. Therefore, in this paper, we discuss the influences of the displacement-feedback delay (single delay) and both displacement and velocity feedback delays (double delays) on robotic actuator systems by using the cluster treatment of characteristic roots (CTCR) methodology. Hence, we can ascertain the exact stability interval for single-delay systems and the rigorous stability region for double-delay systems. The influences of controller gains and the filtering frequency on the stability of the system are discussed. Based on the stability information coupled with the dominant root distribution, we propose one nonconventional rule which suggests increasing time delay to certain time windows to obtain the optimal system performance. The computation results are also verified on an actuator testbed.
J. Dyn. Sys., Meas., Control. 2016;138(5):051006-051006-7. doi:10.1115/1.4032505.
There is a constant interest in the performance capabilities of active suspensions without the associated shortcomings of degraded fuel economy. To this effect, electrodynamic dampers are currently being researched as a means to approach the performance of a fully active suspension with minimal or no energy consumption. This paper investigates the regenerative capabilities of these dampers during fully active operation for a range of controller types—emphasizing road holding, ride, and energy regeneration. A model of an electrodynamic suspension is developed using bond graphs. Two model predictive controllers (MPCs) are constructed: standard and frequency-weighted MPCs. The resulting controlled system is subjected to International Organization for Standardization (ISO) roads A–D and the results are presented. For all of the standard MPC weightings, the suspension was able to recover more energy than is required to run the suspension actively. All of the results for optimal energy regeneration occurred on the standard Pareto tradeoff curve for ride comfort and road holding. Frequency weighting the controller increased suspension performance while also regenerating 3–12% more energy than the standard MPC.
J. Dyn. Sys., Meas., Control. 2016;138(5):051007-051007-9. doi:10.1115/1.4032747.
This paper considers observer design problem of singularly perturbed systems (SPSs) with multirate sampled and delayed measurements. The outputs are classified into two sets which are measured at different sampling rates and subject to transmission delays. The error system is modeled as a continuous-time SPS with a slow-varying delay and a fast-varying delay. A new Lyapunov functional taking the delay properties into account is constructed. Based on the Lyapunov–Krasovskii functional, sufficient conditions for stability of the error system are proposed by which an observer design method is proposed. A realistic example is used to illustrate the obtained results.
### Technical Brief
J. Dyn. Sys., Meas., Control. 2016;138(5):054501-054501-7. doi:10.1115/1.4032744.
In this paper, a novel algorithm for indirect tire failure indication is described. The estimation method is based on measuring changes in the lateral dynamics behavior resulting from certain types of tire failure modes including excessive deflation or significant thread loss in a combination of tires. Given the fact that both failures will notably affect the lateral dynamics behavior, quantifying these changes constitutes the basis of the estimation method. In achieving this, multiple models and switching method are utilized based on lateral dynamics models of the vehicle that are parametrized to account for the uncertainty in tire pressure levels. The results are demonstrated using representative numerical simulations.
Topics: Pressure, Vehicles, Tires
|
2017-12-13 22:50:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38292428851127625, "perplexity": 2000.8441502922446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948531226.26/warc/CC-MAIN-20171213221219-20171214001219-00565.warc.gz"}
|
http://jayvijay.co/t58zaj/chemical-properties-of-alkaline-earth-metals-pdf-7047e1
|
# chemical properties of alkaline earth metals pdf
Alkaline metals and earth alkali metals properties electronic 3 1 the periodic table alkaline earth metals definition alkaline earth metals What Are The Properties Of Alkaline Earth MetalsAlkaline Earth MetalsGeneral Characteristics Of Pounds Alkaline Earth MetalsPpt Look At The Following Patterns What Are BasedGeneral Characteristics Of Pounds Alkaline Earth MetalsIfas ⦠This group of elements includes beryllium, magnesium, calcium, strontium, barium, and radium.The elements of this group are quite similar in their physical and chemical properties. Redrawn from references 4 and 5. Ca + Cl 2 â CaCl 2. Petrocelli, A.W. They readily lose their two outermost electrons to form cations with charge +2. Alkali Metals' Chemical Properties, information contact us at info@libretexts.org, status page at https://status.libretexts.org, not reported; unstable at room temperature (298 K). Presumably Ca, Sr, Ba, and Ra would react this way as well, although due to their higher reactivity the reaction is likely to be violent. Electropositive character: The alkaline earth metals show electropositive character which increases from Be to Ba. The sulfates decompose by liberating SO3 according to the reaction, $\sf{MSO_4(s)~~\overset{\Delta}{\longrightarrow}~MO(s)~~+~~SO_3(g)}$, Note that another possible decomposition mode is, $\sf{MSO_4(s)~~\overset{\Delta}{\longrightarrow}~MO(s)~~+~~ {\textstyle \frac{1}{2}}~O_2(g)~~+~~SO_2(g)}$. Beryllium and to a lesser extent Magnesium form polar highly covalent compounds. Because Beryllium only possesses two valence electrons its compounds also tend to be electron deficient and Bridging Be-X-Be bonds are common. Legal. Müller, M.; Karttunen, A. J.; Buchner, M. R., Speciation of Be2+ in acidic liquid ammonia and formation of tetra- and octanuclear beryllium amido clusters. Beryllium and magnesium do not react with water at room temperature, although they do dissolve in acid and react with steam at high temperatures and pressures to give the oxide (which can be thought of as a dehydrated hydroxide). > Locate semiconductors, halogens, and noble gases in the periodic table. The Alkaline Earths as Metals OUTLINE 1.1. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. Chemical properties of alkaline earth metals . Barium 18 1.2.6. alkali elements Li, Na, K, Rb, Cs, Fr In contrast, MOFs built on alkaline earth metals are comparatively much less explored. Beryllium hydride even forms an adduct with two BH3 to give the a structure in which a central Be is linked to the terminal BH2 groups by 3-center-2-electron Be-H-B bonds, as shown in Scheme $$\sf{\PageIndex{V}}$$.7. 7. However, that reaction is not the one asked for by the prompt since it involves the liberation of two different molecular gases (O2 and SO2). Calcium 12 1.2.4. 5. Main Group Al, Ga, In, Sn, Tl, Pb, Bi, Po. 8.3.2. 6. Calcium, strontium, barium, (and (presumably radium), which react with water to liberate dihydrogen gas and form hydroxides. Metals And Their Properties- Physical and Chemical All the things around us are made of 100 or so elements. The alkaline earth metals are less reactive than the alkali metals. E. Horwood: New York, 1990, pg. The consumption of water in this reaction forms the basis for the use of calcium hydride as a drying agent for organic solvents. Scheme $$\sf{\PageIndex{II}}$$. Alkali metal carbonates and nitrates thermally decompose to release CO2 and a mixture of NO2 & O2, respectively. 
An alkaline earth metal atom is smaller in size compared to its adjacent alkali metals. Unlike the heavier alkaline earth metals, Beryllium does not react directly with hydrogen and the resulting hydride, though still nucleophilic, acts as a polar covalent hydride and hydrolyzes relatively slowly. (e) Like alkali metals, alkaline earth metals predominantly form ionic bonds in their compounds but are less ionic than alkali metals. The elements have very similar properties: they are all shiny, silvery-white, somewhat reactive metals at standard temperature and pressure.. $\sf{M(s)~~+~~2~H_2O(l)~\rightarrow~M^{2+}~~+~~2~OH^-~~+~~H_2(g)~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Ca,~Sr,~Ba,~and~presumably~Ra)}$. They are a very different family, even Like the alkali metals, Ca, Sr, and Ba dissolve in liquid ammonia to give solutions containing solvated electrons, although these have not been as heavily studied as those of the alkali metals. Properties of IRMOF-14 and its analogues M-IRMOF-14 (M = Cd, alkaline earth metals): Electronic structure, structural stability, chemical bonding, and optical properties 1. 2. $\sf{2~M(s)~~+~~O_2~\longrightarrow~~2~MO(s)~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be,~Mg,~Ca,~Sr,~Ba,~and~presumably~Ra)}$, $\sf{M(OH)_2(aq)~~+~~H_2O_2(aq)~\longrightarrow~~MO_2(s)~~+~~2~H_2O~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Mg,~Ca,~Sr,~Ba,~and~presumably~Ra)}$. 5.2 Alkaline earth metals. These elements were classified by Lavoisier in to metals and non-metals by studying their properties. Topic: Element Families SciLinks code: HK4046 Families of Elements > Locate alkali metals, alkaline-earth metals, and transition metals in the periodic table. Toney, J.; Stucky, G. D. J. Organomet. One particularly remarkable structure is that of basic beryllium acetate in which a central oxygen bridges four Be atoms, as shown in Scheme $$\sf{\PageIndex{IV}}$$ along with that of Be4(NO3)6O, which is thought to possess an anlogous structure. This figure is a copy of that used in section 8.3.2. James G. Speight, in Natural Water Remediation, 2020. The typical explanation for this trend involves the mechanism of carbonate decomposition depicted in Scheme $$\sf{\PageIndex{II}}$$. What are the Properties of the Alkaline Earth Metals? The alkaline earth metals tend to form +2 cations. 1. 2. Evolutionary structure searches are coupled with density functional theory calculations to predict the most stable stoichiometries and structures of beryllium and barium polyhydrides, MHn with n > 2 and M = Be/Ba, under pressure. The classic example of alkaline earth cations' ability to polarize anions involves the decomposition of the metal carbonates. Superoxides. Alkaline-earth metal - Alkaline-earth metal - Physical and chemical behaviour: The alkaline-earth elements are highly metallic and are good conductors of electricity. Write a decomposition reaction that involves the liberation of a single molecular gas from the sulfate to give an oxide. M3^~açxÏ¿ÉWfÇ;
Solid State Communications 1988, 67(5), 491-494. The structure of BeCl2 is by Ben Mills - Own work, Public Domain, commons.wikimedia.org/w/inde...?curid=4759802, who rendered it from X-ray data reported in Rundle, R. E.;Lewis, P.H. Properties of the Alkaline Earth Metals 4 1.2.1. As discussed in section 8.2.1. Thus in liquid ammonia Be forms species with bridging Be-N-Be bonds like the tetrameric cluster shown in Scheme $$\sf{\PageIndex{III}}$$. Standard reduction potentials in volts for reduction of selected aqueous cations in acid solution. Formation of an adduct between BeH2 and two BH3 (in the form of B2H6). Properties of the Alkaline Earth Metals . Table $$\sf{\PageIndex{1}}$$. 1. They also form clusters in which the lone halogen lone pairs are used to bridge multiple Mg centers, as illustrated by the complex in Scheme $$\sf{\PageIndex{III}}$$. Figure $$\sf{\PageIndex{3}}$$. Chemical properties. Magnesium is the second most abundant metallic element in the sea, and it also occurs as carnallite in earth crust. At higher temperatures the polymeric chains dissociate into Be2Cl4 dimers and BeCl2 monomers. Alkaline earth metals are good reducing agents that tend to form the +2 oxidation state. The structure of BeB2H8 may also be thought of as an adduct between Be2+ and two BH4- ligands. $\sf{M(s)~~+~~2~H^+~~\rightarrow~M^{2+}~~+~~H_2(g)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be~and~Mg)}$, $\sf{M(s)~~+~~H_2O(g)~~\overset{high~T,~P}{\longrightarrow}~MO(s)~~+~~H_2(g)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be~and~Mg)}$. Alkaline earth metals: A group of chemical elements in the periodic table with similar properties: shiny, silvery-white, somewhat reactive at standard temperature and pressure. 6. Alkaline earth metals react with halogens and form halides. $\sf{M(s)~~+~~H_2(g)~~\rightarrow~MH_2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Mg, Ca,~Sr,~Ba,~and~presumably~Ra)}$. Group 2: the alkaline earth metals Physical Properties Metals Halides, oxides, hydroxides, salts of oxoacids Complex ions in aqueous solution Complexes with ⦠We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. 1952, 20(1): 132-134. 1971, 28, 5. Alkaline earth metal sulfates undergo decomposition reactions similar to those of the carbonates and nitrates. The heavier alkaline earth metals (Mg through Ba) also reduce hydrogen to give hydrides. ; Johnson, Q.C. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. The hydrolysis of BeO and MgO usually require high temperatures and pressures. Similarly, the beryllium halides are network covalent solids consisting of polymer chains of held together by Be-X-Be bridges, as may be seen from the structure of BeCl2 in Figure $$\sf{\PageIndex{2}}$$. The alkaline earth metals are all reactive elements, losing their 2 outer electrons to form a 2+ ion with non-metals. Their reducing character increases down the group. Group 2A: Alkaline Earth Metals R11 Atomic Properties ⢠Alkaline earth metals have an electron configuration that ends in ns 2. ⢠The alkaline earth metals are strong reducing agents, losing 2 electrons and form-ing ions with a 2 charge. Cu-Be alloys are used in the preparation of high strength springs. 
The potentials for reduction of Ca2+, Sr2+, and Ba2+ to the metal of ~ -3 V are even similar to those of the alkali metals. In compounds formed between Be and H bridging two-center-three electron Be-H-Be bonds are common. Complex factors govern decomposition of the nitrates but, as shown in Table $$\sf{\PageIndex{1}}$$, the decomposition of the alkaline earth carbonates shows that on going down the group carbonate decomposition requires increasingly higher temperatures. Scheme $$\sf{\PageIndex{IV}}$$. The reaction is dependent on the presence of the Lewis base donor ethers like Et2O or THF, which coordinates the Mg2+, completing its coordination sphere, giving tetrahedral complexes like that depicted in Scheme $$\sf{\PageIndex{II}}$$. Hydrogen's Chemical Properties, these alkaline earth hydrides are ionic salts of hydride ion. Alkali Metals' Chemical Properties.Because of their tendency to form +2 cations, the alkaline earth metals are good reducing agents. Alkali Metals' Chemical Properties.Because of their tendency to form +2 cations, the alkaline earth metals are good reducing agents. You should remember that there is a separate group called the alkaline earth metals in Group Two. Strontium 15 1.2.5. IA Metals: Alkali Metals INTRODUCTION: The alkali metals are a group in the periodic table consisting of the chemical elements lithium (Li), sodium (Na), potassium (K), rubidium (Rb), caesium (Cs). Models explaining how alkaline earth metal cations facilitate carbonate decomposition. (A) Structure of BeCl2 consisting of (B) polymeric chains of edge-linked BeCl4 tetrahedra. Alkali and alkaline earth metals are respectively the members of group 1 and group 2 elements. Divalent Alkaline Earth cations polarize anions. 2. They have a gray-white lustre when freshly cut but tarnish readily in air, particularly the heavier members of the group. $\sf{MO~~+~~H_2O~\longrightarrow~~M(OH)_2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be,~Mg,~Ca,~Sr,~Ba,~and~presumably~Ra)}$. 8.6.1: Alkaline Earth Metals' Chemical Properties, https://chem.libretexts.org/@app/auth/2/login?returnto=https%3A%2F%2Fchem.libretexts.org%2FBookshelves%2FInorganic_Chemistry%2FMap%253A_Inorganic_Chemistry_(Miessler_Fischer_Tarr)%2F08%253A_Chemistry_of_the_Main_Group_Elements%2F8.06%253A_Group_2_The_Alkaline_Earth_Metals%2F8.6.01%253A_8.4.2._Alkaline_Earth_Metals'_Chemical_Properties, 8.6.1: Preparation and General Properties of the Alkaline Earth Elements. Phys. Unlike the alkali metal, all the alkaline earth metals react with oxygen to give oxides of formula MO, although the peroxides of the heavier alkaline earths can be formed by solution phase precipitation of the metal cation with a source of peroxide anion (O22-). Like molecular compounds, Gringard reagents undergo ligand substitution reactions in solution according to the Schlenk equilibrium. Radium 19 The alkaline earth metals comprise Group 2 of the periodic table and include the elements Be, Mg, Ca, Sr, Ba and Ra. $\sf{M^{2+}(aq)~~+~~2~OH^-~~\rightarrow~M(OH)_2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be,~Mg, Ca,~Sr,~Ba,~and~presumably~Ra)}$. The alkaline earth metals are six chemical elements in group 2 of the periodic table.They are beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra). Slurried or finely divided barium have been known to react with explosive force when mixed with such halogenated hydrocarbons as carbon tetrachloride, trichlorotrifluoroethane, fluorotrichloromethane, tetrachloroethylene, trichloroethylene, etc. 
Occurrence of Alkaline Earth Metal. The reactivity of these elements increases on going down the group. Rank the alkaline earth metal sulfates in order of increasing decomposition temperature. The structure of BeH2 is by Ben Mills - Own work, Public Domain, commons.wikimedia.org/w/inde...?curid=3930832, who rendered it from X-ray data reported in Smith, G.S. Chem. Since the alkali metals are the most electropositive (the least electronegative) of elements, they react with a great variety of nonmetals. (f) The alkaline earth metals are less reducing than alkali metals. The oxides of the heavier alkaline earth metals react with water to give the hydroxides. This video is highly rated by Class 11 students and has been viewed 1049 times. 2. although unlike the alkali metals the reduction is slow and usually liberates hydrogen without fire or explosion. ; Smith, D. K.; Cox, D. E.; Snyder, R. L.; Zhou, R-S.; Zalkin, A. The alkaline earths possess many of the characteristic properties of metals.Alkaline earths have low electron affinities and low electronegativities.As with the alkali metals, the properties depend on the ease with which electrons are lost.The alkaline earths have two electrons in the outer shell. Ba + I 2 â BaI 2. As can be seen from Figure $$\sf{\PageIndex{2}}$$, Alkaline earth metals' possess large negative M2+/0 standard reduction potentials which strongly favor the +2 cation. Beryllium 4 1.2.2. In terms of Mg the influence of covalency is evident from the structures of the Gringard reagents Mg forms on reaction between the metal and alkyl halides. Beryllium hydride, BeH2, consists of a solid lattice in which tetrahedrally coordinated Be are connected by Be-H-Be bonds, as shown in Figure $$\sf{\PageIndex{3}}$$. Scheme $$\sf{\PageIndex{I}}$$. Oct 14, 2020 - Chemical Properties of Alkaline Earth Metals Class 11 Video | EduRev is made by best teachers of Class 11. Ammonia and alkaline earth metals. $\sf{M(s)~~+~~X_2^-~~\rightarrow~MX_2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Be,~Mg, Ca,~Sr,~Ba,~and~presumably~Ra; X~=~F,~Cl,~Br,~I)}$, $\sf{M(OH)_2~~+~~2~HX~~\rightarrow~MX_2~~+~~2~H_2O~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Mg, Ca,~Sr,~Ba,~and~presumably~Ra; X~=~F,~Cl,~Br,~I)}$, $\sf{MO~~+~~2~HX~~\rightarrow~MX_2~~+~~H_2O~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(M~=~Mg, Ca,~Sr,~Ba,~and~presumably~Ra; X~=~F,~Cl,~Br,~I)}$. This is due to the fact that , the elements of group II A are packed more tightly due to the greater nuclear charge and smaller size. Watch the recordings here on Youtube! As the alkaline earth metal cation becomes larger on going from Be to Ba, its ability to polarize the carbonate anion is lessened, making it more difficult to form the oxide. Part 2: Alkaline Earth Metals 53 Introduction to Alkaline Earth Metals 53 The Discovery and Naming of Alkaline Earth Metals 54 5 Beryllium 56 The Astrophysics of Beryllium 57 Beryllium on Earth 59 The Chemistry of Beryllium 61 Reducing the Critical Mass in Nuclear Weapons 62 Beryllium Is Important in Particle Accelerators 64 chemical properties of alkaline earth metals The chemical reactions of the alkaline earth metals are quite comparable to that of alkali metals. Alkaline Earth Metals. Like other metal oxides containing low oxidation state metals, the alkaline earth oxides are basic. (A) Structure of basic beryllium acetate, Be4(OAc)6O, (B) in which the central OBe4 tetrahedron is circumscribed to make it easier to see that the structure consists of a OBe46+ tetrahedron in which the Be---Be edges are linked by bridging acetate ligands. 
Scheme $$\sf{\PageIndex{III}}$$. chemical properties as magnesium and calcium because they have the same number of electrons in their outer shell. (A) Ionic model in which the negative charge buildup is stabilized by interaction between the dication and one of the carbonate oxygens. Beryllium is sufficiently hard to scratch glass, but barium is only slightly harder than lead. For more information contact us at info@libretexts.org or check out our status page at https://status.libretexts.org. But due to smaller size and greater charge and hence high ionisation energy, these are much less reactive than the corresponding alkali metals. Wiley, 2005. doi:10.1002/0471740039.vec2421. (B) Semi-covalent representation of the same interaction, now depicted as a M=O bond (which should not be taken to imply that such a bond actually exists). The elements in group 1 are called the alkali metals. Consequently their hydroxides are more commonly prepared by addition of base to a soluble salt. Be is used in the manufacture of alloys. Reactivity towards the halogens: All the alkaline earth metals combine with halogen ⦠Group 1 is on the left-hand side of the periodic table The alkali metals share similar physical and chemical properties . The alkaline earth metals (beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra)) are a group of chemical elements in the s-block of the periodic table with very similar properties: 1. shiny 2. silvery-white 3. somewhat reactive metals at standard temperature and pressure 4. readily lose their two outermost electrons to form cations with a 2+ charge 5. low densities 6. low melting points 7. low boiling poi⦠Have questions or comments? In this chapter, there are various important properties that you need to learn such as electronic configuration, ionization enthalpy, hydration enthalpy, chemical properties, etc. Chemical properties : (a) They are less reactive than alkali metals. However, making use of earthâabundant members of the alkaline earth metal families can introduce unique advantages (e.g., cost and environmental safety) and soughtâafter properties (e.g., gas adsorption and proton conduction). ⢠Because radium is luminous, it was once used to make the hands and numbers on watches glow in the dark. Lithium (Li) Melting Point:453.69K/ 180.54°C Boiling Point:1615K/ 1342°C Density:0.534g/cm³ Atomic Mass:6.94 Atomic Number:3 Sodium (Na) Melting Point:370.87K/ 97.72°C Boiling Point:1156K/ 883°C Density:0.968g/cm³ Atomic Mass:22.99 Atomic Number:11 Chemical properties of all Tetrameric Be cluster formed in liquid ammonia. 3. Thus they react with water and other electrophiles. (A) Structure of BeH2 consisting of (B) a network of BeH4 tetrahedra linked via 3-center-2-electron Be-H-Be bonds. Missed the LibreFest? Formation energies, chemical bonding, electronic structure, and optical properties of metalâorganic frameworks of alkaline earth metals, AâIRMOF-1 (where A = Be, Mg, Ca, Sr, or Ba), have been systemically investigated with DFT methods.The unit cell volumes and atomic positions were fully optimized with the PerdewâBurkeâErnzerhof functional. (A) Halogens like Cl can bridge multiple metal centers via their lone pairs, allowing for the formation of species like (B) "(RMgCl)2(MgCl2)2". 
The expected decomposition order of the alkaline earth metal sulfates along with known decomposition temperatures is, $\sf{\underset{580~^{\circ}C}{BeSO_4}~<~\underset{895~^{\circ}C}{MgSO_4}~<~\underset{1149~^{\circ}C}{CaSO_4}~<~\underset{1374~^{\circ}C}{SrSO_4}~<~BaSO_4~<~RaSO_4}$. This is because both Be and Li can form compounds with considerable covalent character and, as might be expected from their relative paucity of electrons, they share much in common with the electron deficient row 13 metalloids B and Al. Figure $$\sf{\PageIndex{2}}$$. Uses of alkaline earth metals - definition Some important uses of alkaline earth metals are: 1. ) also reduce hydrogen to give the hydroxides and Chemical Properties: ( a ) of. 28 ( 6 ), 1598-1605 - definition Some important uses of earth! Gringard reagents undergo ligand substitution reactions in solution according to the Schlenk..: the alkaline earth metals are the elements in group 1 is on the side... There is a separate group called the alkali metals acid solution of elements, losing their outer... Alkaline earth metals tend to form a 2+ ion with non-metals with the alkali metals 2 the... 21 ), which react with water to liberate dihydrogen gas and form hydroxides of BeO and MgO require... Carbonate decomposition of these elements increases on going down the group through Ba ) also reduce hydrogen to give oxide! With charge +2 also reduce hydrogen to give an oxide Mg ( s ) ~~+~~R-X~~\overset { THF, }... Beryllium and to a lesser extent magnesium form polar highly covalent compounds National Foundation! Make the hands and numbers on watches glow in the periodic table the alkali metals ' Chemical Properties.Because their... Is denser than alkaline earth metals are the elements that correspond to group 2 of carbonates! A soluble salt important uses of alkaline earth metals are good reducing agents standard reduction potentials in for! The liberation of a single molecular gas from the sulfate to give the hydroxides alkaline... K. ; Cox, D. E. ; Snyder, R. L. ;,... The Schlenk chemical properties of alkaline earth metals pdf, ( and ( presumably radium ), 491-494 periodic table character increases. Tends to form the +2 oxidation state metals, the alkaline earth metals ( Mg through Ba ) also hydrogen. Hands and numbers on watches glow in the same period to give hydrides compounds formed Be... Nitrates thermally decompose to release CO2 and a mixture of NO2 & O2, respectively the alkaline earth react! Lavoisier in to metals and non-metals by studying their Properties is highly by. And BeCl2 monomers ) structure of BeB2H8 may also Be thought of as an acid J. ;,... There is a copy of that used in the dark of BeO and MgO usually require high and. Beso4, MgSO4, CaSO4, and it also occurs as carnallite in earth crust extent... Of drawing an overly rigorous distinction between elements as metals, the alkaline earth metals are: 1 and! Deficient and Bridging Be-X-Be bonds are common ) polymeric chains of edge-linked BeCl4 tetrahedra periodic the! Like molecular compounds, Gringard reagents undergo ligand substitution reactions in solution to. Is luminous, it was once used to make the hands and numbers on watches in. Heavier members of the metal carbonates under grant numbers 1246120, 1525057, and 1413739 ( least... Liberate dihydrogen gas and form hydroxides of ( B ) a network of tetrahedra! Lose their two outermost electrons to form +2 cations Mg is used to make the hands and numbers watches! 
) polymeric chains of edge-linked BeCl4 tetrahedra beryllium and to a lesser extent form. More commonly prepared by addition of base to a lesser extent magnesium form polar covalent bonds rather than ionic.! Give the hydroxides the basis for the use of calcium hydride as drying. Agents that tend to form a 2+ ion with non-metals this figure a... Are: 1 the reactivity of these elements were classified by Lavoisier in to metals alloys! 2+ ion with non-metals highly rated by Class 11 5 ), which react with to! And MgO usually require high temperatures and pressures and 1413739: New York, 1990, pg possesses valence! E. Horwood: New York, 1990, pg { I } } \ ) V }! Halogens and form hydroxides cations in acid solution Properties, these are much less explored Be-X-Be bonds are.. Carbonate oxygens and it also occurs as carnallite in earth crust the consumption of water in this reaction forms basis! Solution according to the Schlenk equilibrium BeCl4 tetrahedra is the second most abundant metallic element the... Electropositive ( the least electronegative ) of elements, they react with a great variety of nonmetals the oxides the... Should remember that there is a copy of that used in the preparation of high strength springs ammonia as. In compounds formed between Be and H Bridging two-center-three electron Be-H-Be bonds \longrightarrow } ~R-Mg-X } \.! Hard to scratch glass, but barium is only slightly harder than lead J. ; Stucky, G. D. Organomet., G.D. Considine ( Ed. ) the classic example of alkaline earth metals show character! Main group Chemistry Mg ( s ) ~~+~~R-X~~\overset { THF, catalytic~I_2 } { \longrightarrow ~R-Mg-X! 1 } chemical properties of alkaline earth metals pdf \ ) metals Class 11 Video | EduRev is made by best teachers of Class 11 |... The hydroxides noble gases in the same period Video is highly rated by 11! National Science Foundation support under grant numbers 1246120, 1525057, and metalloids, L.! Bh4- ligands also reduce hydrogen to give the hydroxides, which react with a variety... Is stabilized by interaction between the dication and one of the metal carbonates organic solvents BeSO4, MgSO4,,. Ionisation energy, these are much less explored { 2 } } \ ] NO2 & O2,...., which react with water to give the hydroxides oxides of the alkaline earth metals are comparatively much reactive. Smaller in size compared to its adjacent alkali metals ) ionic model in which the negative charge buildup is by. Increases on going down the group 1990, pg the left-hand side of the carbonates nitrates! In earth crust Cox, D. E. ; Snyder, R. L. ; Zhou, R-S. ; Zalkin,.... 2 of the metal carbonates and nitrates '' Grignard reagent in THF solution earth hydrides ionic! Libretexts content is licensed by CC BY-NC-SA 3.0 metals reducing agents occurs as carnallite in earth crust scratch glass but... Than the corresponding alkali metals metal oxides containing low oxidation state high and... For BeSO4, MgSO4, CaSO4, and it also occurs as carnallite in earth crust..! Is on the left-hand side of the alkaline earth metals show electropositive character which increases from Be to.... 14, 2020 National Science Foundation support under grant numbers 1246120, 1525057, and SrSO4 from. Periodic table â Mg 3 N 2 + H 2 in group two in,,! Are basic occurs as carnallite in earth crust and numbers on watches glow in the preparation of high springs. Compounds, Gringard reagents undergo ligand substitution reactions in solution according to the Schlenk equilibrium molecular! 
Heavier alkaline earth metals regarded as reducing agents alkali metal carbonates with a great of. They are less reactive than alkali metals electropositive character: the alkaline earth metals in the form of )... Presumably radium ), which react with water to liberate dihydrogen gas and form hydroxides cations, the earth!, and it also occurs as carnallite in earth crust how alkaline earth metals are the most (! Metals react with a great variety of nonmetals \PageIndex { III } } \ ) physical and Properties. ( f ) the alkaline earth metals are good reducing agents bonds are.!, MgSO4, CaSO4, and SrSO4 are from Massey, A. G., main group,. And to a soluble salt IV } } \ ) form polar covalent bonds rather than ionic ones hence. Its adjacent alkali metals in size chemical properties of alkaline earth metals pdf to its adjacent alkali metals Al,,. Barium, ( and ( presumably radium ), 491-494 monomeric RMgX '' Grignard in. Decomposition reaction that involves the liberation of a single molecular gas from the to! Chemical Properties also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, 1413739! ; Snyder, R. L. ; Zhou, R-S. ; Zalkin chemical properties of alkaline earth metals pdf a buildup is stabilized interaction! ) of elements, they react with water to give the hydroxides D. E. ;,!, G. D. J. Organomet since the alkali metals, Gringard reagents undergo ligand substitution reactions in according. 11 Video | EduRev is made by best teachers of Class 11 electron Be-H-Be bonds f ) the alkaline metals! Has been viewed 1049 times chains dissociate into Be2Cl4 dimers and BeCl2 monomers water in this reaction the. Increasing decomposition temperature slow and usually liberates hydrogen without fire or explosion should remember there. Which increases from Be to Ba in which the negative charge buildup is stabilized by interaction between the dication one! An adduct between Be2+ and two BH4- ligands water in this reaction forms the basis for the use calcium. Magnesium and beryllium demonstrates the danger of drawing an overly rigorous distinction elements. E. ; Snyder, R. L. ; Zhou, R-S. ; Zalkin,.. Of Class 11 students and has been viewed 1049 times, 11 ( 21,! Information contact us at info @ libretexts.org or check out our status page https... Earth metal sulfates in order of increasing decomposition temperature tends to form polar highly covalent.!, pg CC BY-NC-SA 3.0 adduct between Be2+ and two BH4- ligands ), 5415-5422 electropositive ( the least )! The polymeric chains dissociate into Be2Cl4 dimers and BeCl2 monomers greater charge and high... Formation of an adduct between BeH2 and two BH4- ligands into Be2Cl4 dimers and monomers! Members of the group K. ; Cox, D. K. ; Cox, D. K. ; Cox D.! Than the corresponding alkali metals share similar physical and Chemical Properties: ( a structure... { \longrightarrow } ~R-Mg-X } \ ) and non-metals by studying their Properties by. These elements increases on going down the group 3 } } \ ) beryllium is sufficiently chemical properties of alkaline earth metals pdf scratch. Elements as metals, the alkaline earth hydrides are ionic salts of hydride ion rather than ionic ones 1413739. Oxidation state { III } } \ ) RMgX '' Grignard reagent in solution.
|
2021-03-04 17:53:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7576155066490173, "perplexity": 5553.454108694176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369512.68/warc/CC-MAIN-20210304174506-20210304204506-00528.warc.gz"}
|
https://tex.stackexchange.com/questions/421801/ways-to-modify-chapter-titles-in-koma-class
|
# Ways to modify chapter titles in KOMA-class
I am using KOMA-class. Let's say I want my chapter titles to look like this:
However the closest thing I can get is this:
The code I used is this:
\documentclass{scrbook}
\usepackage{kantlipsum} %providing dummy text
\RedeclareSectionCommand[style=section,indent=0pt,beforeskip=2\baselineskip]{chapter}
\setkomafont{chapter}{\normalfont\large\scshape}
\renewcommand*{\raggedsection}{\centering}
\begin{document}
\kant[1]
\chapter{An Interesting Chapter Title}
\kant[2]
\begin{center}
\vspace{\baselineskip}
\\
\rule{2em}{1pt}
\end{center}
\noindent\kant[3]
\end{document}
So far I have managed to change:
1. The font for the chapter title.
2. Centering.
3. Not starting a new page.
4. Vertical space before the title.
But I don't know how to add characters or even a new line after the actual chapter title is printed.
I would also like to know what further ways, beyond those I already used, the KOMA-classes provide to change chapter titles (whether it makes sense to use them is another matter).
Edit: As schtandard remarked, one has to decide how the title should look when it takes several lines. So let's see, for example, if we can get the following to work:
My code (which is certainly terrible):
\begin{center}
\vspace{\baselineskip}
\scshape\large
\parbox{11cm}{\centering 1. An Interesting Chapter Title Which is Way Too Long to Fit in One Line}
\\[\baselineskip]
\rule{2em}{1pt}
\end{center}
Note that here the box has a fixed width of 11cm, which is fine for taking the picture, but in practice one should be able to specify a maximum width so that the box becomes narrower if the title is short.
• Do you want to change the section or chapter titles? – Johannes_B Mar 18 '18 at 9:59
• @Johannes_B In my current document, it's the chapter titles, but I guess it will be rather similar. – John Dorian Mar 18 '18 at 10:02
• Do you only have chapter headings that fit in one line or do some of them break into several lines? If so, where should the ornaments go in that case? – schtandard Mar 18 '18 at 10:44
• @schtandard Good question, I did not think about that one yet. To make it more difficult, let's say the two lines of the chapter title should be put in a box with a defined maximal width and the ornaments should be left and right to that box, vertically centered. (Just trying to see what is possible here.) – John Dorian Mar 18 '18 at 10:51
• Then how do you want your titles to appear in the ToC? – remco Mar 18 '18 at 10:59
If you use style=section for chapters then you have to redefine \sectionlinesformat to change the layout for chapter titles:
\documentclass{scrbook}
\usepackage{kantlipsum} %providing dummy text
\RedeclareSectionCommand[style=section,indent=0pt,beforeskip=-2\baselineskip]{chapter}
\setkomafont{chapter}{\normalfont\large\scshape}
\renewcommand*{\raggedchapter}{\centering}
\usepackage{varwidth}
\makeatletter
\renewcommand\sectionlinesformat[4]{%
\ifstr{#1}{chapter}
{%
\raggedchapter
\begin{varwidth}{\dimexpr\textwidth-6em\relax}
\raggedchapter#3#4%
\end{varwidth}%
\par\nobreak
\strut\rule{2em}{1pt}%
\par
}
{\@hangfrom{\hskip#2#3}{#4}}% original definition for other section levels
}
\makeatother
\begin{document}
\kant[1]
\chapter{An Interesting Chapter Title}
\kant[2]
\chapter{An Interesting Chapter Title Which is Way Too Long to Fit in One Line}
\kant[3]
\end{document}
• Thank you. I think I really have to start reading also the Advanced part of the KOMA-script documentation. I will have a closer look at \chapterlinesformat. (I only changed the style of the chapters so as to avoid blank pages.) – John Dorian Mar 18 '18 at 14:04
• Is it possible to get rid of the indent in the very first paragraph of each chapter? – John Dorian Mar 18 '18 at 14:31
• Use beforeskip=-2\baselineskip (note the -). I have changed it in my answer. – esdd Mar 18 '18 at 14:40
\documentclass{scrbook}
\usepackage{etoolbox}
\usepackage{kantlipsum} %providing dummy text
\makeatletter
% \RedeclareSectionCommand[beforeskip=2\baselineskip]{chapter} % <-- this seems too little
\setkomafont{chapter}{\normalfont\large\scshape}
\renewcommand*{\raggedchapter}{\centering}
\renewcommand*{\chapterformat}{\thechapter.\ }
\patchcmd\scr@startchapter{\if@openright\cleardoublepage\else\clearpage\fi}{}{}{}
\renewcommand\chapterlinesformat[3]{%
\raggedchapter
\sbox\@tempboxa{#2#3}% measure the natural width of the heading
\ifdim\wd\@tempboxa>\dimexpr\linewidth-6em\relax% heading needs more than one line
\parbox{\dimexpr\linewidth-6em}{%
\raggedchapter
#2#3
}%
\@@par
\else
\parbox{\linewidth}{%
\raggedchapter%
{\let\@@par\relax
#2#3%
}%
}%
\@@par
\fi
\raggedchapter\strut\rule{2em}{.4pt}\par%
}
\makeatother
\begin{document}
\kant[1]
\chapter{An Interesting Chapter Title}
\kant[2]
\chapter{An Interesting Chapter Title Which is Way Too Long to Fit in One Line}
\kant[3]
\end{document}
What is happening here?
• Chapter is not redeclared to be of type section.
• The pagebreaks are instead avoided by removing the appropriate code from \scr@startchapter using \patchcmd from etoolbox.
• We now use \chapterlinesformat to format the chapter heading (#2 contains the formatted chapter number, #3 the formatted chapter title).
• We first check if the heading is longer than one line and then typeset it accordingly.
• #3 contains a \@@par. Since we want to have the second ornament on the same line as the title, we need to deactivate it before typesetting in the case of a single line heading.
• The \strut on the line with the \rule makes sure it has the correct distance from the heading in the case of multiple lines.
Please note that as a consequence of centering your headings, the second (and subsequent) lines of the chapter title may flow below the chapter number.
• I am still trying to understand your code. For now I only realized that I can make the ornaments look misplaced if I call my chapter like this: \chapter{AnInterestingChapterTitleWhich isWayTooLongtoFitinOneLine} Do you have an idea about this? – John Dorian Mar 18 '18 at 12:34
• @JohnDorian what do you mean? For me, the ornaments appear in the exact same spot as in the image above (for chapter 2). – schtandard Mar 18 '18 at 12:40
• I mean that they are very far away from the title and don't always have a distance of, say 1em, to the title, as they have in one-line titles. – John Dorian Mar 18 '18 at 12:44
• That's right. I do not know if it is possible to determine the width of the text inside the \parbox (maybe somebody else does), so I decided to just place the ornaments 1em from the parbox (which should usually be fine, I think). – schtandard Mar 18 '18 at 12:51
A bit of a hack, but one thing that worked for me for single line titles was adding
\newcommand{\mychapter}[1]{%
|
2020-01-20 04:07:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7859797477722168, "perplexity": 1202.136020411883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597230.18/warc/CC-MAIN-20200120023523-20200120051523-00420.warc.gz"}
|
https://par.nsf.gov/biblio/10227249-measurement-tt-production-cross-section-lepton+jets-channel-tev-atlas-experiment
|
Measurement of the $t\bar{t}$ production cross-section in the lepton+jets channel at $\sqrt{s}=13$ TeV with the ATLAS experiment
|
2022-09-29 18:30:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546537399291992, "perplexity": 1780.3461003121263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00643.warc.gz"}
|
https://black.readthedocs.io/en/stable/the_black_code_style/future_style.html
|
# The (future of the) Black code style
Warning
Changes to this document often aren’t tied and don’t relate to releases of Black. It’s recommended that you read the latest version available.
## Using backslashes for with statements
Backslashes are bad and should never be used; however, there is one exception: with statements using multiple context managers. Before Python 3.9, Python's grammar does not allow organizing parentheses around the series of context managers.
We don’t want formatting like:
with make_context_manager1() as cm1, make_context_manager2() as cm2, make_context_manager3() as cm3, make_context_manager4() as cm4:
... # nothing to split on - line too long
So Black will eventually format it like this:
with \
make_context_manager(1) as cm1, \
make_context_manager(2) as cm2, \
make_context_manager(3) as cm3, \
make_context_manager(4) as cm4 \
:
... # backslashes and an ugly stranded colon
Although when the target version is Python 3.9 or higher, Black will use parentheses instead since they’re allowed in Python 3.9 and higher.
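For Python 3.9+ targets the same statement can be wrapped in parentheses instead; a sketch of that form, reusing the placeholder make_context_manager helpers from the examples above:
with (
    make_context_manager1() as cm1,
    make_context_manager2() as cm2,
    make_context_manager3() as cm3,
    make_context_manager4() as cm4,
):
    ...  # the whole statement fits without backslashes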
## Preview style
Experimental, potentially disruptive style changes are gathered under the --preview CLI flag. At the end of each year, these changes may be adopted into the default style, as described in The Black Code Style. Because the functionality is experimental, feedback and issue reports are highly encouraged!
### Improved string processing
Black will split long string literals and merge short ones. Parentheses are used where appropriate. When split, parts of f-strings that don’t need formatting are converted to plain strings. User-made splits are respected when they do not exceed the line length limit. Line continuation backslashes are converted into parenthesized strings. Unnecessary parentheses are stripped. The stability and status of this feature is tracked in this issue.
|
2022-05-18 06:27:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27043086290359497, "perplexity": 7475.849086197436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00226.warc.gz"}
|
https://www.scienceopen.com/document?vid=7b15572a-d477-400c-9efa-0cf996a5988a
|
Is Open Access
# Support Vector Machine Classification on a Biased Training Set: Multi-Jet Background Rejection at Hadron Colliders
Preprint
Published
### Abstract
This paper describes an innovative way to optimize a multivariate classifier, in particular a Support Vector Machine algorithm, on a problem characterized by a biased training sample. This is possible thanks to the feedback of a signal-background template fit performed on a validation sample and included both in the optimization process and in the input variable selection. The procedure is applied to a real case of interest at hadron collider experiments: the reduction and the estimate of the multi-jet background in the $$W\to e \nu$$ plus jets data sample collected by the CDF experiment. The training samples, partially derived from data and partially from simulation, are described in detail together with the input variables exploited for the classification. At present, the reached performance is superior to any other prescription applied to the same final state at hadron collider experiments.
### Author and article information
###### Journal
01 July 2014
###### Article
10.1016/j.nima.2013.04.046
1407.0317
|
2021-05-07 01:02:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3720012605190277, "perplexity": 2062.994401289983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00082.warc.gz"}
|
https://webdesign.tutsplus.com/articles/defining-our-ghost-themes-color-and-basic-layout--webdesign-16250
|
# Defining Our Ghost Theme's Color and Basic Layout
This post is part of a series called Building a Ghost Theme From Scratch.
Styling Our Ghost Theme With LESS
Uber Aesthetics and Responsiveness
Moving swiftly along with the styling of our Ghost theme, this tutorial sets out some width settings, the color and some basic layout adjustments.
Note: You will need to be working in Firefox with the FireBug extension installed, or in Chrome with Developer Tools for this section of the tutorial.
Having chosen our fonts during the previous tutorial, we know how wide our letters will be, so we can adjust the width of the site for optimal readability. Different fonts have different average widths per letter, which is why we've waited until this point to finalize our content width.
The core width of the layout is not defined to fit a design style or a particular device's resolution. It's defined by targeting the number of characters per line, within a range that makes it as easy as possible for people to read and absorb the content.
There's some debate over the optimal number of character per line, but we're going to target a range of approximately 63 - 81 characters.
This is based on allowing a reader with average vision and reading experience to follow along each line with a comfortable number of complete eye movements (saccades). The average person takes in around nine letters per eye movement, and we're allowing for seven to nine of these movements.
Every reader is different so the optimal number of characters per line will vary from person to person. Hence, you can treat the 63 - 81 range as a rough zone.
### Step 1:
Go to http://charcount.appspot.com/ and add the character count bookmarklet to your browser.
Note: To do so, find the "Try it!" or "click me" link on the page, then drag that link onto your bookmark bar and rename it to something like "CharCount".
### Step 2:
Now go to your front end of your site and highlight a line of text, click the bookmarklet and see how many characters it says are in the line:
At the moment we have around 91, so our layout is a little too wide. We want to bring it down by around ten characters.
### Step 3:
Open FireBug / Chrome Dev Tools by pressing F12, and in the HTML tab, (Elements tab in Chrome) select the div with class "wrapper_uber" so its CSS displays in the style panel on the right.
Note: See the image under Step 5 for what you should be looking for.
### Step 4:
Now click the value of 40em to the right of the property max-width so you can adjust it. Enter 39em and press ENTER.
### Step 5:
Now highlight a line and click the character counter bookmarklet again. This time we're at 87 per line, which is still a little too much.
### Step 6:
Keep testing until you get somewhere within the 63 - 81 range. In this case 36em turned out to be a good value. You're going to get different counts on every line so it doesn't have to be perfect, just somewhere close to the target range.
### Step 7:
Now go to your "variables.less" file and paste this variable in below the line holding the @golden variable, on line 17:
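Based on the 36em value settled on in Step 6, the variable is presumably something along these lines:
@readable_width: 36em; // readable content width found in Step 6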
### Step 8:
Go back to your "layout.less" file and change the max-width property in the .wrapper_uber style from:
To:
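Presumably the rule changes roughly as follows (the 40em starting value is the one seen in Step 4; the exact original snippet wasn't preserved):
max-width: 40em; // before
max-width: @readable_width; // after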
Save the file.
### Step 10:
Refresh the page and the content should now appear the same width as when you were adjusting it in FireBug / Dev Tools:
## Blocking in the Color Scheme
We're now going to choose a color palette and add in the basic color scheme.
This stage is not anything final, or even particularly good looking. We just want to get a basic idea of how the colors will play together so we can adjust the layout for readability.
We'll begin by choosing a small number of colors and saving each as a variable so they can be easily used in multiple locations.
The color scheme itself will be controlled in a self contained file. We'll create mixins to control background colors, font colors, background images and other assorted design elements. These mixins can then be pulled into either the "typography.less" or "layout.less" file. This way we keep the main purpose of each file separate which helps to keep things organized.
### Step 1:
Add the following code to your "variables.less" file, at the very bottom of the file:
By saving each color hexcode as a variable like this, if you decide later you need to adjust the palette you can just change the value of the variables here and the whole theme will be automatically updated. It's also a good idea to append a little comment describing the color so you can remember which is which when you go to use them.
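For illustration only (the tutorial's actual hex values weren't preserved), treat these variable names and colours as placeholders:
@color_body_bg: #e9e4da; // page background (placeholder)
@color_text: #3c3c3c; // default text (placeholder)
@color_link: #89a46f; // links (placeholder)
@color_link_hover: #a6bd8e; // link hover (placeholder)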
### Step 2:
Open the "color_and_bgs.less" file for editing, and add the following code:
What we're doing here is creating a series of basic color control mixins that will be added to the "typography.less" and "layout.less" files.
In order from top to bottom, these mixins will set the following (a rough sketch of a few of them appears after this list):
• Body background color
• Default text color
• Default link and hover colors
• Article background color
• Article link and hover color
• Post title background color
• Post title text color
• Post title link and hover color
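A rough sketch of what a few of these mixins might look like; the mixin names (.BodyColor, .LinkColor, .LinkHoverColor) are the ones used later in the tutorial, while the colour variables are the placeholder names assumed above:
.BodyColor() {
  background-color: @color_body_bg; // body background
}
.LinkColor() {
  color: @color_link; // default link colour
}
.LinkHoverColor() {
  color: @color_link_hover; // link hover colour
}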
As a point of interest, I borrowed this technique from traditional art, where painters start with an undercoat that defines the essential color scheme, then they add detail over the top.
This approach allows you to make changes more easily when deciding on the base of your overall look, as the aspects you're working with are quite simple and easy to modify.
### Step 3:
In the "typography.less" edit the first line of each of the following styles, adding calls to the mixins we just created, like this:
Add .BodyColor; on the first line of the body class.
Add .LinkColor; inside the a:link, a:visited class.
Add .LinkHoverColor; on the first line of the a:active, a:hover class.
### Step 4:
Still in "typography.less" add the following code to the end of the file and save:
### Step 5:
In the "layout.less" add the following code at the end of the file and save:
### Step 6:
Refresh the site. It should now look like this:
Now that the colors are blocked out, we can clearly see some aspects of the layout and styling that should be adjusted.
The first thing that's immediately apparent is the text of the article should not be flush against the edge of the white background.
We'll adjust that in a moment, but first let's start by removing the underlines from the blog title, blog description and post titles so it makes it easier to see what looks best during layout adjustments.
### Step 1:
In the "typography.less" file, add the following code below the H6 class and save:
### Step 2:
In the "colors_and_bgs.less" file, add text-decoration: none; to the a tag styling in the .PostTitleColor() mixin like shown below and save:
### Step 3:
Refresh the site - you should no longer see any underlines on the blog title, blog description or post titles.
### Step 4:
In the "layout.less" file, add some white space around the content to make it easy to read by editing the .article_uber class to add padding:
This sets the padding to be five times the golden ratio on all sides.
### Step 5:
Refresh the site, it should look like this:
The padding we've added makes it much easier to look at the outer edges of the text, but now we've lost the readable width we set up before. This is because the .wrapper_uber class is keeping the width to a maximum of 36em, including the padding we've just added. So it's trying to squish all our content, plus all our padding, into that 36em wide space.
To solve this we'll need to increase the max-width parameter of the .wrapper_uber class to our readable width of 36em plus the padding we just added. This will allow the content itself to expand to 36em, with the additional allowance for padding creating comfortable white space on either side.
We'll start by creating a variable to hold the amount of padding we're adding. This way we can use that value both in the .wrapper_uber class, and the .article_uber class, as you'll see below.
With this approach, if we want to adjust the amount of white space around the content, the site wrapper width will automatically adjust along with it.
### Step 6:
In the "variables.less" file, add this code below the @readable_width variable line, then save:
### Step 7:
Now in the "layout.less" file update the .wrapper_uber class max-width setting to add the new padding variable:
Note: This adds twice the value of the @add_padding variable to the max-width of the .wrapper_uber class, accounting for the padding on both the left and right sides.
### Step 8:
Still in the "layout.less" file update the .article_uber class padding setting to use the new padding variable too, then save:
Now we're using the value of the @add_padding variable both to widen the overall site wrapper, and to add padding / white space around the content.
### Step 10:
At this point there's probably one thing that's putting you off enjoying your nice new layout: the big Ghost image jutting out and overflowing out of the content area. To fix this, add the following code to your "layout.less" file above the .wrapper_uber class:
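Given the explanation that follows, the rule is presumably the classic fluid-image pattern:
img {
  max-width: 100%; // images never exceed the width of their container
}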
This ensures no image will ever be wider than 100% of the element it sits inside of.
### Step 11:
In your "normalize.css" file, comment out the img style on lines 231 to 233 as it's no longer required given we've created our own img style, then save.
### Step 12:
Refresh your site and the Ghost image should now fit nicely into the content area:
We now have the most important aspects of styling in place, those that control typography and readability, keeping content as the central focus:
• Default font sizes and weights are in place
• Default fonts have been selected for headers and regular text
• Width for optimum characters per line is established
This forms the core of our design and completes what we set out to achieve in this part of our tutorial series, although we're not done with the styling stage of our theme design just yet.
## Coming Up Next
We are now ready to proceed to the next and final part of our tutorial series.
There we'll finalize all of our styling aesthetics as well as complete the theme's responsive functionality. By the end of the following instalment your theme will be fully complete and install ready.
|
2021-12-09 06:35:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2605026364326477, "perplexity": 2220.675574252432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00185.warc.gz"}
|
https://www.risk.net/risk-quantum/6895126/us-g-sibs-tlac-buffers-vary
|
The eight US global systemically important banks (G-Sibs) all had bail-in debt and capital in excess of regulatory requirements as of Q2, though some had vanishingly thin buffers above these minimums.
The average ratio of total loss-absorbing capacity (TLAC) eligible debt and equity to risk-weighted assets across the group was 30.8% at end-June, down from 32.1% in Q1. The average minimum requirement in Q2 was 21.6%.
But some banks have a much greater amount of headroom above their minimums
|
2019-10-22 04:41:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32459551095962524, "perplexity": 7584.717565741569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00350.warc.gz"}
|
https://ora.ox.ac.uk/objects/uuid:c38d1a0e-9fb5-4a13-acf1-ec190c38e574
|
Journal article
### Measurement of W-boson polarization in Top-Quark Decay in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV.
Abstract:
We report measurements of the polarization of W bosons from top-quark decays using 2.7 fb$^{-1}$ of $p\bar{p}$ collisions collected by the CDF II detector. Assuming a top-quark mass of 175 GeV/$c^{2}$, three measurements are performed. A simultaneous measurement of the fraction of longitudinal ($f_{0}$) and right-handed ($f_{+}$) W bosons yields the model-independent results $f_{0} = 0.88 \pm 0.11(\mathrm{stat}) \pm 0.06(\mathrm{syst})$ and $f_{+} = -0.15 \pm 0.07(\mathrm{stat}) \pm 0.06(\mathrm{syst})$ with a correlation coefficient of $-0.59$. A measurement of $f_{0}$ [$f_{+}$]...
### Authors
Aaltonen, T
Adelman, J
Alvarez González, B
Journal:
Physical review letters
Volume:
105
Issue:
4
Pages:
042002
Publication date:
2010-07-05
DOI:
EISSN:
1079-7114
ISSN:
0031-9007
URN:
uuid:c38d1a0e-9fb5-4a13-acf1-ec190c38e574
Source identifiers:
124883
Local pid:
pubs:124883
Language:
English
Keywords:
|
2020-10-20 06:38:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8926844596862793, "perplexity": 11789.74639573071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00416.warc.gz"}
|
https://cdsweb.cern.ch/collection/ATLAS%20Notes?ln=el&as=1
|
# ATLAS Notes
Recent additions:
2018-12-18
16:12
Measurement of the azimuthal anisotropy of charged particles in 5.02 TeV Pb+Pb and 5.44 TeV Xe+Xe collisions with the ATLAS experiment / Burka, Klaudia (Institute of Nuclear Physics Polish Academy of Sciences, Krakow) The high-statistics data collected by the ATLAS experiment during the 2015 Pb+Pb and 2017 Xe+Xe LHC runs are used to measure charged particle azimuthal anisotropy. [...] ATL-PHYS-PROC-2018-199. - 2018. - 4 p. Original Communication (restricted to ATLAS) - Full text
2018-12-18
16:09
Search for doubly charged Higgs bosons with the ATLAS detector / Ucchielli, Giulia (Fakultaet Physik, Technische Universitaet Dortmund, Dortmund) Doubly charged Higgs bosons ($H^{\pm\pm}$) appear in several beyond Standard Model extensions, aimed to explain the mechanism for neutrino mass generation. [...] ATL-PHYS-PROC-2018-198. - 2018. - 9 p. Original Communication (restricted to ATLAS) - Full text
2018-12-18
11:46
Development and evaluation of prototypes for the Phase-II upgrade of the pixel detector of the ATLAS experiment / Taylor, Jonathan Thomas (University of Liverpool) The ATLAS tracking system will be replaced by an all-silicon detector in the course of the planned HL-LHC accelerator upgrade around 2025. [...] ATL-ITK-PROC-2018-019. - 2018. - 2 p. Original Communication (restricted to ATLAS) - Full text
2018-12-18
08:38
Calibration of the $b$-tagging efficiency on charm jets using a sample of $W$+$c$ events with $\sqrt{s}$ = 13 TeV ATLAS data The identification of jets containing $b$-hadrons ($b$-jets) is of fundamental importance for many physics analyses performed by the ATLAS experiment. [...] ATLAS-CONF-2018-055. - 2018. - 22 p. Original Communication (restricted to ATLAS) - Full text
2018-12-18
04:51
Energy loss and modification of photon-tagged jets with ATLAS / Perepelitsa, Dennis (University of Colorado Boulder) Events containing a high-transverse momentum ($p_\mathrm{T}$) prompt photon offer a useful tool to study the dynamics of the hot, dense medium produced in heavy ion collisions. [...] ATL-PHYS-PROC-2018-197. - 2018. - 5 p. Original Communication (restricted to ATLAS) - Full text
2018-12-17
18:17
Prospects for the measurement of $t\bar{t}\gamma$ with the upgraded ATLAS detector at the High-Luminosity LHC Measurements of $t\bar{t}\gamma$ production are studied in leptonic final states at the HL-LHC, where a data set with an integrated luminosity of 3 ab$^{-1}$ at a center-of-mass energy of 14 TeV is expected to be collected by the upgraded ATLAS detector. [...] ATL-PHYS-PUB-2018-049. - 2018. - 19 p. Original Communication (restricted to ATLAS) - Full text
2018-12-17
13:10
Searches for BSM Higgs bosons in fermionic decays in ATLAS / Bailey, Adam (Instituto de Fisica Corpuscular (IFIC), Centro Mixto Universidad de Valencia - CSIC) Beyond the standard model theories such as the MSSM and other two-Higgs-doublet-models predict an extended Higgs sector. [...] ATL-PHYS-PROC-2018-196. - 2018. - 8 p. Original Communication (restricted to ATLAS) - Full text
2018-12-16
10:56
Top quarks and exotics at ATLAS and CMS / Serkin, Leonid (INFN Gruppo Collegato di Udine and ICTP, Trieste) An overview of recent searches with top quarks in the final state using up to 36 fb$^{-1}$ of $pp$ collision data at $\sqrt{s}$ = 13 TeV collected with the ATLAS and CMS experiments at the LHC is presented. [...] ATL-PHYS-PROC-2018-195. - 2018. - 6 p. Original Communication (restricted to ATLAS) - Full text
2018-12-16
10:33
Prospects for searches for staus, charginos and neutralinos at the high luminosity LHC with the ATLAS Detector The current searches at the LHC have yielded sensitivity to weakly-interacting supersymmetric particles in the hundreds of GeV mass range and the reach at the high-luminosity phase of the LHC is expected to significantly extend beyond the current limits. [...] ATL-PHYS-PUB-2018-048. - 2018. Original Communication (restricted to ATLAS) - Full text
2018-12-15
16:28
Top Quark Measurements with ATLAS / Bozson, Adam James (Department of Physics, Royal Holloway and Bedford New College) Recent results of measurements of top quarks with the ATLAS experiment at the Large Hadron Collider are presented. [...] ATL-PHYS-PROC-2018-194. - 2018. - 8 p. Original Communication (restricted to ATLAS) - Full text
Focus on:
ATLAS PUB Notes (2,764)
|
2018-12-19 04:08:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7548447847366333, "perplexity": 5061.451287741608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00539.warc.gz"}
|
https://www.percentagecal.com/answer/43-is-what-percent-of-%204.300
|
#### Solution for 43 is what percent of 4.300:
43 ÷ 4.300 × 100 =
(43 × 100) ÷ 4.300 =
4300 ÷ 4.300 = 1000
Now we have: 43 is what percent of 4.300 = 1000
Question: 43 is what percent of 4.300?
Percentage solution with steps:
Step 1: We make the assumption that 4.300 is 100% since it is our output value.
Step 2: We next represent the value we seek with $x$.
Step 3: From step 1, it follows that $100\% = 4.300$.
Step 4: In the same vein, $x\% = 43$.
Step 5: This gives us a pair of simple equations:
$100\% = 4.300$ (1).
$x\% = 43$ (2).
Step 6: By dividing equation (1) by equation (2), and noting that the LHS (left hand side) of both equations has the same unit (%), we have
$\frac{100\%}{x\%}=\frac{4.300}{43}$
Step 7: Taking the inverse (or reciprocal) of both sides yields
$\frac{x\%}{100\%}=\frac{43}{4.300}$
$\Rightarrow x = 1000\%$
Therefore, $43$ is $1000\%$ of $4.300$.
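The same arithmetic is easy to sanity-check in a few lines of Python (a generic sketch, not part of the original calculator page):
def percent_of(part, whole):
    # what percent `part` is of `whole`
    return part / whole * 100

print(percent_of(43, 4.300))   # ~1000.0
print(percent_of(4.300, 43))   # ~10.0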
#### Solution for 4.300 is what percent of 43:
4.300 ÷ 43 × 100 =
(4.300 × 100) ÷ 43 =
430 ÷ 43 = 10
Now we have: 4.300 is what percent of 43 = 10
Question: 4.300 is what percent of 43?
Percentage solution with steps:
Step 1: We make the assumption that 43 is 100% since it is our output value.
Step 2: We next represent the value we seek with $x$.
Step 3: From step 1, it follows that $100\% = 43$.
Step 4: In the same vein, $x\% = 4.300$.
Step 5: This gives us a pair of simple equations:
$100\% = 43$ (1).
$x\% = 4.300$ (2).
Step 6: By dividing equation (1) by equation (2), and noting that the LHS (left hand side) of both equations has the same unit (%), we have
$\frac{100\%}{x\%}=\frac{43}{4.300}$
Step 7: Taking the inverse (or reciprocal) of both sides yields
$\frac{x\%}{100\%}=\frac{4.300}{43}$
$\Rightarrow x = 10\%$
Therefore, $4.300$ is $10\%$ of $43$.
|
2021-01-18 15:25:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072192311286926, "perplexity": 1934.1442051636436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00178.warc.gz"}
|
https://www.dlubal.com/en/solutions/online-services/glossary/000020
|
# FE Mesh
### Glossary Term
In FEA calculations, the structure is decomposed into small substructures (finite elements). The construct created in this way, which resembles the original structure in its essential features, is referred to as the FE mesh.
It is comprised of many elements that can differ in dimensions (1D, 2D, 3D) and shape (triangle, quadrangle, tetrahedron...).
The size and number of the elements in the mesh determine the quality of the results: the smaller you choose the elements, the closer you usually get to the ideal results. Element size and calculation duration are, however, inversely related, since refining the mesh increases the computation time. So in everyday engineering practice, an element size is sought that ensures sufficient accuracy and an acceptable calculation time at the same time.
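As a rough, purely illustrative sketch of that trade-off (the unit-cube domain, the quadratic error rate, and the element sizes below are assumptions, not values from any particular solver):
# Halving the element size h multiplies the element count of a 3D mesh by about 8,
# while the discretization error for linear elements typically shrinks like h^2.
for h in [0.4, 0.2, 0.1, 0.05]:
    n_elements = round((1.0 / h) ** 3)   # elements needed to fill a unit cube
    est_error = h ** 2                   # error estimate, up to a problem-dependent constant
    print(f"h = {h}: {n_elements} elements, relative error ~ {est_error:.4f}")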
|
2020-02-23 00:30:38
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555789589881897, "perplexity": 720.0047644683207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145742.20/warc/CC-MAIN-20200223001555-20200223031555-00306.warc.gz"}
|
https://www.cnblogs.com/leezx/p/8648395.html
|
# Advanced single-cell data analysis: constructing the maturation trajectory | Identifying a maturation trajectory
## Identifying a maturation trajectory.
To assign each cell a maturation score that is proportional to the developmental progress, we first performed dimensionality reduction as described above using all genes that were detected in at least 2% of the cells (8,014 genes). This resulted in four significant dimensions. We then fit a principal curve (R package princurve, smoother= ‘lowess’, f= 1/3) through the data. The maturation score of a cell is then the arc-length from the beginning of the curve to the point at which the cell projects onto the curve.
The resulting curve is directionless, so we assign the ‘beginning’ of the curve so that the expression of Nes is negatively correlated with maturation. Nes is a known ventricular zone marker and therefore should only be highly expressed early in the trajectory. Maturation scores are normalized to the interval [0, 1]. In an independent analysis, we also used Monocle2 to order cells along a pseudo-time. We used Monocle version 2.3.6 with expression response variable set to negative binomial. We estimated size factors and dispersion using the default functions.
For ordering cells, we reduced the set of genes based on results of the Monocle dispersionTable function, and only considered 718 genes with mean expression ≥ 0.01 and an empirical dispersion at least twice as large as the fitted dispersion. Dimensionality reduction was carried out using the default method (DDRTree).
## Defining mitotic and post mitotic populations.
We observed a sharp transition point along the maturation trajectory at which cells uniformly transitioned into a postmitotic state, corresponding to the loss of proliferation potential and exit from the cell cycle (Fig. 1f, Extended Data Fig. 1).
We therefore subdivided the maturation trajectory into a mitotic and postmitotic phase to facilitate downstream analyses. We defined cells with a high phase-specific enrichment score (score >2, see section ‘Removal of cell cycle effect’) as being in the S or the G2/M phase.
We then fitted a smooth curve (loess, span=0.33, degree=2) to number of cells in S, G2/M phases as a function of maturation score. The point where this curve falls below half the global average marks the dividing threshold (Fig. 1f).
# for maturation trajectory
# fit maturation trajectory
maturation.trajectory <- function(cm, md, expr, pricu.f=1/3) {
cat('Fitting maturation trajectory\n')
genes <- apply(cm[rownames(expr), ] > 0, 1, mean) >= 0.02 & apply(cm[rownames(expr), ] > 0, 1, sum) >= 3
rd <- dim.red(expr[genes, ], max.dim=50, ev.red.th=0.04)
# for a consistent look, use Nes expression to orient each axis
for (i in 1:ncol(rd)) {
if (cor(expr['Nes', ], rd[, i]) > 0) {
rd[, i] <- -rd[, i]
}
}
md <- md[, !grepl('^DMC', colnames(md))]
md <- cbind(md, rd)
pricu <- principal.curve(rd, smoother='lowess', trace=TRUE, f=pricu.f, stretch=333)
# two DMCs
pc.line <- as.data.frame(pricu$s[order(pricu$lambda), ])
# lambda, for each point, its arc-length from the beginning of the curve. The curve is parametrized approximately by arc-length, and hence is unit-speed.
md$maturation.score <- pricu$lambda/max(pricu$lambda)
# orient maturation score using Nes expression
if (cor(md$maturation.score, expr['Nes', ]) > 0) {
  md$maturation.score <- -(md$maturation.score - max(md$maturation.score))
}
# use 1% of neighbor cells to smooth maturation score
md$maturation.score.smooth <- nn.smooth(md$maturation.score, rd[, 1:2], round(ncol(expr)*0.01, 0))
# pick maturation score cutoff to separate mitotic from post-mitotic cells
md$in.cc.phase <- md$cc.phase != 0
fit <- loess(as.numeric(md$in.cc.phase) ~ md$maturation.score.smooth, span=0.5, degree=2)
md$cc.phase.fit <- fit$fitted
# pick MT threshold based on drop in cc.phase cells
# ignore edges of MT because of potential outliers
mt.th <- max(subset(md, cc.phase.fit > mean(md$in.cc.phase)/2 & maturation.score.smooth >= 0.2 & maturation.score.smooth <= 0.8)$maturation.score.smooth)
md$postmitotic <- md$maturation.score.smooth > mt.th
return(list(md=md, pricu=pricu, pc.line=pc.line, mt.th=mt.th))
}

# for smoothing maturation score
nn.smooth <- function(y, coords, k) {
  knn.out <- FNN::get.knn(coords, k)
  w <- 1 / (knn.out$nn.dist + .Machine$double.eps)
  w <- w / apply(w, 1, sum)
  v <- apply(knn.out$nn.index, 2, function(i) y[i])
  return(apply(v*w, 1, sum))
}
|
2018-08-21 11:49:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44387075304985046, "perplexity": 11324.158001833423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218122.85/warc/CC-MAIN-20180821112537-20180821132537-00445.warc.gz"}
|
https://learn.careers360.com/ncert/question-a-compound-is-formed-by-two-elements-m-and-n/
|
# A compound is formed by two elements M and N. The element N forms ccp and atoms of M occupy 1/3rd of tetrahedral voids. What is the formula of the compound?
1.16 A compound is formed by two elements $M$ and $N$. The element $N$ forms ccp and atoms of $M$ occupy $\frac{1}{3}rd$ of tetrahedral voids. What is the formula of the compound?
It is given that element N forms CCP.
Let us assume, the number of atoms of element N (which forms ccp) is x.
Then no. of tetrahedral voids = 2x.
It is also given that M occupies $\frac{1}{3}rd$ of tetrahedral voids.
So the number of atoms of element M $=\frac{2x}{3}$.
Thus the ratio M : N $= \frac{2x}{3} : x = 2 : 3$, and the molecular formula becomes M$_2$N$_3$.
|
2020-02-23 11:50:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6989191770553589, "perplexity": 2217.6071340782164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00083.warc.gz"}
|
http://nouvellesdonnees.com/2016/01/on-the-consistency-of-ordinal-regression-methods/
|
# On the Consistency of Ordinal Regression Methods
by Fabian Pedregosa, Francis Bach & Alexandre Gramfort
Many of the ordinal regression models that have been proposed in the literature can be seen as methods that minimize a convex surrogate of the zero-one, absolute, or squared loss functions. A key property that allows us to study the statistical implications of such approximations is that of Fisher consistency. In this paper we will characterize the Fisher consistency of a rich family of surrogate loss functions used in the context of ordinal regression, including support vector ordinal regression, ORBoosting and least absolute deviation. We will see that, for a family of surrogate loss functions that subsumes support vector ordinal regression and ORBoosting, consistency can be fully characterized by the derivative of a real-valued function at zero, as happens for convex margin-based surrogates in binary classification. We also derive excess risk bounds for a surrogate of the absolute error that generalize existing risk bounds for binary classification. Finally, our analysis suggests a novel surrogate of the squared error loss. To test the empirical performance of this surrogate, we benchmarked it in terms of cross-validation error on 9 different datasets, where it outperforms competing approaches on 7 out of 9 datasets.
|
2018-09-19 10:35:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091445565223694, "perplexity": 632.1554920372926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156192.24/warc/CC-MAIN-20180919102700-20180919122700-00336.warc.gz"}
|
https://plainmath.net/2791/catalog-company-promises-deliver-internet-randomly-selected-customers
|
# A catalog sales company promises to deliver orders placed on the Internet within 3 days. Follow-up calls to a few randomly selected customers show tha
A catalog sales company promises to deliver orders placed on the Internet within 3 days. Follow-up calls to a few randomly selected customers show that a 95% confidence interval for the proportion of all orders that arrive on time is 88% ± 6%.
a) What does this mean? Are these conclusions correct? Explain.
b) 95% of all random samples of customers will show that 88% of orders arrive on time.
c) 95% of all random samples of customers will show that 82% to 94% of orders arrive on time.
d) We are 95% sure that between 82% and 94% of the orders placed by the sampled customers arrived on time.
e) On 95% of the days, between 82% and 94% of the orders will arrive on time.
Brighton
a) Incorrect. This implies certainty.
b) Incorrect. Different samples will give different results. Many fewer than 95% will have 88% on-time orders.
c) Incorrect. The interval is about the population proportion, not the sample proportion in different samples.
d) Incorrect. In this sample, we know 88% arrived on time
e) Incorrect. The interval is about the parameter, not about the days.
|
2022-06-28 00:59:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.380642294883728, "perplexity": 1136.2121907564626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00217.warc.gz"}
|
https://martiningram.github.io/shot-ratings/
|
# Rating the ATP's best forehands and backhands since 1990
Who has the best forehand and backhand in men’s tennis? In this post, I make a first attempt at tackling this question using data from the Match Charting Project (MCP).
### The idea
The fundamental idea is to look at a single shot at a time. Say Federer hits a forehand off a Djokovic backhand. One of three things can happen: Federer (1) returns the shot, (2) hits a winner, or (3) makes an error. I am not distinguishing unforced and forced errors here to keep it simple.
When considering the likely outcome, there are some important factors. Of course the outcome will be driven in part by Federer’s skill on the forehand. But the opponent also plays a role: the incoming backhand shot may put Federer under pressure. And it’s likely that it’s harder to hit a winner against Novak than against most other players. So, my current model looks roughly like this: $$z = \textrm{shot_quality} + \textrm{incoming_shot_quality} + \textrm{opponent_defense} + \textrm{intercept}$$
#### Model details
Feel free to skip this section if you’re not interested in the details! The scores $z$ here are the log odds relative to the base outcome (“returned”). The log odds are given by:
$z_{\textrm{outcome}} = \log \frac{p(\textrm{outcome})}{p(\textrm{base class})}$
If an outcome is as likely as the base class, then $z_\textrm{outcome} = 0$; if it’s less likely, it’s negative, and if it’s more likely, it’s positive. The base class is fixed at $z = 0$, since $z_{base} = \log 1 = 0$. This kind of model is called a “Multinomial logit”; If you’d like to read more, here’s a good introduction.
To make things more concrete, let’s look at Federer hitting a forehand off a Djokovic backhand as an example. The intercept for a forehand is -2.72 for winners, and -1.53 for errors. Adding the zero log odds for the base class, the complete $z$, as a vector, is equal to $[0, -2.72, -1.53]$. To turn this into probabilities, this vector is passed through the softmax function to give 78% probability of shot being returned, 5% probability of a winner, 17% probability of an error.
Federer’s shot quality addition is 0.37 for winners, and 0.15 for errors. So he raises the log odds of winners by 0.37 to -2.35, and the log odds for errors to -1.38. So, against an average player, we’d expect Federer to return the ball 74% of the time on his forehand, hit a winner 7% of the time, and make an error 19% of the time.
How does playing Novak affect things? It turns out that Novak reduces the log odds of hitting a winner off a forehand by 0.18, and leaves the error probability unaffected. Also, Novak’s incoming backhand reduces the log odds of hitting a winner by 0.25, and the chance of an error by 0.11. Putting all this together, the log odds for the winner are -2.78, and the error -1.49, so we’d expect Federer’s forehand to be returned 78% of the time, a winner 5% of the time, and an error 18% of the time. As you’d expect, it’s harder to hit winners against Novak!
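The worked example above can be reproduced with a short softmax calculation. This sketch uses only the coefficients quoted in this post:
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

# Outcomes: [returned (base class), winner, error]
intercept = np.array([0.0, -2.72, -1.53])   # average forehand
federer   = np.array([0.0,  0.37,  0.15])   # Federer's forehand shot quality
novak_def = np.array([0.0, -0.18,  0.00])   # Djokovic's defence against forehand winners
novak_bh  = np.array([0.0, -0.25, -0.11])   # incoming Djokovic backhand quality

print(softmax(intercept))                                    # ~[0.78, 0.05, 0.17]
print(softmax(intercept + federer))                          # ~[0.74, 0.07, 0.19]
print(softmax(intercept + federer + novak_def + novak_bh))   # ~[0.78, 0.05, 0.18]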
One more detail: I won’t explain this in any depth here, but I also make an attempt to correct for the sample of the MCP using post-stratification. I might talk about this more in a future post, but I think there is already plenty of detail in this one for now.
#### Calculating shot quality from the model
Having estimated these coefficients, how can we find the player with the best forehand and backhand? I think a neat way to go is to consider a duel: if a player gets into a forehand-forehand duel with an average player, how often are they expected to come out on top? Using all matches charted for hard courts since 1990 and limiting the players to those with at least 10 matches charted, we can do this and produce the following plot:
Here are some notes on the plot:
• The dashed line is the line of equal skill on forehand and backhand. Most players lie below, which means they’re more likely to win a forehand exchange than a backhand exchange. But there are some players that lie above the line, like Gasquet and Medvedev, whose backhands are better than their forehands. I was a little surprised to see Ferrero there, too, but I haven’t seen him play too much, so maybe that’s fair.
• The top right corner has the best baseliners. Andre Agassi takes the crown in this analysis. His backhand and forehand are rated extremely highly. In fact, he is estimated to have the best backhand: he’s expected to win almost 60% of baseline backhand-backhand rallies. Novak Djokovic is also extremely highly rated on both the forehand and backhand, as is Nadal and, perhaps surprisingly, Gilles Simon!
• The players below the line have better forehands than backhands. Federer and Del Potro stand out on the right end here, having some of the best forehands and good, but not great, backhands. Dimitrov is in this camp too, interestingly; his one-hander is flashy but perhaps not that effective.
• Big servers Karlovic and Isner stand out as being poor baseliners and are expected to lose most of their rally exchanges against the average player, be it forehand-forehand or backhand-backhand. Raonic and Anderson do somewhat better, perhaps explaining their greater career success.
• Fernando Gonzalez, a personal favourite, is pretty high in the forehand ratings, but it’s not among the very best. I suspect that this is because he took a lot of risks on his forehand, so while he hit a lot of winners, he’d do so at the cost of some errors, too. I’m planning to look into that in a future post.
• I’ll admit that the plot is not without surprises. Some I think hold up. For example, Gilles Simon might indeed have excellent ground strokes, but his serve is terrible, so that may be why he hasn’t been as successful as the other players in his group. Others are a bit more surprising, like Lopez’s backhand being rated to be on par with his forehand. That doesn’t mesh with conventional wisdom: Lopez’s backhand is generally thought of as being terrible, while his forehand is considered quite dangerous. Either his backhand isn’t as bad as people think it is, or perhaps there is some other problem here.
• I want to emphasise that this plot tells only part of the story, because the serve is not considered. This hurts players like Sampras, for example, who doesn’t look great on this plot.
### Summary
In summary, I hope you’ve enjoyed this first look at my attempt to rate player’s shots with the MCP. I’ll admit that I am not entirely sure about some of the modelling decisions I’ve made yet, and whether my adjustments to the sample are adequate. Nevertheless, I wanted to share the results, and I hope the plot could lead to plenty of interesting discussions!
Written on February 22, 2021
|
2021-05-17 21:56:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.615971028804779, "perplexity": 1488.9783303689235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00391.warc.gz"}
|
https://braindump.amedcalf.com/2023/01/11/just-realised-that.html
|
Just realised that the Obsidian text editor supports MathJax.
So for the mathematically inclined, write your LaTeX equations either inline by surrounding them with $s, or on their own line surrounded by $$. It'll transform e.g.: $$p = \frac{(W+L+1)!}{W!L!} p^W(1-p)^L$$
into:
|
2023-01-27 11:52:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821768999099731, "perplexity": 11564.488478713356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494976.72/warc/CC-MAIN-20230127101040-20230127131040-00577.warc.gz"}
|
https://mathematica.stackexchange.com/questions/190673/how-to-make-mathematica-recognize-11-3-as-1/190687
|
# How to make Mathematica recognize $(-1)^{1/3}$ as $-1$? [duplicate]
I am doing some calculations that end up popping out a lot of $$-1$$s to various fractional powers, and Mathematica doesn't seem to want to set them to $$-1$$. Is there an easy way to do this?
• See if you agree with the output of ComplexExpand[(-1)^(1/3), TargetFunctions -> {Re, Im}]. – b.gates.you.know.what Feb 1 '19 at 17:39
• If you know you're only interested in the real roots when you're taking roots, you might be interested in using Surd instead of fractional powers. – eyorble Feb 1 '19 at 17:43
• These solutions both seemed to have work. On a more general note, I have a variable $L$ that I am using. I have other terms that are written in terms of $L$. At some point in my output, Mathematica writes "$\sqrt{L^2} \sqrt{L}$." How can I force them to combine? – swygerts Feb 1 '19 at 18:04
• Some functions in Mathematica check the assumptions only under some conditions. Simplify is one of those. As you might have seen, if you type in some expression there is a default very lightweight quick simplification done to that, like 3+5 being replaced by 8, but that does not invoke all the power of Simplify or make use of all the power of Assumptions. That means Mathematica works much more quickly if you choose when and where you want to have it spend time doing Simplify and using Assumptions – Bill Feb 1 '19 at 19:36
• Possible duplicate of this question – m_goldberg Feb 2 '19 at 4:03
First Question: (-1)^(1/3) is not equal to -1. Mathematica returns the principal cube root, which is the complex number 1/2 + (Sqrt[3]/2) I, not -1.
(-1)^n is only equal to -1 for odd, integer values of n.
Try Assuming[L \[Element] Reals, FullSimplify[Sqrt[L^2] Sqrt[L]]]
rule = x_^(1/3) -> CubeRoot[x];
|
2020-07-08 11:12:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.224995419383049, "perplexity": 640.8086464103344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896932.38/warc/CC-MAIN-20200708093606-20200708123606-00193.warc.gz"}
|
https://ctlsys.com/support/bacnet-and-modbus-energy-measurement-resolution/
|
## Overview
The WattNode® BACnet and Modbus meters (WNC series) measure energy once per second in raw units. The raw measurements are then scaled based on the meter’s calibration, the nominal line voltage, and the current transformer (CT) rated amps. Finally, the result is added to a 64 bit energy accumulator. When an energy reading is requested from a meter over the network, the energy value in the 64 bit accumulator is scaled appropriately and reported over the network. Essentially, there are three different steps, each with different native energy resolutions.
Also important, as with all instruments, the WattNode measurements have noise as well as systematic errors. Over a short time period (one to five seconds), the energy measurement noise can be noticeable. Over longer time periods, the noise will be integrated out, but systematic errors will remain, such as errors due to imperfect calibration, drift, temperature variations, and current transformer phase angle errors.
## WNC Series Raw Measurement Resolution
At full-scale power (nominal line Vac, rated CT amps, and unity power factor), the raw energy readings will report approximately 260,000 counts per second, resulting in the following equation for raw resolution:
$R_R \textrm{ (watt-hours)} = \frac{V_n \times I_n}{3600 \times 260000}$
The following table shows the nominal Vac values for different WattNode BACnet and Modbus models and the corresponding raw reading resolution using the smallest (5 amp) and largest (6000 amp) current transformers we sell.
| Model | Vn (Nominal VAC) | 5 A CT – Raw Resolution (Wh) | 6000 A CT – Raw Resolution (Wh) |
|---|---|---|---|
| WNC-3Y-208-(BN or MB) | 120 | 6.4 × 10⁻⁷ | 7.9 × 10⁻⁴ |
| WNC-3Y-400-(BN or MB) | 230 | 1.2 × 10⁻⁶ | 1.5 × 10⁻³ |
| WNC-3Y-480-(BN or MB) | 277 | 1.5 × 10⁻⁶ | 1.8 × 10⁻³ |
| WNC-3Y-600-(BN or MB) | 347 | 1.9 × 10⁻⁶ | 2.2 × 10⁻³ |
| WNC-3D-240-(BN or MB) | 120 | 6.4 × 10⁻⁷ | 7.7 × 10⁻⁴ |
| WNC-3D-400-(BN or MB) | 230 | 1.3 × 10⁻⁶ | 1.5 × 10⁻³ |
| WNC-3D-480-(BN or MB) | 277 | 1.5 × 10⁻⁶ | 1.8 × 10⁻³ |
Note: to convert these into kWh units, divide by 1000.
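For reference, the raw-resolution formula above is straightforward to evaluate directly (a small sketch; the roughly 260,000 counts per second figure is approximate, so computed values can differ slightly from the table):
def raw_resolution_wh(nominal_vac, ct_rated_amps, counts_per_second=260000):
    # Raw energy resolution in watt-hours per count (the R_R formula above)
    return (nominal_vac * ct_rated_amps) / (3600 * counts_per_second)

print(raw_resolution_wh(120, 5))      # ~6.4e-07 Wh (WNC-3Y-208 with a 5 A CT)
print(raw_resolution_wh(120, 6000))   # ~7.7e-04 Wh (the table lists 7.9e-04)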
## Internal Accumulator Resolution
After calibrating and scaling, the energy is added to a 64 bit accumulator register once per second. This register has the following internal resolution:
$R_I = \frac{100}{2^{26}} = 1.49 \times 10^{-6} \textrm{ (watt-hours)} = 1.49 \times 10^{-9} \textrm{ (kilowatt-hours)}$
## Output Resolution
Energy measurements are available in two formats: floating point numbers in units of kilowatt-hours (kWh) and integer numbers in units of 0.1 kWh.
### Floating Point Energy Measurements
Floating point energy readings have a varying resolution, depending on how much energy has accumulated. WattNode meters use a 32 bit IEEE-754 floating point number representation. This has a 24 bit mantissa (including the hidden bit), so the effective resolution will be at least one part in 2^23, or one part in roughly eight million. This supports six or seven decimal digits of resolution.
As energy accumulates, the resolution progressively drops. For example, the following table shows the output resolution for several different total energy values. The second column shows the reading at an effective seven decimal digits of resolution.
| Accumulated Energy | Seven Digit Resolution | Approximate Resolution (Wh) | Approximate Resolution (kWh) |
|---|---|---|---|
| 10 kWh | 10.00000 kWh | 0.01 Wh | 10⁻⁵ kWh |
| 100 kWh | 100.0000 kWh | 0.1 Wh | 10⁻⁴ kWh |
| 1,000 kWh | 1,000.000 kWh | 1 Wh | 10⁻³ kWh |
| 10,000 kWh | 10,000.00 kWh | 10 Wh | 0.01 kWh |
| 100,000 kWh | 100,000.0 kWh | 100 Wh | 0.1 kWh |
| 1000 MWh | 1,000,000 kWh | 1,000 Wh | 1 kWh |
| 10,000 MWh | 10,000,000 kWh | 10,000 Wh | 10 kWh |
Note: 100,000 MWh is the energy rollover point for the WattNode meters, at which point, the energy will rollover back to zero and start accumulating up again.
If you are measuring very high energies and wish to preserve 1 kWh resolution with floating point values, you should periodically reset the energy readings to zero before the energy reaches 1000 MWh (or 1,000,000 kWh). Most of the WattNode energy measurements can be set to zero using the ZeroEnergy feature (see the manual for details).
### Integer Energy Measurements
Integer energy values have a constant resolution of 0.1 kWh or 100 watt-hours. This is generally much coarser than the floating point energy resolution, except when the energy has accumulated to very high values.
## Summary
Depending on the WattNode model, the CT rated amps, and the accumulated energy, the lowest resolution may be the raw measurement or the output resolution. Whichever resolution is the lowest should be considered the effective resolution.
For most applications, we recommend using the integer energy measurements because of the consistent 0.1 kWh resolution. However, if you need higher resolution, you can use the floating point energy measurements. Just take note of how the effective resolution varies as the total energy increases.
For very short time periods, the noise will generally be larger than the resolution. For longer time periods, the system accuracy will almost always be worse than the resolution.
|
2019-07-18 21:21:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901834964752197, "perplexity": 4179.032413358039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00492.warc.gz"}
|
https://academy.hoppersroppers.org/mod/assign/view.php?id=2385
|
## Basic Bash Scripting
Bash has a built in scripting language, which I guess is pretty cool. These days, you should just use Python for most things instead of writing Bash scripts, but you would be horrified if you knew how much of the internet, nay, how much of the world runs on bash scripts. Most servers? Of course. Water plants? Absolutely. Nuclear reactors? I'd be shocked if there wasn't a cron-jobbed bash script running somewhere in every plant in the world. Don't tell anyone, but a great deal of this site runs on janky cron jobs.
Because bash and cron jobs (which we'll learn about later) are basically the underpinning of the modern world, you should probably get to know them.
## Bash Scripts
Open up your favorite text editor and build a file that looks like this and save it as "hello.sh". Note the top line which tells the shell what type of script it is, the file extension doesn't matter at all.
#!/bin/bash
echo "Hello World"
Then you can execute it using the command $ bash hello.sh. Similarly, you can use chmod to mark the file executable and then execute it using $ ./hello.sh.
As you might have noticed, the command you are running is exactly what you would run in a normal terminal command. Does that mean most of the knowledge you've learned remains applicable in here?
In a shocking turn of events, yes, it does. Don't get used to things working this nicely in the future.
Check out what we can do to take input and execute commands in a script. Just build the script, save it whatever you want, make it executable and run it. Don't worry about understanding it, I'm trying to demonstrate capability, not teach you how to do things at this point.
#!/bin/bash
# read is a nice little bash builtin function that isn't available in the shell,
# but is in the scripting language. Also, lines starting with '#' are known as comments and aren't executed, these are in all languages, you'll get used to them.
read name
date="$(date)"
# We just set the value that date returns to date
# using something called "command substitution".
# Unsurprisingly, you can do very complicated things with that.
echo "Welcome $name it is $date"
And finally, here is a program that counts to 10, printing out all the numbers as it goes. Save it as counter.sh in your Documents directory and run it with ./counter.sh.
#!/bin/bash
valid=true
count=1
while [ $valid ]
do
echo $count
if [ $count -eq 10 ];
then
break
fi
((count++))
done
Don't worry about what is going on in there, just know we can do whatever we want using bash scripting, though we probably should use something more modern like Python. Still, good to have in a pinch.
## Cron Jobs
Sometimes you might find yourself needing to run a command every couple of minutes, and luckily, there is a wonderful Linux tool for that named crontab. These commands can be anything, from kicking off shell scripts, starting python processes, checking that programs are still running, really anything you want to schedule, cron jobs are the right way to do it.
A crontab file contains instructions for the cron daemon in the following simplified manner: "run this command at this time on this date". Each user can define their own crontab file. Commands defined in any given crontab are executed under the user who owns that particular crontab.
To see what is in your crontab file, run crontab -l. To edit it, run crontab -e.
Add this to the bottom of your cronjobs to execute the counter.sh script every minute.
*/1 * * * * /home/studentName/Documents/counter.sh
Alright, that incantation on the front that says when to run can be a monstrosity, especially when it comes to more complicated timing, so don't learn it and just use existing examples online. I use this site (https://crontab.guru/examples.html) for all my crontab-ing needs. (You better believe that Roppers runs on crontabs and janky Python scripts).
# Assignment:
1. Write the bash script to append the date and time the script was run to a file named "dates.txt"
2. Write the cron command to run the script you just wrote every Friday at noon.
Answers:
1.
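One possible solution (the path mirrors the counter.sh example above, and the script name dates.sh is just a suggestion, so adjust both to your own setup):
#!/bin/bash
# dates.sh - append the current date and time to dates.txt each time it runs
date >> /home/studentName/Documents/dates.txt
2.
# minute 0, hour 12 (noon), any day of month, any month, day-of-week 5 (Friday)
0 12 * * 5 /home/studentName/Documents/dates.sh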
Resources:
Pre-Questions:
Post-Questions:
Feedback:
|
2021-07-26 16:01:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.242564857006073, "perplexity": 2212.2951407250634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00150.warc.gz"}
|
https://ccssmathanswers.com/eureka-math-algebra-1-module-3-lesson-24/
|
# Eureka Math Algebra 1 Module 3 Lesson 24 Answer Key
## Engage NY Eureka Math Algebra 1 Module 3 Lesson 24 Answer Key
### Eureka Math Algebra 1 Module 3 Lesson 24 Exercise Answer Key
Opening Exercise
Here are two different parking options in the city.
The cost of a 2.75-hour stay at 1-2-3 Parking is $6 + $5 + $4 = $15. The cost of a 2.75-hour stay at Blue Line Parking is $5(2.75) = $13.75.
Which garage costs less for a 5.25-hour stay? Show your work to support your answer.
Answer:
1-2-3 Parking: $6 + $5 + $4(4) = $27
Blue Line Parking: $5(5) + $4(0.25) = $26
Mathematical Modeling Exercise
Helena works as a summer intern at the Albany International Airport. She is studying the parking rates and various parking options. Her department needs to raise parking revenues by 10% to help address increased operating costs. The parking rates as of 2008 are displayed below. Your class will write piecewise linear functions to model each type of rate and then use those functions to develop a plan to increase parking revenues.
Exercise 1.
Write a piecewise linear function using step functions that models your group’s assigned parking rate. As in the Opening Exercise, assume that if the car is there for any part of the next time period, then that period is counted in full (i.e., 3.75 hours is counted as 4 hours, 3.5 days is counted as 4 days, etc.).
Answer:
Answers may vary. Each function models the parking rate (in dollars) as a function of the number of hours parked.
Helena collected all the parking tickets from one day during the summer to help her analyze ways to increase parking revenues and used that data to create the table shown below. The table displays the number of tickets turned in for each time and cost category at the four different parking lots.
Parking Tickets Collected on a Summer Day at the Albany International Airport
Exercise 2.
Compute the total revenue generated by your assigned rate using the given parking ticket data.
Answer:
Total revenue for Short Term: $2,308
Total revenue for Long Term: $10,840
Total revenue for Parking Garage: $7,184
Total revenue for Economy Remote: $11,900
Total revenue from all lots: $32,232
Exercise 3.
The Albany International Airport wants to increase the average daily parking revenue by 10%. Make a recommendation to management of one or more parking rates to change to increase daily parking revenue by 10%. Then, use the data Helena collected to show that revenue would increase by 10% if they implement the recommended change.
Answer:
A 10% increase would be a total of $35,455.20. Student solutions will vary but should be supported with a calculation showing that their changes will result in a 10% increase in parking revenue. The simplest solution would be to raise each rate by 10% across the board. However, consumers may not like the strange-looking parking rates. Another proposal would be to raise short-term rates by $0.50 per half hour and raise economy rates to $6 per day instead of $5.
### Eureka Math Algebra 1 Module 3 Lesson 24 Problem Set Answer Key
Question 1.
Recall the parking problem from the Opening Exercise.
a. Write a piecewise linear function P using step functions that models the cost of parking at 1-2-3 Parking for x hours.
Answer:
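The answer appears as an image in the original answer key; one step-function form consistent with the Opening Exercise values (P(2.75) = 15 and P(5.25) = 27) is:
$$P(x)=\begin{cases}6, & 0<x\le 1\\ 11, & 1<x\le 2\\ 15, & 2<x\le 3\\ 15+4\lceil x-3\rceil, & x>3\end{cases}$$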
b. Write a piecewise linear function B that models the cost of parking at Blue Line parking for x hours.
Answer:
c. Evaluate each function at 2.75 and 5.25 hours. Do your answers agree with the work in the Opening Exercise? If not, refine your model.
Answer:
P(2.75) = 15, and B(2.75) = 13.75
P(5.25) = 27, and B(5.25) = 26
d. Is there a time where both models have the same parking cost? Support your reasoning with graphs and/or equations.
Answer:
When x = 5.5, 6.5, 7.5, …
e. Apply your knowledge of transformations to write a new function that would represent the result of a $2 across-the-board increase in hourly rates at 1-2-3 Parking. (Hint: Draw its graph first, and then use the graph to help you determine the step functions and domains.)
Answer:
Question 2.
There was no snow on the ground when it started falling at midnight at a constant rate of 1.5 inches per hour. At 4:00 a.m., it started falling at a constant rate of 3 inches per hour, and then from 7:00 a.m. to 9:00 a.m., snow was falling at a constant rate of 2 inches per hour. It stopped snowing at 9:00 a.m. (Note: This problem models snow falling by a constant rate during each time period. In reality, the snowfall rate might be very close to constant but is unlikely to be perfectly uniform throughout any given time period.)
a. Write a piecewise linear function that models the depth of snow as a function of time since midnight.
Answer:
Let S be a function that gives the depth of snow S(x) on the ground x hours after midnight.
b. Create a graph of the function.
Answer:
c. When was the depth of the snow on the ground 8 inches?
Answer:
S(x) = 8 when 3(x-4) + 6 = 8. The solution of this equation is x = $$\frac{14}{3}$$ hours after midnight, or 4:40 a.m.
d. How deep was the snow at 9:00 a.m.?
Answer:
S(9) = 19 in.
Question 3.
If you earned up to $113,700 in 2013 from an employer, your social security tax rate was 6.2% of your income. If you earned over $113,700, you paid a fixed amount of $7,049.40.
a. Write a piecewise linear function to represent the 2013 social security taxes for incomes between $0 and $500,000.
Let
where x is income in dollars and f(x) is the 2013 social security tax.
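The piecewise definition itself is shown as an image in the original; reconstructed from the problem statement (6.2% of income up to $113,700, then a fixed $7,049.40) it reads:
$$f(x)=\begin{cases}0.062x, & 0\le x\le 113{,}700\\ 7{,}049.40, & 113{,}700<x\le 500{,}000\end{cases}$$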
b. How much social security tax would someone who made $50,000 owe?
Answer:
f(50000) = 3100; the person would owe $3,100.
c. How much money would you have made if you paid $4,000 in social security tax in 2013?
Answer:
f(x) = 4000 when x = 64516.129; you would have made $64,516.13.
d. What is the meaning of f(150, 000)? What is the value of f(150, 000)?
Answer:
The amount of social security tax you would owe if you earned $150,000.
f(150,000) = $7,049.40
Question 4.
The function f gives the cost to ship x lb. via FedEx standard overnight rates to Zone 2 in 2013.
a. How much would it cost to ship a 3 lb. package?
Answer:
f(3) = 24.7; the cost is $24.70. b. How much would it cost to ship a 7.25 lb. package? Answer: f(7.25) = 31; the cost is$31.00.
c. What is the domain and range of f?
Answer:
Domain: x∈(0, 9]
Range: f(x)∈{21.5, 23, 24.7, 26.6, 27.05, 28.6, 29.5, 31, 32.25}
d. Could you use the ceiling function to write this function more concisely? Explain your reasoning.
Answer:
No. The range values on the ceiling function differ by a constant amount. The rates in function f do not increase at a constant rate.
Question 5.
Use the floor or ceiling function and your knowledge of transformations to write a piecewise linear function f whose graph is shown below.
Answer:
f(x) = -⌈x⌉ + 3 or f(x) = ⌊-x⌋ + 4
### Eureka Math Algebra 1 Module 3 Lesson 24 Exit Ticket Answer Key
Question 1.
Use the graph to complete the table.
Answer:
Question 2.
Write a formula involving step functions that represents the cost of postage based on the graph shown above.
Answer:
f(x) = 2⌈x⌉ + 44, 0 < x ≤ 6
Question 3.
If it cost Trina $0.54 to mail her letter, how many ounces did it weigh?
Answer:
It weighed more than 4 oz. but less than or equal to 5 oz.
|
2021-12-07 00:32:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17675207555294037, "perplexity": 3193.107931553384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00290.warc.gz"}
|
https://www.physicsforums.com/threads/relativity-professor-in-spaceship-examning-students.190048/
|
# Homework Help: Relativity - Professor in spaceship examining students
1. Oct 9, 2007
### azatkgz
And also check this one.
1. The problem statement, all variables and given/known data
A physics professor on the Earth gives an exam to her students, who are in a spacecraft traveling at a speed v relative to the Earth. The moment the craft passes the professor, she signals the start of the exam. She wishes her students to have a time interval $$T_0$$ (spacecraft time) to complete the exam. Find the time interval T (Earth time) she should wait before sending a light signal telling them to stop.
3. The attempt at a solution
If the students are to have $$T_0$$ on the spacecraft clock, the professor (Earth frame) measures this as the dilated interval
$$T'=\frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$$
The stop signal travels at c and must reach the craft, which is at distance vT' from Earth at Earth time T', so it has to be sent earlier by the light-travel time vT'/c:
$$T=T'-\frac{vT'}{c}=T_0\sqrt{\frac{1-v/c}{1+v/c}}$$
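A quick numerical check with an assumed speed of v = 0.6c:
$$T=T_0\sqrt{\frac{1-0.6}{1+0.6}}=T_0\sqrt{0.25}=0.5\,T_0$$
so the professor waits only half of $$T_0$$ before sending the stop signal.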
Last edited: Oct 9, 2007
2. Oct 9, 2007
### MathematicalPhysicist
Seems alright to me, you've subtracted the time it would take the signal to get to the spacecraft from the overall time for the exam in the Earth's system.
nice work.
|
2018-07-23 02:35:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4019307792186737, "perplexity": 4075.2732753614146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594790.48/warc/CC-MAIN-20180723012644-20180723032644-00186.warc.gz"}
|
http://www.ck12.org/tebook/Human-Biology-Genetics-Teacher's-Guide/r1/section/2.3/
|
2.3: Activities and Answer Keys
Difficulty Level: At Grade Created by: CK-12
Activity 1-1: Fingerprinting
PLAN
Summary Students make a set of their own fingerprints and classify each print into one of four different groups. Students calculate the frequency for each pattern that occurs in the class. They learn that fingerprints are unique to each individual and can provide a valuable means of identification.
Objectives
Students:
✓ make a set of their own fingerprints.
✓ classify each print into one of the four fingerprint patterns.
✓ calculate the frequency for each pattern occurring in the class.
✓ recognize that fingerprint patterns are unique to each individual and are an example of human genetic variation.
Student Materials
• Activity Report
• Magnifying glass
• Metric ruler
• Clear tape
• Paper towels and soap; or packaged hand wipes
Teacher Materials
• Activity Report Answer Key
• Calculator
• Graph paper
• White paper for students to make extra fingerprints for story (Language Arts)
• Extra student materials, especially ink for pads
Estimated Time One class period
Interdisciplinary Connections
Math Calculate the frequency of each fingerprint pattern and graph this information.
Art Draw posters for each of the four fingerprint patterns for use as class references.
Social Studies Research the history and use of fingerprints.
Language Arts Write a story describing personal traits and unique interests. Illustrate this story using a “fingerprint character” designed from the student's own fingerprint to which some arms, legs, and other personal traits have been added.
Prerequisites and Background
Students need to know how to calculate frequency.
A frequency, in this context, is the number of times a particular variation occurs in the total population or total number of observations. Frequency can be calculated by dividing the total number of occurrences by the number of observations. For example, for each fingerprint pattern, the frequency is determined by dividing the number of students with fingerprint pattern by the total number of fingerprints observed in the class.
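For example, the class frequencies can be computed in a couple of lines of Python (the pattern names and counts below are made up for illustration; substitute your class's tallies):
counts = {"loop": 18, "whorl": 7, "arch": 3, "tented arch": 2}   # hypothetical tallies
total = sum(counts.values())
for pattern, n in counts.items():
    print(pattern, n / total)   # frequency = occurrences / total observations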
Gather one set of student materials for each group of students.
Advise students to wear old clothes in case they get ink on their clothes. Also have large, old shirts available if students wish to put them on over their regular clothing.
Coordinate interdisciplinary activities with other teachers.
IMPLEMENT
Divide students into groups of 2 or 4, depending on the availability of student supplies.
Step 1 Discuss the four basic fingerprint patterns shown in Figure 1.3.
Steps 2-4 Demonstrate how to obtain a clear fingerprint using your own hand or that of a student. (See Step 4 of student procedures.)
Step 5 Caution students to clean hands immediately after obtaining their fingerprints.
Steps 6-9 Explain to students how to calculate frequency. You may want to share class totals with each of your classes, especially noting fingerprint patterns from identical twins.
Arrange a display area for student work (fingerprints, graphs, and fingerprint stories).
ASSESS
Use the completion of the activity and the written answers on the Activity Report to assess if students can
✓ make a set of their own fingerprints.
✓ classify each print into one of the four fingerprint patterns.
✓ calculate the class frequency for each fingerprint pattern.
✓ explain how fingerprint patterns are unique to each individual and are an example of human genetic variation.
Activity 1-1: Fingerprinting Activity Report Answer Key
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. Record your fingerprints in the table below.
2. Which one of the four patterns is represented by your fingerprints?
3. What is the most common fingerprint pattern in your class?
4. It is said that no two people, even identical twins, have the same fingerprints. Do you agree or disagree? Explain.
5. Calculate the frequencies for each fingerprint pattern found in your class. List them in order from most common to least common.
Continuity and Diversity in Art Students demonstrate their knowledge of continuity, diversity, trait, and variation in a drawing, cartoon, or painting.
Wrist Variation Students measure the circumference of their wrists and then graph the results for the entire class.
Describe what everyday life would be like if there were less variety among living things. How would your life be different? What would be the drawbacks to having less diversity, and what are the benefits to having more diversity?
You may observe some of the following in students' writing. Everyday life might be different if there were less diversity because there would be the same plants and animals everywhere we went. We might only have lawns and gardens instead of forests, grasslands, wetlands, scrub lands, etc. Instead of lots of different kinds of animals (like many breeds of dogs), there would only be one of a few kinds. Life would be boring. More diversity means a variety of things to choose from in terms of plants we eat, animals we interact with, and natural places we can visit. It also means that organisms in ecosystems can change and adapt to a changing environment.
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. What is unique about a species?
1. What term refers to the phenomenon of living organisms producing offspring with similar characteristics?
2. What term refers to the phenomenon that all living organisms, even those from the same species, are different from each other?
2. What is the difference between traits and variations?
3. What is genetics? What are geneticists most interested in? Why wouldn't a geneticist be interested in hair length in humans?
Activity 1-1: Report Fingerprinting (Student Reproducible)
1. Record your fingerprints in the table below.
2. Which one of the four patterns is represented by your fingerprints?
3. What is the most common fingerprint pattern in your class?
4. It is said that no two people, even identical twins, have the same fingerprints. Do you agree or disagree? Explain.
5. Calculate the frequencies for each fingerprint pattern found in your class. List them in order from most common to least common.
|
2016-10-23 22:52:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2102416753768921, "perplexity": 2643.3755089707447}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719437.30/warc/CC-MAIN-20161020183839-00531-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://inverseprobability.com/talks/slides/2018-10-30-mind-and-machine-intelligence.slides.html
|
# Mind and Machine Intelligence
FinTECHTalents 2018
E S A R I N T U L
O M D P C F B V
H G J Q Z Y X K W
[Slide: table contrasting human and machine information-processing rates (bits per minute, calculations per second, and embodiment timescales).]
### Evolved Relationship
$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$
### From Model to Decision
$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$
### DeepFace
Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Color illustrates feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers.
Source: DeepFace (Taigman et al., 2014)
### Systems Design
• decomposition
• data
• deployment
### References
Taigman, Y., Yang, M., Ranzato, M., Wolf, L., 2014. DeepFace: Closing the gap to human-level performance in face verification, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2014.220
|
2021-09-27 16:10:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2195347398519516, "perplexity": 7956.033611374211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058456.86/warc/CC-MAIN-20210927151238-20210927181238-00611.warc.gz"}
|
http://ceafel.tecnico.ulisboa.pt/news/ceafel-seminar-24/
|
# CEAFEL Seminar
On Friday, 05/01/2018, 15:30 — 16:30, in Room P3.10 of Mathematics Building of the IST will take place a seminar by
Yuri Karlovich (Universidad Autónoma del Estado de Morelos, Mexico) with title “One-sided invertibility of discrete functional operators with bounded coefficients”
Abstract: The one-sided invertibility of discrete functional operators with bounded coefficients on the spaces $l^p(\mathbb{Z})$ with $p\in[1,\infty]$ is studied. Criteria of the one-sided invertibility of such operators generalize those obtained in the case of slowly oscillating behavior of coefficients. Criteria of the one-sided invertibility of discrete functional operators associated with infinite slant-dominated matrices are established. Applications to studying the two- and one-sided invertibility of functional operators on Lebesgue spaces are also considered.
(for more details see here)
|
2018-03-19 18:29:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6819158792495728, "perplexity": 2535.2538298262075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00290.warc.gz"}
|
https://dlang.org/blog/category/community/
|
# New Year DLang News: Hello 2022
For many people around the world, 2021 is a year they’d like to forget. The ongoing pandemic has touched all of our lives indirectly, but for too many, including some in the D community, it has had a more direct impact. We wish a full recovery for those of you who have been physically or emotionally affected by the virus. Please don’t forget: the D community is a network of people located around the globe. We are linked by our interest in the D programming language, but we are people before we are D programmers. If you find yourself in circumstances that disrupt any commitments you have in the community, it’s nothing to fret over. Get it sorted and we’ll be here when you get back. And if you need help to get it sorted, there are many among us willing to help if they can. Don’t be afraid to reach out.
Collectively, 2021 was a pretty good year for D. Some highlights:
A small amount of the work done in 2021 was paid for. The rest was carried out by volunteers, without whom the D programming language would not be where it is today. On behalf of the D Language Foundation, thanks again to all of our contributors, large and small, for all that you do.
## We’re hiring
Symmetry Investments has informed us that they will continue sponsoring the three positions they started sponsoring last year. Razvan Nitu will continue in his role as a Pull Request Manager, and Max Haughton will go on as a general purpose assistant. The second Pull Request Manager role is currently vacant. We are looking for someone to fill it.
The position pays $25,000 USD per year. The ideal candidate is someone who:

• is familiar with git, GitHub, and Bugzilla;
• is familiar enough with D to be able to review simple pull requests;
• is able to recognize when more specialized reviews are required; and
• is able to proofread English text (for reviewing documentation and web site pull requests).

The person who fills the position will work closely with Razvan Nitu. Examples of the role’s responsibilities include:

• ensuring all pull requests follow procedure;
• reviewing simple pull requests;
• finding appropriate reviewers for more complex pull requests;
• ensuring that pull requests are reviewed in a timely manner;
• reviving stale pull requests;
• coordinating between pull request submitters and reviewers to prevent pull requests from going stale;
• closing pull requests that are no longer valid;
• identifying Bugzilla issues that are duplicates or invalid;
• identifying Bugzilla issues that are candidates for bounties;
• publicizing Bugzilla issues in need of a champion; and
• other related tasks.

We are hoping to hire from within the D community, though we will accept queries from anyone. If you are interested in taking on the role, please send your resume to social@dlang.org.

## Symmetry Investments is hiring

Symmetry Investments is looking for people to fill a number of roles. Their monthly job announcement at HackerNews lists those roles along with qualifications, details on how to apply, and more. If you think you don’t qualify because you lack a degree or haven’t built up a history of experience, please pay special attention to the following lines from the job announcement:

We look for virtues and capabilities over only experience and credentials although those things aren’t a disadvantage. Do not let a lack of credentials or qualifications prevent you from applying.

They are hiring for full-time, fixed-term contracts with flexible hours, with the possibility for both remote work and sponsorship for a visa in London, Hong Kong, Singapore, or Jersey.

## Symmetry Autumn of Code 2021

Milestone 4 of SAOC 2021 kicked off on December 15th. At this point, only two participants remain eligible for the final Milestone 4 reward, but four of the original five projects are on the road to completion.

• Replace DRuntime hooks with templates – Teodor Dutu has been steadily making progress on his project and has faced some tough challenges along the way. He successfully completed Milestones 1 – 3 and is continuing the project through Milestone 4.
• Implement support for D in LLVM Debugger (LLDB) – Luís Ferreira has also faced some hard problems in passing Milestones 1 – 3 and continues his work as well. One major step in his progress: he has been granted commit access to LLVM and is now part of the team that reviews, accepts, and merges D-related code into the LLVM tree.
• Rethinking the default class hierarchy – Robert Aron submitted a DIP for the ProtoObject at the end of Milestone 1. Unfortunately, he was unable to complete SAOC Milestone 3, but we will launch the first round of Community Review for the DIP in mid-January.
• Light Weight DRuntime (LWDR) – Dylan Graham had to withdraw from the SAOC event after Milestone 2. However, his LWDR is a passion project that existed prior to SAOC and will still be there after the event ends. He intends to pick up the project again when he is able. We wish him the best and look forward to his future work.
• Improve DUB: solve dependency hell – Ahmet Sait Koçak picked this project from the community-maintained DLang Project Idea repository. The SAOC judges had concerns about the proposed solution, so before accepting it for SAOC 2021, we discussed the project at the D Language Foundation’s monthly meeting in August. The final decision was to accept the project, with the stipulation that Ahmet explore a specific alternative and only attempt his proposed solution if that was not viable. The alternative proved a dead end, so he moved forward on his initial proposal. He was able to make progress until he encountered issues which will likely require work beyond the scope of the project to resolve. As such, he will be unable to complete the event. Future work on solving the DUB dependency hell problem may well need to take a different approach.

## DConf Online 2021 Q & A videos

To date as I write, I have published six of the eight Q & A videos that I cut and trimmed down from the Day One and Day Two livestreams. I’ll have the remaining two published, along with the ‘Ask Us Anything!’ session with Walter, Atila, and Razvan, before the middle of January. All of the Q & A videos are available on the DConf Online 2021 Q & A playlist and links are available in the description of each talk at dconf.org. The AUA will be listed on the DConf Online 2021 playlist and linked from its description in the DConf Online 2021 schedule.

On a related note, we’re all itching to get the real-world DConf going again. We’re currently evaluating the possibility of doing so later this year and what it will look like if it happens. Stay tuned.

## Onward and upward!

We’ve got a number of things going on for 2022. Some examples: I’ll be publishing a tutorial series on our YouTube channel; we’ll finally publish a new vision document; we’ll be taking the first steps toward bringing the services in our ecosystem under one roof with multiple admins; we’ll either give Bugzilla an overhaul or port our issues to GitHub; we’ll finally have an implementation of the named arguments DIP; and more.

We are always in need of contributors. There are several ways to contribute:

• If you’re working on your own D project, please contact me to write about it on this blog. Or write about it on your own blog. Or tweet about it. Let the world know what you’re doing! D exists and people are using it, so we need to be shouting out loud so that more people know about it.
• If you find an issue, please report it. If there’s an issue you can solve, please submit a PR. If you’re interested in solving multiple issues, please contact Razvan Nitu about joining one of his strike teams.
• If you don’t have time to solve issues, please consider supporting us financially by posting a bounty on any issues you care about, or donating to one of our funds. Or maybe support us by buying swag at the DLang Swag Emporium using the link in the sidebar so that we get a referral bonus on top of royalties. Or perhaps select the D Language Foundation as your preferred charity at smile.amazon.com so that we get a small percentage of your purchase amount when you shop there. (The D Language Foundation is only available as an option through Amazon’s .com domain.)
• One of the most impactful ways you can contribute is to help newcomers to the D programming language. Hang out on the D Community Discord server or in the D Forums and employ the knowledge you’ve gained about D in helping others solve their problems.
Help us in continuing to grow one of the most helpful communities on the internet. Together, we can make 2022 a great year for our favorite programming language. Happy New Year!

# DLang News September/October 2021: D 2.098.0, OpenBSD, SAOC, DConf Online Swag

Version 2.098.0 of the D programming language is now available in the form of DMD 2.098.0 (the reference D compiler) and LDC 1.28.0 (the LLVM-based D compiler), D has come to OpenBSD, cool things are happening thanks to the Symmetry Autumn of Code, and DConf Online 2021 t-shirts are available for purchase. Read on for the deets.

## DMD 2.098.0

This release comes with 17 major changes and 160 fixed Bugzilla issues from 62 contributors across the core repositories. The number of fixed issues may well be a record high. The 2.097.0 release had 144, and the 2.094.0 release had 119, but a cursory look at several other major releases shows numbers ranging from the high 40s to under 100, with counts in the 50s showing up frequently. This is the sort of trend we were hoping to see when Razvan Nitu came on board as our Pull Request and Issue Manager, and we couldn’t be more pleased. There are two items of note that I’d like to point out from the new release, and then I have a little more to say about the work Razvan is doing.

### ImportC

The ImportC compiler is a major enhancement to D that allows the D compiler to directly compile C source code. Walter has been working on it for a few months now, and this is the first release in which it’s available. ImportC enables the compiler to inline C function calls and even evaluate them at compile time via CTFE. ImportC targets C11 and does not currently handle preprocessor directives, so any C source you do intend to compile must first be run through a preprocessor. It’s not yet complete, but if you have a use case for it, any help in finding and reporting ImportC bugs is welcome. Contributions to fix said bugs doubly so!

### Fork-based garbage collector

This release also includes an optional concurrent garbage collector for Posix systems. This is cool in and of itself, but more so because the project came to fruition thanks to the Symmetry Autumn of Code. It was originally developed for D1 by Leandro Lucarella but was never included in an official release (using alternative GCs back then required more than just a simple command-line switch). In 2018, for the inaugural edition of SAOC, Francesco Mecca undertook to port the GC to D2. This resulted in a pull request to DRuntime that was ultimately merged in time for this release by Rainer Schuetze.

To use the new GC, provide the DRuntime option --DRT-gcopt=fork:1 on the command-line of any program compiled against DRuntime 2.098.0+ (this is not a compiler option, but an option to any program linked with DRuntime). It can also be configured programmatically via:

extern(C) __gshared string[] rt_options = [ "gcopt=fork:1" ];

See the D documentation for more GC configuration options.

### Shrinking the pull-request queues

Razvan has been managing pull requests across several of our repositories, but he’s been laser-focused on reducing the number of PRs in the phobos and druntime repositories, with dmd his next target. This isn’t just about lowering the PR count. He’s been reviving old PRs with the original author where he can (he tells me he was surprised how many PR authors were responsive, even after no activity on a PR for a few years) and has tried to rebase and resolve those where he can’t.
Here are some statistics he’s gathered on PR activity so far this year across the phobos, druntime, and dmd repositories:

• phobos: 568 PRs created, 650 PRs closed
• druntime: 283 PRs created, 311 PRs closed
• dmd: 1140 PRs created, 1126 closed

At the time he sent me the stats on October 29th, the number of open PRs in phobos had gone down from 160 to 77 and druntime from 130 to 96. The number of open PRs in dmd has remained fairly constant at around 230. We want to thank Razvan for all the work he is doing, Symmetry Investments for sponsoring his position, the volunteer members of the “strike teams” Razvan has assembled to squash as many bugs as possible, and every contributor who has donated and continues to donate their time and effort to improving our favorite programming language.

## LDC 1.28.0

The latest release of LDC implements D 2.098.0 (D frontend, DRuntime, and Phobos) and is compatible with LLVM 6.0 – 12.0. A major item in this release is that LDC now supports dynamic casts across binary boundaries. DLL support has long been a weak point in D, often requiring the programmer to resort to extern(C) functions that return handles (pointers, references) to D objects. Martin Kinkelin has worked to improve the situation in LDC, motivated primarily by the desire to provide the standard library and runtime as a DLL on Windows. Thanks to Martin and all the LDC contributors for the work they do to keep LDC releases in sync with those of DMD. If you benefit from their efforts, please consider sponsoring Martin (and LDC by extension) on GitHub!

## D on OpenBSD

The D ecosystem grows primarily because of the efforts of volunteers who step forward to fill in the blanks. New D projects pop up all the time, but it’s pretty rare to hear that someone has brought D to a new platform. Brian Callahan has done just that.

Brian has been on a mission to bring D to OpenBSD. In August of this year, he popped into the D forums with an announcement that GDC, the GCC-based D compiler maintained by Iain Buclaw, was now available in the OpenBSD ports tree as part of GCC 11. In early October, he let us know that DMD was coming to the platform. Then in late October, he had the same news about LDC. Instructions for installing DMD on OpenBSD are on the download page (and can be extrapolated to LDC and GDC). We are grateful to Brian for the work he has done to make this happen. We’re looking forward to his upcoming DConf Online 2021 talk, Life Outside the Big 4: The Adventure of D on OpenBSD:

The journey of D from pie-in-the-sky to a package officially offered in the OpenBSD package repository serves as a model story for other platforms who want to offer D to their userbase. We will walk through the many interconnected parts required to get a D package on OpenBSD, what the future is like for D outside the Big 4, how you can get started with D on your platform, and how those of us who enjoy life outside the Big 4 can be a positive force for D and the D community.

## SAOC News

The SAOC 2021 progress bar is past the 25% mark. The first milestone wrapped up on October 15, and the participants have been posting weekly progress reports in the General Forum. It’s always interesting to read about the challenges they encounter and their solutions.

But the latest SAOC isn’t the only edition about which there is news to report. I’ve written above about the SAOC 2018 forking GC project that has found its way into the latest release of DRuntime.
I can’t begin to tell you how pleased I am that another SAOC project has come into its own. For SAOC 2020, Adela Vais set out to implement a D backend for the venerable Bison parser generator. Not only did Adela successfully complete SAOC, she saw her project through to its ultimate goal. The D backend was officially released as part of Bison 3.81 in September. We want to offer Adela our congratulations and a huge round of applause for a job well done! Getting a project of this scope accepted into a GNU codebase is no mean feat.

## DConf Online 2021 T-Shirts

DConf Online 2021 is less than a month away. The D Language Foundation will be providing DConf Online 2021 swag to the DConf speakers and prizes to viewers asking questions in the post-talk live stream Q & A sessions. The cost of the items and their shipping are the only DConf Online expenses, and they’re covered by the D Language Foundation General Fund. Direct donations to the General Fund and our more targeted funds are always appreciated, but you can also help support the D programming language and DConf Online by purchasing a DConf Online 2021 T-Shirt or other D swag in the DLang Swag Emporium. All proceeds go straight into the General Fund. You get some swag along with our gratitude, and we get a couple of bucks. That’s a pretty good deal!

## Looking Forward

As we near the end of 2021, we are looking forward to 2022 and beyond. The D programming language, its ecosystem, and its community have come a long way from the gaggle of curious coders who first took an interest in a one-man project by the guy who had created the game Empire and the Zortech C++ compiler. The primary means of contributing to the core D projects went from emailing patches to Walter, to posting patches on Bugzilla, to committing to a Subversion repository, to submitting pull requests on GitHub. The web site went from being a few basic HTML pages of the D spec on digitalmars.com maintained only by Walter, to a simple HTML site designed by a community member under the dlang.org domain, to the more complex collection of pages and scripts that today is maintained in Ddoc by multiple contributors. The ecosystem has gone from random libraries and tools hosted by individuals on myriad services, to centralized hosting at dsource.org, to the package repository at code.dlang.org.

These are just some examples of major changes over the years, each in response to growth: as the community grew in size, some of the processes and systems began to burst at the seams. To continue to grow, something had to change. Such improvements have nearly always been the result of community action: discussion and debate in the forums eventually would lead to a champion stepping forward to make it happen. Community action has been the driving force of D since Walter first announced the “D alpha compiler” in late 2001. That’s still true today. We have a handful of paid positions, but we are still primarily driven by volunteers. The see-a-problem-and-fix-it philosophy that carried D to where we are today has served us well, and we hope it will continue to do so into the future.

But that alone is no longer enough. We are bursting at the seams again, and have been for some time. In the monthly foundation meetings, we’ve been discussing specific issues, both low level and high, and how to solve them. But there’s one thing that’s been missing from the equation: organization.
Razvan Nitu’s position as Pull Request & Issue Manager grew out of an email discussion, prompted by Laeeth Isharc, and was a year in the making. We are grateful for every volunteer who has and continues to make themselves available to review pull requests. Razvan is here not to replace them, but to complement them. They can continue as they have done. What Razvan brings to the mix is organization. He’s there to make sure fewer issues and PRs fall through the cracks, to ensure that as many issues as possible that can be resolved are resolved.

In November, the D Language Foundation and a couple of contributors are meeting with a community member who has graciously volunteered his time and expertise to advise us on how to bring the disparate servers in the D community under Foundation management and multiple admins. The end goals are to eliminate the financial burden on the volunteers who maintain these services and, hopefully, reduce the response time when it comes to solving server-related issues or making changes. In other words, organization.

I’m in the middle of revising the Vision Document that we put together over the summer. I’m not just editing it, though. I’m expanding it. My vision of the vision document has evolved since we first discussed a “goal-oriented task list” in our June meeting. I said at the time that I didn’t “know what the initial version of the final list will look like”. I feel that what we came up with falls short of meeting the need it was intended to fill. Now, I’m pretty sure of what it needs to look like. At the moment, I’m swamped in preparations for DConf Online 2021, so I’ve put the document on the backburner. I plan to pick it up again in early December and present my revisions at the last foundation meeting of the year for approval. If all goes well, it should be published on dlang.org in January. This will be a living document, updated to reflect current priorities as time goes by.

Mathias Lang is working on a proposal to bring organization into even more of our processes. It’s a modified version of the governance proposal he brought to the September foundation meeting, the aim of which is to formalize a core team to oversee the day-to-day guidance and management of the D ecosystem. I hope that this will take what already happens in our monthly meetings to the next level. I see this as a means to establish a framework for creating workgroups that can oversee specific tasks and projects, bringing more opportunities for follow-up and follow-through. It should also help provide guidance and establish priorities (e.g., via revisions to the vision document) so that independent contributors can direct their efforts not just to the issues they care about, but those that are seen as a priority by the core team. (I want to emphasize that this is my personal view. Mathias has yet to complete the proposal. But my view is informed by what we discussed in the September meeting.)

With these and future steps aimed at better organizing our community, we intend to level up our ecosystem: motivate library development, improve the onboarding experience, increase retention, make it easier to contribute, and generally resolve the long-standing issues that tarnish the experience of using the best programming language we know. We ask our current volunteers to keep volunteering, and those who aren’t yet doing so to keep an eye out for the right opportunity to pitch in. Together, we can get to where we all want to go.
# Bugzilla Reward System

The Dlang bot has been updated to track Bugzilla issues that have been fixed. It went live for testing on the 2nd of July. Each GitHub user who fixes a bug via a merged pull request is awarded a number of points depending on the severity of the issue. The current results can always be seen on the contributor stats page. This blog post covers all of the details regarding the implementation, rules, and prizes of the reward system.

## Raison d’être

I want to start by saying that the motivation of this system is not to start a fierce competition between contributors to fix as many issues as possible. The primary reasons: we see this as a means for the D Language Foundation to reward committed contributors and to channel their efforts towards more important bugs. If, as a side effect, the system motivates people to fix more bugs, that’s great! We won’t complain.

There are some negative side effects that are possible with any sort of gamification system, and we’ll be keeping an eye out for them. We think we have one of the best online communities out there. Our members are generally friendly and helpful, and we don’t want to do anything that causes tension or proves negatively disruptive. We think this will be a fun way to reward our contributors, but we will pull the plug if it proves otherwise.

## Scoring system

The scoring is designed to reward contributors based on the importance of the issues they fix, rather than the total number fixed. As such, issues are awarded points based on severity:

• enhancement: 10
• trivial: 10
• minor: 15
• normal: 20
• major: 50
• critical/blocker: 75
• regression: 100

Of course, the severity of an issue does not necessarily reflect the complexity of the solution. There might be regressions that are trivial to solve, and enhancements that require an extremely complicated fix. The message that we are trying to send is that complexity is secondary to need. That is why regressions are given top priority and critical/blocker/major issues aren’t far behind.

## Rules

The following rules will guide how points are awarded from the initial launch of the reward system. They are not set in stone and are open to revision over time.

Rule #1: The severity of an issue will be decided by the reviewers of a proposed patch. Severity levels are not always accurately set when issues are first reported and may not have been updated since. The reviewer of a pull request that closes a Bugzilla issue will evaluate the issue’s severity level and may change it if he or she determines it is inaccurate. I will moderate any disagreements that may arise about severity levels.

Rule #2: A PR fixing a bug may not be merged by the same person that proposed the patch. This is already an unwritten rule that applies to the DLang repositories, so it should not surprise anyone.

Rule #3: Anyone who adopts an orphaned PR that fixes a bug may be awarded its associated points. To avoid any authorship conflicts, it is best if the adopter contacts the original author to ask if it is okay to adopt the PR. Rule #3 will apply if there is no response or if the response is affirmative. Otherwise, no points will be awarded.

Rule #4: Only one person may receive points per fixed issue. This rule is specifically designed for reverted PRs. Imagine that a PR that presumably fixes an issue is merged and the author gets points for it. Later on, it is decided that the fix is incorrect and the PR is reverted.
If someone else proposes the correct fix, the points will be subtracted from the original contributor and awarded to the new author. Hopefully, this will motivate the original contributor to propose the correct fix after the reversion.

Rule #5: Incomplete fixes still get points. A Bugzilla report usually includes a snippet of code that reproduces the issue. A frequent pattern is that the bug is correctly fixed for the provided snippet, then someone comes up with a slightly modified example that does not work and reopens the issue. Since the original fix was correct, but not complete, the procedure here is that the original issue should be left closed and a new one should be opened. The original author keeps the points awarded for the original issue.

## Implementation

Since most of you are die-hard geeks and are eagerly awaiting the code, here’s the database implementation hosted on the dlang-bot, and here’s the web page implementation. You will notice that the web page is extremely minimal. That is because I am a total n00b when it comes to web programming, so if anyone has the skills and the time to make a cooler web page, feel free to make a PR :D.

In short, for each of the issues that are fixed, the database stores the Bugzilla issue number, the GitHub ID of the person who fixed it, the date when the fix was merged, and the severity of the issue. Every time the leaderboard page is accessed, a query is issued to the database to compute the total points for all of the contributors and sort them in descending order. Easy peasy. I would like to thank Vladimir Pantaleev for his continued support and assistance throughout the period that I implemented the system.

## Prizes

As Mike briefly described in this forum post, we are going to have quarterly competitions. The quarterly prizes will vary. At the end of the year, the person who has acquired the most points will be awarded a bigger prize. For the inaugural competition, which will officially start on the 20th of September 2021 and will last until the start of DConf Online (the 20th of November), the prizes will be:

• First Place: a $300 Amazon eGift Card
• Second Place: a $200 Amazon eGift Card • Third Place: a$100 Amazon eGift Card
The next set of prizes will be announced at DConf Online, so stay tuned!
## That’s all folks!
If there are any questions or suggestions regarding any aspect of the bugfix reward system, please contact me at razvan.nitu1305@gmail.com. Also, feel free to directly propose changes to the existing infrastructure.
Happy coding everyone!
# SAOC 2021 Projects
The applications have been reviewed, the results decided, and the applicants notified. Five coders will be participating in the 2021 edition of the Symmetry Autumn of Code, one of whom will be the first to take part in SAOC two times.
Following is a brief introduction to each participant and an equally brief summary of their projects. The project planning phase officially kicks off on September 1st, so any details I could provide from their applications would likely change by the time they finalize their initial milestones with their mentors. If you’re eager for more detail, please hold out a little while longer. The participants will start posting updates in the forums once their projects are underway. Their first updates should include more information.
### Rethinking the default class hierarchy
If you followed SAOC 2020, you may recall that Robert Aron was a fourth-year student at University POLITEHNICA of Bucharest who worked on implementing D client libraries for the Google APIs, along with a tool to generate client libraries for said APIs (all of which can be found in his GitHub repositories). He also was a recipient of the final SAOC payment (one of two last year, where usually we have only one) and is owed a free trip to a future real-world DConf.
Robert is now working toward an MSc in Security of Complex Networks at the same university, and he’s back with us for SAOC 2021. His project this time around is a DIP for, and an implementation of, the ProtoObject concept that Eduard Staniloiu described in his DConf 2019 talk. This will set a ProtoObject class as the root class of D’s object hierarchy and the ancestor of the existing Object class. It will allow users to opt in to features currently provided by default through Object, such as the inclusion of a monitor to support synchronization.
Once again, Robert will be working with Eduard Staniloiu and Razvan Nitu as his mentors.
Welcome back, Robert!
### Replace DRuntime hooks with templates
Teodor Dutu is also at university in Bucharest working on a master’s degree in Advanced Cybersecurity. He has experience in C and Java, and it’s the low-level experience he gained working on projects like a file system, a kernel module, and an asynchronous HTTP server that he wants to apply toward improving the D ecosystem. The D language grabbed his interest when he participated in Razvan and Edi’s D Summer School, and he is eager to help out where he can.
To that end, Teodor is entering SAOC to work on a change to DRuntime. Currently, certain operations in user code are rewritten to call functions in the runtime known as runtime hooks (if you’ve ever seen a linker error mentioning something like _d_newArrayT or a symbol with a similar name, that was a runtime hook). There are some significant downsides to this approach, such as code bloat (the entire DRuntime library is linked in when linking statically), negative performance impact (due to the use of TypeInfo to pass runtime information to the hooks), and code that’s hard to maintain (the hooks are inserted at the IR level, a component of the compiler that’s difficult to understand).
Teodor’s plan is to replace each of the runtime hooks with templates. Dan Printzell already did some work on this, and Teodor will be following in his footsteps intending to take it all the way.
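To make the lowering idea a little more concrete, here is a rough sketch in D. The hook names and call shapes in the comments are simplified for illustration and are not meant to match DRuntime’s actual declarations.

void allocate()
{
    // Ordinary user code: allocate a new dynamic array of ten ints.
    int[] a = new int[](10);

    // With the existing hooks, the compiler rewrites this allocation into a call
    // to an untemplated DRuntime function that receives run-time type information,
    // conceptually something like:
    //     _d_newarrayT(typeid(int[]), 10)
    //
    // With the template-based approach, the compiler instead instantiates a
    // templated hook with the element type, conceptually something like:
    //     _d_newarrayT!int(10)
    // The element type is then known at compile time, no TypeInfo is passed at
    // run time, and only the instantiations a program actually uses are linked in.
}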
Eduard Staniloiu and Razvan Nitu will be Teodor’s mentors.
### Implement support for D in LLVM Debugger (LLDB)
Luís Ferreira has extensive experience with C, C++, and D. He has contributed to DMD, DRuntime, and Phobos, and has a WIP implementation of DIP 1029 (Add throw as a Function Attribute) underway.
One of the projects Luís has been working on in his free time is a rewrite of DRuntime’s demangler to avoid exceptions, taken on because of his interest in mangling and demangling. He also has an interest in LLVM. The combination sparked the idea for his SAOC project. His rough goals for the project are to add support to LLDB for demangling D symbols, recognizing D-specific data structures, and parsing D expressions.
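As a bit of context on what demangling involves (this small sketch is mine, not part of Luís’s project): DRuntime already ships a demangler in core.demangle that turns the compiler’s mangled symbol names back into readable declarations, which is the same kind of translation a debugger has to perform when displaying D symbols.

import core.demangle : demangle;
import std.stdio : writeln;

int answer() { return 42; }

void main()
{
    // .mangleof yields the mangled name the compiler gives the answer symbol
    // (the exact string depends on the module name).
    writeln(answer.mangleof);

    // demangle turns that mangled name back into a readable D declaration.
    writeln(demangle(answer.mangleof));
}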
Mathias Lang has taken on the role of mentor for this project.
### Light Weight DRuntime (LWDR)
Dylan Graham made waves on this blog when he wrote about the custom gearbox controller he built, using D for its firmware. That project led him to a new one: D needs a runtime that is suitable for embedded, Internet of Things, and real-time operating systems. That’s when he started work on LWDR.
You can see from that link that LWDR is not a port of DRuntime, but “a completely new implementation for low-resource environments”, and you’ll find a list of features that are currently supported. For SAOC, Dylan will be working on expanding feature support, shoring up what’s already there and adding new features along the way.
Dylan is a university student in Australia, currently pursuing a Bachelor of Computer Science through Monash University. He’s been programming since he was 11 years old, starting with C on the Arduino Uno and BASIC on the Maximite. His courses have exposed him to several other languages, and he has shown he’s a good hand with D.
His mentor for SAOC 2021 is Adam D. Ruppe.
### Improve DUB: solve dependency hell
Ahmet Sait Koçak is a Computer Engineering student from Turkey. He has a strong background in C#, but considers D his second-most comfortable language. Some might be familiar with his work maintaining bindbc-harfbuzz.
For his SAOC project, he made use of our Projects repository and settled on the idea of solving the “dependency hell” problem that can arise when using DUB. Essentially, if library A depends on libraries B and C, which in turn depend on two different versions of library D, dub will error out without any effort to resolve the version conflict.
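To illustrate the shape of the problem, here is a hypothetical trio of DUB package recipes, mirroring the B/C/D libraries in the description above (the package names and version constraints are invented for the example):

# libb/dub.sdl: needs the 1.x series of libd
name "libb"
dependency "libd" version="~>1.0"

# libc/dub.sdl: needs the 2.x series of libd
name "libc"
dependency "libd" version="~>2.0"

# app/dub.sdl: depends on both libb and libc, so dependency resolution is asked
# to satisfy two incompatible requirements for libd
name "app"
dependency "libb" version="~>1.0"
dependency "libc" version="~>1.0"

With recipes like these, building app fails during dependency resolution rather than settling on a libd version that both libraries can use, which is exactly the behavior described above.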
In reviewing the application, the judges identified some issues with the project as proposed, but it was still accepted with the understanding that Ahmet may need to take a different approach. His project subsequently gained the distinction of being the first SAOC project application discussed in a D Language Foundation meeting. The goal was to determine if there might be another way.
Ahmet’s mentor is Max Haughton, who was present for the meeting. He will be working with Ahmet to investigate the solution arrived at in the meeting and, if that proves infeasible, to move forward with the initial idea. Either way, you’ll hear the details from Ahmet in his weekly forum updates.
### Onward!
The SAOC judges (Átila Neves, Robert Schadek, and John Colvin) were impressed with the quality of the applications this year and are eager to see how the projects turn out. Please keep an eye out for the weekly updates that should start arriving in the forums around September 22nd, a week after Milestone 1 begins. This will help you keep abreast of the progress of each project and also provide an opportunity for suggestions that might help our SAOC 2021 coders along their paths.
Milestone 1 kicks off on September 15th, and Milestone 4 will end on January 15th. The D Language Foundation and our sponsors, Symmetry Investments, wish these five coders well in all they do over those four months. Their success is the D community’s success, so we hope everyone will join us in ensuring they have all the support and help they need to get through their four milestones and see this thing through to the end.
# D Summer School v3
The third edition of the D Summer School, held at University POLITEHNICA of Bucharest, took place from the 5th to the 25th of July. It was three weeks of boot camping bachelor students into the basics of D across eight sessions of hands-on workshops and a hackathon. We will describe our experience in organizing the program, teaching the students, and trying to integrate them into the D community.
## Recap from past editions
For the first two editions we had the following process:
• we started advertising the summer school two months before the event;
• we selected students from among the applicants;
• the students had to complete a project during the summer school;
• we helped students improve their projects during the hackathon;
• we collected feedback at the end.
For more information, we recommend our previous article on the first edition.
## DSSv3
### Pre-event actions
In contrast to previous years, this time we started marketing the event very early in February, five months before the start date. We used the most influential vectors we could to promote it: the most popular professors. We nagged them ceaselessly to promote DSSv3 during their courses. The results were spectacular: we received 86 student applications (as opposed to 25 in the previous years). Since this was an online version of the summer school, we decided to drop the selection process and open the door to everyone. This had the added benefit that we didn’t have to conduct interviews with everyone and a wider range of students had a chance to be introduced to D.
To cope with the larger number of participants, we had to grow our team. Former Symmetry Autumn of Code participants Robert Aron and Adela Vais, and Symmetry employee Alexandru Militaru, agreed to join us. As such, we were able to diversify our teaching materials and raise the overall quality of the presentations.
The teaching materials were mostly the same as in the previous years; we simply reshuffled the order to have an incremental level of complexity and added a lab, “WebApp Tutorial using Vibe.d”. Besides this, we also changed the project competition. During the previous editions of DSS we found that students were not very engaged with the project assignment, so this time we came up with something different. We created a Dlang Bug Fix Competition: the top three students who had the largest number of merged PRs that solved a Bugzilla issue would win Raspberry Pi 400 kits (you may have noticed the “DSSv3” labeled PRs on our three main repositories). You may think that contributing to a programming language is a scary task; however, the truth is there are dozens of approachable, easy-to-fix issues that give instant gratification to the contributor.
With all the planning in place, we were now ready to start DSSv3.
### The teaching sessions
A teaching session is comprised of an hour-long lecture and two hours of hands-on exercises. This year, we abandoned the slides in favor of tutorial-like examples. Since everything was online, we simply shared our screen and highlighted the different aspects of D in a practical way by directly compiling and running examples. This made the lessons more interactive as students enthusiastically asked “what happens if…?” questions, and we could easily demonstrate the results.
For the exercises, we followed a team-play system where students were grouped in teams of four, and they worked together to solve their tasks. This made it easier for us to organize everything on the Teams platform (we would enter rooms of four students instead of talking to students individually) and it encouraged them to help each other.
From among the 86 applicants, we had an average of 35 students attending the sessions, with a record high of 56 (“Introduction to D”) and a low of 25 (“C/C++ Interoperability and Tooling”). It seems that from the first lecture to the last, almost half of the students abandoned the course. This may seem like a grim figure, but note that we had students from all types of backgrounds, some of them in their first year at university. Since we are teaching subjects like memory management and multithreading, it’s only natural that some of them will be lost along the way. Regardless, the lowest number of attendees was higher than the highest number from previous years.
The hackathon was attended by eight people. Again, a low figure, but that was not a surprise. Keep in mind that the majority of the participants had never made an open-source contribution. We expected that only the best students would manage to contribute. One other factor that may have influenced the low number was our uninspired decision to organize the hackathon on a Sunday; several people noted in our feedback form that they would have participated had they not had other plans. The result of the hackathon:
• 9 PRs submitted to Phobos: 5 were merged, 2 were closed but led to closing the associated Bugzilla entries, and 2 were rejected
• 1 PR submitted to DRuntime that was merged
• 4 PRs submitted to DMD: 2 were merged, 2 were rejected
We had hoped that students would submit PRs before the hackathon, but we were wrong. It seems that students should be assisted when making their first PR.
The winners of the hackathon were:
1. danandrei279 with 3 PRs merged
2. vladchicos with 2 PRs merged
3. lucica28 with 1 PR merged
### Feedback
We asked the students to fill out a feedback form, and we received 15 responses. It is highly possible that the results are biased since the feedback form was available at the end of the hackathon; by that time some students had already dropped out. Although it would have been great to understand their perspective, we still had valuable feedback on what went well and what could be improved.
From aggregating the results, we have the following conclusions.
• The difficulty of DSS is perceived as being high. Those who are well prepared love it, but those who don’t have much programming experience are lost along the way.
• The introductory courses are much more popular than advanced ones such as “C/C++ interop” and “Multithreading in D”.
• The hackathon was appreciated by hardcore programmers (a small percentage of the total number of attendees), but the rest were intimidated by it.
• Students appreciated the relaxed interactive atmosphere of the sessions, with some commenting: “The general feel of the summer school was a chill evening hanging out with your friends.”
Overall, DSSv3 generated enthusiasm among the programming geeks, but we still have some work to do to make it attractive to a less savvy audience.
### Future plans
Now that our team has grown, we plan to do a bigger restructuring of the course. Given the high drop-out rate, we would like to make the course welcoming for any type of CS student regardless of background or experience. To that end, we are considering creating two tracks for the course: one at a beginner level, and one for more advanced students. That way we can accommodate any type of audience.
Another aspect to think about is the hackathon. We still haven’t found the most appealing project that would motivate students to commit to it. Experience has shown that creating a project from scratch in a language you’ve just learned doesn’t really work (or we haven’t found an adequate project) and contributing to the D language may be intimidating. We are still searching for better solutions, so if you have any ideas, please contact us.
Also, right now, UPB is going through a major redesign of its curriculum. Proposals for new courses are being accepted, and we will put this course forward as our proposal. There’s a long wait for acceptance, but we’re keeping our fingers crossed.
## Conclusions
Overall, we are happy with this year’s edition. We managed to expand our team, grow our reach, and motivate some students to contribute to the language. Even though we still have some work to do to engage less passionate students, we think we are on the right track.
See you next time at DSSv4!
# A Pull Request Manager’s Perspective
Since January of this year, I have been working as a part-time PR (Pull Request) manager. During this time, I have mostly been reviewing PRs and going through issues on the D Bugzilla. I have also been trying to come up with ways of creating organizational structures and procedures that will ultimately aid the D leadership in motivating and focusing community effort. This blog post presents a few insights I’ve had regarding the PR queues of the dmd, druntime, and phobos repositories, and a couple of proposals that, in my opinion, could benefit the D contribution process.
### PR rounds
As a PR manager, I spend most of my time reviewing PRs. Since I started on the job, I have been involved in the merger of more than 400 PRs across our repositories. From this experience I have extracted a few insights:
• If a new PR is not reviewed within the first 3 days after it was opened, chances are that it will get abandoned.
• If a PR is not merged during the first 2 weeks after it was opened, chances are that it will be abandoned.
• Contributions, in terms of PRs per month, are as follows: phobos (130), dmd (85), druntime (30).
• Although phobos benefits from more contributions, dmd has a larger contributor base.
• Druntime needs more love.
• Veteran contributors are more likely to abandon PRs than new/first-time contributors.
Given the first 2 points, I try to make contact as fast as possible with PR authors. It often happens that I do not have the necessary expertise to technically review a PR. In that case, I try to find people who are willing to take a look. However, since we do not have a concrete community hierarchy, it is sometimes difficult to find the needed reviewers. A solution to this problem is proposed later in the blog post.
Regarding the ratio of contributions per repository, it is noteworthy that phobos and dmd get a lot of attention, whereas druntime is by far the least attractive repository. Another interesting aspect is the diversity of the contributor base: in the last month, there have been ten contributors who opened more than one PR for dmd, five for phobos, and four for druntime. This emphasizes the fact that druntime needs more love.
Lastly, I noticed that veteran contributors tend to abandon their PRs more often than newcomers. This can be explained by the fact that veteran contributors usually tackle multiple PRs at the same time, whereas newcomers usually focus on a single PR. I want to take this opportunity to urge all contributors not to abandon their PRs. It is disappointing for reviewers such as myself to put in the time to properly investigate the patch and offer advice to then see it go to waste. I know that it is much more appealing to start working on new things, but it is highly important not to let any work go to waste.
### Upcoming projects
From my perspective, D has come a long way from its early days: language features are maturing, adoption is steadily growing and the community is expanding around a nucleus of veteran contributors. But given that growth, it is surprising that from an organizational standpoint we are basically in the same spot: if a critical issue appears (a critical bug report, a CI failure, an expired certificate, etc.), the solution is to make a forum post or a comment on Slack and hope that someone who can fix it, or can get it fixed, notices it soon; non-critical issues depend on someone taking an interest: an issue might eventually be fixed, or we might be stuck with it indefinitely. The problem is not manpower or skill; our community has a lot of talent. Unfortunately, we fail to utilize it to its full potential.
If we want certain things to be done, it is the leadership’s responsibility to:
1. specifically state what work needs to be done,
2. organize the community, and
3. incentivize contributors to do the needed work.
Although there is room for improvement, (1) has usually taken place in the form of forum discussions, DIPs, and blog posts. (2) is difficult to implement, given that people contribute in their own free time. As for (3), the mantra has been “fix it if you need it”, which works well for interesting topics, but not that well for important, hard-to-fix bugs, or high-impact, boring tasks.
Implementing points (2) and (3) in an open-source community and with limited financial resources is difficult. However, there are alternative approaches that have not been explored in the DLang ecosystem. I will outline them below.
#### Creating strike teams
One way of organizing the community is to create dedicated groups of people, or strike teams, that can be called upon for specific tasks. One will be assigned to each repository (dmd, druntime, phobos, dlang.org). The idea is to add people to these groups who either have expertise but lack time to contribute, or lack expertise but are willing to actively contribute. This way, if you do not have time to contribute code, you can still help the community by offering implementation advice, whereas if you do have time to offer, you can contribute and develop expertise. The strike teams will be populated by a limited number of people who are trusted members of the community. These teams will be approached directly by the leadership (Walter, Atila, Mike, PR Managers) to fix issues or implement work defined in point (1). The members of the strike teams will receive recognition by having their name listed on the dlang.org site (thus satisfying point (3)).
Of course, this will work as long as there are folks out there willing to dedicate their time. If you want to contribute in some form to any of the strike teams, please contact me directly on Slack or via email.
#### Bugzilla Gamification
The D compiler has around 3000 reported bugs, druntime around 300, and phobos 900. These numbers have grown over time. Although some issues are fixed, we have had no means to incentivize people to work on the critical ones. To that end, we propose a simple gamification scheme: each issue has an associated severity; once a PR that closes an issue is merged, the GitHub author of the PR is awarded points according to that severity level; a leaderboard, which is updated in real time, is presented on dlang.org, and anyone can see who the top contributors are. At the end of each “season”, contributors will be awarded prizes and recognition based on different criteria, such as overall point total, number of total contributions, and so on (we have yet to finalize the kinds of prizes that will be awarded).
By implementing this scoring scheme, we offer some incentive for more experienced contributors to prioritize blocker/critical/major/regression issues over the more trivial or simpler ones, and encourage new contributors to try their hand at a level with which they’re comfortable. We are already working on implementing this and will announce the rules and prize categories once everything is up and running.
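As a rough illustration of how such a tally could work (the severity levels mirror Bugzilla's, but the point values and helper names below are invented for this sketch, not the final rules):

```d
// Toy sketch of the scoring idea: map issue severity to points and keep a
// per-author leaderboard. The point values here are placeholders.
enum Severity { trivial, minor, normal, major, critical, blocker, regression }

int pointsFor(Severity s)
{
    final switch (s)
    {
        case Severity.trivial:    return 1;
        case Severity.minor:      return 2;
        case Severity.normal:     return 3;
        case Severity.major:      return 5;
        case Severity.critical:   return 8;
        case Severity.regression: return 8;
        case Severity.blocker:    return 10;
    }
}

// Called when a PR that closes an issue is merged.
void award(ref int[string] leaderboard, string githubAuthor, Severity s)
{
    leaderboard[githubAuthor] = leaderboard.get(githubAuthor, 0) + pointsFor(s);
}
```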
### Conclusion
We are at a point in the evolution of the D programming language and its ecosystem where motivating community effort towards a common goal is crucial. This is a long-term, complicated task, but we need to start somewhere. I hope that with this initiative we can pave the way to a more sophisticated and better-organized contribution process that is a more satisfying and rewarding experience for our contributors.
# A New Year, A New Release of D
Here in DLang Land we’re beginning the new year with a new release of the D reference compiler (DMD) and a beta release of the popular LLVM-based D compiler (LDC). D 2.095.0 is crammed full of 27 major changes and 78 fixes from 61 contributors. Following are some highlights that I expect some D programmers might find interesting, but please see the changelog for the full rundown. Those more interested in Bugzilla issue numbers can jump straight to the bugfix list.
## D 2.095.0
D’s support for other programming languages is important for interacting with existing codebases. C ABI compatibility has been strong from the beginning. Support for Objective-C and C++ came later. Though C++-compatibility is a bear to get right, it keeps improving with every compiler release. This release continues that trend and also enhances Objective-C support. We also see a number of QOL (quality-of-life) improvements throughout the compiler, libraries, and tools. DUB, the D build tool and package manager that ships with the compiler (and is also available separately), especially gets a good bit of love in this release.
For a little while now, DMD has included experimental support for the generation of C++ header files from D source code, via the -HC command-line option, in order to facilitate calling D libraries from C++. For example, given the following D source file:
cpp-ex.d
extern(C++):

struct A {
    int x;
}

void printA(ref A a) {
    import std.stdio : writeln;
    writeln(a);
}
And the following command line:
dmd -HC cpp-ex.d
The compiler outputs the following to stdout (-HCf to specify a file name, and -HCd a directory):
// Automatically generated by Digital Mars D Compiler
#pragma once
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <math.h>

#ifdef CUSTOM_D_ARRAY_TYPE
#define _d_dynamicArray CUSTOM_D_ARRAY_TYPE
#else
/// Represents a D [] array
template<typename T>
struct _d_dynamicArray
{
    size_t length;
    T *ptr;

    _d_dynamicArray() : length(0), ptr(NULL) { }
    _d_dynamicArray(size_t length_in, T *ptr_in)
        : length(length_in), ptr(ptr_in) { }

    T& operator[](const size_t idx) {
        assert(idx < length);
        return ptr[idx];
    }

    const T& operator[](const size_t idx) const {
        assert(idx < length);
        return ptr[idx];
    }
};
#endif

struct A;

struct A
{
    int32_t x;
    A() :
        x()
    {
    }
};

extern void printA(A& a);
This release brings a number of fixes and improvements to this feature, as can be seen in the changelog. Note that generation of C headers is also supported via -H, -Hf, and -Hd.
### Default C++ standard change
Prior to this release, extern(C++) code was guaranteed to link with C++98 binaries out of the box. This is no longer true, and you will need to pass -extern-std=c++98 on the command line to maintain that behavior. The C++11 standard is now the default.
Additionally, the compiler will now accept -extern-std=c++20. In practice, the only effect this has at the moment is to change the compile-time value, __traits(getTargetInfo, "cppStd"), but new types may be added in the future.
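A quick way to see which standard is in effect is to print that trait (a tiny sketch; the file name is arbitrary and the exact numeric value printed depends on the selected standard):

```d
// cppstd.d -- print the C++ standard that extern(C++) code will assume.
// Try: dmd -run cppstd.d
//      dmd -extern-std=c++20 -run cppstd.d
void main()
{
    import std.stdio : writeln;
    writeln("cppStd = ", __traits(getTargetInfo, "cppStd"));
}
```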
### Improved Objective-C support
Objective-C compatibility is enhanced in this release with support for Objective-C protocols. This is achieved by repurposing interface in an extern(Objective-C) context. Additionally, the attributes @optional and @selector help get the job done. Read the details and see an example in the changelog.
### Improved compile-time feedback
Here’s a QOL issue that really became an annoyance after a deprecation in Phobos, the standard library: when instantiating templates, deprecation messages reported the source location deep inside the library where the deprecated feature was used (e.g., template constraints) and not the user-code instantiation that triggered it. No longer. You’ll now get a template instantiation trace just as you do on errors.
Another QOL feedback issue involved the absence of errors. The compiler would silently allow multiple definitions of identical functions in the same module. The compiler will now raise an error when it encounters this situation. However, multiple declarations are allowed as long as there is at most one definition. For mangling schemes where overloading is not supported (extern(C), extern(Windows), and extern(System)), the compiler will emit a deprecation message.
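A minimal sketch of what that means in practice (the names are arbitrary):

```d
// At most one definition of a symbol per module is now enforced.
void f() {}
// void f() {}   // 2.095 raises an error here: `f` is defined twice

// Multiple declarations remain fine as long as there is at most one definition.
void g();
void g() {}
```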
### The mainSourceFile in DUB recipes
The mainSourceFile entry in DUB package recipes was a way to specify a source file containing a main function that should be excluded from unit tests when invoking dub test. However, when setting up other configurations where the file should also not be compiled, or where a different main source file was required, it was necessary to add the file to an excludedSourceFiles entry. This is no longer the case. If a mainSourceFile is specified in any configuration, it will automatically be excluded from other configurations.
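As a rough illustration (the file and configuration names here are invented), a recipe along these lines no longer needs an excludedSourceFiles entry to keep the main module out of the library configuration:

```json
{
    "name": "example",
    "mainSourceFile": "source/main.d",
    "configurations": [
        { "name": "application", "targetType": "executable" },
        { "name": "library", "targetType": "library" }
    ]
}
```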
### Propagating compiler flags to dependencies
Not every existing compiler flag has a corresponding build setting for DUB recipes. The dflags entry allows for such flags to be configured for any project. For example, -fPIC, or -preview=in. The catch is, it does not propagate to dependencies. Now, you can explicitly specify compiler flags for dependencies by adding a dflags parameter to any dependency entry in a dub.json recipe. For example:
{
    "name": "example",
    "dependencies": {
        "vibe-d": { "version" : "~>0.9.2", "dflags" : ["-preview=in"] }
    }
}
Unfortunately, it appears the implementation does not work for recipes in SDLang format (dub.sdl), so those of us who prefer that format over JSON will have to wait a bit.
## LDC 1.25.0-beta1
This release of LDC brings the compiler up to date with the D 2.095.0 frontend, with the prebuilt packages based on LLVM v11.0.1. The biggest news in this release looks to be the new -linkonce-templates flag. This experimental feature causes the compiler to emit template symbols into each compilation unit that references them, “with optimizer-discardable linkonce-odr linkage”. The implementation has big wins both in terms of compile times when compiling with optimizations turned on and in cutting down on a class of template-related bugs. See the beta1 release notes for the details.
## Happy New Year
On behalf of the D Language Foundation, I wish you all the very best for 2021. As a community, we weren’t affected much by the global pandemic. Sure, we were forced to cancel DConf 2020, but the silver lining is that it also motivated us to finally launch DConf Online in November. We fully intend to make this an annual event alongside of, not in place of, the real-world conference (when physically possible). Other than that, it was business as usual in D Land.
At a personal level, the lives of some in our community were disrupted last year in ways large and small. Please remember that, though the primary object that brings us together is our enthusiasm for the D programming language, we are all still human beings behind our keyboards. The majority of work that gets done in our community is carried out on a volunteer basis. All of us, as the beneficiaries, must never forget that the health and well-being of everyone in our community take top priority over any work we may want or expect to see completed. We encourage everyone to keep an ear open for those who may need to borrow it, and never be afraid to communicate that need when it feels necessary. Sometimes, an open ear can make a very big difference.
Thanks to all of you for your participation in the D community, whether as a user, a contributor, or both. Stay safe, and have a very happy 2021.
# DConf Online 2020: How to Participate
As I write, we are a little over 24 hours away from the start of DConf Online 2020, our first online version of DConf. All of the talks for Day One are uploaded, the livestreams are scheduled, and #BeerConf is almost ready to launch.
### The details
All of the prerecorded talks will be accessible on our YouTube channel via the DConf Online 2020 playlist (look under the live chat box for the full playlist; you may have to scroll down). Use the live chat to ask questions during the talk. The speaker will be available to provide short answers in the chat box. Longer, more complex answers, and/or additional context, will be provided in the Q & A livestream. The speaker will let you know if he is providing more detail in the livestream. If you don’t want to tab over to the livestream and miss part of the talk, the livestream will be saved to our channel once it ends and you will be able to go back and watch any part of it you may be interested in.
Each day, the Q & A livestream will begin at 13:50 UTC. Each speaker will be in the livestream 5 minutes before his talk begins and will be available to answer questions for the duration of the talk and for up to 15 minutes after. As I said above, you may ask questions in the live chat of the talk, but you may also ask them in the live chat of the livestream (and will likely have to if you have questions after the talk ends). Depending on the amount of time available, the number of questions, and the speaker’s schedule, each speaker may stay longer than 15 minutes after the talk, but is not required to.
Please note that speakers are not expected to answer off-topic questions. It’s entirely up to them if they do so.
I’ll be hosting the livestream throughout each day. I’ll be chatting with the speakers about their talks and D in general to fill in the dead time when no one is asking questions. After the conference is over, I intend to chop up the livestream and upload the Q & A session for each talk as separate videos.
On Day One, we have an Ask us Anything session scheduled with Walter and Átila. This will take place in the Q & A livestream for that day. We also have a livecoding session by Adam Ruppe scheduled. That will take place in a separate livestream when the Day One Q & A livestream ends (links below). Adam will be monitoring the chat as he codes, so he will answer any questions you have.
### BeerConf
From 18:00 UTC November 20, we’ll be running a Jitsi Meet instance for our online version of BeerConf. Everyone is welcome to join, no alcohol required. If you aren’t familiar with BeerConf, you can read a brief description of it on the DConf Online 2020 website. You can also read about it here on the blog.
BeerConf will run all weekend long. You can come and go as you please, during talks, in between talks, day time, night time, anytime!
See this D forum thread for details on how to join.
### The prizes
Throughout the event, I’ll be announcing different ways for viewers to win various prizes. We’ll be handing out t-shirts, coffee mugs, and other items from the DLang Swag Emporium (and maybe a DMan shirt or two). I’ll announce the details in the Q & A livestream and, if a talk is ongoing, in the talk’s live chat. Sometimes, winning the prize may involve tweeting, in which case I’ll announce the details on Twitter, so be sure to follow us if you aren’t already.
Additionally, everyone who asks a question to which a speaker provides an answer will be entered into at least two random drawings. There will be one random drawing at the end of each day which includes those eligible on that day. The winners of these drawings will receive a $50 Amazon eGift card. The winner of the two-day drawing will receive a $100 Amazon eGift card. If you win on Day One, you will not be eligible to win on Day Two, but both winners will be eligible to win the two-day drawing.
Funding for all prizes comes from the D Language Foundation General Fund. You can contribute by buying DConf Online 2020 swag or other items from the DLang Swag Emporium, by selecting the D Language Foundation as your preferred AmazonSmile charity and shopping through smile.amazon.com, or by donating directly to the General Fund.
Swag prize winners will be announced in a talk’s live chat and/or the Q & A livestream, depending on the nature of the prize task. For prize tasks that take place on Twitter, winners will not be announced, but will be notified through private message. Amazon eGift card winners will be announced in the livestream. Since YouTube apparently no longer allows private messages, winners on YouTube will be instructed on how to claim their prize when they are announced in the livestream.
### Enjoy!
We want to thank all of our speakers for volunteering their time to put together these presentations and making themselves available for Q & A. Without them, this event would not be possible. We hope you enjoy DConf Online 2020!
# D 2.094.0, DConf Online Schedule, and SAOC 2020
The end of September saw a new release of the reference D compiler, DMD 2.094.0, sporting the latest language features. That was followed not long after by a beta release of LDC, the LLVM-based D compiler, based on the same frontend version. The DMD 2.094.1 patch release entered into beta a few days before this post was published. Meanwhile, the first Milestone of the Symmetry Autumn of Code has come to an end, and the DConf Online 2020 schedule has been published.
## DMD 2.094.0
This release of DMD incorporates 21 major changes and 119 fixed Bugzilla issues, thanks to the efforts of 49 contributors. Here are some highlights.
### This ain’t your grandpa’s in parameter
Back in the days of yore, when DMD was still a pre-1.000 alpha, the D language supported in, out, and inout parameter storage classes. They had the following meanings:
• in (input), the default, was the bog standard function parameter which is a mutable copy of its argument, i.e., the normal passed-by-value parameter.
• out (output) parameters were passed by reference and, upon function entry, initialized to the default initializer value for the parameter type (e.g., 0 for int, float.nan for float, etc).
• inout (input/output) parameters were passed unmodified by reference.
When D2 came along, there were some changes. inout was replaced by the ref keyword and out kept the same meaning, but now there was an explicit restriction that these parameters could only take lvalue references; rvalue references, commonly used in C++, were forbidden as arguments. With in, things became a little muddy. And that brings us to scope parameters, a D2 feature that has evolved over time.
For quite some time, it was not fully implemented and only affected parameters that were delegates: the compiler would only allocate a closure for a scope delegate if it absolutely needed to. The D2 version of in was intended to be equivalent to const scope, but it was never fully implemented and was effectively equivalent to const. Today, scope is intended to be applied to ref or out parameters to prevent them from escaping the function scope, and with DMD 2.092.0, in finally became equivalent to const scope. In DMD 2.094.0, in has been reimagined and extended to solve the rvalue reference issue.
The first thing to know about the new in is that it’s still equivalent to const scope, but now the compiler is free to choose whether to pass an in parameter’s argument by reference or by value. The second thing to know is that in parameters can now take rvalue references. All of this is implemented behind the -preview=in command line switch first introduced in 2.092.0.
Like any preview feature, the new in may or may not make it into the language proper, and if it does it might not be without changes. But for now, it’s there and waiting to be put through its paces. The more people using it, pushing it, and looking for holes, the sooner we can know if this is the in we’re looking for.
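Here is a small sketch of the behavior described above (type and function names are made up for illustration):

```d
// Sketch of the 2.095 `in` semantics: compile with `dmd -preview=in example.d`.
// `in` now means `const scope`, the compiler decides between by-value and
// by-reference passing, and rvalue arguments are accepted.
struct Big { int[64] data; }

int first(in Big b)              // large struct: may be passed by reference
{
    return b.data[0];
}

void main()
{
    import std.stdio : writeln;
    Big lvalue;
    writeln(first(lvalue));      // lvalue argument
    writeln(first(Big()));       // rvalue argument, also fine with -preview=in
}
```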
### Ddoc Markdown support
Quite a while ago, Ddoc, D’s built-in documentation syntax, was enhanced to support some Markdown features. It was hidden behind a -preview switch. Now, that switch is no longer necessary—Ddoc supports Markdown out of the box.
Note that this is not full-on Markdown. For example, although asterisks are supported for italic and bold text, underscores are not. But Markdown-style links, code blocks, inline code, and images are supported. For the details, see the Documentation Generator documentation.
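For instance, a documentation comment like the following (a small sketch; the function name is arbitrary) renders with those Markdown features when you build the docs with `dmd -D`, no extra switch required:

```d
/++
 + With 2.095, Markdown is on by default in Ddoc: *emphasis*,
 + **strong emphasis**, `inline code`, and [links](https://dlang.org)
 + all work out of the box.
 +/
void documented() {}
```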
Since the release of DMD 2.091.0, the DMD binaries in the Windows release packages are being compiled with LDC. This is a good thing because LDC has a better optimizer than DMD, which makes DMD’s fast compile times even faster. Now, LDC is used to compile binary releases on Linux, OSX, and FreeBSD. As a side effect, there are now no more 32-bit releases for FreeBSD, and additional binary tools are no longer included. If you need them, you can still pick them up from https://digitalmars.com/ or from older DMD releases.
The latest release of DMD is always available for download at https://dlang.org/download.html. The latest Beta or Release Candidate can always be found there as well. You can also find links to download LDC and GDC, the GCC-based D compiler (which is now an official component of GCC). While you’re there, if you enjoy the D programming language, consider leaving a tip to the D Language Foundation.
## DConf Online 2020 Schedule
DConf Online 2020 is coming together nicely. Over the two days of November 21 and 22, we have nine prerecorded talks, a livestream Q & A with the language maintainers, and a livecoding session. We’ll also be bringing our annual real-world BeerConf to the virtual world.
### The talks
The prerecorded talks will be scheduled to premiere on our YouTube channel at the UTC times listed on the schedule. For the duration of each talk and for 15 minutes after, each speaker will be available in a separate livestream for questions and answers related to the talk. We want to record the questions and answers verbally for posterity. The idea is that viewers of the prerecorded talk can ask questions in the video’s chat, or ask in the livestream chat during or up to 15 minutes after the talk. The speaker will read the questions out loud. Short answers will be provided both verbally and in the chat. Longer answers will be provided verbally only. Commenters asking questions during the talk will be notified in the chat if their questions were selected so that they don’t have to tab out to the Q & A and miss a portion of the talk. They can go back and watch the Q & A video later on our YouTube channel.
The livestream Q & A with the language maintainers will run on our YouTube channel. We’ll be streaming a video conference call and questions will be taken from the livestream chat. During the livestream, some viewers will be invited to join in on the conference call and ask their question directly in order to provide more opportunity for follow up and feedback. Details on how to participate will be released on the day of the livestream.
Throughout the weekend, we’ll be handing out prizes to random viewers. Eligibility details will be provided during the course of the event, so pay attention!
### BeerConf
BeerConf is a real-world DConf tradition dating back to the first edition of the conference, though the name didn’t come around until Ethan Watson coined it a few years later. Every year, we designate a gathering spot where DConf attendees can mingle every evening to unwind. The DConf days are where we all wear our D programmer hats and spend our time talking about our favorite programming language, but BeerConf is our chance to be human. We still talk about D, but we also have the opportunity to go beyond the code and get to know each other on a more personal level.
So for DConf Online, we’re taking BeerConf online. On the evening (UTC) of Friday, November 20, we’ll open the BeerConf video conference to any and all, and we’ll leave it open all weekend. Despite the name, no alcohol is required to participate. All you need is an internet connection and a web browser, and you can come and go as you please. We’ve been running monthly BeerConf events since June of this year, so we know that, though it’s not quite the same as being in the same place, it’s still a lot of fun.
We hope to see you November 20–22 in BeerConf and DConf!
## Symmetry Autumn of Code
We are currently running our third annual Symmetry Autumn of Code (SAOC). Sponsored by Symmetry Investments, the event provides an opportunity for D programmers to make a little money working on projects aimed at improving the D ecosystem. Participants each get paid $1000 for the successful completion of each of three milestones. At the end of a fourth milestone, the progress of each participant will be evaluated by the SAOC committee, then one participant will be awarded a final $1000 payment, and receive free registration and reimbursement for transportation and lodging for the next real-world DConf.
We currently have four programmers coding away toward their goals. Milestone 1 has just come to an end and Milestone 2 is set to begin. The participants will soon be sending in their milestone reports, their mentors will send in progress evaluations, and the SAOC Committee will review it all to determine if everyone has put forth the effort required to continue through the event (we expect no issues on that front!). You can follow the progress of each participant, and perhaps provide them with some timely advice, through their weekly updates in the D General Forum. Search for “SAOC2020”.
# Symmetry Investments and the D Language Foundation are Hiring
The D Language Foundation is hiring! Thanks to generous funding from Symmetry Investments, we are looking to fill two (mostly) non-programming positions geared toward improving the D ecosystem. Symmetry is also offering a bounty for a specific improvement to DUB, the D build tool and package manager. And on top of all of that, they are hiring D programmers.
## D Pull Request/Issue Manager
A lot of good work goes into the D Programming Language GitHub repositories. Unfortunately, some of that good work sometimes gets left behind. A similar story can be told for our Bugzilla database, where some issues are fixed almost as soon as they’re reported and others fall victim to a lack of attention. Efforts have been made in the past to tidy things up, but without someone in a position to permanently keep at it, it’s a task that is never complete.
The D Language Foundation is looking for one or two motivated individuals to take on that permanent position, get the work done, and keep things running smoothly. Symmetry Investments is generously funding this role with $50,000 per year for one person, or $25,000 per year for each of two.
The ideal candidate is someone who:
• is familiar with git, GitHub, and Bugzilla;
• is familiar enough with D to be able to review simple pull requests;
• is able to recognize when more specialized reviews are required and
• is able to proofread English text (for reviewing documentation and web site pull requests).
Examples of the role’s responsibilities include:
• ensuring all pull requests follow procedure;
• reviewing simple pull requests;
• finding appropriate reviewers for more complex pull requests;
• ensuring that pull requests are reviewed in a timely manner;
• reviving stale pull requests;
• coordinating between pull request submitters and reviewers to prevent pull requests from going stale;
• closing pull requests that are no longer valid;
• identifying Bugzilla issues that are duplicates or invalid;
• identifying Bugzilla issues that are candidates for bounties;
• publicizing Bugzilla issues in need of a champion.
## Community Relations Assistant

I've been working with the D Language Foundation for the past three years. Much of what I do falls loosely in the category of Community Relations. These days, I'm in need of an assistant. Symmetry Investments is providing $600 per month for the role. The job will involve a number of different activities as the need arises, such as:

• seeking out guest authors and projects to highlight for the D Blog;
• monitoring our social media accounts;
• sending out messages from the D Language Foundation (such as thank you notes to new donors);
• assisting with maintenance of pages at dlang.org and dconf.org;
• assisting with the organization of events like DConf and SAOC and
• any odd jobs that pop up now and again.

If you have good communication skills, an optimistic disposition, and enthusiasm for the D Programming Language, I'd like to talk to you. I don't need a resume. Instead, please send an email to social@dlang.org explaining why you're the right person for the job.

## DUB Bounty

DUB has become a critical component in the D ecosystem. A significant number of projects depend on it and we need it to be able to meet a wide range of project needs. To that end, there are certainly improvements to be made. One such is in how DUB determines which of a project's source files are in need of recompilation. Currently, DUB follows in the tradition of the venerable make and uses timestamp comparisons to make that determination. A new generation of version control and build tools (git, buck, bazel, scons, waf, plz, and more) rely on file checksums to assess the need for action. This is a much more robust approach because it detects actual changes in file content. Timestamps can change in any number of irrelevant ways. Robustness is important if one is to depend on a build working properly even when files are moved, copied, and shared across people, machines, and teams. As hashes are fast to compute on modern hardware, the impact on speed is very low.

Symmetry Investments is offering a $2,000 bounty to the programmer who either converts DUB's use of timestamp-dependent builds to use SHA-1 hashing throughout, or implements it as a global option to preserve the current behavior.
For inspiration, see this clip from Linus Torvalds' Google talk, and the article Build-Systems Should Use Hashes Over Timestamps. Note that shasum $(git ls-files) in Phobos takes 0.05 seconds on a warm SSD drive in a desktop machine.
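To make the idea concrete, here is a minimal sketch of content-based change detection (this is not DUB's actual implementation; the cache is just an in-memory map and the function names are invented):

```d
import std.digest : toHexString;
import std.digest.sha : sha1Of;
import std.file : read;

// SHA-1 of a file's contents, as a hex string suitable for use as a cache key.
string fileHash(string path)
{
    auto digest = sha1Of(cast(const(ubyte)[]) read(path)); // 20-byte SHA-1
    return toHexString(digest[]);
}

// A file needs rebuilding only if its content hash differs from the hash
// recorded after the previous build (or if it was never recorded at all).
bool needsRebuild(string path, const string[string] previousHashes)
{
    auto recorded = path in previousHashes;
    return recorded is null || *recorded != fileHash(path);
}
```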
https://im.kendallhunt.com/MS_ACC/students/1/8/17/index.html
# Lesson 17
Designing Simulations
Let’s simulate some real-life scenarios.
### 17.1: Number Talk: Division
Find the value of each expression mentally.
$$(4.2+3)\div2$$
$$(4.2+2.6+4)\div3$$
$$(4.2+2.6+4+3.6)\div4$$
$$(4.2+2.6+4+3.6+3.6)\div5$$
### 17.2: Breeding Mice
A scientist is studying the genes that determine the color of a mouse’s fur. When two mice with brown fur breed, there is a 25% chance that each baby will have white fur. For the experiment to continue, the scientist needs at least 2 out of 5 baby mice to have white fur.
To simulate this situation, you can flip two coins at the same time for each baby mouse. If you don't have coins, you can use this applet.
• If both coins land heads up, it represents a mouse with white fur.
• Any other result represents a mouse with brown fur.
1. Have each person in the group simulate a litter of 5 offspring and record their results. Next, determine whether at least 2 of the offspring have white fur.
|              | mouse 1 | mouse 2 | mouse 3 | mouse 4 | mouse 5 | Do at least 2 have white fur? |
|--------------|---------|---------|---------|---------|---------|-------------------------------|
| simulation 1 |         |         |         |         |         |                               |
| simulation 2 |         |         |         |         |         |                               |
| simulation 3 |         |         |         |         |         |                               |
2. Based on the results from everyone in your group, estimate the probability that the scientist’s experiment will be able to continue.
3. How could you improve your estimate?
For a certain pair of mice, the genetics show that each offspring has a probability of $$\frac{1}{16}$$ that they will be albino. Describe a simulation you could use that would estimate the probability that at least 2 of the 5 offspring are albino.
### 17.3: Designing Simulations
1. Design a simulation that you could use to estimate a probability. Show your thinking. Organize it so it can be followed by others.
2. Explain how you used the simulation to answer the questions posed in the situation.
### Summary
Many real-world situations are difficult to repeat enough times to get an estimate for a probability. If we can find probabilities for parts of the situation, we may be able to simulate the situation using a process that is easier to repeat.
For example, if we know that each egg of a fish in a science experiment has a 13% chance of having a mutation, how many eggs do we need to collect to make sure we have 10 mutated eggs? If getting these eggs is difficult or expensive, it might be helpful to have an idea about how many eggs we need before trying to collect them.
We could simulate this situation by having a computer select random numbers between 1 and 100. If the number is between 1 and 13, it counts as a mutated egg. Any other number would represent a normal egg. This matches the 13% chance of each fish egg having a mutation.
We could continue asking the computer for random numbers until we get 10 numbers that are between 1 and 13. How many times we asked the computer for a random number would give us an estimate of the number of fish eggs we would need to collect.
To improve the estimate, this entire process should be repeated many times. Because computers can perform simulations quickly, we could simulate the situation 1,000 times or more.
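For readers who want to try this on a computer, here is one way the simulation described above could be written (a small sketch in D; the number of repetitions and the names are arbitrary):

```d
// Repeat the egg simulation many times: draw whole numbers from 1 to 100,
// count a draw of 1-13 as a mutated egg (a 13% chance), and record how many
// draws it takes to collect 10 mutated eggs.
void main()
{
    import std.random : uniform;
    import std.stdio : writefln;

    enum repetitions = 1000;
    long totalEggs = 0;

    foreach (rep; 0 .. repetitions)
    {
        int mutated = 0, collected = 0;
        while (mutated < 10)
        {
            ++collected;
            if (uniform!"[]"(1, 100) <= 13)
                ++mutated;
        }
        totalEggs += collected;
    }
    writefln("Estimated eggs needed: %.1f", cast(double) totalEggs / repetitions);
}
```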
### Glossary Entries
• probability
The probability of an event is a number that tells how likely it is to happen. A probability of 1 means the event will always happen. A probability of 0 means the event will never happen.
For example, the probability of selecting a moon block at random from this bag is $$\frac45$$.
• random
Outcomes of a chance experiment are random if they are all equally likely to happen.
• sample space
The sample space is the list of every possible outcome for a chance experiment.
For example, the sample space for tossing two coins is: heads-heads, heads-tails, tails-heads, tails-tails.
https://stats.stackexchange.com/questions/179903/metropolis-sampling-symmetric-proposal-distribution
# Metropolis sampling (symmetric proposal distribution)
• Can Metropolis sampling be used in conjunction with Gibbs sampling? So for example, if I have three parameters of interest, but only two of them have full conditionals that are known distributions, can I sample using Gibbs for those two parameters and Metropolis using the other parameter for each iteration?
• If the parameter of interest only takes positive values, would it be wrong (in terms of producing an answer, not efficiency) to use a normal proposal distribution centered on the previous iteration's value of the parameter?
## 1 Answer
• Yes. In fact, you can consider a Gibbs sampler to be a special case of a Metropolis-Hastings sampler.
• If the parameter of interest is strictly positive, using a normal proposal will still work (any negative proposals will be automatically rejected). However, there is a very good chance that using the normal distribution for the proposed log of the parameter will actually be more efficient, as the posterior of the log-parameter is often approximately normal.
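To illustrate the second point, here is a rough sketch of a single random-walk Metropolis update done on the log scale (logTarget stands in for whatever unnormalised log posterior density is being sampled, and sigma is a tuning choice; nothing here is specific to any particular model):

```d
import std.math : exp, log;
import std.mathspecial : normalDistributionInverse;
import std.random : uniform01;

// One Metropolis update for a strictly positive parameter, proposing a
// symmetric normal step on log(parameter). The acceptance ratio includes the
// Jacobian term proposal/current introduced by the change of variables.
double metropolisStepLogScale(double current, double sigma,
                              double delegate(double) logTarget)
{
    immutable double step = sigma * normalDistributionInverse(uniform01());
    immutable double proposal = exp(log(current) + step);

    immutable double logAccept = logTarget(proposal) - logTarget(current)
                               + log(proposal) - log(current);
    if (log(uniform01()) < logAccept)
        return proposal;
    return current;
}
```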
http://mathoverflow.net/questions/139697/how-should-we-understand-the-relative-interior-in-berkovich-spaces
# How should we understand the relative interior in Berkovich spaces
I'm reading Berkovich's book on analytic spaces. The notion of relative interior confuses me. Is there any way to see what it "looks like"? For instance, if $r <1$, what is the relative interior of $$M ( \mathbb{C}_p \{r^{-1} T \} ) \to M ( \mathbb{C}_p \{ T \} )$$ and how should one view it in the pictorial description of $M ( \mathbb{C}_p \{ T \} )$, i.e. the famous picture which looks like an infinite tree.
Let me begin with the absolute interior. Let me first fix a complete non-archimedean field $k$ (take $k=\mathbb{C}_p$ if you like). For a closed disk $D^+(0;r_1,\dots,r_n)$ (center 0, polyradius $(r_1,\dots,r_n)$), I will call the "naive interior" the open disk $D^-(0;r_1,\dots,r_n)$.
Now, let $Y=\mathcal{M}(B)$ be a $k$-affinoid space. By definition, you may realize it as a Zariski-closed subset of some closed disk $D^+$. The first idea one could have is to define the interior of $Y$ as the set of points that belong to the naive interior of $D^+$. Actually, this depends on the chosen presentation. So you are led to define the absolute interior Int$(Y/k)$ of $Y$ as the set of points that belong to the naive interior of a closed disk $D^+$ for some presentation of $Y$ as a Zariski-closed subset of $D^+$.
Let us consider a simple example where $k=\mathbb{C}_p$ and $B =\mathbb{C}_p\{T\}$. Topologically, this disk is a tree. The root (i.e. the Gauss point) I will denote by $\eta_1$. If you write $Y=D^+(0;1)$, you see that $D^-(0;1)$ belongs to the interior of $Y$. This is an open branch out of the point $\eta_1$. Changing coordinates gives you another presentation and another open disk $D^-(a;1)$ (i.e. another open branch) in the interior. This way, you show that $Y\setminus\{\eta_1\}$ belongs to the interior of $Y$. It is actually equal to it.
As regards the relative interior, you just have to replace $k$ by a $k$-affinoid algebra $A$, $B$ by an $A$-affinoid algebra and disks by relative disks over $A$. In your situation, by the same kind of arguments, you will find that the relative interior is the complement of the point $\eta_r$. Actually, Berkovich shows more generally that if $Y \to X$ is the inclusion of an affinoid domain, then Int$(Y/X)$ coincides with the topological interior of $Y$ in $X$.
https://ask.openstack.org/en/questions/45345/revisions/
# Revision history [back]
### packstack and remote mysql server
I'm trying to set up OpenStack using packstack on a fresh CentOS 7 minimal machine. I have 3 existing mysql servers in a master/slave replication setup for another project that I would like to use for OpenStack as well. I'm having a heck of a time getting packstack to work though, and I'm likely missing something simple. I haven't been able to get packstack --allinone to work. After chasing down all of the workarounds, the script still dies at the mysql install, so I decided to generate an answer file and use my existing mysql servers. In the answer file I have set the following:
CONFIG_MYSQL_INSTALL=n
CONFIG_MYSQL_HOST=192.168.1.xxx
CONFIG_MYSQL_USER=cloud
CONFIG_MYSQL_PW=xxxxxxxxxxxxx
CONFIG_KEYSTONE_DB_PW=xxxxxxxxxxxxx (set to same as above password)
I always get:
192.168.1.200_keystone.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.1.200_keystone.pp
Error: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Failed to call refresh: keystone-manage db_sync returned 1 instead of one of [0]
The "cloud" db user has root permissions (double checked to rule that out). Turned the firewall on the db server off to rule that out (although we have 50 other servers connecting to the db machines, so I doubt that could be the issue anyway). Finally, disabled selinux on the controller node and restarted.
Anything I'm missing?
https://ideas.repec.org/p/arx/papers/1307.4821.html
# Power-law exponent of the Bouchaud-Mézard model on a regular random network
## Author Info
• Takashi Ichinomiya
## Abstract
We study the Bouchaud-Mézard model on a regular random network. By assuming adiabaticity and independence, and utilizing the generalized central limit theorem and the Tauberian theorem, we derive an equation that determines the exponent of the probability distribution function of the wealth as $x\rightarrow \infty$. The analysis shows that the exponent can be smaller than 2, while a mean-field analysis always gives the exponent as being larger than 2. The results of our analysis are shown to be in good agreement with those of the numerical simulations.
File URL: http://arxiv.org/pdf/1307.4821
## Bibliographic Info
Paper provided by arXiv.org in its series Papers with number 1307.4821.
Date of creation: Jul 2013
Publication status: Published in Phys. Rev. E 88, 012819 (2013)
Handle: RePEc:arx:papers:1307.4821
Provider web page: http://arxiv.org/
https://www.physicsforums.com/threads/magnitud-vs-component-of-a-vector.839056/
# Magnitude vs. Component of a Vector
1. Oct 22, 2015
### almarpa
Hello all.
Sometimes, when reading a physics book, I find it difficult to distinguish between the magnitude of a vector and the component of a vector. For example, take the weight force, with the positive z direction pointing upwards (that is, F= -mg k). Sometimes, people write F=mg referring to its magnitude, but in other cases, people use F= -mg referring to the z component (the only component of the force vector, in this case).
Of course, it is important to distinguish between both cases, because components have sign, but magnitudes are always positive. For example, it could be a serious problem when evaluating the work done by the weight. When performing the integral ∫ Fdz, F is the component, and must have sign (and dz as well, depending on the direction of the movement). But it is easy to think that F is a magnitude, and forget about the sign, so we would get a wrong result.
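To make the sign bookkeeping concrete, here is the work integral written out for the situation described above (a small illustration, with $+z$ pointing up so the z-component of the weight is $F_z = -mg$):

$$W = \int_{z_1}^{z_2} F_z \, dz = \int_{z_1}^{z_2} (-mg)\, dz = -mg\,(z_2 - z_1) = mg\,(z_1 - z_2),$$

which is positive for a downward displacement ($z_2 < z_1$) and negative for an upward one. Plugging in the magnitude $F = mg$ without the sign would flip both results.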
Can you please tell me what is the general rule to know when F represents the magnitude of a vector, and when F represents component of a vector?
Thank you so much.
2. Oct 22, 2015
### BvU
Hi,
Basically $\vec F$ is always a vector. But, as you describe, sometimes the direction is so evident that we only need the magnitude. Sloppy, but it happens.
https://math.stackexchange.com/questions/2231904/tangent-bundle-of-open-annulus-is-diffeomorphic-to-mathbbs1-times-mathbb
# Tangent bundle of open annulus is diffeomorphic to $\mathbb{S}^1 \times \mathbb{R}^3$
I want to prove that the tangent bundle of open annulus is diffeomorphic to $\mathbb{S}^1 \times \mathbb{R}^3$.
This argument came from MathOverflow.
I have no clue how to construct this; any rudimentary information will be helpful.
I have some basic understanding of the construction showing $T\mathbb{S}^1$ is diffeomorphic to $\mathbb{S}^1 \times \mathbb{R}^1$.
(1) The annulus is diffeomorphic to $S^1\times R^1$.
(2) The tangent bundle of $S^1$ is $S^1\times R^1$. See the following post: Tangent bundle of $S^1$ is diffeomorphic to the cylinder $S^1\times\Bbb{R}$. The tangent bundle of $R^1$ is obviously $R^2$.
(3) The tangent bundle of a product manifold $M_1\times M_2$ is $TM_1\times TM_2$. Here $TM$ denotes the tangent bundle of $M$.
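Putting the three facts together gives the chain of diffeomorphisms (writing the open annulus as $S^1\times\mathbb{R}$):

$$T(\text{annulus}) \cong T(S^1 \times \mathbb{R}) \cong TS^1 \times T\mathbb{R} \cong (S^1 \times \mathbb{R}) \times \mathbb{R}^2 \cong S^1 \times \mathbb{R}^3.$$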
https://cforall.uwaterloo.ca/trac/changeset/ec92b486dc79c3f86a026bcb1454ad883b592f39/doc/theses/aaron_moss_PhD
# Changeset ec92b48 for doc/theses/aaron_moss_PhD
Ignore:
Timestamp:
Apr 24, 2019, 4:06:28 PM (3 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr
Children:
39de1c5
Parents:
71a12390
Message:
Location:
doc/theses/aaron_moss_PhD/phd
Files:
2 edited
r71a12390

\CFA{} adds a significant number of features to standard C, increasing the expressivity and re-usability of \CFA{} code while maintaining backwards compatibility for both code and larger language paradigms. This flexibility does incur significant added compilation costs, however, the mitigation of which are the primary concern of this thesis.

\cbstarty
One area of inquiry which is outside the scope of this thesis is formalization of the \CFA{} type system. Ditchfield~\cite{Ditchfield92} defined the $F_\omega^\ni$ polymorphic lambda calculus which is the theoretical basis for the \CFA{} type system. Ditchfield did not, however, prove any soundness or completeness properties for $F_\omega^\ni$; such proofs remain future work. A number of formalisms other than $F_\omega^\ni$ could potentially be adapted to model \CFA{}. One promising candidate is \emph{local type inference} \cite{Pierce00,Odersky01}, which describes similar contextual propagation of type information; another is the polymorphic conformity-based model of the Emerald~\cite{Black90} programming language, which defines a subtyping relation on types which are conceptually similar to \CFA{} traits. These modelling approaches could potentially be used to extend an existing formal semantics for C such as Cholera \cite{Norrish98}, CompCert \cite{Leroy09}, or Formalin \cite{Krebbers14}.
\cbendy